---
classoptions:
- sn-nature
- referee # Optional: Use double line spacing
- lineno # Optional: Add line numbers
# - iicol # Optional: Double column layout
title: "Supplement to: 'Meaningfully reducing consumption of meat and animal products is an unsolved problem: A meta-analysis'"
titlerunning: MAP-reduction-supplement
authors:
- firstname: Seth Ariel
lastname: Green
email: [email protected]
affiliation: 1
corresponding: TRUE
- firstname: Benny
lastname: Smith
affiliation: 2
- firstname: Maya B.
lastname: Mathur
affiliation: 1
affiliations:
- number: 1
info:
orgdiv: Quantitative Sciences Unit, Department of Medicine
orgname: Stanford University
- number: 2
info:
orgname: Allied Scholars for Animal Protection
keywords:
- meta-analysis
- meat
- plant-based
- randomized controlled trial
date: "`r Sys.Date()`"
output:
rticles::springer_article:
keep_tex: true
bibliography: "./vegan-refs.bib"
editor_options:
chunk_output_type: console
header-includes:
- \usepackage{comment}
- \usepackage{anyfontsize}
- \usepackage[style=default]{caption}
- \usepackage{float} # For precise float placement
- \usepackage{placeins} # Provides \FloatBarrier command
- \usepackage{adjustbox}
- \usepackage{threeparttablex}
---
```{r setup, include=FALSE}
# so that knitr labels figures
library(knitr)
opts_chunk$set(fig.path = "./results/figures/",
echo = TRUE,
out.extra = "",
fig.pos = "ht")
options(tinytex.clean = TRUE) # switch to FALSE to get the bbl file
# libraries, functions, data, and key results
source('./scripts/libraries.R')
source('./scripts/functions.R')
source('./scripts/load-data.R')
source('./scripts/models-and-constants.R')
source('./scripts/robustness-checks.R')
reviews_count <- nrow(read.csv('./data/review-of-reviews.csv'))
excluded_count <- nrow(read.csv('./data/excluded-studies.csv'))
```
# Search strategy
```{r prisma_numbers, include=F, echo=F}
source('./scripts/prisma-script.R')
```
Our search process was shaped by three features of our research project. First, our surveyed literature was highly interdisciplinary, with few shared terms to describe itself. For instance, the term 'MAP' is not universally agreed upon; other papers use animal-based protein, edible animal products, or just meat, while some studies focus on a particular, sometimes unusual category of MAP, such as fish sauce
[@kanchanachitra2020], or discuss their agenda mainly in terms of increasing consumption of plant-based alternatives. Coming up with an exhaustive list of search terms from first principles would have been difficult, if not impossible.
Second, our methods-based inclusion criteria complicated screening on titles and abstracts. While that information was sometimes sufficient to eliminate studies with no interventions (e.g. survey-based research), determining whether an intervention qualified almost always required some amount of full-text screening. We also discovered that terms like 'field experiment' have varying meanings across papers, and identifying whether a measured food choice was hypothetical often required a close reading. Screening thousands or tens of thousands of papers therefore struck us as prohibitively time-consuming.
Third, we found a very large number of prior reviews, typically aimed at one disciplinary strand or conceptual approach, touching on our research question. Reviewing tables and bibliographies of those papers proved fruitful for assembling our dataset.
For these reasons, we employed what could be called a 'prior-reviews-first' search strategy.
Of the `r nrow(all_papers)` papers we screened, a full `r round(prior_review_n / nrow(all_papers), 2) * 100`\% came from prior reviews, as did `r round(included_review_count/num_papers, 2) * 100`\% of the papers in our main dataset.
(See the next section of the supplement for notes on reviews that were especially informative.) Then, as detailed in the main text, we employed a multitude of other search strategies to fill in our dataset, one of which was systematic search.
In particular, we searched Google Scholar for the following list of terms, and checked ten pages of results for each (a sketch of how these queries can be reconstructed programmatically follows the list):
- "dynamic" "norms" "meat"
- "dynamic" "norms" "meat" "consumption"
- "field" "experiment" "plant-based"
- "meat" "alternatives" "default" "nudge"
- "meat" "consumption" "reducing" "random"
- "meat" "purchases" "information" "nudge"
- "meat" "reduction" "randomized"
- "meat" "sustainable" "random"
- "nudge" "meat" "default"
- "nudge" "reduce" "meat" "consumption"
- "nudge" "sustainable" "consumption" "meat"
- "nudge" "theory" "meat" "purchasing"
- "norms" "animal" "products"
- "nudges" "norms" "meat"
- "random" "nudge" "meat"
- "randomized controlled trial" "meat" "consumption" "reduce"
- "sustainable" "meat" "nudge"
- "sustainable" "meat" "nudge" "random"
- "university" "meat" "default" "reduction"
Additionally, we searched the American Economic Association's registry of randomized controlled trials (\url{https://www.socialscienceregistry.org/}) for the terms "meat" and "random" and reviewed all matching results in the relevant time frame.
Another innovative part of our search strategy was our use of an AI-based search tool (\url{https://undermind.ai/}), to which we described our research question and then reviewed the 100 results it generated.
This yielded one paper that met our inclusion criteria [@mattson2020] and that seems to have slipped past many other systematic search processes.
Finally, we benefited from two in-progress literature reviews at Rethink Priorities, a "think-and-do tank" that researches animal welfare as one of its four main priorities.
Both of these literature reviews are aimed at assessing interventions that reduce MAP consumption, but have broader inclusion criteria than our paper employed.
For more details on these two projects, see \url{https://osf.io/74paj} and \url{https://meat-lime.vercel.app}.
# Discussion of prior reviews
We turn now to an overview of prior reviews that were highly relevant to this one.
Among the reviews that found MAP reduction interventions to be effective, several focused exclusively on choice architecture.
@arno2016 found that nudges led to an average 15.3% increase in healthy dietary choices, while @byerly2018 found that committing to reduce meat intake and making menus vegetarian by default were more effective than educational interventions.
However, the vast majority of vegetarian-default studies we screened did not qualify for our analysis because they lacked delayed outcomes.
In a similar vein, @mertens2022 concludes that food choices are "particularly responsive to choice architecture interventions" (p. 1), but features no studies that met our inclusion criteria.
@bianchi2018restructuring found that reducing meat portions, making alternatives available, making meat products less conspicuous, and changing meat's sensory properties can all reduce meat demand.
@pandey2023 found that changing the presentation and availability of sustainable products was effective in increasing demand for them.
In a meta-review, @grundy2022 found environmental education to be especially promising, with substantial evidence also supporting health information, emphasizing social norms, and decreasing meat portions.
Some reviews have focused on particular settings for MAP reduction interventions.
@hartmannboyce2018 found that grocery store interventions, such as price changes, suggested swaps, and changes to item availability, were effective at changing purchasing choices.
However, that review covered a wide variety of health interventions, such as reducing consumption of dietary fat and increasing fruit and vegetable purchases.
It is unclear how directly such findings translate to MAP reduction efforts.
@chang2023 focused on university meat-reduction interventions and found more promising results than reviews of the wider public typically did.
@harguess2020 reviewed 22 studies on meat consumption and found promising results for educational interventions focused on the environment, health, and animal welfare.
That paper recommends using animal imagery to elicit an emotional response and utilizing choice architecture interventions.
Our review, by contrast, found a minuscule pooled effect associated with animal welfare appeals.
Taking a different angle, @adleberg2018 reviewed the literature on protests across a variety of movements and found mixed evidence of efficacy.
The authors recommend that animal advocacy protests have a specific target (e.g. a particular institution) and a specific "ask."
Other reviews assessed which groups are most easily influenced by interventions to reduce MAP consumption.
For example, @blackford2021 found that nudges focused on "system 1" thinking were more effective at encouraging sustainable choices than those focused on "system 2," and that interventions had greater effects on females than males.
@rosenfeld2018 reports that meat avoidance is associated with liberal political views, feminine gender, and higher openness, agreeableness, and neuroticism, and that issues such as recidivism and hostility from friends and family can act as barriers to adopting a vegetarian diet.
Several reviews have had mixed or inconclusive results.
For instance, @bianchi2018conscious found that health and environmental appeals appear to change dietary intentions in virtual environments, but did not find evidence of actual consumption changes.
Likewise, @kwasny2022 notes that most existing research focuses on attitudes and intentions and lacks measures of actual meat consumption over an extended period of time.
@taufik2019 reviewed many studies aimed at increasing fruit and vegetable intake, but found far fewer that looked at reducing MAP consumption.
@benningstad2020 found that dissociation of meat from its source plays a role in meat consumption, but identified no extant research that included behavioral outcomes.
A few reviews have found evidence that seems to recommend against particular interventions.
@greig2017 reviewed the literature on leafleting for vegan/animal advocacy outreach and observed biases that may have led to overestimated impacts.
That paper concluded that leafleting does not seem cost-effective, though with significant uncertainty.
@nisa2019 meta-analyzed interventions to improve household sustainability, of which reducing MAP consumption was one.
Although they found small effect sizes for most interventions, they concluded that nudges were comparatively effective, as did @ensaff2021.
Similarly, @rau2022 reviewed the literature on environmentally friendly behavior changes, including but not limited to diet change, and found small or nonexistent effects in most cases.
Only fifteen interventions in that paper were described as “very successful,” and none of these related to food.
Finally, we note a few papers that were helpful in filling out our supplementary datasets.
@ronto2022 investigated interventions to move consumers toward protein sources with lower ecological footprints, and was instrumental in filling out our RPM and robustness check datasets, as were @kwasny2022 and @grummon2023.
Taking these reviews in sum, we encourage researchers to be cautious in extrapolating impacts across outcome categories.
For instance, many reviews concluded that choice architecture approaches are effective at changing food choice, but such interventions are typically aimed at foods with weak social and cultural associations. We also note that prior reviews vastly outnumber the rigorous RCTs that meet our inclusion criteria.
# Description of code and data repository
Our code and data are shared on GitHub (\url{https://github.com/hsflabstanford/vegan-meta}) and Code Ocean (\url{https://doi.org/10.24433/CO.6020578.v3}). The Code Ocean repository allows readers to reproduce our paper's results without downloading any files or managing software versions.
Our main document, `MAP-reduction-meta.Rmd`, reproduces the paper, with every quantitative claim and the first two figures reproduced each time the document is knit.
The datasets are:
- `MAP-reduction-data.csv`, our primary meta-analytic dataset;
- `rpmc-data.csv`, our secondary dataset of studies aimed at reducing consumption of RPM;
- `robustness-data.csv`, a secondary dataset of borderline studies used for a robustness check (see the section below);
- `review-of-reviews.csv`, which details the `r reviews_count` reviews we consulted;
- `excluded-studies.csv`, which details the papers we screened but did not include, along with their reasons for exclusion and where we found them (the studies in `robustness-data.csv` are included in this dataset as well).
See `readme.md` for more details about the included scripts and supplementary materials (e.g. licenses).
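As a minimal usage sketch (assuming a local clone of the repository and its `./data/` layout; we do not document column names here), the primary dataset can be loaded as follows:
```{r load_data_example, eval=FALSE}
# Minimal sketch: load and inspect the primary meta-analytic dataset from a
# local clone of the repository. The './data/' path follows the layout used
# elsewhere in this supplement.
map_dat <- read.csv('./data/MAP-reduction-data.csv')
str(map_dat)
```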
# Additional robustness checks and alternative analyses
## Sensitivity to publication status, outcome recorded, and open science practices
Table S1 compares average differences by three potential sources of bias: publication status, data collection strategy, and use of open science practices.
\captionsetup[table]{labelformat=empty}
```{r sensitivity_table,echo=F, message=F, results='asis'}
source('./scripts/table-S1-sensitivity.R')
sensitivity_table
```
```{r sensitivity_constants, include=F, echo=F, message=F}
journal_delta <- pub_status_results |> filter(Moderator == 'Journal article') |> pull(Delta)
preprint_delta <- pub_status_results |> filter(Moderator == 'Preprint or thesis') |> pull(Delta)
whitepaper_delta <- pub_status_results |> filter(Moderator == 'Nonprofit white paper') |> pull(Delta)
self_report_delta <- data_collection_results |> filter(Moderator == 'Self-reported') |> pull(Delta)
objective_delta <- data_collection_results |> filter(Moderator == 'Objectively measured') |> pull(Delta)
no_os_delta <- open_science_results |> filter(Moderator == 'None') |> pull(Delta)
pap_delta <- open_science_results |> filter(Moderator == 'Pre-analysis plan only') |> pull(Delta)
open_data_delta <- open_science_results |> filter(Moderator == 'Open data only') |> pull(Delta)
both_delta <- open_science_results |> filter(Moderator == 'Pre-analysis plan \\& open data') |> pull(Delta)
```
Within different types of publication, we find broadly similar results for journal articles and preprints/theses (SMD = `r journal_delta` and SMD = `r preprint_delta`, respectively), and a small backlash associated with results published as nonprofits' white papers (SMD = `r whitepaper_delta`). Surprisingly, we do not detect meaningful differences between studies that recorded self-reported vs. objectively measured outcomes (SMD = `r self_report_delta` vs. SMD = `r objective_delta`, respectively).
Looking at adherence to open science practices, the largest difference is between studies that have neither open data nor a pre-analysis plan (SMD = `r no_os_delta`) and those that have just open data (SMD = `r open_data_delta`).
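For reference, subgroup averages like those in Table S1 can be estimated with cell-means-style moderator models under robust variance estimation. The sketch below illustrates the general form only; `pub_status` is a placeholder moderator name, and the actual specifications live in `./scripts/table-S1-sensitivity.R`.
```{r subgroup_sketch, eval=FALSE}
# Illustrative sketch of a cell-means moderator model under RVE.
# 'pub_status' is a placeholder column name, not necessarily the variable
# used in our scripts.
robumeta::robu(formula = d ~ pub_status - 1, data = dat,
               studynum = unique_study_id, var.eff.size = var_d,
               modelweights = 'CORR', small = TRUE)
```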
## Details of marginal studies included in robustness check
While assembling our dataset, a handful of studies stood out to us for their interesting findings and/or design features but nevertheless did not meet all of our inclusion criteria.
We included these marginal cases in our main text as a robustness check.
Each of these studies measures MAP consumption and has a control group.
However, we excluded them from our main dataset for one of five reasons: 1) issues with random assignment or the control group (for instance, where the control group received some aspect of treatment [@piazza2022], or where treatment was alternated weekly but not randomly [@garnett2020]); 2) inadequate statistical power (too few clusters [@reinders2017] or subjects [@lentz2020]); 3) immediate outcome measurement [@dannenberg2023; @sparkman2017; @griesoph2021; @hansen2021]; 4) actively encouraging substitution within categories of MAP, e.g. from red meat to fish [@celis2017; @johansen2009]; or 5) uncertainty in calculating an effect size arising from missing information about the behavior of diners who opted out of treatment to avoid a vegetarian meal [@betterfoodfoundation2023].
We did not include any studies in this supplementary dataset that failed to meet more than one of our inclusion criteria.
We further caution that these `r robust_only_results$N_studies` studies are not an exhaustive or even representative list of studies that did not qualify for our main analysis.
Rather, they are intended to offer an exploration of how a meta-analysis building on different priors about sources of bias might have proceeded.
## Using an alternative meta-analytic estimation method
We initially planned to use a model from `metafor` for our main analysis.
However, as we assembled the dataset, we noticed that many papers had non-independent observations across interventions, typically in the form of multiple treatments compared to a single control group.
Upon discussion, the team's statistician (MBM) suggested that the `CORR` model from the `robumeta` package, a robust variance estimation (RVE) method, would be a better fit.
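For concreteness, a minimal sketch of that specification appears below. The canonical fit lives in `./scripts/models-and-constants.R`; here we cluster on `unique_study_id` for illustration, reusing the effect size and variance columns from the chunk that follows.
```{r corr_model_sketch, eval=FALSE}
# Minimal sketch of the robumeta CORR specification described above; the
# canonical model is fit in ./scripts/models-and-constants.R.
corr_model <- robumeta::robu(formula = d ~ 1, data = dat,
                             studynum = unique_study_id,
                             var.eff.size = var_d,
                             modelweights = 'CORR', small = TRUE)
print(corr_model)
```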
```{r alt_models, include=F}
# metafor fit with cluster-robust standard errors
alt_model <- robust(metafor::rma.uni(yi = d, vi = var_d, data = dat),
                    cluster = dat$unique_study_id)
# robumeta fit with hierarchical (HIER) rather than correlated (CORR) weights
hier_model <- robumeta::robu(formula = d ~ 1, data = dat,
                             studynum = author,
                             var.eff.size = var_d, modelweights = 'HIER',
                             small = TRUE)
```
Using our original estimation strategy from `metafor`, we detect a pooled effect size of `r round(alt_model$beta, 2)` (95% CI: [`r paste0(round(alt_model$ci.lb, 2), ", ", round(alt_model$ci.ub, 2))`]), p = `r round(alt_model$pval, 3)`.
Although `metafor` also provides an RVE estimator, it applies the correction to the standard errors and not to the overall estimate, and we preferred a model that incorporates clustering information at the level of effect estimation.
## Using a stricter definition of delay
```{r strict_delay_model, include=F}
# re-estimate the pooled effect using only estimates measured after the
# final day of treatment (the secondary, stricter delay definition)
strict_delay_model <- dat |> filter(delay_post_endline > 0) |> extract_model_results()
```
Our delay-related inclusion criterion aimed to limit our analysis to enduring effects.
Upon reviewing studies, however, we found that numerous high-quality studies modified eating environments over multiple days and did not incorporate a delayed measure following the *final* day of treatment.
For example, [@andersson2021] included 50 combined days of treatment and control, but the interval between treatment and any individual outcome measurement was zero days.
In a sense, such studies lack delayed outcome measures.
However, a multi-day setup in a single environment allows for delay in principle, so long as participants can return to the site of treatment and have their meal choices evaluated multiple times.
We decided to include multi-day studies where delayed outcomes were possible through repeat visits by subjects to the intervention site.
Our delay variable therefore reflects the days elapsed from the beginning of treatment, rather than its conclusion, to measurement.
We also coded a secondary delay variable that corresponds to the time elapsed between treatment *conclusion* and measurement.
Restricting our analysis to the `r strict_delay_model$N_studies` studies and `r strict_delay_model$N_estimates` point estimates where this secondary delay exceeds zero, we observe an effect of SMD = `r strict_delay_model$Delta` (95% CI: `r strict_delay_model$CI`), p = `r strict_delay_model$p_val`.