# Linear mixed effects models 3
## Learning goals
- Pitfalls in fitting `lmer()`s (and what to do about them).
- Understanding `lmer()` syntax even better.
- ANOVA vs. lmer
## Load packages and set plotting theme
```{r, message=FALSE}
library("knitr") # for knitting RMarkdown
library("kableExtra") # for making nice tables
library("janitor") # for cleaning column names
library("broom.mixed") # for tidying up linear mixed effects models
library("patchwork") # for making figure panels
library("lme4") # for linear mixed effects models
library("afex") # for ANOVAs
library("car") # for ANOVAs
library("datarium") # for ANOVA dataset
library("modelr") # for bootstrapping
library("boot") # also for bootstrapping
library("ggeffects") # for plotting marginal effects
library("emmeans") # for marginal effects
library("tidyverse") # for wrangling, plotting, etc.
```
```{r}
theme_set(theme_classic() + #set the theme
            theme(text = element_text(size = 20))) #set the default text size
# knitr display options
opts_chunk$set(comment = "",
               fig.show = "hold")
# # set contrasts to using sum contrasts
# options(contrasts = c("contr.sum", "contr.poly"))
# suppress grouping warning messages
options(dplyr.summarise.inform = F)
```
## Load data sets
### Reasoning data
```{r}
df.reasoning = sk2011.1
```
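To get oriented, we can take a quick peek at the structure of the data (an added sanity check; the variables used below are `id`, `instruction`, `validity`, `plausibility`, and `response`):

```{r}
df.reasoning %>% 
  glimpse()
```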
## Understanding the lmer() syntax
Here is an overview of how to specify different kinds of linear mixed effects models.
```{r, echo=F}
tibble(formula = c("`dv ~ x1 + (1 | g)`",
                   "`dv ~ x1 + (0 + x1 | g)`",
                   "`dv ~ x1 + (x1 | g)`",
                   "`dv ~ x1 + (x1 || g)`",
                   "`dv ~ x1 + (1 | school) + (1 | teacher)`",
                   "`dv ~ x1 + (1 | school) + (1 | school:teacher)`"),
       description = c("Random intercept for each level of `g`",
                       "Random slope for each level of `g`",
                       "Correlated random slope and intercept for each level of `g`",
                       "Uncorrelated random slope and intercept for each level of `g`",
                       "Random intercept for each level of `school` and for each level of `teacher` (crossed)",
                       "Random intercept for each level of `school` and for each level of `teacher` in `school` (nested)")) %>% 
  kable()
```
Note that `(1 | school/teacher)` is equivalent to `(1 | school) + (1 | teacher:school)` (see [here](https://stats.stackexchange.com/questions/228800/crossed-vs-nested-random-effects-how-do-they-differ-and-how-are-they-specified)), as the sketch below illustrates.
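Here is a minimal sketch (using simulated toy data, not the reasoning data set) showing that the nested shorthand and its explicit expansion fit the same model:

```{r}
# toy data: teachers nested within schools (made up purely for illustration)
set.seed(1)
df.toy = expand_grid(school = factor(1:10),
                     teacher = factor(1:4),
                     student = 1:5) %>% 
  group_by(school) %>% 
  mutate(u_school = rnorm(n = 1)) %>% # school-level random effect
  group_by(school, teacher) %>% 
  mutate(u_teacher = rnorm(n = 1)) %>% # teacher-within-school random effect
  ungroup() %>% 
  mutate(dv = 10 + u_school + u_teacher + rnorm(n = n()))

# shorthand using the nesting operator
fit.nested = lmer(formula = dv ~ 1 + (1 | school/teacher),
                  data = df.toy)

# explicit expansion
fit.expanded = lmer(formula = dv ~ 1 + (1 | school) + (1 | teacher:school),
                    data = df.toy)

# the estimated variance components are identical
VarCorr(fit.nested)
VarCorr(fit.expanded)
```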
## ANOVA vs. lmer
### Between subjects ANOVA
Let's start with a between-subjects ANOVA (which means we are in `lm()` world). We'll take a look at whether the type of `instruction` participants received made a difference to their `response`.
First, we use the `aov_ez()` function from the "afex" package to do so.
```{r}
aov_ez(id = "id",
       dv = "response",
       between = "instruction",
       data = df.reasoning)
```
Looks like there was no main effect of `instruction` on participants' responses.
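To get a visual sense of this result, here is a quick plot (an added sketch, not part of the original analysis) of each participant's mean response by instruction condition, with condition means and standard errors overlaid:

```{r}
ggplot(data = df.reasoning %>% 
         group_by(id, instruction) %>% 
         summarize(response = mean(response)) %>% 
         ungroup(),
       mapping = aes(x = instruction,
                     y = response)) + 
  geom_point(alpha = 0.2,
             position = position_jitter(width = 0.1, height = 0)) + 
  stat_summary(fun.data = "mean_se",
               color = "red",
               size = 1)
```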
An alternative route to the same test is to combine `lm()` with `joint_tests()` (as we've done before in class).
```{r}
lm(formula = response ~ instruction,
   data = df.reasoning %>% 
     group_by(id, instruction) %>% 
     summarize(response = mean(response)) %>% 
     ungroup()) %>% 
  joint_tests()
```
The two routes yield the same result. Notice that for the `lm()` approach, I calculated the means for each participant in each condition first (using `group_by()` and `summarize()`).
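Just to make that aggregation step explicit, here is what the data that goes into `lm()` looks like (an added check):

```{r}
df.reasoning %>% 
  group_by(id, instruction) %>% 
  summarize(response = mean(response)) %>% 
  ungroup() %>% 
  head(10)
```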
### Repeated-measures ANOVA
Now let's take a look at whether `validity` and `plausibility` affected participants' responses in the reasoning task. These two factors were varied within participants. Again, we'll use the `aov_ez()` function, like so:
```{r}
aov_ez(id = "id",
       dv = "response",
       within = c("validity", "plausibility"),
       data = df.reasoning %>% 
         filter(instruction == "probabilistic"))
```
For the linear model route, we need `lmer()` since we have repeated observations from the same participants. The repeated-measures ANOVA corresponds to the random effects structure shown below:
```{r}
lmer(formula = response ~ 1 + validity * plausibility + (1 | id) +
       (1 | id:validity) + (1 | id:plausibility),
     data = df.reasoning %>% 
       filter(instruction == "probabilistic") %>% 
       group_by(id, validity, plausibility) %>% 
       summarize(response = mean(response))) %>% 
  joint_tests()
```
Again, we get a similar result using the `joint_tests()` function.
Note though that the results of the ANOVA route and the `lmer()` route weren't identical here (although they were very close). For more information as to why this happens, see [this post](https://stats.stackexchange.com/questions/117660/what-is-the-lme4lmer-equivalent-of-a-three-way-repeated-measures-anova).
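To see how this model partitions the random variation across participants and the participant-by-factor combinations, we can refit it and inspect the estimated variance components (a quick added check; `fit.rm` is just a name chosen here):

```{r}
fit.rm = lmer(formula = response ~ 1 + validity * plausibility + (1 | id) +
                (1 | id:validity) + (1 | id:plausibility),
              data = df.reasoning %>% 
                filter(instruction == "probabilistic") %>% 
                group_by(id, validity, plausibility) %>% 
                summarize(response = mean(response)) %>% 
                ungroup())

fit.rm %>% 
  VarCorr()
```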
### Mixed ANOVA
Now let's look at between- and within-subjects factors together. Let's compare the `aov_ez()` route
```{r}
aov_ez(id = "id",
       dv = "response",
       between = "instruction",
       within = c("validity", "plausibility"),
       data = df.reasoning)
```
with the `lmer()` route:
```{r}
lmer(formula = response ~ instruction * validity * plausibility + (1 | id) +
       (1 | id:validity) + (1 | id:plausibility),
     data = df.reasoning %>% 
       group_by(id, validity, plausibility, instruction) %>% 
       summarize(response = mean(response))) %>% 
  joint_tests()
```
Here, both routes yield the same results.
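Since both routes agree, we can keep working with the fitted `lmer()` model for follow-ups. As an illustration (picking `plausibility` arbitrarily here), this sketch shows how to get estimated marginal means and their pairwise comparison via the "emmeans" package:

```{r}
# refit the mixed model and store it (fit.mixed is just a name chosen here)
fit.mixed = lmer(formula = response ~ instruction * validity * plausibility + 
                   (1 | id) + (1 | id:validity) + (1 | id:plausibility),
                 data = df.reasoning %>% 
                   group_by(id, validity, plausibility, instruction) %>% 
                   summarize(response = mean(response)) %>% 
                   ungroup())

# estimated marginal means for plausibility, plus the pairwise contrast
fit.mixed %>% 
  emmeans(specs = pairwise ~ plausibility)
```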
## Additional resources
### Readings
- [Nested and crossed random effects in lme4](https://www.muscardinus.be/statistics/nested.html)
## Session info
Information about this R session, including which version of R was used and which packages were loaded.
```{r}
sessionInfo()
```