
automated reporting of interactions or pairwise tests #370

Closed
hcuve opened this issue May 4, 2023 Discussed in #369 · 5 comments

hcuve commented May 4, 2023

Discussed in #369

Originally posted by hcuve May 3, 2023
Thanks for such an amazing suite of tools.

In many cases, when fitting regression models with categorical variables that have more than two levels, or with interactions,
it's desirable to probe the model results further with pairwise comparisons or interaction tests, for example using estimated marginal means.
Is there a way to get automated reporting for these? For example, I have mixed models fitted with lmer where I first compute the ANOVA on the model and then run easystats::report on the ANOVA results. However, if I probe pairwise comparisons or interactions (e.g. using emmeans), these don't work in easystats::report. Is there a way around this?

Many thanks!
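As an illustration, here is a minimal sketch of the workflow described above, assuming lme4, emmeans, and report are installed; the cake dataset and the model formula are stand-ins chosen for illustration, not the original code.

library(lme4)
library(emmeans)
library(report)

# A mixed model with a factor interaction: cake breakage angle by recipe and
# baking temperature, with a random intercept for replicate within recipe
m <- lmer(angle ~ recipe * temperature + (1 | recipe:replicate), data = cake)

# Per the description above, reporting the ANOVA of the model works
report(anova(m))

# Pairwise follow-up via emmeans is the step that report() does not handle
emm <- emmeans(m, pairwise ~ recipe)
# report(emm)  # not supported at the time of this issue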


rempsyc commented May 6, 2023

I don't know if this will solve your exact needs, but modelbased has an estimate_contrasts() function that compares means and accepts a variety of models, and I've just submitted a PR to make its output compatible with report. See #372.


rempsyc commented Jul 2, 2023

@hcuve is that what you are after?

library(modelbased)
library(report)

# Using current development version
packageVersion("report")
#> [1] ‘0.5.7.4’

model <- lm(Sepal.Width ~ Species, data = iris)

contr <- estimate_contrasts(model)
#> No variable was specified for contrast estimation. Selecting `contrast = "Species"`.

report(contr)
#> The marginal contrasts analysis suggests the following. The difference between
#> setosa and versicolor is positive and statistically significant (difference =
#> 0.66, 95% CI [ 0.49, 0.82], t(147) = 9.69, p < .001). The difference between
#> setosa and virginica is positive and statistically significant (difference =
#> 0.45, 95% CI [ 0.29, 0.62], t(147) = 6.68, p < .001). The difference between
#> versicolor and virginica is negative and statistically significant (difference
#> = -0.20, 95% CI [-0.37, -0.04], t(147) = -3.00, p = 0.003)

report_table(contr)
#> Marginal Contrasts Analysis
#> 
#> Level1     |     Level2 | Difference |         95% CI |   SE | t(147) |      p
#> ------------------------------------------------------------------------------
#> setosa     | versicolor |       0.66 | [ 0.49,  0.82] | 0.07 |   9.69 | < .001
#> setosa     |  virginica |       0.45 | [ 0.29,  0.62] | 0.07 |   6.68 | < .001
#> versicolor |  virginica |      -0.20 | [-0.37, -0.04] | 0.07 |  -3.00 | 0.003 
#> 
#> Marginal contrasts estimated at Species
#> p-value adjustment method: Holm (1979)

Created on 2023-07-02 with reprex v2.0.2

If not, can you provide a reprex?


hcuve commented Jul 3, 2023

@rempsyc Thanks for getting back to me; yes, this would be exactly it. I am assuming one can specify the p-value adjustment method, e.g. set it to Bonferroni?
In any case, this seems very useful.


rempsyc commented Jul 3, 2023

Yes, from the documentation (https://easystats.github.io/modelbased/reference/estimate_contrasts.html):

p_adjust: The p-values adjustment method for frequentist multiple comparisons. Can be one of "holm" (default), "tukey", "hochberg", "hommel", "bonferroni", "BH", "BY", "fdr" or "none". See the p-value adjustment section in the emmeans::test documentation.
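For example, a minimal sketch continuing the iris model from the reprex above, switching the adjustment to Bonferroni (only the p_adjust argument changes):

library(modelbased)
library(report)

model <- lm(Sepal.Width ~ Species, data = iris)

# Same contrasts as above, but with Bonferroni-adjusted p-values
contr <- estimate_contrasts(model, contrast = "Species", p_adjust = "bonferroni")
report_table(contr)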

FYI, we are working on implementing standardized effect sizes for estimate_contrasts too, in addition to raw mean differences, in easystats/modelbased#227


hcuve commented Jul 3, 2023

Amazing, I will try this later on, but feel free to close the issue. Many thanks!

rempsyc closed this as completed on Jul 3, 2023