Reporting of model comparison (from performance::compare_performance) #118
-
If you are splitting explanatory power from predictive power, shouldn't the AIC/BIC be in the predictive-power part? (Also, unlike R2, adj. R2 is also a predictive measure 🤷♂️) I also think you should add the F(df1, df2) to the p-value. (And maybe even …)
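For nested models like the ones compared further down, base R's `anova()` already carries the pieces this suggestion would need: the F statistic is reported together with both of its degrees of freedom, right next to the p-value. A minimal sketch using the iris models from the reprex below (an illustration of where the numbers come from, not of how `report` would format them):

``` r
# Two nested linear models on the iris data (same as m3 and m1 below)
m3 <- lm(Sepal.Length ~ Petal.Length, data = iris)
m1 <- lm(Sepal.Length ~ Petal.Length * Species, data = iris)

# anova() reports the F statistic with its numerator df (difference in
# parameters, here 4) and denominator df (residual df, here 144),
# i.e. the F(df1, df2) to pair with Pr(>F)
anova(m3, m1)
```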
-
Note to myself: drop the R2s from the report of model_comparison, because they can be misleading. And the R2s are reported anyway when each model itself is reported.
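One way the R2s can mislead in a comparison table: R2 can only go up as predictors are added, so the fullest model always looks best on R2 regardless of parsimony. A quick illustration with the same iris models (the values round to the 0.76 / 0.84 / 0.84 seen in the reprex below):

``` r
# R2 never decreases when predictors are added, so it systematically
# favours the most complex model in a comparison
m3 <- lm(Sepal.Length ~ Petal.Length, data = iris)
m2 <- lm(Sepal.Length ~ Petal.Length + Species, data = iris)
m1 <- lm(Sepal.Length ~ Petal.Length * Species, data = iris)

sapply(list(m3 = m3, m2 = m2, m1 = m1), function(m) summary(m)$r.squared)
```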
-
I asked this question related to …
-
``` r
library(report)
#> report is in alpha - help us improve by reporting bugs on github.com/easystats/report/issues

m1 <- lm(Sepal.Length ~ Petal.Length * Species, data = iris)
m2 <- lm(Sepal.Length ~ Petal.Length + Species, data = iris)
m3 <- lm(Sepal.Length ~ Petal.Length, data = iris)

report(performance::compare_performance(m1, m2, m3))
#> We compared three linear models; m1 (R2 = 0.84, adj. R2 = 0.83, AIC = 106.77, BIC = 127.84, RMSE = 0.33, Sigma = 0.34), m2 (R2 = 0.84, adj. R2 = 0.83, AIC = 106.23, BIC = 121.29, RMSE = 0.33, Sigma = 0.34) and m3 (R2 = 0.76, adj. R2 = 0.76, AIC = 160.04, BIC = 169.07, RMSE = 0.40, Sigma = 0.41).
```

Created on 2021-01-16 by the reprex package (v0.3.0)

I simplified this now, as the tricky bits will go into …
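For reference, the "BIC-based BF" mentioned at the end of the thread is presumably the Wagenmakers (2007) approximation, BF ≈ exp(ΔBIC / 2), which can be computed directly from the BICs above. A minimal sketch (the helper name here is made up for illustration):

``` r
# BIC approximation to the Bayes factor (Wagenmakers, 2007):
# evidence for model1 over model2 is roughly exp((BIC2 - BIC1) / 2)
bic_bf <- function(model1, model2) {
  exp((BIC(model2) - BIC(model1)) / 2)
}

# Evidence for m2 over m3, using the models from the reprex above
bic_bf(m2, m3)
```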
-
Reporting model-comparison indices textually has always been a pain in the a***, but I got an initial draft (only tested for LMs for now) that could be useful (at least it is for me, as I can copy-paste the list of indices).
Created on 2020-12-22 by the reprex package (v0.3.0)
… `performance_lrt` and the BIC-based BF.
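As a point of reference for the likelihood-ratio side of this, the three nested iris models can be compared with `lmtest::lrtest()`; this is a stand-in chosen here for illustration, not the easystats `performance_lrt` implementation:

``` r
library(lmtest)  # lrtest() runs sequential likelihood-ratio tests

# Chisq and Pr(>Chisq) for each step from the simplest model (m3)
# to the full interaction model (m1)
lrtest(m3, m2, m1)
```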