tweaks to Ch10 contrasts
friendly committed Nov 20, 2024
1 parent 496fbc4 commit 0d5dea3
Showing 4 changed files with 83 additions and 41 deletions.
10-mlm-review.qmd (27 changes: 16 additions & 11 deletions)
@@ -586,10 +586,12 @@ the following comparisons among the formulas Old, New, Major and Alps:
 (c) Major vs. Alps. The contrasts that do this are:
 \begin{align*}
-L_1 & = \frac12 (\mu_O + \mu_N) - \frac12 (\mu_M + \mu_A) & \rightarrow\: & \mathbf{c}_1 =
-  \frac12 \begin{pmatrix}
-    1 & 1 & -1 & -1
-  \end{pmatrix} \\
+L_1 & = \textstyle{\frac12} (\mu_O + \mu_N) -
+  \textstyle{\frac12} (\mu_M + \mu_A) & \rightarrow\: & \mathbf{c}_1 =
+  \textstyle{\frac12}
+  \begin{pmatrix}
+    1 & 1 & -1 & -1
+  \end{pmatrix} \\
 L_2 & = \mu_O - \mu_N & \rightarrow\: & \mathbf{c}_2 =
   \begin{pmatrix}
     1 & -1 & 0 & 0
@@ -622,18 +624,21 @@ C
 t(C) %*% C
 ```
-For a dataset, `data`, with this factor as `group`, you can set up the analyses to use these
-contrasts by assigning the matrix `C` to `contrasts()`. When the contrasts are changed,
+For the `dogfood` data, with `formula` as the group factor,
+you can set up the analyses to use these
+contrasts by assigning the matrix `C` to `contrasts()` for that factor in the dataset itself.
+When the contrasts are changed,
 it is necessary to refit the model. The estimated coefficients then become the estimated
-mean differences of the contrasts.
+mean differences for the contrasts.
 ```{r dogfood-contrasts}
 contrasts(dogfood$formula) <- C
 dogfood.mod <- lm(cbind(start, amount) ~ formula,
                   data=dogfood)
 coef(dogfood.mod)
 ```
+For example, Ours vs. Theirs estimated by `formulac1` takes 0.69 less time to start eating
+and eats 5.44 more on average.
 For multivariate tests, when all contrasts are pairwise orthogonal,
 the overall test of a factor with
@@ -668,15 +673,15 @@ Then, we can illustrate @eq-H-contrasts by extracting the 1 df $\mathbf{H}$ matrices
 from the results of `linearHypothesis`.
 <!-- this doesn't work -->
-```{r dogfood-Eqn, results='asis', eval=FALSE, echo=FALSE}
+```{r dogfood-Eqn, results='asis'}
 options(print.latexMatrix = list(display.labels=FALSE))
 SSP_H1 <- H1$SSPH |> round(digits=2)
 SSP_H2 <- H2$SSPH |> round(digits=2)
 SSP_H3 <- H3$SSPH |> round(digits=2)
 Eqn(latexMatrix(SSP_H), "=",
     latexMatrix(SSP_H1), "+",
     latexMatrix(SSP_H2), "+",
-    latexMatrix(SSP_H3))
+    latexMatrix(SSP_H3), quarto=TRUE)
 ```
 $$
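The chunks in this file rely on objects defined earlier in the chapter that do not appear in this diff: the contrast matrix `C`, the 1-df hypothesis results `H1`, `H2`, `H3`, and the overall `SSP_H`. A minimal sketch of how they could be set up with `car::linearHypothesis()` follows; the `heplots` location of the `dogfood` data, the contrast column names `c1`-`c3`, the factor level order, and the joint-hypothesis call for `SSP_H` are assumptions inferred from the coefficient name `formulac1` and the surrounding text, not code from the book.

```r
library(car)       # linearHypothesis() and its method for multivariate lm objects
library(heplots)   # assumed home of the dogfood data

data(dogfood, package = "heplots")

# Contrasts for (a) Ours vs. Theirs, (b) Old vs. New, (c) Major vs. Alps;
# column names c1, c2, c3 are assumed so coefficients print as formulac1, etc.
C <- cbind(c1 = c(1,  1, -1, -1) / 2,
           c2 = c(1, -1,  0,  0),
           c3 = c(0,  0,  1, -1))
rownames(C) <- c("Old", "New", "Major", "Alps")  # assumed level names
C <- C[levels(dogfood$formula), ]                # align rows with the factor's level order

contrasts(dogfood$formula) <- C
dogfood.mod <- lm(cbind(start, amount) ~ formula, data = dogfood)

# 1-df multivariate hypotheses, one per contrast
H1 <- linearHypothesis(dogfood.mod, "formulac1 = 0")   # Ours vs. Theirs
H2 <- linearHypothesis(dogfood.mod, "formulac2 = 0")   # Old vs. New
H3 <- linearHypothesis(dogfood.mod, "formulac3 = 0")   # Major vs. Alps

# Joint 3-df hypothesis for the formula factor; with pairwise-orthogonal
# contrasts its H matrix is the sum of the three 1-df H matrices.
SSP_H <- linearHypothesis(dogfood.mod,
           c("formulac1 = 0", "formulac2 = 0", "formulac3 = 0"))$SSPH
```

With this setup, `H1$SSPH`, `H2$SSPH`, and `H3$SSPH` are the matrices that the `dogfood-Eqn` chunk rounds and displays, and their sum should reproduce `SSP_H` because the three contrasts are pairwise orthogonal.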
docs/08-collinearity-ridge.html (5 changes: 4 additions & 1 deletion)
@@ -1026,7 +1026,7 @@ <h1 class="title"><span id="sec-collin" class="quarto-section-identifier"><span
 <p>There is a close connection with principal components regression mentioned in <a href="#sec-remedies" class="quarto-xref"><span>Section 8.3</span></a>. Ridge regression shrinks <em>all</em> dimensions in proportion to <span class="math inline">\(\text{df}_k(i)\)</span>, so the low variance dimensions are shrunk more. Principal components regression discards the low variance dimensions and leaves the high variance dimensions unchanged.</p>
 </section><section id="the-genridge-package" class="level3" data-number="8.4.2"><h3 data-number="8.4.2" class="anchored" data-anchor-id="the-genridge-package">
 <span class="header-section-number">8.4.2</span> The <code>genridge</code> package</h3>
-<p>Ridge regression and other shrinkage methods are available in several packages including <span style="color: brown;"><strong>MASS</strong></span> (the <code><a href="https://rdrr.io/pkg/MASS/man/lm.ridge.html">lm.ridge()</a></code> function), <span style="color: brown;"><strong>glmnet</strong></span> <span class="citation" data-cites="R-glmnet">(<a href="95-references.html#ref-R-glmnet" role="doc-biblioref">Friedman et al., 2023</a>)</span>, and <span style="color: brown;"><strong>penalized</strong></span> <span class="citation" data-cites="R-penalized">(<a href="95-references.html#ref-R-penalized" role="doc-biblioref"><strong>R-penalized?</strong></a>)</span>, but none of these provides insightful graphical displays. <code><a href="https://glmnet.stanford.edu/reference/glmnet.html">glmnet::glmnet()</a></code> also implements a method for multivariate responses with a `family=“mgaussian”.</p>
+<p>Ridge regression and other shrinkage methods are available in several packages including <span style="color: brown;"><strong>MASS</strong></span> (the <code><a href="https://rdrr.io/pkg/MASS/man/lm.ridge.html">lm.ridge()</a></code> function), <span style="color: brown;"><strong>glmnet</strong></span> <span class="citation" data-cites="R-glmnet">(<a href="95-references.html#ref-R-glmnet" role="doc-biblioref">Friedman et al., 2023</a>)</span>, and <span style="color: brown;"><strong>penalized</strong></span> <span class="citation" data-cites="R-penalized">(<a href="95-references.html#ref-R-penalized" role="doc-biblioref">Goeman et al., 2022</a>)</span>, but none of these provides insightful graphical displays. <code><a href="https://glmnet.stanford.edu/reference/glmnet.html">glmnet::glmnet()</a></code> also implements a method for multivariate responses with a `family=“mgaussian”.</p>
 <p>Here, I focus in the <span style="color: brown;"><strong>genridge</strong></span> package <span class="citation" data-cites="R-genridge">(<a href="95-references.html#ref-R-genridge" role="doc-biblioref">Friendly, 2024</a>)</span>, where the <code><a href="https://friendly.github.io/genridge/reference/ridge.html">ridge()</a></code> function is the workhorse and <code>pca.ridge()</code> transforms these results to PCA/SVD space. <code>vif.ridge()</code> calculates VIFs for class <code>"ridge"</code> objects and <code><a href="https://friendly.github.io/genridge/reference/precision.html">precision()</a></code> calculates precision and shrinkage measures.</p>
 <p>A variety of plotting functions is available for univariate, bivariate and 3D plots:</p>
 <ul>
@@ -1382,6 +1382,9 @@ <h1 class="title"><span id="sec-collin" class="quarto-section-identifier"><span
 <div id="ref-Gabriel:71" class="csl-entry" role="listitem">
 Gabriel, K. R. (1971). The biplot graphic display of matrices with application to principal components analysis. <em>Biometrics</em>, <em>58</em>(3), 453–467. <a href="https://doi.org/10.2307/2334381">https://doi.org/10.2307/2334381</a>
 </div>
+<div id="ref-R-penalized" class="csl-entry" role="listitem">
+Goeman, J., Meijer, R., Chaturvedi, N., &amp; Lueder, M. (2022). <em>Penalized: L1 (lasso and fused lasso) and L2 (ridge) penalized estimation in GLMs and in the cox model</em>. <a href="https://CRAN.R-project.org/package=penalized">https://CRAN.R-project.org/package=penalized</a>
+</div>
 <div id="ref-GowerHand:96" class="csl-entry" role="listitem">
 Gower, J. C., &amp; Hand, D. J. (1996). <em>Biplots</em>. Chapman &amp; Hall.
 </div>
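For the Section 8.4.2 passage above, a hedged sketch of the `genridge` workflow it describes; the `longley` example, the lambda grid, and the plotting calls are illustrative assumptions, not content of this commit.

```r
library(genridge)
library(car)   # vif() generic, which genridge extends for "ridge" objects

# Illustrative lambda grid and the classic longley data (both assumed here)
lambda <- c(0, 0.005, 0.01, 0.02, 0.04, 0.08)
lridge <- ridge(Employed ~ GNP + Unemployed + Armed.Forces +
                  Population + Year + GNP.deflator,
                data = longley, lambda = lambda)

traceplot(lridge)    # univariate ridge trace plot of coefficients vs. lambda
plot(lridge)         # bivariate (ellipse) view of coefficient shrinkage
vif(lridge)          # VIFs at each lambda, via the vif.ridge method
precision(lridge)    # precision and shrinkage measures
pca.ridge(lridge)    # the same results transformed to PCA/SVD space
```

Because `ridge()` fits the whole lambda grid at once, the trace and ellipse plots can show how each coefficient shrinks as lambda grows.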