diff --git a/_quarto.yml b/_quarto.yml index 5d5adb7..8c4fa37 100644 --- a/_quarto.yml +++ b/_quarto.yml @@ -171,8 +171,6 @@ format: # see: https://github.com/bgreenwell/quarto-crc for quarto and CRC press # https://github.com/yihui/bookdown-crc bookdown, CRC Press - pdf-engine-opt: - interaction: nonstopmode pdf: documentclass: krantz classoption: [10pt, krantz2] @@ -184,7 +182,8 @@ format: geometry: - top=20mm - left=25mm - code-block-bg: 'E8FFFF' #'#f1f1f1' + code-block-bg: "dce8e7" # or #e6f2ee, not 'E8FFFF' #'#f1f1f1' + interaction: nonstopmode diff --git a/bib/pkgs.bib b/bib/pkgs.bib index 554ce23..fa057cd 100644 --- a/bib/pkgs.bib +++ b/bib/pkgs.bib @@ -186,6 +186,14 @@ @Manual{R-GGally url = {https://ggobi.github.io/ggally/}, } +@Manual{R-gganimate, + title = {gganimate: A Grammar of Animated Graphics}, + author = {Thomas Lin Pedersen and David Robinson}, + year = {2024}, + note = {R package version 1.0.9}, + url = {https://gganimate.com}, +} + @Manual{R-ggbiplot, title = {ggbiplot: A Grammar of Graphics Implementation of Biplots}, author = {Vincent Q. Vu and Michael Friendly}, @@ -301,7 +309,7 @@ @Manual{R-langevitour } @Manual{R-lattice, - title = {lattice: Trellis Graphics for {R}}, + title = {lattice: Trellis Graphics for R}, author = {Deepayan Sarkar}, year = {2024}, note = {R package version 0.22-6}, diff --git a/bib/pkgs.txt b/bib/pkgs.txt index 630b65c..c7e0bfd 100644 --- a/bib/pkgs.txt +++ b/bib/pkgs.txt @@ -76,6 +76,7 @@ car carData dplyr forcats +gganimate ggplot2 knitr matlib diff --git a/docs/03-multivariate_plots.html b/docs/03-multivariate_plots.html index e8bb784..23dbccd 100644 --- a/docs/03-multivariate_plots.html +++ b/docs/03-multivariate_plots.html @@ -1863,7 +1863,7 @@
knitr::include_graphics("images/diabetes-pca-tsne.gif")
You can see that the PCA configuration is morphed into that for t-SNE largely by a 90\(^\circ\) clockwise rotation, so that dimension 1 in PCA becomes dimension 2 in t-SNE. This is not unexpected, because PCA finds the dimensions in order of maximum variance, whereas t-SNE only tries to match the distances in the data to those in the solution. To interpret the result from t-SNE you are free to interchange the axes, or indeed to rotate the solution arbitrarily.
+It is more interesting that the sizes and shapes of the group clusters change from one solution to the other. The normal group is most compact in the PCA solution, but becomes the least compact in t-SNE.
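To give a sense of how such a morph between configurations can be animated, here is a minimal sketch using gganimate (added to the book's package list in this change) together with Rtsne. It uses iris as a stand-in for the diabetes data, and the object and column names are illustrative rather than the book's.
library(ggplot2)
library(gganimate)
library(Rtsne)
set.seed(42)
dat <- iris[!duplicated(iris[, 1:4]), ]   # Rtsne requires unique rows
X   <- scale(as.matrix(dat[, 1:4]))
id  <- seq_len(nrow(dat))
pca  <- prcomp(X)$x[, 1:2]                # PCA configuration
tsne <- Rtsne(X)$Y                        # t-SNE configuration
confs <- rbind(
  data.frame(id, dim1 = pca[, 1],  dim2 = pca[, 2],  group = dat$Species, method = "PCA"),
  data.frame(id, dim1 = tsne[, 1], dim2 = tsne[, 2], group = dat$Species, method = "t-SNE"))
anim <- ggplot(confs, aes(dim1, dim2, color = group, group = id)) +
  geom_point(size = 2) +
  transition_states(method, transition_length = 2, state_length = 1) +
  labs(title = "{closest_state}")
# animate(anim)   # renders the morphing animation
Here transition_states() tweens each point (matched by the group = id aesthetic) between its PCA and t-SNE coordinates, which is what produces the morphing effect seen in the GIF.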
In many multivariate data displays, such as scatterplot matrices, parallel coordinate plots and others reviewed in Chapter 3, the order of different variables might seem arbitrary. They might appear in alphabetic order, or more often in the order they appear in your dataset, for example when you use pairs(mydata)
. Yet, the principle of effect ordering (Friendly & Kwan (2003)) for variables says you should try to arrange the variables so that adjacent ones are as similar as possible.6
For example, the mtcars
dataset contains data on 32 automobiles from the 1974 U.S. magazine Motor Trend and consists of fuel consumption (mpg
) and 10 aspects of automobile design (cyl
: number of cylinders; hp
: horsepower; wt
: weight) and performance (qsec
: time to drive a quarter-mile). What can we see from a simple corrplot()
of their correlations?
data(mtcars)
+data(mtcars)
library(corrplot)
R <- cor(mtcars)
corrplot(R,
@@ -1760,7 +1760,7 @@
In this display you can scan the rows and columns to “look up” the sign and approximate magnitude of a given correlation; for example, the correlation between mpg
and cyl
appears to be about -0.9, while that between mpg
and gear
is about 0.5. Of course, you could print the correlation matrix to find the actual values (-0.86 and 0.48 respectively):
-print(floor(100*R))
+print(floor(100*R))
#> mpg cyl disp hp drat wt qsec vs am gear carb
#> mpg 100 -86 -85 -78 68 -87 41 66 59 48 -56
#> cyl -86 100 90 83 -70 78 -60 -82 -53 -50 52
@@ -1786,7 +1786,7 @@ TODO: Make a diagram of this
For the mtcars
data the biplot in Figure 4.28 accounts for 84% of the total variance so a 2D representation is fairly good. The plot shows the variables as widely dispersed. There is a collection at the left of positively correlated variables and another positively correlated set at the right.
-mtcars.pca <- prcomp(mtcars, scale. = TRUE)
+mtcars.pca <- prcomp(mtcars, scale. = TRUE)
ggbiplot(mtcars.pca,
circle = TRUE,
point.size = 2.5,
@@ -1807,7 +1807,7 @@ In corrplot()
principal component variable ordering is implemented using the order = "AOE"
option. There are a variety of other methods based on hierarchical cluster analysis described in the package vignette.
Figure 4.29 shows the result of ordering the variables by this method. A nice feature of corrplot()
is the ability to manually highlight blocks of variables that have a similar pattern of signs by outlining them with rectangles. From the biplot, the two main clusters of positively correlated variables seemed clear, and are outlined in the plot using corrplot::corrRect()
. What became clear in the corrplot is that qsec
, the time to drive a quarter-mile from a dead start, didn’t quite fit this pattern, so I highlighted it separately.
-corrplot(R,
+corrplot(R,
method = 'ellipse',
order = "AOE",
title = "PCA variable order",
@@ -1872,7 +1872,7 @@ TODO: Show the necessary parts, including the screeplot.
An image can be imported using imager::load.image()
which creates a "cimg"
object, a 4-dimensional array with dimensions named x,y,z,c
. x
and y
are the usual spatial dimensions, z
is a depth dimension (which would correspond to time in a movie), and c
is a color dimension containing R, G, B values.
-library(imager)
+library(imager)
img <- imager::load.image(here::here("images", "MonaLisa-BW.jpg"))
dim(img)
#> [1] 640 954 1 1
@@ -1880,7 +1880,7 @@
An as.data.frame()
method converts this to a data frame with x
and y
coordinates. Each x-y pair is a location in the 640 by 954 pixel grid, and the value
is a grayscale value ranging from zero to one.
-img_df_long <- as.data.frame(img)
+img_df_long <- as.data.frame(img)
head(img_df_long)
#> x y value
#> 1 1 1 0.431
@@ -1892,7 +1892,7 @@
However, to do a PCA we will need a matrix of data in wide format containing the grayscale pixel values. We can do this using tidyr::pivot_wider()
, giving a result with 640 rows and 954 columns.
-img_df <- pivot_wider(img_df_long,
+img_df <- pivot_wider(img_df_long,
names_from = y,
values_from = value) |>
select(-x)
@@ -1901,12 +1901,12 @@
Mona’s PCA is produced from this img_df
with prcomp()
:
-img_pca <- img_df |>
+img_pca <- img_df |>
prcomp(scale = TRUE, center = TRUE)
With 954 columns, the PCA comprises 954 eigenvalue/eigenvector pairs. However, the rank of a matrix is at most the smaller of the number of rows and columns, so only 640 eigenvalues can be non-zero. Printing the first 10 shows that the first three dimensions account for 46% of the variance and we only get to 63% with 10 components.
-img_pca |>
+img_pca |>
broom::tidy(matrix = "eigenvalues") |> head(10)
#> # A tibble: 10 × 4
#> PC std.dev percent cumulative
@@ -1924,7 +1924,7 @@
Figure 4.31 shows a screeplot of proportions of variance. Because there are so many components and most of the information is concentrated in the largest dimensions, I’ve used a \(\log_{10}()\) scale on the horizontal axis. Beyond 10 or so dimensions, the variance of additional components looks quite tiny.
-ggscreeplot(img_pca) +
+ggscreeplot(img_pca) +
scale_x_log10()
@@ -1940,7 +1940,7 @@ Then, if \(\mathbf{M}\) is the \(640 \times 954\) matrix of pixel values, a best approximation \(\widehat{\mathbf{M}}_k\) using \(k\) dimensions can be obtained as \(\widehat{\mathbf{M}}_k = \mathbf{X}_k\;\mathbf{V}_k^\mathsf{T}\) where \(\mathbf{X}_k\) are the principal component scores and \(\mathbf{V}_k\) are the eigenvectors corresponding to the \(k\) largest eigenvalues. The function approx_pca()
does this, and also undoes the scaling and centering carried out in PCA.
TODO: Also, separate approximation from the pivot_longer code…
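For reference, the core matrix step that approx_pca() wraps can be sketched directly from the prcomp() object (a minimal illustration only, assuming img_pca from the code above):
k <- 20
M_hat <- img_pca$x[, 1:k] %*% t(img_pca$rotation[, 1:k])   # X_k V_k'
M_hat <- sweep(M_hat, 2, img_pca$scale,  "*")              # undo scale = TRUE
M_hat <- sweep(M_hat, 2, img_pca$center, "+")              # undo center = TRUE
dim(M_hat)   # 640 x 954, the same shape as img_df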
-Code
approx_pca <- function(n_comp = 20, pca_object = img_pca){
+Code
approx_pca <- function(n_comp = 20, pca_object = img_pca){
## Multiply the matrix of rotated data (component scores) by the transpose of
## the matrix of eigenvectors (i.e. the component loadings) to get back to a
## matrix of original data values
@@ -1978,7 +1978,7 @@
Finally, the recovered images, using 2, 3, 4, 5, 10, 15, 20, 50, and 100 principal components, can be plotted using ggplot. In the code below, the approx_pca()
function is run for each of the 9 values specified by n_pcs
giving a data frame recovered_imgs
containing all reconstructed images, with variables x
, y
and value
(the greyscale pixel value).
-n_pcs <- c(2:5, 10, 15, 20, 50, 100)
+n_pcs <- c(2:5, 10, 15, 20, 50, 100)
names(n_pcs) <- paste("First", n_pcs, "Components", sep = "_")
recovered_imgs <- map_dfr(n_pcs,
@@ -1989,7 +1989,7 @@
In ggplot()
, each is plotted using geom_raster()
, using value
as the fill color. A quirk of images imported to R is that the origin is taken as the upper-left corner, so the Y axis scale needs to be reversed. The 9 images are then plotted together using facet_wrap()
.
-p <- ggplot(data = recovered_imgs,
+p <- ggplot(data = recovered_imgs,
mapping = aes(x = x, y = y, fill = value))
p_out <- p + geom_raster() +
scale_y_reverse() +
@@ -2021,7 +2021,7 @@ The data ellipse (Section 3.2), or ellipsoid in more than 2D, is fundamental in regression. But also, as Pearson showed, it is key to understanding principal components analysis, where the principal component directions are simply the axes of the ellipsoid of the data. As such, observations that are unusual in data space may not stand out in univariate views of the variables, but will stand out in principal component space, usually on the smallest dimension.
As an illustration, I created a dataset of \(n = 100\) observations with a linear relation, \(y = x + \mathcal{N}(0, 1)\) and then added two discrepant points at (1.5, -1.5), (-1.5, 1.5).
-set.seed(123345)
+
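A minimal sketch of that construction (the seed, the distribution assumed for x, and the object names here are illustrative, not the book's code):
set.seed(1234)            # arbitrary seed for this sketch
x <- rnorm(100)
y <- x + rnorm(100)       # y = x + N(0, 1)
sim <- rbind(data.frame(x, y),
             data.frame(x = c(1.5, -1.5), y = c(-1.5, 1.5)))   # two discrepant points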
diff --git a/docs/06-linear_models-plots.html b/docs/06-linear_models-plots.html
index ae8625f..9d9b354 100644
--- a/docs/06-linear_models-plots.html
+++ b/docs/06-linear_models-plots.html
@@ -1704,8 +1704,6 @@ Figure 6.17. Income is represented as log10(income)
in the model prestige.mod3
, and it is also easier to understand the interaction by plotting income on a log scale, using the axes
argument to specify a transformation of the \(x\) axis. I use 68% confidence bands here to make the differences among type more apparent.
diff --git a/docs/07-lin-mod-topics.html b/docs/07-lin-mod-topics.html
index 948ab29..f763522 100644
--- a/docs/07-lin-mod-topics.html
+++ b/docs/07-lin-mod-topics.html
@@ -353,7 +353,7 @@
+library(forcats)
+library(gganimate)
-
+
7.1 Ellipsoids in data space and \(\mathbf{\beta}\) space
It is most common to look at data and fitted models in “data space,” where axes correspond to variables, points represent observations, and fitted models are plotted as lines (or planes) in this space. As we’ve suggested, data ellipsoids provide informative summaries of relationships in data space. For linear models, particularly regression models with quantitative predictors, there is another space—“\(\mathbf{\beta}\) space”—that provides deeper views of models and the relationships among them. This discussion extends Friendly et al. (2013), Sec. 4.6.
In \(\mathbf{\beta}\) space, the axes pertain to coefficients, for example \((\beta_0, \beta_1)\) in a simple linear regression. Points in this space are models (true, hypothesized, fitted) whose coordinates represent values of these parameters. For example, one point \(\widehat{\mathbf{\beta}}_{\text{OLS}} = (\hat{\beta}_0, \hat{\beta}_1)\) represents the least squares estimate; other points, \(\widehat{\mathbf{\beta}}_{\text{WLS}}\) and \(\widehat{\mathbf{\beta}}_{\text{ML}}\) would give weighted least squares and maximum likelihood estimates, and the line \(\beta_1 = 0\) represents the null hypothesis that the slope is zero.
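As a concrete sketch of a point and a region in \(\mathbf{\beta}\) space, car::confidenceEllipse() draws the joint confidence ellipse for a pair of coefficients; the Duncan data and model here are stand-ins, not the book's example.
library(car)    # also provides the Duncan data via carData
m <- lm(prestige ~ income, data = Duncan)
# the fitted model is the point (b0, b1); the ellipse is its joint 95% confidence region
confidenceEllipse(m, which.coef = c(1, 2),
                  xlab = expression(beta[0]), ylab = expression(beta[1]))
The horizontal line \(\beta_1 = 0\) mentioned above could be added with abline(h = 0); for these data it falls far outside the ellipse, reflecting a strongly significant slope.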
@@ -665,7 +666,7 @@
color = "Error on Y?",
shape = "Error on Y?",
linetype = "Error on Y?") +
- legend_inside(c(0.2, 0.9))
+ legend_inside(c(0.25, 0.8))
p2 <- ggplot(data=model_stats,
aes(x = errX, y = sigma,
@@ -677,8 +678,7 @@
y = "Model residual standard error",
color = "Error on Y?",
shape = "Error on Y?",
- linetype = "Error on Y?") +
- legend_inside(c(0.8, 0.9))
+ linetype = "Error on Y?")
p1 + p2
@@ -725,7 +725,7 @@
Packages used here:
-9 packages used here: car, carData, dplyr, forcats, ggplot2, knitr, matlib, patchwork, tidyr
+10 packages used here: car, carData, dplyr, forcats, gganimate, ggplot2, knitr, matlib, patchwork, tidyr
diff --git a/docs/08-collinearity-ridge.html b/docs/08-collinearity-ridge.html
index ad45e41..a754736 100644
--- a/docs/08-collinearity-ridge.html
+++ b/docs/08-collinearity-ridge.html
@@ -397,28 +397,29 @@
8.1 What is collinearity?
-The chapter quote above is not untypical of researchers who have read standard treatments of linear models (eg.: ???) and yet are still confused about what collinearity is, how to find its sources and how to correct them. In Friendly & Kwan (2009), we liken this problem to that of the reader of Martin Hansford’s successful series of books, Where’s Waldo. These consist of a series of full-page illustrations of hundreds of people and things and a few Waldos— a character wearing a red and white striped shirt and hat, glasses, and carrying a walking stick or other paraphernalia. Waldo was never disguised, yet the complex arrangement of misleading visual cues in the pictures made him very hard to find. Collinearity diagnostics often provide a similar puzzle.
+The chapter quote above is not untypical of researchers who have read standard treatments of linear models (e.g., Graybill (1961); Hocking (2013)) and yet are still confused about what collinearity is, how to find its sources, and how to correct them. In Friendly & Kwan (2009), we liken this problem to that of the reader of Martin Handford’s successful series of books, Where’s Waldo. These consist of a series of full-page illustrations of hundreds of people and things and a few Waldos — a character wearing a red and white striped shirt and hat, glasses, and carrying a walking stick or other paraphernalia. Waldo was never disguised, yet the complex arrangement of misleading visual cues in the pictures made him very hard to find. Collinearity diagnostics often provide a similar puzzle.
Recall the standard classical linear model for a response variable \(y\) with a collection of predictors in \(\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_p)\)
\[\begin{aligned}
-\mathbf{y} & =& \beta_0 + \beta_1 \mathbf{x}_1 + \beta_2 \mathbf{x}_2 + \cdots + \beta_p \mathbf{x}_p + \mathbf{\epsilon} \\
- & = & \mathbf{X} \mathbf{\beta} + \mathbf{\epsilon} \; ,
-\end{aligned}\] for which the ordinary least squares solution is:
+\mathbf{y} & = \beta_0 + \beta_1 \mathbf{x}_1 + \beta_2 \mathbf{x}_2 + \cdots + \beta_p \mathbf{x}_p + \mathbf{\epsilon} \\
+ & = \mathbf{X} \mathbf{\beta} + \mathbf{\epsilon} \; ,
+\end{aligned}\]
+for which the ordinary least squares solution is:
\[
\widehat{\mathbf{b}} = (\mathbf{X}^\mathsf{T} \mathbf{X})^{-1} \; \mathbf{X}^\mathsf{T} \mathbf{y} \; ,
-\] with sampling variances and covariances \(\text{Var} (\widehat{\mathbf{b}}) = \sigma^2 \times (\mathbf{X}^\mathsf{T} \mathbf{X})^{-1}\) and \(\sigma^2\) is the variance of the residuals \(\mathbf{\epsilon}\), estimated by the mean squared error (MSE).
+\] with sampling variances and covariances \(\text{Var} (\widehat{\mathbf{b}}) = \sigma_\epsilon^2 \times (\mathbf{X}^\mathsf{T} \mathbf{X})^{-1}\) and \(\sigma_\epsilon^2\) is the variance of the residuals \(\mathbf{\epsilon}\), estimated by the mean squared error (MSE).
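A brief numerical check of these formulas (a sketch using the base mtcars data, not an example from the text):
fit <- lm(mpg ~ wt + hp, data = mtcars)
X <- model.matrix(fit)                            # includes the intercept column
y <- mtcars$mpg
b <- solve(t(X) %*% X, t(X) %*% y)                # (X'X)^{-1} X'y
cbind(b, coef(fit))                               # identical estimates
s2 <- sum(residuals(fit)^2) / df.residual(fit)    # MSE estimate of the error variance
all.equal(s2 * solve(t(X) %*% X), vcov(fit))      # TRUE: matches Var(b-hat)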
In the limiting case, when one \(x_i\) is perfectly predictable from the other \(x\)s, i.e., \(R^2 (x_i | \text{other }x) = 1\),
- there is no unique solution for the regression coefficients \(\mathbf{b} = (\mathbf{X}^\mathsf{T} \mathbf{X})^{-1} \mathbf{X}^\mathsf{T} \mathbf{y}\);
- the standard errors \(s (b_i)\) of the estimated coefficients are infinite and t statistics \(t_i = b_i / s (b_i)\) are 0.
-This extreme case reflects a situation when one or more predictors are effectively redundant, for example when you include two variables \(x\) and \(y\) and their sum \(z = x + y\) in a model, or use ipsatized scores that sum to a constant. More generally, collinearity refers to the case when there are very high multiple correlations among the predictors, such as \(R^2 (x_i | \text{other }x) \ge 0.9\). Note that you can’t tell simply by looking at the simple correlations. A large correlation \(r_{ij}\) is sufficient for collinearity, but not necessary — you can have variables \(x_1, x_2, x_3\) for which the pairwise correlation are low, but the multiple correlation is high.
+This extreme case reflects a situation when one or more predictors are effectively redundant, for example when you include two variables \(x\) and \(y\) and their sum \(z = x + y\) in a model, or use ipsatized scores that sum to a constant (such as proportions of a total). More generally, collinearity refers to the case when there are very high multiple correlations among the predictors, such as \(R^2 (x_i | \text{other }x) \ge 0.9\). Note that you can’t tell simply by looking at the simple correlations. A large correlation \(r_{ij}\) is sufficient for collinearity, but not necessary — you can have variables \(x_1, x_2, x_3\) for which the pairwise correlations are low, but the multiple correlation is high.
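The following constructed sketch (not from the text) illustrates both points: an exactly redundant predictor is aliased by lm(), and a nearly redundant one has only modest pairwise correlations with its components yet a multiple correlation of essentially 1.
set.seed(1)
x1 <- rnorm(200); x2 <- rnorm(200); x3 <- rnorm(200)
z  <- x1 + x2                                   # exactly redundant
y  <- x1 - x2 + rnorm(200)
coef(lm(y ~ x1 + x2 + z))                       # z is aliased: its coefficient is NA
x4 <- x1 + x2 + x3 + rnorm(200, sd = 0.01)      # nearly redundant
round(cor(cbind(x1, x2, x3, x4)), 2)            # pairwise r(x4, .) only about 0.58
summary(lm(x4 ~ x1 + x2 + x3))$r.squared        # essentially 1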
The consequences are:
- The estimated coefficients have large standard errors, \(s(\hat{b_j})\). They are multiplied by the square root of the variance inflation factor, \(\sqrt{\text{VIF}}\), discussed below.
- This deflates the \(t\)-statistics, \(t = \hat{b_j} / s(\hat{b_j})\) by the same factor.
- Thus you may find a situation where an overall model is highly significant (large \(F\)-statistic), while no (or few) of the individual predictors are. This is a puzzlement!
- Beyond this, the least squares solution may have poor numerical accuracy (Longley, 1967), because the solution depends on the determinant \(|\,\mathbf{X}^\mathsf{T} \mathbf{X}\,|\), which approaches 0 as multiple correlations increase.
-- As well, recall that the coefficients \(\hat{b}\) are partial coefficients, meaning the estimated change \(\Delta y\) in \(y\) when \(x\) changes by one unit \(\Delta x\), but holding all other variables constant. Then, the model may be trying to estimate something that does not occur in the data.
+- As well, recall that the coefficients \(\hat{b}\) are partial coefficients, meaning that they estimate the change \(\Delta y\) in \(y\) when \(x\) changes by one unit \(\Delta x\), holding all other variables constant. Then, the model may be trying to estimate something that does not occur in the data.
8.1.1 Visualizing collinearity
@@ -532,7 +533,7 @@
-Recall (Section #sec-data-beta) that the confidence ellipse for \((\beta_1, \beta_2)\) is just a 90 degree rotation (and rescaling) of the data ellipse for \((x_1, x_2)\): it is wide (more variance) in any direction where the data ellipse is narrow.
+Recall (#sec-betaspace) that the confidence ellipse for \((\beta_1, \beta_2)\) is just a 90 degree rotation (and rescaling) of the data ellipse for \((x_1, x_2)\): it is wide (more variance) in any direction where the data ellipse is narrow.
The shadows of the confidence ellipses on the coordinate axes in Figure 8.2 represent the standard errors of the coefficients, and get larger with increasing \(\rho\). This is the effect of variance inflation, described in the following section.
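A small simulation sketch (assumed setup, not the book's code) shows the variance-inflation effect directly: as the correlation \(\rho\) between two predictors grows, so does the standard error of each coefficient.
library(MASS)
se_b1 <- function(rho, n = 200) {
  X <- mvrnorm(n, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), 2, 2))
  dat <- data.frame(y = X[, 1] + X[, 2] + rnorm(n), x1 = X[, 1], x2 = X[, 2])
  coef(summary(lm(y ~ x1 + x2, data = dat)))["x1", "Std. Error"]
}
set.seed(42)
round(sapply(c(0, 0.5, 0.9, 0.99), se_b1), 3)   # SE(b1) grows with rho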
8.2 Measuring collinearity
@@ -624,7 +625,7 @@ In the linear regression model with standardized predictors, the covariance matrix of the estimated intercept-excluding parameter vector \(\mathbf{b}^\star\) has the simpler form, \[
\mathcal{V} (\mathbf{b}^\star) = \frac{\sigma^2}{n-1} \mathbf{R}^{-1}_{X} \; .
\] where \(\mathbf{R}_{X}\) is the correlation matrix among the predictors. It can then be seen that the VIF\(_j\) are just the diagonal entries of \(\mathbf{R}^{-1}_{X}\).
-More generally, the matrix \(\mathbf{R}^{-1}_{X} = (r^{ij})\), when standardized to a correlation matrix as \(-r^{ij} / \sqrt{r^{ii} \; r^{jj}}\) gives the matrix of all partial correlations, \(r_{ij \,|\, \textrm{others}}\). }
+More generally, the matrix \(\mathbf{R}^{-1}_{X} = (r^{ij})\), when standardized to a correlation matrix as \(-r^{ij} / \sqrt{r^{ii} \; r^{jj}}\) gives the matrix of all partial correlations, \(r_{ij \,|\, \textrm{others}}\).
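A quick check of that claim (a sketch, with mtcars as a convenient illustration): the diagonal of \(\mathbf{R}^{-1}_{X}\) reproduces the VIFs reported by car::vif().
library(car)
fit <- lm(mpg ~ cyl + disp + hp + wt, data = mtcars)
R_X <- cor(mtcars[, c("cyl", "disp", "hp", "wt")])
diag(solve(R_X))     # VIF_j from the inverse correlation matrix of the predictors
vif(fit)             # the same values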
@@ -940,6 +941,12 @@
Gower, J. C., & Hand, D. J. (1996). Biplots. Chapman & Hall.
+
+Graybill, F. A. (1961). An introduction to linear statistical models. McGraw-Hill.
+
+
+Hocking, R. R. (2013). Methods and applications of linear models: Regression and the analysis of variance. Wiley. https://books.google.ca/books?id=iq2J-1iS6HcC
+
Kwan, E., Lu, I. R. R., & Friendly, M. (2009). Tableplot: A new tool for assessing precise predictions. Zeitschrift für Psychologie / Journal of Psychology, 217(1), 38–48. https://doi.org/10.1027/0044-3409.217.1.38
diff --git a/docs/91-colophon.html b/docs/91-colophon.html
index 215ad1e..29ee451 100644
--- a/docs/91-colophon.html
+++ b/docs/91-colophon.html
@@ -458,210 +458,216 @@ Colophon
CRAN
+gganimate
+1.0.9
+2024-02-27
+CRAN
+
+
ggbiplot
0.6.3
2024-10-12
local
-
+
ggdensity
1.0.0
2023-02-09
CRAN
-
+
ggeffects
1.7.1
2024-09-01
CRAN
-
+
ggpcp
0.2.0
2022-11-28
CRAN
-
+
ggplot2
3.5.1
2024-04-23
CRAN
-
+
ggpubr
0.6.0
2023-02-10
CRAN
-
+
ggrepel
0.9.6
2024-09-07
CRAN
-
+
ggstats
0.7.0
2024-09-22
CRAN
-
+
heplots
1.7.0
2024-05-02
CRAN
-
+
Hotelling
1.0-8
2021-09-09
CRAN
-
+
imager
1.0.2
2024-05-13
CRAN
-
+
insight
0.20.5
2024-10-02
CRAN
-
+
knitr
1.48
2024-07-07
CRAN
-
+
lubridate
1.9.3
2023-09-27
CRAN
-
+
magrittr
2.0.3
2022-03-30
CRAN
-
+
marginaleffects
0.22.0
2024-08-31
CRAN
-
+
MASS
7.3-61
2024-06-13
CRAN
-
+
matlib
1.0.1
2024-10-23
local
-
+
modelbased
0.8.8
2024-06-11
CRAN
-
+
modelsummary
2.2.0
2024-09-02
CRAN
-
+
parameters
0.22.2
2024-09-03
CRAN
-
+
patchwork
1.3.0
2024-09-16
CRAN
-
+
performance
0.12.3
2024-09-02
CRAN
-
+
purrr
1.0.2
2023-08-10
CRAN
-
+
readr
2.1.5
2024-01-10
CRAN
-
+
report
0.5.9
2024-07-10
CRAN
-
+
Rtsne
0.17
2023-12-07
CRAN
-
+
see
0.9.0
2024-09-06
CRAN
-
+
stringr
1.5.1
2023-11-14
CRAN
-
+
tibble
3.2.1
2023-03-20
CRAN
-
+
tidyr
1.3.1
2024-01-24
CRAN
-
+
tidyverse
2.0.0
2023-02-22
CRAN
-
+
tourr
1.2.0
2024-04-20
CRAN
-
+
vcd
1.4-13
2024-09-16
CRAN
-
+
VisCollin
0.1.2
2023-09-05
diff --git a/docs/95-references.html b/docs/95-references.html
index d7ca56a..6d47698 100644
--- a/docs/95-references.html
+++ b/docs/95-references.html
@@ -850,6 +850,10 @@ References
and ANOVA. The American Statistician,
32(1), 17–22. https://doi.org/10.1080/00031305.1978.10479237
+
+Hocking, R. R. (2013). Methods and applications of linear models:
+Regression and the analysis of variance. Wiley. https://books.google.ca/books?id=iq2J-1iS6HcC
+
Hofmann, H., VanderPlas, S., & Ge, Y. (2022). Ggpcp: Parallel
coordinate plots in the ggplot2 framework. https://github.com/heike/ggpcp
@@ -1044,6 +1048,10 @@ References
and correlation of organs. Philosophical Transactions of the Royal
Society of London, 200(321–330), 1–66. https://doi.org/10.1098/rsta.1903.0001
+
+Pedersen, T. L., & Robinson, D. (2024). Gganimate: A grammar of
+animated graphics. https://gganimate.com
+
Pineo, P. O., & Porter, J. (1967). Occupational prestige in Canada*.
Canadian Review of Sociology, 4(1), 24–40.
diff --git a/docs/figs/ch07/fig-measerr-stats-1.png b/docs/figs/ch07/fig-measerr-stats-1.png
index e263e36..a941822 100644
Binary files a/docs/figs/ch07/fig-measerr-stats-1.png and b/docs/figs/ch07/fig-measerr-stats-1.png differ
diff --git a/docs/index.html b/docs/index.html
index 1fda5a7..34ef3f4 100644
--- a/docs/index.html
+++ b/docs/index.html
@@ -5,7 +5,7 @@
-
+
Visualizing Multivariate Data and Models in R