
removing broken link
osorensen committed Apr 7, 2024
1 parent 19aa630 commit 8dcf9be
Showing 9 changed files with 8 additions and 4 deletions.
2 changes: 1 addition & 1 deletion vignettes-raw/optimization.Rmd
@@ -183,7 +183,7 @@ summary(mod_nm)

## Implementation Details

- At a given set of parameters, the marginal likelihood is evaluated completely in C++. For solving the penalized iteratively reweighted least squares problem arising due to the Laplace approximation, we use sparse matrix methods from the [Eigen C++ template library](https://eigen.tuxfamily.org/index.php?title=Main_Page) through the `RcppEigen` package [@batesFastElegantNumerical2013]. In order to keep track of the derivatives throughout this iterative process, we use the [autodiff library](https://autodiff.github.io/) [@lealAutodiffModernFast2018]. However, since `autodiff` natively only supports dense matrix operations with `Eigen`, we have extended this library so that it also supports sparse matrix operations. This modified version of the `autodiff` library can be found at `inst/include/autodiff/`.
+ At a given set of parameters, the marginal likelihood is evaluated completely in C++. For solving the penalized iteratively reweighted least squares problem arising due to the Laplace approximation, we use sparse matrix methods from the Eigen C++ template library through the `RcppEigen` package [@batesFastElegantNumerical2013]. In order to keep track of the derivatives throughout this iterative process, we use the [autodiff library](https://autodiff.github.io/) [@lealAutodiffModernFast2018]. However, since `autodiff` natively only supports dense matrix operations with `Eigen`, we have extended this library so that it also supports sparse matrix operations. This modified version of the `autodiff` library can be found at `inst/include/autodiff/`.

To maximize the marginal likelihood, we currently rely on the `optim()` function in R. To exploit the fact that the C++ function returns both the marginal likelihood value and its first derivatives in a single call, we use memoisation, provided by the `memoise` package [@wickhamMemoiseMemoisationFunctions2021]. However, the optimization process still copies all model data between R and C++ for each new set of parameters. This is a potential efficiency bottleneck with large datasets, although the limited profiling done so far suggests that the vast majority of the computation time is spent solving the penalized iteratively reweighted least squares problem in C++.

5 changes: 5 additions & 0 deletions vignettes/glmm_factor.Rmd
@@ -347,6 +347,11 @@ In this case we can confirm that the `galamm` function is correctly implemented
```r
library(lme4)
#> Loading required package: Matrix
+ #>
+ #> Attaching package: 'lme4'
+ #> The following object is masked from 'package:galamm':
+ #>
+ #> llikAIC
count_mod_lme4 <- glmer(
formula = y ~ lbas * treat + lage + v4 + (1 | subj),
data = epilep,
2 changes: 1 addition & 1 deletion vignettes/optimization.Rmd
@@ -406,7 +406,7 @@ summary(mod_nm)

## Implementation Details

- At a given set of parameters, the marginal likelihood is evaluated completely in C++. For solving the penalized iteratively reweighted least squares problem arising due to the Laplace approximation, we use sparse matrix methods from the [Eigen C++ template library](https://eigen.tuxfamily.org/index.php?title=Main_Page) through the `RcppEigen` package [@batesFastElegantNumerical2013]. In order to keep track of the derivatives throughout this iterative process, we use the [autodiff library](https://autodiff.github.io/) [@lealAutodiffModernFast2018]. However, since `autodiff` natively only supports dense matrix operations with `Eigen`, we have extended this library so that it also supports sparse matrix operations. This modified version of the `autodiff` library can be found at `inst/include/autodiff/`.
+ At a given set of parameters, the marginal likelihood is evaluated completely in C++. For solving the penalized iteratively reweighted least squares problem arising due to the Laplace approximation, we use sparse matrix methods from the Eigen C++ template library through the `RcppEigen` package [@batesFastElegantNumerical2013]. In order to keep track of the derivatives throughout this iterative process, we use the [autodiff library](https://autodiff.github.io/) [@lealAutodiffModernFast2018]. However, since `autodiff` natively only supports dense matrix operations with `Eigen`, we have extended this library so that it also supports sparse matrix operations. This modified version of the `autodiff` library can be found at `inst/include/autodiff/`.

To maximize the marginal likelihood, we currently rely on the `optim()` function in R. To exploit the fact that the C++ function returns both the marginal likelihood value and its first derivatives in a single call, we use memoisation, provided by the `memoise` package [@wickhamMemoiseMemoisationFunctions2021]. However, the optimization process still copies all model data between R and C++ for each new set of parameters. This is a potential efficiency bottleneck with large datasets, although the limited profiling done so far suggests that the vast majority of the computation time is spent solving the penalized iteratively reweighted least squares problem in C++.

Binary file modified vignettes/scaling-glmm-plot-1.png
Binary file modified vignettes/scaling-hsced-plot-1.png
Binary file modified vignettes/scaling-lmm-plot-1.png
Binary file modified vignettes/scaling-semiparametric-binomial-plot-1.png
Binary file modified vignettes/scaling-semiparametric-gaussian-plot-1.png
3 changes: 1 addition & 2 deletions vignettes/semiparametric.Rmd
@@ -771,8 +771,7 @@ We can look at the model summary:
```r
summary(mod_byvar_mixed)
#> GALAMM fit by maximum marginal likelihood.
- #> Formula: y ~ domain + sl(x, by = domain, factor = c("ability1", "ability2")) +
- #>     (0 + domain1:ability1 + domain2:ability2 | id)
+ #> Formula: y ~ domain + sl(x, by = domain, factor = c("ability1", "ability2")) + (0 + domain1:ability1 + domain2:ability2 | id)
#> Data: dat
#> Control: galamm_control(optim_control = list(factr = 1e+09, trace = 3, REPORT = 30, maxit = 1000))
#>
