diff --git a/man/details_decision_tree_partykit.Rd b/man/details_decision_tree_partykit.Rd
index 87afcbcaa..ecd0cd264 100644
--- a/man/details_decision_tree_partykit.Rd
+++ b/man/details_decision_tree_partykit.Rd
@@ -8,14 +8,14 @@
tree-based structure using hypothesis testing methods.
}
\details{
-For this engine, there are multiple modes: censored regression,
-regression, and classification
+For this engine, there are multiple modes: regression, classification,
+and censored regression
\subsection{Tuning Parameters}{
This model has 2 tuning parameters:
\itemize{
-\item \code{tree_depth}: Tree Depth (type: integer, default: see below)
\item \code{min_n}: Minimal Node Size (type: integer, default: 20L)
+\item \code{tree_depth}: Tree Depth (type: integer, default: see below)
}
The \code{tree_depth} parameter defaults to \code{0} which means no restrictions
diff --git a/man/details_discrim_linear_sparsediscrim.Rd b/man/details_discrim_linear_sparsediscrim.Rd
index 3e1c2eed6..6e4233118 100644
--- a/man/details_discrim_linear_sparsediscrim.Rd
+++ b/man/details_discrim_linear_sparsediscrim.Rd
@@ -51,7 +51,7 @@ discrim_linear(regularization_method = character(0)) \%>\%
##
## Model fit template:
## discrim::fit_regularized_linear(x = missing_arg(), y = missing_arg(),
-## method = character(0))
+## regularization_method = character(0))
}\if{html}{\out{</div>}}
}
diff --git a/man/details_discrim_quad_sparsediscrim.Rd b/man/details_discrim_quad_sparsediscrim.Rd
index 9d588c471..ddc934dfb 100644
--- a/man/details_discrim_quad_sparsediscrim.Rd
+++ b/man/details_discrim_quad_sparsediscrim.Rd
@@ -49,7 +49,7 @@ discrim_quad(regularization_method = character(0)) \%>\%
##
## Model fit template:
## discrim::fit_regularized_quad(x = missing_arg(), y = missing_arg(),
-## method = character(0))
+## regularization_method = character(0))
}\if{html}{\out{</div>}}
}
diff --git a/man/details_linear_reg_lm.Rd b/man/details_linear_reg_lm.Rd
index a046e4f67..33085b721 100644
--- a/man/details_linear_reg_lm.Rd
+++ b/man/details_linear_reg_lm.Rd
@@ -60,8 +60,8 @@ suboptimal; in the case of replication weights, \strong{even wrong}. Hence,
standard errors and analysis of variance tables should be treated with
care” (emphasis added)
-Depending on your application, the degrees of freedown for the model
-(and other statistics) might be incorrect.
+Depending on your application, the degrees of freedom for the model (and
+other statistics) might be incorrect.
}
\subsection{Saving fitted model objects}{
diff --git a/man/details_logistic_reg_gee.Rd b/man/details_logistic_reg_gee.Rd
index 52f2d8072..df4b6db1e 100644
--- a/man/details_logistic_reg_gee.Rd
+++ b/man/details_logistic_reg_gee.Rd
@@ -62,7 +62,7 @@ call would look like:
\if{html}{\out{<div class="sourceCode r">}}\preformatted{gee(breaks ~ tension, id = wool, data = warpbreaks, corstr = "exchangeable")
}\if{html}{\out{</div>}}
-With parsnip, we suggest using the formula method when fitting:
+With \code{parsnip}, we suggest using the formula method when fitting:
\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(tidymodels)
data("toenail", package = "HSAUR3")
diff --git a/man/details_mars_earth.Rd b/man/details_mars_earth.Rd
index 8478a10b8..1cb7a6c69 100644
--- a/man/details_mars_earth.Rd
+++ b/man/details_mars_earth.Rd
@@ -19,10 +19,7 @@ This model has 3 tuning parameters:
\item \code{prune_method}: Pruning Method (type: character, default: ‘backward’)
}
-The default value of \code{num_terms} depends on the number of predictor
-columns. For a data frame \code{x}, the default is
-\code{min(200, max(20, 2 * ncol(x))) + 1} (see
-\code{\link[earth:earth]{earth::earth()}} and the reference below).
+Parsnip changes the default range for \code{num_terms} to \code{c(50, 500)}.
}
\subsection{Translation from parsnip to the original package (regression)}{
diff --git a/man/details_mlp_brulee.Rd b/man/details_mlp_brulee.Rd
index a3dc99f75..050be24ac 100644
--- a/man/details_mlp_brulee.Rd
+++ b/man/details_mlp_brulee.Rd
@@ -38,6 +38,8 @@ each batch.
\item \code{stop_iter()}: A non-negative integer for how many iterations with no
improvement before stopping. (default: 5L).
}
+
+Parsnip changes the default range for \code{learn_rate} to \code{c(-2.5, -0.5)}.
}
\subsection{Translation from parsnip to the original package (regression)}{
diff --git a/man/details_nearest_neighbor_kknn.Rd b/man/details_nearest_neighbor_kknn.Rd
index e931e78c3..e88f854c3 100644
--- a/man/details_nearest_neighbor_kknn.Rd
+++ b/man/details_nearest_neighbor_kknn.Rd
@@ -18,6 +18,9 @@ This model has 3 tuning parameters:
‘optimal’)
\item \code{dist_power}: Minkowski Distance Order (type: double, default: 2.0)
}
+
+Parsnip changes the default range for \code{neighbors} to \code{c(1, 15)} and
+\code{dist_power} to \code{c(1/10, 2)}.
}
\subsection{Translation from parsnip to the original package (regression)}{
diff --git a/man/details_rand_forest_aorsf.Rd b/man/details_rand_forest_aorsf.Rd
index 7796fdf1d..eeee82cd3 100644
--- a/man/details_rand_forest_aorsf.Rd
+++ b/man/details_rand_forest_aorsf.Rd
@@ -9,16 +9,16 @@ trees, each de-correlated from the others. The final prediction uses all
predictions from the individual trees and combines them.
}
\details{
-For this engine, there are multiple modes: censored regression,
-classification, and regression
+For this engine, there are multiple modes: classification, regression,
+and censored regression
\subsection{Tuning Parameters}{
This model has 3 tuning parameters:
\itemize{
-\item \code{trees}: # Trees (type: integer, default: 500L)
-\item \code{min_n}: Minimal Node Size (type: integer, default: 5L)
\item \code{mtry}: # Randomly Selected Predictors (type: integer, default:
ceiling(sqrt(n_predictors)))
+\item \code{trees}: # Trees (type: integer, default: 500L)
+\item \code{min_n}: Minimal Node Size (type: integer, default: 5L)
}
Additionally, this model has one engine-specific tuning parameter:
diff --git a/man/details_rand_forest_partykit.Rd b/man/details_rand_forest_partykit.Rd
index 25184df2e..e7c759d7f 100644
--- a/man/details_rand_forest_partykit.Rd
+++ b/man/details_rand_forest_partykit.Rd
@@ -9,15 +9,15 @@ trees, each independent of the others. The final prediction uses all
predictions from the individual trees and combines them.
}
\details{
-For this engine, there are multiple modes: censored regression,
-regression, and classification
+For this engine, there are multiple modes: regression, classification,
+and censored regression
\subsection{Tuning Parameters}{
This model has 3 tuning parameters:
\itemize{
-\item \code{trees}: # Trees (type: integer, default: 500L)
\item \code{min_n}: Minimal Node Size (type: integer, default: 20L)
\item \code{mtry}: # Randomly Selected Predictors (type: integer, default: 5L)
+\item \code{trees}: # Trees (type: integer, default: 500L)
}
}
diff --git a/man/details_svm_linear_LiblineaR.Rd b/man/details_svm_linear_LiblineaR.Rd
index b52638165..b006ca61e 100644
--- a/man/details_svm_linear_LiblineaR.Rd
+++ b/man/details_svm_linear_LiblineaR.Rd
@@ -22,6 +22,8 @@ This model has 2 tuning parameters:
This engine fits models that are L2-regularized for L2-loss. In the
\code{\link[LiblineaR:LiblineaR]{LiblineaR::LiblineaR()}} documentation, these
are types 1 (classification) and 11 (regression).
+
+Parsnip changes the default range for \code{cost} to \code{c(-10, 5)}.
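+
+In \code{dials}, \code{cost()} is tuned on the log-2 scale, so this range
+corresponds to cost values between \code{2^-10} and \code{2^5}. A minimal
+sketch of recreating it (assuming the \code{dials} package is installed):
+
+\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(dials)
+
+# the same range as above, on the log-2 scale
+cost(range = c(-10, 5))
+}\if{html}{\out{</div>}}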
}
\subsection{Translation from parsnip to the original package (regression)}{
diff --git a/man/details_svm_linear_kernlab.Rd b/man/details_svm_linear_kernlab.Rd
index aa60e8fee..355a71579 100644
--- a/man/details_svm_linear_kernlab.Rd
+++ b/man/details_svm_linear_kernlab.Rd
@@ -18,6 +18,8 @@ This model has 2 tuning parameters:
\item \code{cost}: Cost (type: double, default: 1.0)
\item \code{margin}: Insensitivity Margin (type: double, default: 0.1)
}
+
+Parsnip changes the default range for \code{cost} to \code{c(-10, 5)}.
}
\subsection{Translation from parsnip to the original package (regression)}{
diff --git a/man/details_svm_poly_kernlab.Rd b/man/details_svm_poly_kernlab.Rd
index 30f6c00fc..55e6463b8 100644
--- a/man/details_svm_poly_kernlab.Rd
+++ b/man/details_svm_poly_kernlab.Rd
@@ -20,6 +20,8 @@ This model has 4 tuning parameters:
\item \code{scale_factor}: Scale Factor (type: double, default: 1.0)
\item \code{margin}: Insensitivity Margin (type: double, default: 0.1)
}
+
+Parsnip changes the default range for \code{cost} to \code{c(-10, 5)}.
}
\subsection{Translation from parsnip to the original package (regression)}{
diff --git a/man/details_svm_rbf_kernlab.Rd b/man/details_svm_rbf_kernlab.Rd
index 7e4f8f6bc..f3a2ad418 100644
--- a/man/details_svm_rbf_kernlab.Rd
+++ b/man/details_svm_rbf_kernlab.Rd
@@ -26,6 +26,8 @@ kernlab estimates it from the data using a heuristic method. See
\code{\link[kernlab:sigest]{kernlab::sigest()}}. This method uses random
numbers so, without setting the seed before fitting, the model will not
be reproducible.
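+
+For example, a minimal sketch of a reproducible fit (the seed value and the
+\code{mtcars} data are arbitrary illustrations):
+
+\if{html}{\out{<div class="sourceCode r">}}\preformatted{# set the seed first so that the sigma heuristic is reproducible
+set.seed(1234)
+svm_rbf() \%>\%
+  set_engine("kernlab") \%>\%
+  set_mode("regression") \%>\%
+  fit(mpg ~ ., data = mtcars)
+}\if{html}{\out{</div>}}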
+
+Parsnip changes the default range for \code{cost} to \code{c(-10, 5)}.
}
\subsection{Translation from parsnip to the original package (regression)}{
diff --git a/man/rmd/C5_rules_C5.0.md b/man/rmd/C5_rules_C5.0.md
index c99ae4832..c57dee2fd 100644
--- a/man/rmd/C5_rules_C5.0.md
+++ b/man/rmd/C5_rules_C5.0.md
@@ -20,7 +20,7 @@ Note that C5.0 has a tool for _early stopping_ during boosting where less iterat
The **rules** extension package is required to fit this model.
-```r
+``` r
library(rules)
C5_rules(
diff --git a/man/rmd/auto_ml_h2o.md b/man/rmd/auto_ml_h2o.md
index f2497d78d..2a1d3ac89 100644
--- a/man/rmd/auto_ml_h2o.md
+++ b/man/rmd/auto_ml_h2o.md
@@ -20,7 +20,7 @@ Engine arguments of interest
[agua::h2o_train_auto()] is a wrapper around [h2o::h2o.automl()].
-```r
+``` r
auto_ml() %>%
set_engine("h2o") %>%
set_mode("regression") %>%
@@ -41,7 +41,7 @@ auto_ml() %>%
## Translation from parsnip to the original package (classification)
-```r
+``` r
auto_ml() %>%
set_engine("h2o") %>%
set_mode("classification") %>%
diff --git a/man/rmd/bag_mars_earth.md b/man/rmd/bag_mars_earth.md
index d3ef2a7a1..9825b5a70 100644
--- a/man/rmd/bag_mars_earth.md
+++ b/man/rmd/bag_mars_earth.md
@@ -22,7 +22,7 @@ The default value of `num_terms` depends on the number of predictor columns. For
The **baguette** extension package is required to fit this model.
-```r
+``` r
bag_mars(num_terms = integer(1), prod_degree = integer(1), prune_method = character(1)) %>%
set_engine("earth") %>%
set_mode("regression") %>%
@@ -50,7 +50,7 @@ bag_mars(num_terms = integer(1), prod_degree = integer(1), prune_method = charac
The **baguette** extension package is required to fit this model.
-```r
+``` r
library(baguette)
bag_mars(
diff --git a/man/rmd/bag_mlp_nnet.md b/man/rmd/bag_mlp_nnet.md
index 93955cb49..d412ca3cc 100644
--- a/man/rmd/bag_mlp_nnet.md
+++ b/man/rmd/bag_mlp_nnet.md
@@ -22,7 +22,7 @@ These defaults are set by the `baguette` package and are different than those in
The **baguette** extension package is required to fit this model.
-```r
+``` r
library(baguette)
bag_mlp(penalty = double(1), hidden_units = integer(1)) %>%
@@ -52,7 +52,7 @@ bag_mlp(penalty = double(1), hidden_units = integer(1)) %>%
The **baguette** extension package is required to fit this model.
-```r
+``` r
library(baguette)
bag_mlp(penalty = double(1), hidden_units = integer(1)) %>%
diff --git a/man/rmd/bag_tree_C5.0.md b/man/rmd/bag_tree_C5.0.md
index 18b139868..f14e053d7 100644
--- a/man/rmd/bag_tree_C5.0.md
+++ b/man/rmd/bag_tree_C5.0.md
@@ -16,7 +16,7 @@ This model has 1 tuning parameters:
The **baguette** extension package is required to fit this model.
-```r
+``` r
library(baguette)
bag_tree(min_n = integer()) %>%
diff --git a/man/rmd/bag_tree_rpart.md b/man/rmd/bag_tree_rpart.md
index 1b20f023f..95729b6df 100644
--- a/man/rmd/bag_tree_rpart.md
+++ b/man/rmd/bag_tree_rpart.md
@@ -25,7 +25,7 @@ For the `class_cost` parameter, the value can be a non-negative scalar for a cla
The **baguette** extension package is required to fit this model.
-```r
+``` r
library(baguette)
bag_tree(tree_depth = integer(1), min_n = integer(1), cost_complexity = double(1)) %>%
@@ -56,7 +56,7 @@ bag_tree(tree_depth = integer(1), min_n = integer(1), cost_complexity = double(1
The **baguette** extension package is required to fit this model.
-```r
+``` r
library(baguette)
bag_tree(tree_depth = integer(1), min_n = integer(1), cost_complexity = double(1)) %>%
@@ -86,7 +86,7 @@ bag_tree(tree_depth = integer(1), min_n = integer(1), cost_complexity = double(1
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
bag_tree(tree_depth = integer(1), min_n = integer(1), cost_complexity = double(1)) %>%
diff --git a/man/rmd/boost_tree_C5.0.md b/man/rmd/boost_tree_C5.0.md
index 07f73a2a1..7f26b8002 100644
--- a/man/rmd/boost_tree_C5.0.md
+++ b/man/rmd/boost_tree_C5.0.md
@@ -20,7 +20,7 @@ The implementation of C5.0 limits the number of trees to be between 1 and 100.
## Translation from parsnip to the original package (classification)
-```r
+``` r
boost_tree(trees = integer(), min_n = integer(), sample_size = numeric()) %>%
set_engine("C5.0") %>%
set_mode("classification") %>%
diff --git a/man/rmd/boost_tree_h2o.md b/man/rmd/boost_tree_h2o.md
index 7b12a78f1..e40f0d731 100644
--- a/man/rmd/boost_tree_h2o.md
+++ b/man/rmd/boost_tree_h2o.md
@@ -36,7 +36,7 @@ This model has 8 tuning parameters:
The **agua** extension package is required to fit this model.
-```r
+``` r
boost_tree(
mtry = integer(), trees = integer(), tree_depth = integer(),
learn_rate = numeric(), min_n = integer(), loss_reduction = numeric(), stop_iter = integer()
@@ -73,7 +73,7 @@ boost_tree(
The **agua** extension package is required to fit this model.
-```r
+``` r
boost_tree(
mtry = integer(), trees = integer(), tree_depth = integer(),
learn_rate = numeric(), min_n = integer(), loss_reduction = numeric(), stop_iter = integer()
diff --git a/man/rmd/boost_tree_mboost.md b/man/rmd/boost_tree_mboost.md
index 4bd1ed517..8e7c19e05 100644
--- a/man/rmd/boost_tree_mboost.md
+++ b/man/rmd/boost_tree_mboost.md
@@ -26,7 +26,7 @@ The `mtry` parameter is related to the number of predictors. The default is to u
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
boost_tree() %>%
diff --git a/man/rmd/boost_tree_spark.md b/man/rmd/boost_tree_spark.md
index 37fbd0d86..4ea6fa7c9 100644
--- a/man/rmd/boost_tree_spark.md
+++ b/man/rmd/boost_tree_spark.md
@@ -28,7 +28,7 @@ The `mtry` parameter is related to the number of predictors. The default depends
## Translation from parsnip to the original package (regression)
-```r
+``` r
boost_tree(
mtry = integer(), trees = integer(), min_n = integer(), tree_depth = integer(),
learn_rate = numeric(), loss_reduction = numeric(), sample_size = numeric()
@@ -63,7 +63,7 @@ boost_tree(
## Translation from parsnip to the original package (classification)
-```r
+``` r
boost_tree(
mtry = integer(), trees = integer(), min_n = integer(), tree_depth = integer(),
learn_rate = numeric(), loss_reduction = numeric(), sample_size = numeric()
diff --git a/man/rmd/boost_tree_xgboost.md b/man/rmd/boost_tree_xgboost.md
index 5ad594062..9d4c1778e 100644
--- a/man/rmd/boost_tree_xgboost.md
+++ b/man/rmd/boost_tree_xgboost.md
@@ -30,7 +30,7 @@ For `mtry`, the default value of `NULL` translates to using all available column
## Translation from parsnip to the original package (regression)
-```r
+``` r
boost_tree(
mtry = integer(), trees = integer(), min_n = integer(), tree_depth = integer(),
learn_rate = numeric(), loss_reduction = numeric(), sample_size = numeric(),
@@ -67,7 +67,7 @@ boost_tree(
## Translation from parsnip to the original package (classification)
-```r
+``` r
boost_tree(
mtry = integer(), trees = integer(), min_n = integer(), tree_depth = integer(),
learn_rate = numeric(), loss_reduction = numeric(), sample_size = numeric(),
@@ -128,7 +128,7 @@ This model can utilize sparse data during model fitting and prediction. Both spa
The xgboost function that parsnip indirectly wraps, [xgboost::xgb.train()], takes most arguments via the `params` list argument. To supply engine-specific arguments that are documented in [xgboost::xgb.train()] as arguments to be passed via `params`, supply the list elements directly as named arguments to [set_engine()] rather than as elements in `params`. For example, pass a non-default evaluation metric like this:
-```r
+``` r
# good
boost_tree() %>%
set_engine("xgboost", eval_metric = "mae")
@@ -146,7 +146,7 @@ boost_tree() %>%
...rather than this:
-```r
+``` r
# bad
boost_tree() %>%
set_engine("xgboost", params = list(eval_metric = "mae"))
diff --git a/man/rmd/cubist_rules_Cubist.md b/man/rmd/cubist_rules_Cubist.md
index 4ce1e2f33..3a009cad6 100644
--- a/man/rmd/cubist_rules_Cubist.md
+++ b/man/rmd/cubist_rules_Cubist.md
@@ -21,7 +21,7 @@ This model has 3 tuning parameters:
The **rules** extension package is required to fit this model.
-```r
+``` r
library(rules)
cubist_rules(
diff --git a/man/rmd/decision_tree_C5.0.md b/man/rmd/decision_tree_C5.0.md
index f2b2a78c2..59e4a767e 100644
--- a/man/rmd/decision_tree_C5.0.md
+++ b/man/rmd/decision_tree_C5.0.md
@@ -14,7 +14,7 @@ This model has 1 tuning parameters:
## Translation from parsnip to the original package (classification)
-```r
+``` r
decision_tree(min_n = integer()) %>%
set_engine("C5.0") %>%
set_mode("classification") %>%
diff --git a/man/rmd/decision_tree_partykit.md b/man/rmd/decision_tree_partykit.md
index 65eeca527..8fa6c53af 100644
--- a/man/rmd/decision_tree_partykit.md
+++ b/man/rmd/decision_tree_partykit.md
@@ -1,7 +1,7 @@
-For this engine, there are multiple modes: censored regression, regression, and classification
+For this engine, there are multiple modes: regression, classification, and censored regression
## Tuning Parameters
@@ -9,10 +9,10 @@ For this engine, there are multiple modes: censored regression, regression, and
This model has 2 tuning parameters:
-- `tree_depth`: Tree Depth (type: integer, default: see below)
-
- `min_n`: Minimal Node Size (type: integer, default: 20L)
+- `tree_depth`: Tree Depth (type: integer, default: see below)
+
The `tree_depth` parameter defaults to `0` which means no restrictions are applied to tree depth.
An engine-specific parameter for this model is:
@@ -24,7 +24,7 @@ An engine-specific parameter for this model is:
The **bonsai** extension package is required to fit this model.
-```r
+``` r
library(bonsai)
decision_tree(tree_depth = integer(1), min_n = integer(1)) %>%
@@ -53,7 +53,7 @@ decision_tree(tree_depth = integer(1), min_n = integer(1)) %>%
The **bonsai** extension package is required to fit this model.
-```r
+``` r
library(bonsai)
decision_tree(tree_depth = integer(1), min_n = integer(1)) %>%
@@ -84,7 +84,7 @@ decision_tree(tree_depth = integer(1), min_n = integer(1)) %>%
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
decision_tree(tree_depth = integer(1), min_n = integer(1)) %>%
diff --git a/man/rmd/decision_tree_rpart.md b/man/rmd/decision_tree_rpart.md
index ea3a36d12..10a1e788d 100644
--- a/man/rmd/decision_tree_rpart.md
+++ b/man/rmd/decision_tree_rpart.md
@@ -18,7 +18,7 @@ This model has 3 tuning parameters:
## Translation from parsnip to the original package (classification)
-```r
+``` r
decision_tree(tree_depth = integer(1), min_n = integer(1), cost_complexity = double(1)) %>%
set_engine("rpart") %>%
set_mode("classification") %>%
@@ -45,7 +45,7 @@ decision_tree(tree_depth = integer(1), min_n = integer(1), cost_complexity = dou
## Translation from parsnip to the original package (regression)
-```r
+``` r
decision_tree(tree_depth = integer(1), min_n = integer(1), cost_complexity = double(1)) %>%
set_engine("rpart") %>%
set_mode("regression") %>%
@@ -74,7 +74,7 @@ decision_tree(tree_depth = integer(1), min_n = integer(1), cost_complexity = dou
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
decision_tree(
diff --git a/man/rmd/decision_tree_spark.md b/man/rmd/decision_tree_spark.md
index 4fa02ec15..43c6a3617 100644
--- a/man/rmd/decision_tree_spark.md
+++ b/man/rmd/decision_tree_spark.md
@@ -16,7 +16,7 @@ This model has 2 tuning parameters:
## Translation from parsnip to the original package (classification)
-```r
+``` r
decision_tree(tree_depth = integer(1), min_n = integer(1)) %>%
set_engine("spark") %>%
set_mode("classification") %>%
@@ -42,7 +42,7 @@ decision_tree(tree_depth = integer(1), min_n = integer(1)) %>%
## Translation from parsnip to the original package (regression)
-```r
+``` r
decision_tree(tree_depth = integer(1), min_n = integer(1)) %>%
set_engine("spark") %>%
set_mode("regression") %>%
diff --git a/man/rmd/discrim_flexible_earth.md b/man/rmd/discrim_flexible_earth.md
index edfdc7337..7dc2f3ce7 100644
--- a/man/rmd/discrim_flexible_earth.md
+++ b/man/rmd/discrim_flexible_earth.md
@@ -22,7 +22,7 @@ The default value of `num_terms` depends on the number of columns (`p`): `min(20
The **discrim** extension package is required to fit this model.
-```r
+``` r
library(discrim)
discrim_flexible(
diff --git a/man/rmd/discrim_linear_MASS.md b/man/rmd/discrim_linear_MASS.md
index deb6efb09..4448e1577 100644
--- a/man/rmd/discrim_linear_MASS.md
+++ b/man/rmd/discrim_linear_MASS.md
@@ -12,7 +12,7 @@ This engine has no tuning parameters.
The **discrim** extension package is required to fit this model.
-```r
+``` r
library(discrim)
discrim_linear() %>%
diff --git a/man/rmd/discrim_linear_mda.md b/man/rmd/discrim_linear_mda.md
index 3bd4cdcdf..0217c09c5 100644
--- a/man/rmd/discrim_linear_mda.md
+++ b/man/rmd/discrim_linear_mda.md
@@ -17,7 +17,7 @@ This model has 1 tuning parameter:
The **discrim** extension package is required to fit this model.
-```r
+``` r
library(discrim)
discrim_linear(penalty = numeric(0)) %>%
diff --git a/man/rmd/discrim_linear_sda.md b/man/rmd/discrim_linear_sda.md
index fdab652d6..ccf745e9a 100644
--- a/man/rmd/discrim_linear_sda.md
+++ b/man/rmd/discrim_linear_sda.md
@@ -22,7 +22,7 @@ However, there are a few engine-specific parameters that can be set or optimized
The **discrim** extension package is required to fit this model.
-```r
+``` r
library(discrim)
discrim_linear() %>%
diff --git a/man/rmd/discrim_linear_sparsediscrim.md b/man/rmd/discrim_linear_sparsediscrim.md
index 69eaf780b..53adc38fd 100644
--- a/man/rmd/discrim_linear_sparsediscrim.md
+++ b/man/rmd/discrim_linear_sparsediscrim.md
@@ -23,7 +23,7 @@ The possible values of this parameter, and the functions that they execute, are:
The **discrim** extension package is required to fit this model.
-```r
+``` r
library(discrim)
discrim_linear(regularization_method = character(0)) %>%
@@ -41,7 +41,7 @@ discrim_linear(regularization_method = character(0)) %>%
##
## Model fit template:
## discrim::fit_regularized_linear(x = missing_arg(), y = missing_arg(),
-## method = character(0))
+## regularization_method = character(0))
```
## Preprocessing requirements
diff --git a/man/rmd/discrim_quad_MASS.md b/man/rmd/discrim_quad_MASS.md
index 26fcfb940..3607036e7 100644
--- a/man/rmd/discrim_quad_MASS.md
+++ b/man/rmd/discrim_quad_MASS.md
@@ -12,7 +12,7 @@ This engine has no tuning parameters.
The **discrim** extension package is required to fit this model.
-```r
+``` r
library(discrim)
discrim_quad() %>%
diff --git a/man/rmd/discrim_quad_sparsediscrim.md b/man/rmd/discrim_quad_sparsediscrim.md
index 055b4c825..5751b1314 100644
--- a/man/rmd/discrim_quad_sparsediscrim.md
+++ b/man/rmd/discrim_quad_sparsediscrim.md
@@ -22,7 +22,7 @@ The possible values of this parameter, and the functions that they execute, are:
The **discrim** extension package is required to fit this model.
-```r
+``` r
library(discrim)
discrim_quad(regularization_method = character(0)) %>%
@@ -40,7 +40,7 @@ discrim_quad(regularization_method = character(0)) %>%
##
## Model fit template:
## discrim::fit_regularized_quad(x = missing_arg(), y = missing_arg(),
-## method = character(0))
+## regularization_method = character(0))
```
## Preprocessing requirements
diff --git a/man/rmd/discrim_regularized_klaR.md b/man/rmd/discrim_regularized_klaR.md
index e5fcc0d3e..473f7f635 100644
--- a/man/rmd/discrim_regularized_klaR.md
+++ b/man/rmd/discrim_regularized_klaR.md
@@ -27,7 +27,7 @@ Some special cases for the RDA model:
The **discrim** extension package is required to fit this model.
-```r
+``` r
library(discrim)
discrim_regularized(frac_identity = numeric(0), frac_common_cov = numeric(0)) %>%
diff --git a/man/rmd/gen_additive_mod_mgcv.md b/man/rmd/gen_additive_mod_mgcv.md
index 69b00ca78..2e46173f5 100644
--- a/man/rmd/gen_additive_mod_mgcv.md
+++ b/man/rmd/gen_additive_mod_mgcv.md
@@ -18,7 +18,7 @@ This model has 2 tuning parameters:
## Translation from parsnip to the original package (regression)
-```r
+``` r
gen_additive_mod(adjust_deg_free = numeric(1), select_features = logical(1)) %>%
set_engine("mgcv") %>%
set_mode("regression") %>%
@@ -42,7 +42,7 @@ gen_additive_mod(adjust_deg_free = numeric(1), select_features = logical(1)) %>%
## Translation from parsnip to the original package (classification)
-```r
+``` r
gen_additive_mod(adjust_deg_free = numeric(1), select_features = logical(1)) %>%
set_engine("mgcv") %>%
set_mode("classification") %>%
@@ -69,7 +69,7 @@ This model should be used with a model formula so that smooth terms can be speci
-```r
+``` r
library(mgcv)
gen_additive_mod() %>%
set_engine("mgcv") %>%
@@ -99,7 +99,7 @@ The smoothness of the terms will need to be manually specified (e.g., using `s(x
When using a workflow, pass the _model formula_ to [workflows::add_model()]'s `formula` argument, and a simplified _preprocessing formula_ elsewhere.
-```r
+``` r
spec <-
gen_additive_mod() %>%
set_engine("mgcv") %>%
diff --git a/man/rmd/glmnet-details.md b/man/rmd/glmnet-details.md
index 3c76f8f11..dba57c29b 100644
--- a/man/rmd/glmnet-details.md
+++ b/man/rmd/glmnet-details.md
@@ -21,7 +21,7 @@ When the `predict()` method is called, it automatically uses the penalty that wa
-```r
+``` r
library(tidymodels)
fit <-
@@ -45,7 +45,7 @@ predict(fit, mtcars[1:3,])
However, any penalty values can be predicted simultaneously using the `multi_predict()` method:
-```r
+``` r
# predict at c(0.00, 0.01)
multi_predict(fit, mtcars[1:3,], penalty = c(0.00, 0.01))
```
@@ -59,7 +59,7 @@ multi_predict(fit, mtcars[1:3,], penalty = c(0.00, 0.01))
## 3
```
-```r
+``` r
# unnested:
multi_predict(fit, mtcars[1:3,], penalty = c(0.00, 0.01)) %>%
add_rowindex() %>%
@@ -83,7 +83,7 @@ multi_predict(fit, mtcars[1:3,], penalty = c(0.00, 0.01)) %>%
It may appear odd that the `lambda` value does not get used in the fit:
-```r
+``` r
linear_reg(penalty = 1) %>%
set_engine("glmnet") %>%
translate()
@@ -117,7 +117,7 @@ For example, we have found that if you want a fully ridge regression model (i.e.
If we want to use our own path, the argument is passed as an engine-specific option:
-```r
+``` r
coef_path_values <- c(0, 10^seq(-5, 1, length.out = 7))
fit_ridge <-
@@ -132,7 +132,7 @@ all.equal(sort(fit_ridge$fit$lambda), coef_path_values)
## [1] TRUE
```
-```r
+``` r
# predict at penalty = 1
predict(fit_ridge, mtcars[1:3,])
```
@@ -155,7 +155,7 @@ predict(fit_ridge, mtcars[1:3,])
When parsnip makes a model, it gives it an extra class. Use the `tidy()` method on the object, it produces coefficients for the penalty that was originally requested:
-```r
+``` r
tidy(fit)
```
@@ -175,7 +175,7 @@ tidy(fit)
Note that there is a `tidy()` method for `glmnet` objects in the `broom` package. If this is used directly on the underlying `glmnet` object, it returns _all of coefficients on the path_:
-```r
+``` r
# Use the basic tidy() method for glmnet
all_tidy_coefs <- broom:::tidy.glmnet(fit$fit)
all_tidy_coefs
@@ -194,7 +194,7 @@ all_tidy_coefs
## # i 634 more rows
```
-```r
+``` r
length(unique(all_tidy_coefs$lambda))
```
diff --git a/man/rmd/linear_reg_brulee.md b/man/rmd/linear_reg_brulee.md
index ee1fea6f3..926ae5fc1 100644
--- a/man/rmd/linear_reg_brulee.md
+++ b/man/rmd/linear_reg_brulee.md
@@ -28,7 +28,7 @@ Other engine arguments of interest:
## Translation from parsnip to the original package (regression)
-```r
+``` r
linear_reg(penalty = double(1)) %>%
set_engine("brulee") %>%
translate()
diff --git a/man/rmd/linear_reg_gee.md b/man/rmd/linear_reg_gee.md
index 01aaab16b..1eef0675f 100644
--- a/man/rmd/linear_reg_gee.md
+++ b/man/rmd/linear_reg_gee.md
@@ -12,7 +12,7 @@ This model has no formal tuning parameters. It may be beneficial to determine th
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
linear_reg() %>%
diff --git a/man/rmd/linear_reg_glm.md b/man/rmd/linear_reg_glm.md
index 552984dde..cff01bf81 100644
--- a/man/rmd/linear_reg_glm.md
+++ b/man/rmd/linear_reg_glm.md
@@ -10,7 +10,7 @@ This engine has no tuning parameters but you can set the `family` parameter (and
## Translation from parsnip to the original package
-```r
+``` r
linear_reg() %>%
set_engine("glm") %>%
translate()
@@ -29,7 +29,7 @@ linear_reg() %>%
To use a non-default `family` and/or `link`, pass in as an argument to `set_engine()`:
-```r
+``` r
linear_reg() %>%
set_engine("glm", family = stats::poisson(link = "sqrt")) %>%
translate()
diff --git a/man/rmd/linear_reg_glmer.md b/man/rmd/linear_reg_glmer.md
index 36f4de928..0ec98d6d5 100644
--- a/man/rmd/linear_reg_glmer.md
+++ b/man/rmd/linear_reg_glmer.md
@@ -12,7 +12,7 @@ This model has no tuning parameters.
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
linear_reg() %>%
@@ -48,7 +48,7 @@ This model can use subject-specific coefficient estimates to make predictions (i
\eta_{i} = (\beta_0 + b_{0i}) + \beta_1x_{i1}
```
-where $i$ denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subjects results.
+where `i` denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subject's results.
What happens when data are being predicted for a subject that was not used in the model fit? In that case, this package uses _only_ the population parameter estimates for prediction:
diff --git a/man/rmd/linear_reg_glmnet.md b/man/rmd/linear_reg_glmnet.md
index b2f74d885..ec44b1eb2 100644
--- a/man/rmd/linear_reg_glmnet.md
+++ b/man/rmd/linear_reg_glmnet.md
@@ -20,7 +20,7 @@ The `penalty` parameter has no default and requires a single numeric value. For
## Translation from parsnip to the original package
-```r
+``` r
linear_reg(penalty = double(1), mixture = double(1)) %>%
set_engine("glmnet") %>%
translate()
diff --git a/man/rmd/linear_reg_gls.md b/man/rmd/linear_reg_gls.md
index d845d0081..59e4ae45f 100644
--- a/man/rmd/linear_reg_gls.md
+++ b/man/rmd/linear_reg_gls.md
@@ -12,7 +12,7 @@ This model has no tuning parameters.
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
linear_reg() %>%
@@ -42,7 +42,7 @@ The model can accept case weights.
With parsnip, we suggest using the _fixed effects_ formula method when fitting, but the details of the correlation structure should be passed to `set_engine()` since it is an irregular (but required) argument:
-```r
+``` r
library(tidymodels)
# load nlme to be able to use the `cor*()` functions
library(nlme)
diff --git a/man/rmd/linear_reg_h2o.md b/man/rmd/linear_reg_h2o.md
index 54f2cd9b3..be50cc620 100644
--- a/man/rmd/linear_reg_h2o.md
+++ b/man/rmd/linear_reg_h2o.md
@@ -24,7 +24,7 @@ The choice of `mixture` depends on the engine parameter `solver`, which is autom
-```r
+``` r
linear_reg(penalty = 1, mixture = 0.5) %>%
set_engine("h2o") %>%
translate()
diff --git a/man/rmd/linear_reg_keras.md b/man/rmd/linear_reg_keras.md
index 50c0cfac8..798139d65 100644
--- a/man/rmd/linear_reg_keras.md
+++ b/man/rmd/linear_reg_keras.md
@@ -16,7 +16,7 @@ For `penalty`, the amount of regularization is _only_ L2 penalty (i.e., ridge or
## Translation from parsnip to the original package
-```r
+``` r
linear_reg(penalty = double(1)) %>%
set_engine("keras") %>%
translate()
diff --git a/man/rmd/linear_reg_lm.md b/man/rmd/linear_reg_lm.md
index 9fa851bc0..0bf21f18d 100644
--- a/man/rmd/linear_reg_lm.md
+++ b/man/rmd/linear_reg_lm.md
@@ -10,7 +10,7 @@ This engine has no tuning parameters.
## Translation from parsnip to the original package
-```r
+``` r
linear_reg() %>%
set_engine("lm") %>%
translate()
@@ -39,7 +39,7 @@ The `fit()` and `fit_xy()` arguments have arguments called `case_weights` that e
_However_, the documentation in [stats::lm()] assumes that a specific type of case weights is being used: "Non-NULL weights can be used to indicate that different observations have different variances (with the values in weights being inversely proportional to the variances); or equivalently, when the elements of weights are positive integers `w_i`, that each response `y_i` is the mean of `w_i` unit-weight observations (including the case that there are `w_i` observations equal to `y_i` and the data have been summarized). However, in the latter case, notice that within-group variation is not used. Therefore, the sigma estimate and residual degrees of freedom may be suboptimal; in the case of replication weights, **even wrong**. Hence, standard errors and analysis of variance tables should be treated with care" (emphasis added)
-Depending on your application, the degrees of freedown for the model (and other statistics) might be incorrect.
+Depending on your application, the degrees of freedom for the model (and other statistics) might be incorrect.
## Saving fitted model objects
diff --git a/man/rmd/linear_reg_lme.md b/man/rmd/linear_reg_lme.md
index 27d3db2f3..c939889b7 100644
--- a/man/rmd/linear_reg_lme.md
+++ b/man/rmd/linear_reg_lme.md
@@ -12,7 +12,7 @@ This model has no tuning parameters.
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
linear_reg() %>%
diff --git a/man/rmd/linear_reg_lmer.md b/man/rmd/linear_reg_lmer.md
index d28b4f37e..96a92b6cb 100644
--- a/man/rmd/linear_reg_lmer.md
+++ b/man/rmd/linear_reg_lmer.md
@@ -12,7 +12,7 @@ This model has no tuning parameters.
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
linear_reg() %>%
@@ -39,7 +39,7 @@ This model can use subject-specific coefficient estimates to make predictions (i
\eta_{i} = (\beta_0 + b_{0i}) + \beta_1x_{i1}
```
-where $i$ denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subjects results.
+where `i` denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subject's results.
What happens when data are being predicted for a subject that was not used in the model fit? In that case, this package uses _only_ the population parameter estimates for prediction:
diff --git a/man/rmd/linear_reg_spark.md b/man/rmd/linear_reg_spark.md
index 2bd15afed..755802e44 100644
--- a/man/rmd/linear_reg_spark.md
+++ b/man/rmd/linear_reg_spark.md
@@ -22,7 +22,7 @@ For `penalty`, the amount of regularization includes both the L1 penalty (i.e.,
## Translation from parsnip to the original package
-```r
+``` r
linear_reg(penalty = double(1), mixture = double(1)) %>%
set_engine("spark") %>%
translate()
diff --git a/man/rmd/linear_reg_stan.md b/man/rmd/linear_reg_stan.md
index 8da583a1b..d1f539f2e 100644
--- a/man/rmd/linear_reg_stan.md
+++ b/man/rmd/linear_reg_stan.md
@@ -23,7 +23,7 @@ See [rstan::sampling()] and [rstanarm::priors()] for more information on these a
## Translation from parsnip to the original package
-```r
+``` r
linear_reg() %>%
set_engine("stan") %>%
translate()
diff --git a/man/rmd/linear_reg_stan_glmer.md b/man/rmd/linear_reg_stan_glmer.md
index edd456ce2..4a0554335 100644
--- a/man/rmd/linear_reg_stan_glmer.md
+++ b/man/rmd/linear_reg_stan_glmer.md
@@ -25,7 +25,7 @@ See `?rstanarm::stan_glmer` and `?rstan::sampling` for more information.
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
linear_reg() %>%
@@ -53,7 +53,7 @@ This model can use subject-specific coefficient estimates to make predictions (i
\eta_{i} = (\beta_0 + b_{0i}) + \beta_1x_{i1}
```
-where $i$ denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subjects results.
+where `i` denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subject's results.
What happens when data are being predicted for a subject that was not used in the model fit? In that case, this package uses _only_ the population parameter estimates for prediction:
diff --git a/man/rmd/logistic_reg_LiblineaR.md b/man/rmd/logistic_reg_LiblineaR.md
index 6da4a8430..f4ac4f861 100644
--- a/man/rmd/logistic_reg_LiblineaR.md
+++ b/man/rmd/logistic_reg_LiblineaR.md
@@ -20,7 +20,7 @@ Be aware that the `LiblineaR` engine regularizes the intercept. Other regularize
## Translation from parsnip to the original package
-```r
+``` r
logistic_reg(penalty = double(1), mixture = double(1)) %>%
set_engine("LiblineaR") %>%
translate()
diff --git a/man/rmd/logistic_reg_brulee.md b/man/rmd/logistic_reg_brulee.md
index 9573a98fa..29823ff49 100644
--- a/man/rmd/logistic_reg_brulee.md
+++ b/man/rmd/logistic_reg_brulee.md
@@ -29,7 +29,7 @@ Other engine arguments of interest:
## Translation from parsnip to the original package (classification)
-```r
+``` r
logistic_reg(penalty = double(1)) %>%
set_engine("brulee") %>%
translate()
diff --git a/man/rmd/logistic_reg_gee.md b/man/rmd/logistic_reg_gee.md
index 4ca4e3371..bf2a4b4fe 100644
--- a/man/rmd/logistic_reg_gee.md
+++ b/man/rmd/logistic_reg_gee.md
@@ -12,7 +12,7 @@ This model has no formal tuning parameters. It may be beneficial to determine th
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
logistic_reg() %>%
@@ -47,7 +47,7 @@ Both `gee:gee()` and `gee:geepack()` specify the id/cluster variable using an ar
gee(breaks ~ tension, id = wool, data = warpbreaks, corstr = "exchangeable")
```
-With parsnip, we suggest using the formula method when fitting:
+With `parsnip`, we suggest using the formula method when fitting:
```r
library(tidymodels)
diff --git a/man/rmd/logistic_reg_glmer.md b/man/rmd/logistic_reg_glmer.md
index 1bd43f722..5df1028d3 100644
--- a/man/rmd/logistic_reg_glmer.md
+++ b/man/rmd/logistic_reg_glmer.md
@@ -12,7 +12,7 @@ This model has no tuning parameters.
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
logistic_reg() %>%
@@ -39,7 +39,7 @@ This model can use subject-specific coefficient estimates to make predictions (i
\eta_{i} = (\beta_0 + b_{0i}) + \beta_1x_{i1}
```
-where $i$ denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subjects results.
+where `i` denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subject's results.
What happens when data are being predicted for a subject that was not used in the model fit? In that case, this package uses _only_ the population parameter estimates for prediction:
diff --git a/man/rmd/logistic_reg_glmnet.md b/man/rmd/logistic_reg_glmnet.md
index d4c19eff0..e616b20ee 100644
--- a/man/rmd/logistic_reg_glmnet.md
+++ b/man/rmd/logistic_reg_glmnet.md
@@ -22,7 +22,7 @@ The `penalty` parameter has no default and requires a single numeric value. For
## Translation from parsnip to the original package
-```r
+``` r
logistic_reg(penalty = double(1), mixture = double(1)) %>%
set_engine("glmnet") %>%
translate()
diff --git a/man/rmd/logistic_reg_h2o.md b/man/rmd/logistic_reg_h2o.md
index 121f9f875..11f452a60 100644
--- a/man/rmd/logistic_reg_h2o.md
+++ b/man/rmd/logistic_reg_h2o.md
@@ -25,7 +25,7 @@ The choice of `mixture` depends on the engine parameter `solver`, which is autom
[agua::h2o_train_glm()] for `logistic_reg()` is a wrapper around [h2o::h2o.glm()]. h2o automatically picks the link function and distribution family for binomial responses.
-```r
+``` r
logistic_reg() %>%
set_engine("h2o") %>%
translate()
@@ -44,7 +44,7 @@ logistic_reg() %>%
To use a non-default argument in [h2o::h2o.glm()], pass in as an engine argument to `set_engine()`:
-```r
+``` r
logistic_reg() %>%
set_engine("h2o", compute_p_values = TRUE) %>%
translate()
diff --git a/man/rmd/logistic_reg_keras.md b/man/rmd/logistic_reg_keras.md
index cf3ea76df..bba345c40 100644
--- a/man/rmd/logistic_reg_keras.md
+++ b/man/rmd/logistic_reg_keras.md
@@ -16,7 +16,7 @@ For `penalty`, the amount of regularization is _only_ L2 penalty (i.e., ridge or
## Translation from parsnip to the original package
-```r
+``` r
logistic_reg(penalty = double(1)) %>%
set_engine("keras") %>%
translate()
diff --git a/man/rmd/logistic_reg_spark.md b/man/rmd/logistic_reg_spark.md
index feed4a39b..7b9b200a6 100644
--- a/man/rmd/logistic_reg_spark.md
+++ b/man/rmd/logistic_reg_spark.md
@@ -22,7 +22,7 @@ For `penalty`, the amount of regularization includes both the L1 penalty (i.e.,
## Translation from parsnip to the original package
-```r
+``` r
logistic_reg(penalty = double(1), mixture = double(1)) %>%
set_engine("spark") %>%
translate()
diff --git a/man/rmd/logistic_reg_stan.md b/man/rmd/logistic_reg_stan.md
index 7587d8db2..dbcfe2177 100644
--- a/man/rmd/logistic_reg_stan.md
+++ b/man/rmd/logistic_reg_stan.md
@@ -23,7 +23,7 @@ See [rstan::sampling()] and [rstanarm::priors()] for more information on these a
## Translation from parsnip to the original package
-```r
+``` r
logistic_reg() %>%
set_engine("stan") %>%
translate()
diff --git a/man/rmd/logistic_reg_stan_glmer.md b/man/rmd/logistic_reg_stan_glmer.md
index 78ef38853..a2c770739 100644
--- a/man/rmd/logistic_reg_stan_glmer.md
+++ b/man/rmd/logistic_reg_stan_glmer.md
@@ -25,7 +25,7 @@ See `?rstanarm::stan_glmer` and `?rstan::sampling` for more information.
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
logistic_reg() %>%
@@ -52,7 +52,7 @@ This model can use subject-specific coefficient estimates to make predictions (i
\eta_{i} = (\beta_0 + b_{0i}) + \beta_1x_{i1}
```
-where $i$ denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subjects results.
+where `i` denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subject's results.
What happens when data are being predicted for a subject that was not used in the model fit? In that case, this package uses _only_ the population parameter estimates for prediction:
diff --git a/man/rmd/mars_earth.md b/man/rmd/mars_earth.md
index 7d552d76b..1ed5ee796 100644
--- a/man/rmd/mars_earth.md
+++ b/man/rmd/mars_earth.md
@@ -15,12 +15,12 @@ This model has 3 tuning parameters:
- `prune_method`: Pruning Method (type: character, default: 'backward')
-The default value of `num_terms` depends on the number of predictor columns. For a data frame `x`, the default is `min(200, max(20, 2 * ncol(x))) + 1` (see [earth::earth()] and the reference below).
+Parsnip changes the default range for `num_terms` to `c(50, 500)`.
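+
+A minimal sketch of checking the range that parsnip registers for this engine
+(assuming the **tidymodels** meta-package is installed):
+
+``` r
+library(tidymodels)
+
+# prints the num_terms parameter with the c(50, 500) range
+mars(num_terms = tune()) %>%
+  extract_parameter_set_dials() %>%
+  extract_parameter_dials("num_terms")
+```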
## Translation from parsnip to the original package (regression)
-```r
+``` r
mars(num_terms = integer(1), prod_degree = integer(1), prune_method = character(1)) %>%
set_engine("earth") %>%
set_mode("regression") %>%
@@ -46,7 +46,7 @@ mars(num_terms = integer(1), prod_degree = integer(1), prune_method = character(
## Translation from parsnip to the original package (classification)
-```r
+``` r
mars(num_terms = integer(1), prod_degree = integer(1), prune_method = character(1)) %>%
set_engine("earth") %>%
set_mode("classification") %>%
diff --git a/man/rmd/mlp_brulee.md b/man/rmd/mlp_brulee.md
index 77b03ffda..f8580b094 100644
--- a/man/rmd/mlp_brulee.md
+++ b/man/rmd/mlp_brulee.md
@@ -32,11 +32,12 @@ Other engine arguments of interest:
- `class_weights()`: Numeric class weights. See [brulee::brulee_mlp()].
- `stop_iter()`: A non-negative integer for how many iterations with no improvement before stopping. (default: 5L).
+Parsnip changes the default range for `learn_rate` to `c(-2.5, -0.5)`.
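+
+`learn_rate()` is tuned on the log-10 scale, so this range corresponds to
+learning rates between roughly 0.003 and 0.3. A minimal sketch of recreating
+it (assuming the **dials** package is installed):
+
+``` r
+library(dials)
+
+# the same range as above, on the log-10 scale
+learn_rate(range = c(-2.5, -0.5))
+```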
## Translation from parsnip to the original package (regression)
-```r
+``` r
mlp(
hidden_units = integer(1),
penalty = double(1),
@@ -74,7 +75,7 @@ Note that parsnip automatically sets linear activation in the last layer.
## Translation from parsnip to the original package (classification)
-```r
+``` r
mlp(
hidden_units = integer(1),
penalty = double(1),
diff --git a/man/rmd/mlp_h2o.md b/man/rmd/mlp_h2o.md
index 469fb4119..410ca15b9 100644
--- a/man/rmd/mlp_h2o.md
+++ b/man/rmd/mlp_h2o.md
@@ -38,7 +38,7 @@ Other engine arguments of interest:
[agua::h2o_train_mlp()] is a wrapper around [h2o::h2o.deeplearning()].
-```r
+``` r
mlp(
hidden_units = integer(1),
penalty = double(1),
@@ -75,7 +75,7 @@ mlp(
## Translation from parsnip to the original package (classification)
-```r
+``` r
mlp(
hidden_units = integer(1),
penalty = double(1),
diff --git a/man/rmd/mlp_keras.md b/man/rmd/mlp_keras.md
index 85e2c9bb1..be3722b94 100644
--- a/man/rmd/mlp_keras.md
+++ b/man/rmd/mlp_keras.md
@@ -22,7 +22,7 @@ This model has 5 tuning parameters:
## Translation from parsnip to the original package (regression)
-```r
+``` r
mlp(
hidden_units = integer(1),
penalty = double(1),
@@ -56,7 +56,7 @@ mlp(
## Translation from parsnip to the original package (classification)
-```r
+``` r
mlp(
hidden_units = integer(1),
penalty = double(1),
diff --git a/man/rmd/mlp_nnet.md b/man/rmd/mlp_nnet.md
index d404e0136..3cf9f1459 100644
--- a/man/rmd/mlp_nnet.md
+++ b/man/rmd/mlp_nnet.md
@@ -21,7 +21,7 @@ Note that, in [nnet::nnet()], the maximum number of parameters is an argument wi
## Translation from parsnip to the original package (regression)
-```r
+``` r
mlp(
hidden_units = integer(1),
penalty = double(1),
@@ -52,7 +52,7 @@ Note that parsnip automatically sets linear activation in the last layer.
## Translation from parsnip to the original package (classification)
-```r
+``` r
mlp(
hidden_units = integer(1),
penalty = double(1),
diff --git a/man/rmd/multinom_reg_brulee.md b/man/rmd/multinom_reg_brulee.md
index 20166fac6..8a32c1961 100644
--- a/man/rmd/multinom_reg_brulee.md
+++ b/man/rmd/multinom_reg_brulee.md
@@ -29,7 +29,7 @@ Other engine arguments of interest:
## Translation from parsnip to the original package (classification)
-```r
+``` r
multinom_reg(penalty = double(1)) %>%
set_engine("brulee") %>%
translate()
diff --git a/man/rmd/multinom_reg_glmnet.md b/man/rmd/multinom_reg_glmnet.md
index 1914e0860..389c066e8 100644
--- a/man/rmd/multinom_reg_glmnet.md
+++ b/man/rmd/multinom_reg_glmnet.md
@@ -22,7 +22,7 @@ The `penalty` parameter has no default and requires a single numeric value. For
## Translation from parsnip to the original package
-```r
+``` r
multinom_reg(penalty = double(1), mixture = double(1)) %>%
set_engine("glmnet") %>%
translate()
diff --git a/man/rmd/multinom_reg_h2o.md b/man/rmd/multinom_reg_h2o.md
index 4b4c5e7de..8a06045ea 100644
--- a/man/rmd/multinom_reg_h2o.md
+++ b/man/rmd/multinom_reg_h2o.md
@@ -23,7 +23,7 @@ The choice of `mixture` depends on the engine parameter `solver`, which is autom
[agua::h2o_train_glm()] for `multinom_reg()` is a wrapper around [h2o::h2o.glm()] with `family = 'multinomial'`.
-```r
+``` r
multinom_reg(penalty = double(1), mixture = double(1)) %>%
set_engine("h2o") %>%
translate()
diff --git a/man/rmd/multinom_reg_keras.md b/man/rmd/multinom_reg_keras.md
index d24a59427..7836e3c06 100644
--- a/man/rmd/multinom_reg_keras.md
+++ b/man/rmd/multinom_reg_keras.md
@@ -16,7 +16,7 @@ For `penalty`, the amount of regularization is _only_ L2 penalty (i.e., ridge or
## Translation from parsnip to the original package
-```r
+``` r
multinom_reg(penalty = double(1)) %>%
set_engine("keras") %>%
translate()
diff --git a/man/rmd/multinom_reg_nnet.md b/man/rmd/multinom_reg_nnet.md
index e5aaadc52..3ebf0f0de 100644
--- a/man/rmd/multinom_reg_nnet.md
+++ b/man/rmd/multinom_reg_nnet.md
@@ -16,7 +16,7 @@ For `penalty`, the amount of regularization includes only the L2 penalty (i.e.,
## Translation from parsnip to the original package
-```r
+``` r
multinom_reg(penalty = double(1)) %>%
set_engine("nnet") %>%
translate()
diff --git a/man/rmd/multinom_reg_spark.md b/man/rmd/multinom_reg_spark.md
index 35d30c7cf..28668e65d 100644
--- a/man/rmd/multinom_reg_spark.md
+++ b/man/rmd/multinom_reg_spark.md
@@ -22,7 +22,7 @@ For `penalty`, the amount of regularization includes both the L1 penalty (i.e.,
## Translation from parsnip to the original package
-```r
+``` r
multinom_reg(penalty = double(1), mixture = double(1)) %>%
set_engine("spark") %>%
translate()
diff --git a/man/rmd/naive_Bayes_h2o.md b/man/rmd/naive_Bayes_h2o.md
index e25b4d348..12a877a77 100644
--- a/man/rmd/naive_Bayes_h2o.md
+++ b/man/rmd/naive_Bayes_h2o.md
@@ -30,7 +30,7 @@ The **agua** extension package is required to fit this model.
[agua::h2o_train_nb()] is a wrapper around [h2o::h2o.naiveBayes()].
-```r
+``` r
naive_Bayes(Laplace = numeric(0)) %>%
set_engine("h2o") %>%
translate()
diff --git a/man/rmd/naive_Bayes_klaR.md b/man/rmd/naive_Bayes_klaR.md
index 961d2ad15..4af515369 100644
--- a/man/rmd/naive_Bayes_klaR.md
+++ b/man/rmd/naive_Bayes_klaR.md
@@ -21,7 +21,7 @@ Note that the engine argument `usekernel` is set to `TRUE` by default when using
The **discrim** extension package is required to fit this model.
-```r
+``` r
library(discrim)
naive_Bayes(smoothness = numeric(0), Laplace = numeric(0)) %>%
diff --git a/man/rmd/naive_Bayes_naivebayes.md b/man/rmd/naive_Bayes_naivebayes.md
index 63fcec9a0..6c491c62a 100644
--- a/man/rmd/naive_Bayes_naivebayes.md
+++ b/man/rmd/naive_Bayes_naivebayes.md
@@ -21,7 +21,7 @@ Note that the engine argument `usekernel` is set to `TRUE` by default when using
The **discrim** extension package is required to fit this model.
-```r
+``` r
library(discrim)
naive_Bayes(smoothness = numeric(0), Laplace = numeric(0)) %>%
diff --git a/man/rmd/nearest_neighbor_kknn.md b/man/rmd/nearest_neighbor_kknn.md
index ac6862d69..224a406d0 100644
--- a/man/rmd/nearest_neighbor_kknn.md
+++ b/man/rmd/nearest_neighbor_kknn.md
@@ -15,10 +15,13 @@ This model has 3 tuning parameters:
- `dist_power`: Minkowski Distance Order (type: double, default: 2.0)
+Parsnip changes the default range for `neighbors` to `c(1, 15)` and `dist_power` to `c(1/10, 2)`.
+
+
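+A minimal sketch of recreating those ranges (assuming the **dials** package is
+installed; both parameters are tuned on their natural scales):
+
+``` r
+library(dials)
+
+neighbors(range = c(1L, 15L))
+dist_power(range = c(1/10, 2))
+```
+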
## Translation from parsnip to the original package (regression)
-```r
+``` r
nearest_neighbor(
neighbors = integer(1),
weight_func = character(1),
@@ -49,7 +52,7 @@ nearest_neighbor(
## Translation from parsnip to the original package (classification)
-```r
+``` r
nearest_neighbor(
neighbors = integer(1),
weight_func = character(1),
diff --git a/man/rmd/null-model.md b/man/rmd/null-model.md
index 85ac9eb85..57e49bcda 100644
--- a/man/rmd/null-model.md
+++ b/man/rmd/null-model.md
@@ -6,7 +6,7 @@ For this type of model, the template of the fit calls are below:
## parsnip
-```r
+``` r
null_model() %>%
set_engine("parsnip") %>%
set_mode("regression") %>%
@@ -23,7 +23,7 @@ null_model() %>%
```
-```r
+``` r
null_model() %>%
set_engine("parsnip") %>%
set_mode("classification") %>%
diff --git a/man/rmd/one-hot.md b/man/rmd/one-hot.md
index cb221c248..b05a49315 100644
--- a/man/rmd/one-hot.md
+++ b/man/rmd/one-hot.md
@@ -5,7 +5,7 @@ By default, `model.matrix()` generates binary indicator variables for factor pre
For example, `species` and `island` both have three levels but `model.matrix()` creates two indicator variables for each:
-```r
+``` r
library(dplyr)
library(modeldata)
data(penguins)
@@ -17,7 +17,7 @@ levels(penguins$species)
## [1] "Adelie" "Chinstrap" "Gentoo"
```
-```r
+``` r
levels(penguins$island)
```
@@ -25,7 +25,7 @@ levels(penguins$island)
## [1] "Biscoe" "Dream" "Torgersen"
```
-```r
+``` r
model.matrix(~ species + island, data = penguins) %>%
colnames()
```
@@ -38,7 +38,7 @@ model.matrix(~ species + island, data = penguins) %>%
For a formula with no intercept, the first factor is expanded to indicators for _all_ factor levels but all other factors are expanded to all but one (as above):
-```r
+``` r
model.matrix(~ 0 + species + island, data = penguins) %>%
colnames()
```
@@ -53,7 +53,7 @@ For inference, this hybrid encoding can be problematic.
To generate all indicators, use this contrast:
-```r
+``` r
# Switch out the contrast method
old_contr <- options("contrasts")$contrasts
new_contr <- old_contr
@@ -69,7 +69,7 @@ model.matrix(~ species + island, data = penguins) %>%
## [5] "islandBiscoe" "islandDream" "islandTorgersen"
```
-```r
+``` r
options(contrasts = old_contr)
```
diff --git a/man/rmd/pls_mixOmics.md b/man/rmd/pls_mixOmics.md
index a4f3c7621..5041c00b5 100644
--- a/man/rmd/pls_mixOmics.md
+++ b/man/rmd/pls_mixOmics.md
@@ -19,7 +19,7 @@ This model has 2 tuning parameters:
The **plsmod** extension package is required to fit this model.
-```r
+``` r
library(plsmod)
pls(num_comp = integer(1), predictor_prop = double(1)) %>%
@@ -54,7 +54,7 @@ pls(num_comp = integer(1), predictor_prop = double(1)) %>%
The **plsmod** extension package is required to fit this model.
-```r
+``` r
library(plsmod)
pls(num_comp = integer(1), predictor_prop = double(1)) %>%
@@ -84,7 +84,7 @@ In this case, [plsmod::pls_fit()] has the same role as above but eventually targ
This package is available via the Bioconductor repository and is not accessible via CRAN. You can install using:
-```r
+``` r
if (!require("remotes", quietly = TRUE)) {
install.packages("remotes")
}
diff --git a/man/rmd/poisson_reg_gee.md b/man/rmd/poisson_reg_gee.md
index fff3b0503..2a21ef5d0 100644
--- a/man/rmd/poisson_reg_gee.md
+++ b/man/rmd/poisson_reg_gee.md
@@ -12,7 +12,7 @@ This model has no formal tuning parameters. It may be beneficial to determine th
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
poisson_reg(engine = "gee") %>%
diff --git a/man/rmd/poisson_reg_glm.md b/man/rmd/poisson_reg_glm.md
index 39aa91889..333228561 100644
--- a/man/rmd/poisson_reg_glm.md
+++ b/man/rmd/poisson_reg_glm.md
@@ -12,7 +12,7 @@ This engine has no tuning parameters.
The **poissonreg** extension package is required to fit this model.
-```r
+``` r
library(poissonreg)
poisson_reg() %>%
diff --git a/man/rmd/poisson_reg_glmer.md b/man/rmd/poisson_reg_glmer.md
index f610f5eac..fc87257fe 100644
--- a/man/rmd/poisson_reg_glmer.md
+++ b/man/rmd/poisson_reg_glmer.md
@@ -12,7 +12,7 @@ This model has no tuning parameters.
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
poisson_reg(engine = "glmer") %>%
@@ -39,7 +39,7 @@ This model can use subject-specific coefficient estimates to make predictions (i
\eta_{i} = (\beta_0 + b_{0i}) + \beta_1x_{i1}
```
-where $i$ denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subjects results.
+where `i` denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subject's results.
What happens when data are being predicted for a subject that was not used in the model fit? In that case, this package uses _only_ the population parameter estimates for prediction:
diff --git a/man/rmd/poisson_reg_glmnet.md b/man/rmd/poisson_reg_glmnet.md
index 8bde6966f..485ebfa65 100644
--- a/man/rmd/poisson_reg_glmnet.md
+++ b/man/rmd/poisson_reg_glmnet.md
@@ -24,7 +24,7 @@ The `penalty` parameter has no default and requires a single numeric value. For
The **poissonreg** extension package is required to fit this model.
-```r
+``` r
library(poissonreg)
poisson_reg(penalty = double(1), mixture = double(1)) %>%
diff --git a/man/rmd/poisson_reg_h2o.md b/man/rmd/poisson_reg_h2o.md
index 807e5e477..44670bce4 100644
--- a/man/rmd/poisson_reg_h2o.md
+++ b/man/rmd/poisson_reg_h2o.md
@@ -25,7 +25,7 @@ The choice of `mixture` depends on the engine parameter `solver`, which is autom
The **agua** extension package is required to fit this model.
-```r
+``` r
-library(poissonreg)
+library(agua)
poisson_reg(penalty = double(1), mixture = double(1)) %>%
diff --git a/man/rmd/poisson_reg_hurdle.md b/man/rmd/poisson_reg_hurdle.md
index 0d23a9d7f..d1fd006f6 100644
--- a/man/rmd/poisson_reg_hurdle.md
+++ b/man/rmd/poisson_reg_hurdle.md
@@ -12,7 +12,7 @@ This engine has no tuning parameters.
The **poissonreg** extension package is required to fit this model.
-```r
+``` r
library(poissonreg)
poisson_reg() %>%
@@ -43,7 +43,7 @@ When fitting a parsnip model with this engine directly, the formula method is re
-```r
+``` r
library(tidymodels)
tidymodels_prefer()
@@ -72,7 +72,7 @@ poisson_reg() %>%
However, when using a workflow, the best approach is to avoid using [workflows::add_formula()] and use [workflows::add_variables()] in conjunction with a model formula:
-```r
+``` r
data("bioChemists", package = "pscl")
spec <-
poisson_reg() %>%
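The recommended workflow pattern is cut off by the hunk; a sketch of its full shape, with an assumed hurdle formula (count model to the left of `|`, zero model to the right):

``` r
library(tidymodels)
library(poissonreg)

data("bioChemists", package = "pscl")

spec <-
  poisson_reg() %>%
  set_engine("hurdle")

# Declare the columns with add_variables(); the two-part hurdle formula
# goes to add_model() (fem and mar are assumed predictor choices)
workflow() %>%
  add_variables(outcomes = c(art), predictors = c(fem, mar)) %>%
  add_model(spec, formula = art ~ fem | mar) %>%
  fit(data = bioChemists)
```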
diff --git a/man/rmd/poisson_reg_stan.md b/man/rmd/poisson_reg_stan.md
index 73dda6395..eea6becd8 100644
--- a/man/rmd/poisson_reg_stan.md
+++ b/man/rmd/poisson_reg_stan.md
@@ -25,7 +25,7 @@ See [rstan::sampling()] and [rstanarm::priors()] for more information on these a
The **poissonreg** extension package is required to fit this model.
-```r
+``` r
library(poissonreg)
poisson_reg() %>%
diff --git a/man/rmd/poisson_reg_stan_glmer.md b/man/rmd/poisson_reg_stan_glmer.md
index a02cf3d43..38c5eac37 100644
--- a/man/rmd/poisson_reg_stan_glmer.md
+++ b/man/rmd/poisson_reg_stan_glmer.md
@@ -25,7 +25,7 @@ See `?rstanarm::stan_glmer` and `?rstan::sampling` for more information.
The **multilevelmod** extension package is required to fit this model.
-```r
+``` r
library(multilevelmod)
poisson_reg(engine = "stan_glmer") %>%
@@ -52,7 +52,7 @@ This model can use subject-specific coefficient estimates to make predictions (i
\eta_{i} = (\beta_0 + b_{0i}) + \beta_1x_{i1}
```
-where $i$ denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subjects results.
+where `i` denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subject's results.
What happens when data are being predicted for a subject that was not used in the model fit? In that case, this package uses _only_ the population parameter estimates for prediction:
diff --git a/man/rmd/poisson_reg_zeroinfl.md b/man/rmd/poisson_reg_zeroinfl.md
index d976c857b..ccc82d1a1 100644
--- a/man/rmd/poisson_reg_zeroinfl.md
+++ b/man/rmd/poisson_reg_zeroinfl.md
@@ -12,7 +12,7 @@ This engine has no tuning parameters.
The **poissonreg** extension package is required to fit this model.
-```r
+``` r
library(poissonreg)
poisson_reg() %>%
@@ -44,7 +44,7 @@ When fitting a parsnip model with this engine directly, the formula method is re
-```r
+``` r
library(tidymodels)
tidymodels_prefer()
@@ -73,7 +73,7 @@ poisson_reg() %>%
However, when using a workflow, the best approach is to avoid using [workflows::add_formula()] and use [workflows::add_variables()] in conjunction with a model formula:
-```r
+``` r
data("bioChemists", package = "pscl")
spec <-
poisson_reg() %>%
diff --git a/man/rmd/proportional_hazards_glmnet.md b/man/rmd/proportional_hazards_glmnet.md
index 424e33406..7d941eb99 100644
--- a/man/rmd/proportional_hazards_glmnet.md
+++ b/man/rmd/proportional_hazards_glmnet.md
@@ -24,7 +24,7 @@ The `penalty` parameter has no default and requires a single numeric value. For
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
proportional_hazards(penalty = double(1), mixture = double(1)) %>%
@@ -68,7 +68,7 @@ For example, in this model, the numeric column `rx` is used to estimate two diff
-```r
+``` r
library(survival)
library(censored)
library(dplyr)
diff --git a/man/rmd/proportional_hazards_survival.md b/man/rmd/proportional_hazards_survival.md
index f50970545..90b94e77d 100644
--- a/man/rmd/proportional_hazards_survival.md
+++ b/man/rmd/proportional_hazards_survival.md
@@ -12,7 +12,7 @@ This model has no tuning parameters.
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
proportional_hazards() %>%
@@ -42,7 +42,7 @@ The model formula can include _special_ terms, such as [survival::strata()]. The
For example, in this model, the numeric column `rx` is used to estimate two different baseline hazards for each value of the column:
-```r
+``` r
library(survival)
proportional_hazards() %>%
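A sketch of the `strata()` pattern with the `ovarian` data from survival (the column choices are illustrative):

``` r
library(censored)
library(survival)

# strata(rx) gives each value of rx its own baseline hazard
proportional_hazards() %>%
  set_engine("survival") %>%
  fit(Surv(futime, fustat) ~ age + strata(rx), data = ovarian)
```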
diff --git a/man/rmd/rand_forest_aorsf.md b/man/rmd/rand_forest_aorsf.md
index df77774b8..3dd2d5b1d 100644
--- a/man/rmd/rand_forest_aorsf.md
+++ b/man/rmd/rand_forest_aorsf.md
@@ -1,7 +1,7 @@
-For this engine, there are multiple modes: censored regression, classification, and regression
+For this engine, there are multiple modes: classification, regression, and censored regression
## Tuning Parameters
@@ -9,12 +9,12 @@ For this engine, there are multiple modes: censored regression, classification,
This model has 3 tuning parameters:
+- `mtry`: # Randomly Selected Predictors (type: integer, default: ceiling(sqrt(n_predictors)))
+
- `trees`: # Trees (type: integer, default: 500L)
- `min_n`: Minimal Node Size (type: integer, default: 5L)
-- `mtry`: # Randomly Selected Predictors (type: integer, default: ceiling(sqrt(n_predictors)))
-
Additionally, this model has one engine-specific tuning parameter:
* `split_min_stat`: Minimum test statistic required to split a node. Defaults are `3.841459` for censored regression (which is roughly a p-value of 0.05) and `0` for classification and regression. For classification, this tuning parameter should be between 0 and 1, and for regression it should be greater than or equal to 0. Higher values of this parameter cause trees grown by `aorsf` to have less depth.
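Because `split_min_stat` is engine-specific, it is supplied through `set_engine()`. Note that the censored-regression default `3.841459` is `qchisq(0.95, df = 1)`, the 5% critical value of a 1-df chi-squared test. A sketch:

``` r
library(censored)

# Engine-specific arguments go through set_engine(); higher values of
# split_min_stat yield shallower trees
rand_forest(trees = 500) %>%
  set_engine("aorsf", split_min_stat = qchisq(0.95, df = 1)) %>%
  set_mode("censored regression")
```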
@@ -24,7 +24,7 @@ Additionally, this model has one engine-specific tuning parameter:
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
rand_forest() %>%
@@ -47,7 +47,7 @@ rand_forest() %>%
The **bonsai** extension package is required to fit this model.
-```r
+``` r
library(bonsai)
rand_forest() %>%
@@ -71,7 +71,7 @@ rand_forest() %>%
The **bonsai** extension package is required to fit this model.
-```r
+``` r
library(bonsai)
rand_forest() %>%
diff --git a/man/rmd/rand_forest_h2o.md b/man/rmd/rand_forest_h2o.md
index 4a5bc5b95..5f32cb475 100644
--- a/man/rmd/rand_forest_h2o.md
+++ b/man/rmd/rand_forest_h2o.md
@@ -22,7 +22,7 @@ This model has 3 tuning parameters:
[agua::h2o_train_rf()] is a wrapper around [h2o::h2o.randomForest()].
-```r
+``` r
rand_forest(
mtry = integer(1),
trees = integer(1),
@@ -54,7 +54,7 @@ rand_forest(
## Translation from parsnip to the original package (classification)
-```r
+``` r
rand_forest(
mtry = integer(1),
trees = integer(1),
diff --git a/man/rmd/rand_forest_partykit.md b/man/rmd/rand_forest_partykit.md
index c1d87bd3e..7204c7c97 100644
--- a/man/rmd/rand_forest_partykit.md
+++ b/man/rmd/rand_forest_partykit.md
@@ -1,7 +1,7 @@
-For this engine, there are multiple modes: censored regression, regression, and classification
+For this engine, there are multiple modes: regression, classification, and censored regression
## Tuning Parameters
@@ -9,18 +9,18 @@ For this engine, there are multiple modes: censored regression, regression, and
This model has 3 tuning parameters:
-- `trees`: # Trees (type: integer, default: 500L)
-
- `min_n`: Minimal Node Size (type: integer, default: 20L)
- `mtry`: # Randomly Selected Predictors (type: integer, default: 5L)
+- `trees`: # Trees (type: integer, default: 500L)
+
## Translation from parsnip to the original package (regression)
The **bonsai** extension package is required to fit this model.
-```r
+``` r
library(bonsai)
rand_forest() %>%
@@ -44,7 +44,7 @@ rand_forest() %>%
The **bonsai** extension package is required to fit this model.
-```r
+``` r
library(bonsai)
rand_forest() %>%
@@ -70,7 +70,7 @@ rand_forest() %>%
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
rand_forest() %>%
diff --git a/man/rmd/rand_forest_randomForest.md b/man/rmd/rand_forest_randomForest.md
index bd0f76427..0d4b04213 100644
--- a/man/rmd/rand_forest_randomForest.md
+++ b/man/rmd/rand_forest_randomForest.md
@@ -22,7 +22,7 @@ This model has 3 tuning parameters:
## Translation from parsnip to the original package (regression)
-```r
+``` r
rand_forest(
mtry = integer(1),
trees = integer(1),
@@ -54,7 +54,7 @@ rand_forest(
## Translation from parsnip to the original package (classification)
-```r
+``` r
rand_forest(
mtry = integer(1),
trees = integer(1),
diff --git a/man/rmd/rand_forest_ranger.md b/man/rmd/rand_forest_ranger.md
index c4e20f1d1..50420f527 100644
--- a/man/rmd/rand_forest_ranger.md
+++ b/man/rmd/rand_forest_ranger.md
@@ -22,7 +22,7 @@ This model has 3 tuning parameters:
## Translation from parsnip to the original package (regression)
-```r
+``` r
rand_forest(
mtry = integer(1),
trees = integer(1),
@@ -55,7 +55,7 @@ rand_forest(
## Translation from parsnip to the original package (classification)
-```r
+``` r
rand_forest(
mtry = integer(1),
trees = integer(1),
diff --git a/man/rmd/rand_forest_spark.md b/man/rmd/rand_forest_spark.md
index 3753b3a3f..30d7fbc4c 100644
--- a/man/rmd/rand_forest_spark.md
+++ b/man/rmd/rand_forest_spark.md
@@ -20,7 +20,7 @@ This model has 3 tuning parameters:
## Translation from parsnip to the original package (regression)
-```r
+``` r
rand_forest(
mtry = integer(1),
trees = integer(1),
@@ -53,7 +53,7 @@ rand_forest(
## Translation from parsnip to the original package (classification)
-```r
+``` r
rand_forest(
mtry = integer(1),
trees = integer(1),
diff --git a/man/rmd/rule_fit_h2o.md b/man/rmd/rule_fit_h2o.md
index 06c5d3d37..bffacfb44 100644
--- a/man/rmd/rule_fit_h2o.md
+++ b/man/rmd/rule_fit_h2o.md
@@ -35,7 +35,7 @@ Other engine arguments of interest:
The **agua** extension package is required to fit this model.
-```r
+``` r
-library(rules)
+library(agua)
rule_fit(
@@ -73,7 +73,7 @@ rule_fit(
The **agua** extension package is required to fit this model.
-```r
+``` r
rule_fit(
trees = integer(1),
tree_depth = integer(1),
diff --git a/man/rmd/rule_fit_xrf.md b/man/rmd/rule_fit_xrf.md
index 60f7caa35..39cd4919e 100644
--- a/man/rmd/rule_fit_xrf.md
+++ b/man/rmd/rule_fit_xrf.md
@@ -31,7 +31,7 @@ This model has 8 tuning parameters:
The **rules** extension package is required to fit this model.
-```r
+``` r
library(rules)
rule_fit(
@@ -78,7 +78,7 @@ The **rules** extension package is required to fit this model.
-```r
+``` r
library(rules)
rule_fit(
diff --git a/man/rmd/survival_reg_flexsurv.md b/man/rmd/survival_reg_flexsurv.md
index c436a2e44..f99705d92 100644
--- a/man/rmd/survival_reg_flexsurv.md
+++ b/man/rmd/survival_reg_flexsurv.md
@@ -16,7 +16,7 @@ This model has 1 tuning parameter:
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
survival_reg(dist = character(1)) %>%
diff --git a/man/rmd/survival_reg_flexsurvspline.md b/man/rmd/survival_reg_flexsurvspline.md
index bb9cbee66..b7bfb41cb 100644
--- a/man/rmd/survival_reg_flexsurvspline.md
+++ b/man/rmd/survival_reg_flexsurvspline.md
@@ -14,7 +14,7 @@ This model has one engine-specific tuning parameter:
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
survival_reg() %>%
diff --git a/man/rmd/survival_reg_survival.md b/man/rmd/survival_reg_survival.md
index bbd45bcec..ea6c8eabe 100644
--- a/man/rmd/survival_reg_survival.md
+++ b/man/rmd/survival_reg_survival.md
@@ -16,7 +16,7 @@ This model has 1 tuning parameter:
The **censored** extension package is required to fit this model.
-```r
+``` r
library(censored)
survival_reg(dist = character(1)) %>%
@@ -49,7 +49,7 @@ The model formula can include _special_ terms, such as [survival::strata()]. The
For example, in this model, the numeric column `rx` is used to estimate two different scale parameters for each value of the column:
-```r
+``` r
library(survival)
survival_reg() %>%
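A sketch for this engine, where `strata(rx)` yields a separate scale parameter per value of `rx` (columns from the `ovarian` data, chosen for illustration):

``` r
library(censored)
library(survival)

# The printed survreg object lists one scale estimate per stratum
survival_reg() %>%
  fit(Surv(futime, fustat) ~ age + strata(rx), data = ovarian) %>%
  extract_fit_engine()
```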
diff --git a/man/rmd/svm_linear_LiblineaR.md b/man/rmd/svm_linear_LiblineaR.md
index 72ff0f300..03be20d7e 100644
--- a/man/rmd/svm_linear_LiblineaR.md
+++ b/man/rmd/svm_linear_LiblineaR.md
@@ -15,10 +15,12 @@ This model has 2 tuning parameters:
This engine fits models that are L2-regularized for L2-loss. In the [LiblineaR::LiblineaR()] documentation, these are types 1 (classification) and 11 (regression).
+`parsnip` changes the default range for `cost` to `c(-10, 5)` (on the log2 scale).
+
## Translation from parsnip to the original package (regression)
-```r
+``` r
svm_linear(
cost = double(1),
margin = double(1)
@@ -45,7 +47,7 @@ svm_linear(
## Translation from parsnip to the original package (classification)
-```r
+``` r
svm_linear(
cost = double(1)
) %>%
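The adjusted `cost` range added above can be checked from the model's `tunable()` metadata; a sketch:

``` r
library(tidymodels)

# The declared dials range for cost lives in the call_info list-column
svm_linear() %>%
  tunable() %>%
  dplyr::filter(name == "cost") %>%
  dplyr::pull(call_info)
```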
diff --git a/man/rmd/svm_linear_kernlab.md b/man/rmd/svm_linear_kernlab.md
index 02a41ef31..ecf2ceac8 100644
--- a/man/rmd/svm_linear_kernlab.md
+++ b/man/rmd/svm_linear_kernlab.md
@@ -13,10 +13,12 @@ This model has 2 tuning parameters:
- `margin`: Insensitivity Margin (type: double, default: 0.1)
+`parsnip` changes the default range for `cost` to `c(-10, 5)` (on the log2 scale).
+
## Translation from parsnip to the original package (regression)
-```r
+``` r
svm_linear(
cost = double(1),
margin = double(1)
@@ -43,7 +45,7 @@ svm_linear(
## Translation from parsnip to the original package (classification)
-```r
+``` r
svm_linear(
cost = double(1)
) %>%
diff --git a/man/rmd/svm_poly_kernlab.md b/man/rmd/svm_poly_kernlab.md
index cdc80cc9b..857909eb4 100644
--- a/man/rmd/svm_poly_kernlab.md
+++ b/man/rmd/svm_poly_kernlab.md
@@ -17,10 +17,12 @@ This model has 4 tuning parameters:
- `margin`: Insensitivity Margin (type: double, default: 0.1)
+`parsnip` changes the default range for `cost` to `c(-10, 5)` (on the log2 scale).
+
## Translation from parsnip to the original package (regression)
-```r
+``` r
svm_poly(
cost = double(1),
degree = integer(1),
@@ -52,7 +54,7 @@ svm_poly(
## Translation from parsnip to the original package (classification)
-```r
+``` r
svm_poly(
cost = double(1),
degree = integer(1),
diff --git a/man/rmd/svm_rbf_kernlab.md b/man/rmd/svm_rbf_kernlab.md
index dc311a7c1..1e199475a 100644
--- a/man/rmd/svm_rbf_kernlab.md
+++ b/man/rmd/svm_rbf_kernlab.md
@@ -17,10 +17,12 @@ This model has 3 tuning parameters:
There is no default for the radial basis function kernel parameter. kernlab estimates it from the data using a heuristic method. See [kernlab::sigest()]. This method uses random numbers so, without setting the seed before fitting, the model will not be reproducible.
+`parsnip` changes the default range for `cost` to `c(-10, 5)` (on the log2 scale).
+
## Translation from parsnip to the original package (regression)
-```r
+``` r
svm_rbf(
cost = double(1),
rbf_sigma = double(1),
@@ -49,7 +51,7 @@ svm_rbf(
## Translation from parsnip to the original package (classification)
-```r
+``` r
svm_rbf(
cost = double(1),
rbf_sigma = double(1)
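Since `kernlab::sigest()` samples the data to pick `rbf_sigma`, reproducible fits need a seed; a minimal sketch:

``` r
library(tidymodels)

set.seed(1)  # sigest() draws random rows, so seed before fitting
svm_rbf() %>%
  set_mode("classification") %>%
  set_engine("kernlab") %>%
  fit(Species ~ ., data = iris)
```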
diff --git a/man/rmd/template-no-pooling.Rmd b/man/rmd/template-no-pooling.Rmd
index 35263c68f..b6b8ef2f1 100644
--- a/man/rmd/template-no-pooling.Rmd
+++ b/man/rmd/template-no-pooling.Rmd
@@ -6,7 +6,7 @@ This model can use subject-specific coefficient estimates to make predictions (i
\eta_{i} = (\beta_0 + b_{0i}) + \beta_1x_{i1}
```
-where $i$ denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subjects results.
+where `i` denotes the `i`th independent experimental unit (e.g. subject). When the model has seen subject `i`, it can use that subject's data to adjust the _population_ intercept to be more specific to that subject's results.
What happens when data are being predicted for a subject that was not used in the model fit? In that case, this package uses _only_ the population parameter estimates for prediction: