- BREAKING CHANGE: Remove the `$loglik()` method from all learners.
- feat: Update the hyperparameter set of `lrn("classif.ranger")` and `lrn("regr.ranger")` for ranger 0.17.0, adding the `na.action` parameter and the `"missings"` property, as well as the `poisson` splitrule for regression with a new `poisson.tau` parameter (see the sketch below).
- compatibility: mlr3 0.22.0.
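A minimal sketch of the new ranger options, assuming ranger >= 0.17.0 is installed; the task and parameter values are illustrative, not prescribed by this entry:

```r
library(mlr3)
library(mlr3learners)

# na.action = "na.learn" lets ranger handle missing values internally,
# which is reflected by the new "missings" property of the learner.
learner = lrn("regr.ranger",
  na.action   = "na.learn",
  splitrule   = "poisson",   # new splitrule for count-like regression targets
  poisson.tau = 1            # its accompanying tuning constant
)
"missings" %in% learner$properties  # TRUE with ranger >= 0.17.0
learner$train(tsk("mtcars"))
```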
- fix: Hyperparameter set of `lrn("classif.ranger")` and `lrn("regr.ranger")`: remove the `alpha` and `minprop` hyperparameters, remove the default of `respect.unordered.factors`, change the lower bound of `max_depth` from 0 to 1, and remove `se.method` from `lrn("classif.ranger")`.
- feat: Use `base_margin` in xgboost learners (#205).
- fix: Validation for learner `lrn("regr.xgboost")` now works properly. Previously, the training data was used.
- feat: Add weights for logistic regression again, which were incorrectly removed in a previous release (#265).
- BREAKING CHANGE: When using internal tuning for xgboost learners, the `eval_metric` must now be set explicitly. This forces a conscious decision about which performance metric to use for early stopping.
- BREAKING CHANGE: Change the xgboost default for `nrounds` from 1 to 1000.
- feat: `LearnerClassifXgboost` and `LearnerRegrXgboost` now support internal tuning and validation, which also works in conjunction with `mlr3pipelines` (see the sketch below).
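A minimal sketch of the validation and early-stopping workflow, including the explicit `eval_metric` required by the breaking change above; the metric, split fraction, and round counts are illustrative:

```r
library(mlr3)
library(mlr3learners)

learner = lrn("regr.xgboost",
  nrounds               = 1000,   # new default upper bound for boosting rounds
  early_stopping_rounds = 10,     # stop once the validation score stops improving
  eval_metric           = "rmse"  # must now be chosen explicitly
)
learner$validate = 0.2            # hold out 20% of the training data for validation
learner$train(tsk("mtcars"))
learner$internal_valid_scores     # validation score of the early-stopped model
```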
- Adaptation to the new paradox version 1.0.0.
- Adaptation to the memory optimization in mlr3 0.17.1.
- Added labels to learners.
- Added a `formula` argument to the `nnet` learner and support for feature type `"integer"` (see the sketch below).
- Added the `min.bucket` parameter to `classif.ranger` and `regr.ranger`.
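A small sketch of the new `formula` argument for the nnet-based learner; the formula, task, and network size are illustrative:

```r
library(mlr3)
library(mlr3learners)

# Override the default target ~ . formula with a custom one.
learner = lrn("classif.nnet",
  formula = Species ~ Petal.Length + Petal.Width,
  size    = 4   # nnet's hidden layer size, unrelated to the new argument
)
learner$train(tsk("iris"))
```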
- Enable new early stopping mechanism for xgboost.
- Improved documentation.
- fix: Unloading `mlr3learners` removes its learners from the dictionary (see the sketch below).
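A quick way to observe the fixed behaviour; a sketch, assuming no other loaded package imports `mlr3learners`:

```r
library(mlr3)
library(mlr3learners)

"classif.ranger" %in% mlr_learners$keys()  # TRUE while mlr3learners is loaded
unloadNamespace("mlr3learners")
"classif.ranger" %in% mlr_learners$keys()  # FALSE after the fix
```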
- Added the `regr.nnet` learner.
- Removed the option to use weights in `classif.log_reg`.
- Added a `default_values()` function for ranger and svm learners (see the sketch below).
- Improved documentation.
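A sketch of how `default_values()` can be used; the generic is assumed to come from mlr3tuning, and the signature `default_values(learner, search_space, task)` as well as the chosen parameters are assumptions, not spelled out in this entry:

```r
library(mlr3)
library(mlr3learners)
library(mlr3tuning)  # assumed home of the default_values() generic
library(paradox)

learner = lrn("classif.ranger")
search_space = ps(
  mtry          = p_int(1, 20),
  min.node.size = p_int(1, 10)
)
# Returns data-dependent defaults (e.g. mtry derived from the number of features).
default_values(learner, search_space, tsk("sonar"))
```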
- Survival learners have been moved to mlr3extralearners (maintained on GitHub): https://github.com/mlr-org/mlr3extralearners
- Most learners now reorder the columns in the predict task according to the order of columns in the training task.
- Removed workaround for old mlr3 versions.
- `eval_metric()` is now explicitly set for xgboost learners to silence a deprecation warning.
- Improved how the added hyperparameter `mtry.ratio` is converted to `mtry` to simplify tuning (see the sketch below).
- Multiple updates to hyperparameter sets.
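For reference, `mtry.ratio` is resolved to an integer `mtry` from the number of features of the training task, which keeps tuning ranges task-independent. A minimal sketch with illustrative values:

```r
library(mlr3)
library(mlr3learners)

learner = lrn("classif.ranger", mtry.ratio = 0.5)
task = tsk("sonar")   # 60 features, so mtry.ratio = 0.5 corresponds to mtry = 30
learner$train(task)
```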
- Fixed the internal encoding of the positive class for classification learners based on `glm` and `glmnet` (#199). While predictions in previous versions were correct, the estimated coefficients had the wrong sign.
- Reworked the handling of `lambda` and `s` for `glmnet` learners (#197).
- Learners based on `glmnet` now support extracting the selected features (#200).
- Learners based on `kknn` now raise an exception if `k >= n` (#191).
- Learners based on `ranger` now come with the virtual hyperparameter `mtry.ratio`, which sets the hyperparameter `mtry` based on the proportion of features to use.
- Multiple learners now support the extraction of the log-likelihood (via the method `$loglik()`), allowing the calculation of measures like AIC or BIC in `mlr3` (#182); a sketch follows below.
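A sketch of the AIC calculation enabled by `$loglik()` (note this method was removed again in a later release, see the breaking change at the top). The lm-based learner and the manual AIC formula are illustrative, assuming `regr.lm` is among the learners implementing the method:

```r
library(mlr3)
library(mlr3learners)

learner = lrn("regr.lm")
learner$train(tsk("mtcars"))

ll  = learner$loglik()                 # log-likelihood of the fitted model
k   = length(coef(learner$model)) + 1  # coefficients plus the error variance
aic = 2 * k - 2 * as.numeric(ll)
```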
- Fixed SVM learners for the new release of package `e1071`.
- Changed hyperparameters of all learners so that they run sequentially with their defaults. The new function `set_threads()` in mlr3 provides a generic way to set the respective hyperparameter to the desired number of parallel threads (see the sketch below).
- Added the `survival:aft` objective to `surv.xgboost`.
- Removed the hyperparameter `predict.all` from ranger learners (#172).
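A one-liner sketch of the threading workflow referenced above; the learner and thread count are illustrative:

```r
library(mlr3)
library(mlr3learners)

learner = lrn("classif.ranger")         # now single-threaded by default
set_threads(learner, n = 4)             # generic mlr3 helper, sets num.threads = 4 here
learner$param_set$values$num.threads
```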
- Fixed stochastic test failures on Solaris.
- Fixed `surv.ranger`, cf. mlr-org/mlr3proba#165.
- Added the `classif.nnet` learner (moved from `mlr3extralearners`).
- Fixed a bug in the survival random forest `LearnerSurvRanger`.
- Disabled some `glmnet` tests on Solaris.
- Removed the dependency on the orphaned package `bibtex`.
- Fixed a potential label switch in `classif.glmnet` and `classif.cv_glmnet` with `predict_type` set to `"prob"` (#155).
- Fixed learners from package `glmnet` to be more robust if the order of features has changed between train and predict.
- The `$model` slot of the {kknn} learner now returns a list containing some information that is used during the predict step. Before, the slot was empty because there is no training step for kknn.
- Compact in-memory representation of R6 objects to save space when saving mlr3 objects via `saveRDS()`, `serialize()`, etc.
- glmnet learners: `penalty.factor` is a vector parameter, not a `ParamDbl` (#141)
- glmnet: Add parameters `mxitnr` and `epsnr` from the glmnet v4.0 update
- Add learner `surv.glmnet` (#130)
- Suggest package `mlr3proba` (#144)
- Add learner `surv.xgboost` (#135)
- Add learner `surv.ranger` (#134)
- Split the glmnet learner into `cv_glmnet` and `glmnet` (#99)
- glmnet learners: Add `predict.gamma` and `newoffset` arguments (#98)
- We now test that all learners can be constructed without parameters.
- A new custom "Paramtest", which lives in `inst/paramtest`, was added. This test checks against the arguments of the upstream train & predict functions and ensures that all parameters are implemented in the respective mlr3 learner (#96).
- Many missing parameters were added to learners. See #96 for a complete list.
- Add parameter `interaction_constraints` to {xgboost} learners (#97).
- Added learner `classif.multinom` from package `nnet`.
- Learners `regr.lm` and `classif.log_reg` now ignore the global option `"contrasts"`.
- Add vignette `additional-learners.Rmd` listing all mlr3 custom learners.
- Move `Learner*Glmnet` to `Learner*CVGlmnet` and add `Learner*Glmnet` (without internal tuning) (#90)
- Add parameter `interaction_constraints` (#95)
- Added missing feature type `logical()` to multiple learners.
- Added parameters and parameter dependencies to `regr.glmnet`, `regr.km`, `regr.ranger`, `regr.svm`, `regr.xgboost`, `classif.glmnet`, `classif.lda`, `classif.naive_bayes`, `classif.qda`, `classif.ranger` and `classif.svm`.
- `glmnet`: Added the `relax` parameter (v3.0).
- `xgboost`: Updated parameters for v0.90.0.2.
- Fixed a bug in `*.xgboost` and `*.svm` which was triggered if columns were reordered between `$train()` and `$predict()`.
- Changes to work with the new `mlr3::Learner` API.
- Improved documentation.
- Added references.
- Add new parameters of xgboost version 0.90.2.
- Add parameter dependencies for xgboost.
- Maintenance release.
- Initial upload to CRAN.