R/LearnerRegrGlmnet.R
mlr_learners_regr.glmnet.Rd

Generalized linear models with elastic net regularization.
Calls glmnet::glmnet() from package glmnet.
The default for the hyperparameter family is changed to "gaussian".
Caution: This learner differs from cv_glmnet in that it does not use the
internal optimization of lambda; this parameter needs to be tuned by the user.
In practice, one tunes the parameter s, which is used at predict time.
See https://stackoverflow.com/questions/50995525/ for more information.
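As a minimal sketch of how s is supplied at predict time (the task, the value of s, and the measure chosen here are illustrative, not defaults):

```r
# A minimal sketch, assuming mlr3, mlr3learners, and glmnet are installed.
library(mlr3)
library(mlr3learners)

task = tsk("mtcars")          # built-in regression task, used for illustration
learner = lrn("regr.glmnet")

# s is the penalty (lambda) used at predict time;
# 0.1 is an arbitrary value -- in practice it should be tuned.
learner$param_set$values$s = 0.1

learner$train(task)
prediction = learner$predict(task)
prediction$score(msr("regr.mse"))
```

Because glmnet fits the whole regularization path in one call, tuning s only changes which point on that path is used for prediction; the model does not need to be refit for each candidate value.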
This Learner can be instantiated via the dictionary mlr_learners or with the associated sugar function lrn():

mlr_learners$get("regr.glmnet")
lrn("regr.glmnet")
Friedman J, Hastie T, Tibshirani R (2010). “Regularization Paths for Generalized Linear Models via Coordinate Descent.” Journal of Statistical Software, 33(1), 1--22. doi:10.18637/jss.v033.i01.
mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrGlmnet
new(): Creates a new instance of this R6 class.
LearnerRegrGlmnet$new()
clone(): The objects of this class are cloneable with this method.
LearnerRegrGlmnet$clone(deep = FALSE)
deep: Whether to make a deep clone.
if (requireNamespace("glmnet")) {
  learner = mlr3::lrn("regr.glmnet")
  print(learner)

  # available parameters:
  learner$param_set$ids()
}
#> <LearnerRegrGlmnet:regr.glmnet>
#> * Model: -
#> * Parameters: family=gaussian
#> * Packages: glmnet
#> * Predict Type: response
#> * Feature types: logical, integer, numeric
#> * Properties: weights
#>  [1] "family"            "offset"            "alpha"             "type.measure"
#>  [5] "s"                 "nlambda"           "lambda.min.ratio"  "lambda"
#>  [9] "standardize"       "intercept"         "thresh"            "dfmax"
#> [13] "pmax"              "exclude"           "penalty.factor"    "lower.limits"
#> [17] "upper.limits"      "maxit"             "type.gaussian"     "type.logistic"
#> [21] "type.multinomial"  "keep"              "parallel"          "trace.it"
#> [25] "alignment"         "grouped"           "relax"             "fdev"
#> [29] "devmax"            "eps"               "epsnr"             "big"
#> [33] "mnlam"             "pmin"              "exmx"              "prec"
#> [37] "mxit"              "mxitnr"            "newoffset"         "predict.gamma"
#> [41] "exact"             "gamma"