
Generalized linear models with elastic net regularization. Calls glmnet::glmnet() from package glmnet.

The default for the hyperparameter family is set to "gaussian".

Details

Caution: This learner is different from learners calling glmnet::cv.glmnet() in that it does not use the internal optimization of the parameter lambda. Instead, lambda needs to be tuned by the user (e.g., via mlr3tuning). When lambda is tuned, glmnet will be trained for each tuning iteration. While fitting the whole path of lambdas would be more efficient, as is done by default in glmnet::glmnet(), tuning/selecting the parameter at prediction time (via parameter s) is currently not supported in mlr3 (at least not in an efficient manner). Tuning the s parameter is therefore currently discouraged.
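
For illustration, a minimal sketch of tuning lambda with mlr3tuning; the task, resampling, grid resolution, and lambda range are arbitrary choices for demonstration (not recommendations), and the tune() argument names assume a recent mlr3tuning version:

library(mlr3)
library(mlr3learners)
library(mlr3tuning)
library(paradox)

# Tune a single lambda value on a log scale; note that glmnet is refit
# from scratch in every tuning iteration (see the caution above)
instance = tune(
  tuner = tnr("grid_search", resolution = 20),
  task = tsk("mtcars"),
  learner = lrn("regr.glmnet", alpha = 1),
  resampling = rsmp("cv", folds = 3),
  measures = msr("regr.mse"),
  search_space = ps(lambda = p_dbl(1e-04, 1, logscale = TRUE))
)

instance$result  # best lambda and its cross-validated MSE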

When the data are i.i.d. and efficiency is key, we recommend using the respective auto-tuning counterparts in mlr_learners_classif.cv_glmnet() or mlr_learners_regr.cv_glmnet(). However, in some situations this is not applicable, usually when data are imbalanced or not i.i.d. (longitudinal, time-series) and tuning requires custom resampling strategies (blocked design, stratification).
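
As a sketch of the non-i.i.d. case, blocked resampling can be configured via the "group" column role; treating "cyl" of the mtcars task as the blocking variable is purely an assumption for demonstration:

library(mlr3)
library(mlr3learners)

task = tsk("mtcars")
# Observations sharing a group value are never split across folds
task$set_col_roles("cyl", roles = "group")

rr = resample(task, lrn("regr.glmnet"), rsmp("cv", folds = 3))
rr$aggregate(msr("regr.mse"))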

Dictionary

This Learner can be instantiated via the dictionary mlr_learners or with the associated sugar function lrn():

mlr_learners$get("regr.glmnet")
lrn("regr.glmnet")

Meta Information

  • Task type: “regr”

  • Predict Types: “response”

  • Feature Types: “logical”, “integer”, “numeric”

  • Required Packages: mlr3, mlr3learners, glmnet

Parameters

Id                    Type       Default   Levels                    Range
alignment             character  lambda    lambda, fraction          -
alpha                 numeric    1         -                         [0, 1]
big                   numeric    9.9e+35   -                         (-∞, ∞)
devmax                numeric    0.999     -                         [0, 1]
dfmax                 integer    -         -                         [0, ∞)
eps                   numeric    1e-06     -                         [0, 1]
epsnr                 numeric    1e-08     -                         [0, 1]
exact                 logical    FALSE     TRUE, FALSE               -
exclude               integer    -         -                         [1, ∞)
exmx                  numeric    250       -                         (-∞, ∞)
family                character  gaussian  gaussian, poisson         -
fdev                  numeric    1e-05     -                         [0, 1]
gamma                 numeric    1         -                         (-∞, ∞)
grouped               logical    TRUE      TRUE, FALSE               -
intercept             logical    TRUE      TRUE, FALSE               -
keep                  logical    FALSE     TRUE, FALSE               -
lambda                untyped    -         -                         -
lambda.min.ratio      numeric    -         -                         [0, 1]
lower.limits          untyped    -         -                         -
maxit                 integer    100000    -                         [1, ∞)
mnlam                 integer    5         -                         [1, ∞)
mxit                  integer    100       -                         [1, ∞)
mxitnr                integer    25        -                         [1, ∞)
newoffset             untyped    -         -                         -
nlambda               integer    100       -                         [1, ∞)
offset                untyped    -         -                         -
parallel              logical    FALSE     TRUE, FALSE               -
penalty.factor        untyped    -         -                         -
pmax                  integer    -         -                         [0, ∞)
pmin                  numeric    1e-09     -                         [0, 1]
prec                  numeric    1e-10     -                         (-∞, ∞)
relax                 logical    FALSE     TRUE, FALSE               -
s                     numeric    0.01      -                         [0, ∞)
standardize           logical    TRUE      TRUE, FALSE               -
standardize.response  logical    FALSE     TRUE, FALSE               -
thresh                numeric    1e-07     -                         [0, ∞)
trace.it              integer    0         -                         [0, 1]
type.gaussian         character  -         covariance, naive         -
type.logistic         character  -         Newton, modified.Newton   -
type.multinomial      character  -         ungrouped, grouped        -
upper.limits          untyped    -         -                         -
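
Hyperparameters can be set at construction or changed later via the learner's param_set; the values below are arbitrary examples:

learner = lrn("regr.glmnet", alpha = 0.5, s = 0.1)

# equivalently, after construction
learner$param_set$values$nlambda = 200
learner$param_set$values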

References

Friedman J, Hastie T, Tibshirani R (2010). “Regularization Paths for Generalized Linear Models via Coordinate Descent.” Journal of Statistical Software, 33(1), 1-22. doi:10.18637/jss.v033.i01.

See also

Other Learner: mlr_learners_classif.cv_glmnet, mlr_learners_classif.glmnet, mlr_learners_classif.kknn, mlr_learners_classif.lda, mlr_learners_classif.log_reg, mlr_learners_classif.multinom, mlr_learners_classif.naive_bayes, mlr_learners_classif.nnet, mlr_learners_classif.qda, mlr_learners_classif.ranger, mlr_learners_classif.svm, mlr_learners_classif.xgboost, mlr_learners_regr.cv_glmnet, mlr_learners_regr.kknn, mlr_learners_regr.km, mlr_learners_regr.lm, mlr_learners_regr.nnet, mlr_learners_regr.ranger, mlr_learners_regr.svm, mlr_learners_regr.xgboost

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrGlmnet

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerRegrGlmnet$new()

Method selected_features()

Returns the set of selected features as reported by glmnet::predict.glmnet() with type set to "nonzero".

Usage

LearnerRegrGlmnet$selected_features(lambda = NULL)

Arguments

lambda

(numeric(1))
Custom lambda; defaults to the active lambda as determined by the parameter set.

Returns

(character()) of feature names.
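
A short usage sketch (training on the mtcars task and the lambda value 0.1 are arbitrary choices):

learner = lrn("regr.glmnet")
learner$train(tsk("mtcars"))

# feature names with nonzero coefficients at the given lambda
learner$selected_features(lambda = 0.1)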


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrGlmnet$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

if (requireNamespace("glmnet", quietly = TRUE)) {
# Define the Learner and set parameter values
learner = lrn("regr.glmnet")
print(learner)

# Define a Task
task = tsk("mtcars")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# print the model
print(learner$model)

# importance method (not available for this learner, so this is a no-op)
if ("importance" %in% learner$properties) print(learner$importance())

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
}
#> <LearnerRegrGlmnet:regr.glmnet>: GLM with Elastic Net Regularization
#> * Model: -
#> * Parameters: family=gaussian
#> * Packages: mlr3, mlr3learners, glmnet
#> * Predict Types:  [response]
#> * Feature Types: logical, integer, numeric
#> * Properties: weights
#> 
#> Call:  (if (cv) glmnet::cv.glmnet else glmnet::glmnet)(x = data, y = target,      family = "gaussian") 
#> 
#>    Df  %Dev Lambda
#> 1   0  0.00 5.2860
#> 2   1 13.52 4.8170
#> 3   1 24.74 4.3890
#> 4   1 34.06 3.9990
#> 5   2 42.65 3.6440
#> 6   2 50.35 3.3200
#> 7   3 56.76 3.0250
#> 8   3 62.08 2.7560
#> 9   3 66.49 2.5110
#> 10  3 70.16 2.2880
#> 11  3 73.20 2.0850
#> 12  3 75.73 1.9000
#> 13  3 77.83 1.7310
#> 14  3 79.57 1.5770
#> 15  3 81.02 1.4370
#> 16  3 82.22 1.3090
#> 17  3 83.22 1.1930
#> 18  3 84.04 1.0870
#> 19  3 84.73 0.9905
#> 20  3 85.30 0.9025
#> 21  3 85.77 0.8223
#> 22  4 86.23 0.7493
#> 23  4 86.66 0.6827
#> 24  4 87.02 0.6221
#> 25  4 87.32 0.5668
#> 26  4 87.57 0.5165
#> 27  4 87.78 0.4706
#> 28  4 87.97 0.4288
#> 29  5 88.11 0.3907
#> 30  5 88.25 0.3560
#> 31  5 88.35 0.3244
#> 32  6 88.46 0.2955
#> 33  6 88.56 0.2693
#> 34  6 88.64 0.2454
#> 35  6 88.71 0.2236
#> 36  6 88.77 0.2037
#> 37  6 88.82 0.1856
#> 38  6 88.86 0.1691
#> 39  6 88.89 0.1541
#> 40  6 88.92 0.1404
#> 41  6 88.94 0.1279
#> 42  7 88.96 0.1166
#> 43  7 88.99 0.1062
#> 44  7 89.01 0.0968
#> 45  7 89.03 0.0882
#> 46  7 89.04 0.0803
#> 47  7 89.05 0.0732
#> 48  8 89.18 0.0667
#> 49  8 89.34 0.0608
#> 50  8 89.47 0.0554
#> 51  8 89.58 0.0505
#> 52  8 89.68 0.0460
#> 53  8 89.75 0.0419
#> 54  8 89.82 0.0382
#> 55  8 89.87 0.0348
#> 56  8 89.91 0.0317
#> 57  7 89.95 0.0289
#> 58  7 89.98 0.0263
#> 59  7 89.99 0.0240
#> 60  7 90.01 0.0218
#> 61  7 90.03 0.0199
#> 62  7 90.04 0.0181
#> 63  7 90.05 0.0165
#> 64  7 90.05 0.0151
#> 65  7 90.06 0.0137
#> 66  8 90.07 0.0125
#> 67  9 90.08 0.0114
#> 68 10 90.09 0.0104
#> 69 10 90.10 0.0095
#> 70 10 90.11 0.0086
#> 71 10 90.12 0.0078
#> 72 10 90.13 0.0072
#> 73 10 90.13 0.0065
#> 74 10 90.13 0.0059
#> 75 10 90.14 0.0054
#> 76 10 90.14 0.0049
#> 77 10 90.14 0.0045
#> 78 10 90.14 0.0041
#> 79 10 90.15 0.0037
#> 80 10 90.15 0.0034
#> Warning: Multiple lambdas have been fit. Lambda will be set to 0.01 (see parameter 's').
#> regr.mse 
#> 8.759691