Ordinary linear regression.
Calls stats::lm().
Offset
If a Task has a column with the role offset, it is automatically used during training.
The offset is incorporated through the formula interface to ensure compatibility with stats::lm(): it is added to the model formula as offset(<column_name>) and the column is included in the training data.
During prediction, the offset column from the test set is used by default (use_pred_offset = TRUE).
If use_pred_offset = FALSE is set, a zero offset is applied instead, effectively disabling the offset adjustment during prediction.
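The offset mechanics described above can be sketched as follows. The choice of task and offset column here is purely illustrative:

```r
library(mlr3)
library(mlr3learners)

task = tsk("mtcars")

# declare an existing numeric column as the offset;
# it is then passed to the model formula as offset(carb)
task$set_col_roles("carb", roles = "offset")

learner = lrn("regr.lm")
learner$train(task)

# disable the offset adjustment at prediction time
learner$param_set$values$use_pred_offset = FALSE
```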
Dictionary
This mlr3::Learner can be instantiated via the dictionary mlr3::mlr_learners or with the associated sugar function mlr3::lrn():
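The two construction routes look like this:

```r
library(mlr3)
library(mlr3learners)

# via the sugar function
learner = lrn("regr.lm")

# equivalently, via the dictionary
learner = mlr_learners$get("regr.lm")
```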
Meta Information
- Task type: "regr"
- Predict Types: "response", "se"
- Feature Types: "logical", "integer", "numeric", "character", "factor"
- Required Packages: mlr3, mlr3learners, stats
Parameters
| Id | Type | Default | Levels | Range |
|----|------|---------|--------|-------|
| df | numeric | Inf | - | \((-\infty, \infty)\) |
| interval | character | - | none, confidence, prediction | - |
| level | numeric | 0.95 | - | \((-\infty, \infty)\) |
| model | logical | TRUE | TRUE, FALSE | - |
| pred.var | untyped | - | - | - |
| qr | logical | TRUE | TRUE, FALSE | - |
| scale | numeric | NULL | - | \((-\infty, \infty)\) |
| singular.ok | logical | TRUE | TRUE, FALSE | - |
| x | logical | FALSE | TRUE, FALSE | - |
| y | logical | FALSE | TRUE, FALSE | - |
| rankdeficient | character | - | warnif, simple, non-estim, NA, NAwarn | - |
| tol | numeric | 1e-07 | - | \((-\infty, \infty)\) |
| verbose | logical | FALSE | TRUE, FALSE | - |
| use_pred_offset | logical | TRUE | TRUE, FALSE | - |
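Hyperparameters can be set at construction or changed afterwards through the ParamSet, for example:

```r
library(mlr3)
library(mlr3learners)

# pass hyperparameters at construction ...
learner = lrn("regr.lm", singular.ok = FALSE)

# ... or set them later via the ParamSet
learner$param_set$values$x = TRUE
```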
Contrasts
To ensure reproducibility, this learner always uses the default contrasts:
- contr.treatment() for unordered factors, and
- contr.poly() for ordered factors.
Setting the option "contrasts" does not have any effect.
Instead, set the respective hyperparameter or use mlr3pipelines to create dummy features.
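One way to take control of the encoding is to create dummy features explicitly with mlr3pipelines before the learner sees the data. A sketch using po("encode") with one-hot encoding (one of several methods the PipeOp offers):

```r
library(mlr3)
library(mlr3learners)
library(mlr3pipelines)

# encode factor columns as one-hot dummy features, then fit the linear model
graph = po("encode", method = "one-hot") %>>% lrn("regr.lm")
glearner = as_learner(graph)
```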
See also
- Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter2/data_and_basic_modeling.html#sec-learners 
- Package mlr3extralearners for more learners. 
- as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
- mlr3pipelines to combine learners with pre- and postprocessing steps. 
- Extension packages for additional task types:
  - mlr3proba for probabilistic supervised regression and survival analysis.
  - mlr3cluster for unsupervised clustering.
- mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces. 
Other Learner:
mlr_learners_classif.cv_glmnet,
mlr_learners_classif.glmnet,
mlr_learners_classif.kknn,
mlr_learners_classif.lda,
mlr_learners_classif.log_reg,
mlr_learners_classif.multinom,
mlr_learners_classif.naive_bayes,
mlr_learners_classif.nnet,
mlr_learners_classif.qda,
mlr_learners_classif.ranger,
mlr_learners_classif.svm,
mlr_learners_classif.xgboost,
mlr_learners_regr.cv_glmnet,
mlr_learners_regr.glmnet,
mlr_learners_regr.kknn,
mlr_learners_regr.km,
mlr_learners_regr.nnet,
mlr_learners_regr.ranger,
mlr_learners_regr.svm,
mlr_learners_regr.xgboost
Super classes
mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrLM
Methods
Inherited methods
- mlr3::Learner$base_learner()
- mlr3::Learner$configure()
- mlr3::Learner$encapsulate()
- mlr3::Learner$format()
- mlr3::Learner$help()
- mlr3::Learner$predict()
- mlr3::Learner$predict_newdata()
- mlr3::Learner$print()
- mlr3::Learner$reset()
- mlr3::Learner$selected_features()
- mlr3::Learner$train()
- mlr3::LearnerRegr$predict_newdata_fast()
Examples
# Define the Learner and set parameter values
learner = lrn("regr.lm")
print(learner)
#> 
#> ── <LearnerRegrLM> (regr.lm): Linear Model ─────────────────────────────────────
#> • Model: -
#> • Parameters: use_pred_offset=TRUE
#> • Packages: mlr3, mlr3learners, and stats
#> • Predict Types: [response] and se
#> • Feature Types: logical, integer, numeric, character, and factor
#> • Encapsulation: none (fallback: -)
#> • Properties: offset and weights
#> • Other settings: use_weights = 'use'
# Define a Task
task = tsk("mtcars")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
# Print the model
print(learner$model)
#> 
#> Call:
#> stats::lm(formula = form, data = data)
#> 
#> Coefficients:
#> (Intercept)           am         carb          cyl         disp         drat  
#>   10.337625     4.732097    -0.034816     0.215756     0.005722     0.737139  
#>        gear           hp         qsec           vs           wt  
#>   -0.730979    -0.021468     0.868526     0.560087    -2.376125  
#> 
# Print importance scores, if the learner supports them
if ("importance" %in% learner$properties) print(learner$importance())
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
#> regr.mse 
#> 9.640261
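Since the learner also supports the predict type "se", standard errors of the predictions can be requested as well. A short variation on the example above:

```r
library(mlr3)
library(mlr3learners)

task = tsk("mtcars")
ids = partition(task)

# request standard errors in addition to the response
learner = lrn("regr.lm", predict_type = "se")
learner$train(task, row_ids = ids$train)

predictions = learner$predict(task, row_ids = ids$test)
head(predictions$se)
```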