Ordinary linear regression. Calls stats::lm().
Dictionary
This mlr3::Learner can be instantiated via the dictionary mlr3::mlr_learners or with the associated sugar function mlr3::lrn().
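Both construction routes yield the same learner object. A minimal sketch, assuming mlr3 and mlr3learners are installed and loaded:

```r
library(mlr3)
library(mlr3learners)

# Construct via the sugar function ...
learner = lrn("regr.lm")

# ... or equivalently via the dictionary
learner = mlr_learners$get("regr.lm")
```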
Meta Information
Task type: “regr”
Predict Types: “response”, “se”
Feature Types: “logical”, “integer”, “numeric”, “character”, “factor”
Required Packages: mlr3, mlr3learners, stats
Parameters
Id | Type | Default | Levels | Range
df | numeric | Inf | - | \((-\infty, \infty)\)
interval | character | - | none, confidence, prediction | -
level | numeric | 0.95 | - | \((-\infty, \infty)\)
model | logical | TRUE | TRUE, FALSE | -
offset | logical | - | TRUE, FALSE | -
pred.var | untyped | - | - | -
qr | logical | TRUE | TRUE, FALSE | -
scale | numeric | NULL | - | \((-\infty, \infty)\)
singular.ok | logical | TRUE | TRUE, FALSE | -
x | logical | FALSE | TRUE, FALSE | -
y | logical | FALSE | TRUE, FALSE | -
rankdeficient | character | - | warnif, simple, non-estim, NA, NAwarn | -
tol | numeric | 1e-07 | - | \((-\infty, \infty)\)
verbose | logical | FALSE | TRUE, FALSE | -
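Hyperparameters from the table above can be set at construction time or changed afterwards via the parameter set. A short sketch (the particular values chosen here are illustrative, not defaults):

```r
library(mlr3)
library(mlr3learners)

# Pass hyperparameters directly to the sugar function, e.g. to request
# confidence intervals from the underlying stats::predict.lm()
learner = lrn("regr.lm", interval = "confidence", level = 0.9)

# Hyperparameters can also be changed after construction
learner$param_set$values$singular.ok = TRUE

# Standard errors are available because "se" is a supported predict type
learner$predict_type = "se"
```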
Contrasts
To ensure reproducibility, this learner always uses the default contrasts: contr.treatment() for unordered factors and contr.poly() for ordered factors. Setting the option "contrasts" does not have any effect. Instead, set the respective hyperparameter or use mlr3pipelines to create dummy features.
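The mlr3pipelines route mentioned above can be sketched as follows, using PipeOpEncode to create explicit dummy columns before the model sees the data (a sketch assuming mlr3pipelines is installed; the choice of treatment encoding is illustrative):

```r
library(mlr3)
library(mlr3learners)
library(mlr3pipelines)

# Encode factors as treatment-contrast dummy columns before fitting the
# linear model, making the encoding explicit in the pipeline rather than
# relying on lm()'s contrast handling
graph = po("encode", method = "treatment") %>>% lrn("regr.lm")
glearner = as_learner(graph)
```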
See also
Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter2/data_and_basic_modeling.html#sec-learners
Package mlr3extralearners for more learners.
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
mlr3pipelines to combine learners with pre- and postprocessing steps.
Extension packages for additional task types:
mlr3proba for probabilistic supervised regression and survival analysis.
mlr3cluster for unsupervised clustering.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Other Learner:
mlr_learners_classif.cv_glmnet, mlr_learners_classif.glmnet, mlr_learners_classif.kknn, mlr_learners_classif.lda, mlr_learners_classif.log_reg, mlr_learners_classif.multinom, mlr_learners_classif.naive_bayes, mlr_learners_classif.nnet, mlr_learners_classif.qda, mlr_learners_classif.ranger, mlr_learners_classif.svm, mlr_learners_classif.xgboost, mlr_learners_regr.cv_glmnet, mlr_learners_regr.glmnet, mlr_learners_regr.kknn, mlr_learners_regr.km, mlr_learners_regr.nnet, mlr_learners_regr.ranger, mlr_learners_regr.svm, mlr_learners_regr.xgboost
Super classes
mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrLM
Methods
Method loglik()
Extract the log-likelihood (e.g., via stats::logLik()) from the fitted model.
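A short usage sketch for this method; the learner must be trained before the fitted model is available:

```r
library(mlr3)
library(mlr3learners)

task = tsk("mtcars")
learner = lrn("regr.lm")
learner$train(task)

# Log-likelihood of the fitted stats::lm() model
learner$loglik()
```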
Examples
if (requireNamespace("stats", quietly = TRUE)) {
# Define the Learner and set parameter values
learner = lrn("regr.lm")
print(learner)
# Define a Task
task = tsk("mtcars")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
# print the model
print(learner$model)
# importance method
if ("importance" %in% learner$properties) print(learner$importance())
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
}
#> <LearnerRegrLM:regr.lm>: Linear Model
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, mlr3learners, stats
#> * Predict Types: [response], se
#> * Feature Types: logical, integer, numeric, character, factor
#> * Properties: weights
#>
#> Call:
#> stats::lm(formula = task$formula(), data = task$data())
#>
#> Coefficients:
#> (Intercept) am carb cyl disp drat
#> 16.30218 3.76393 0.13314 -0.81045 0.02792 0.90348
#> gear hp qsec vs wt
#> 0.14509 -0.03059 0.80309 0.36197 -4.11977
#>
#> regr.mse
#> 4.351582