
eXtreme Gradient Boosting regression. Calls xgboost::xgb.train() from package xgboost.


Note that passing the watchlist parameter directly will lead to problems when wrapping this mlr3::Learner in a mlr3pipelines GraphLearner, because the preprocessing steps will not be applied to the data in the watchlist. See the section Early Stopping and Validation below for the recommended approach.

Note

To compute on GPUs, you first need to compile xgboost yourself and link against CUDA. See https://xgboost.readthedocs.io/en/stable/build.html#building-with-gpu-support.
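
With such a build, training can then be moved to the GPU via the device parameter. A minimal sketch; the parameter values are illustrative and the accepted device strings depend on the installed xgboost version:

# Requires a GPU-enabled xgboost build; "cuda" selects the default GPU
learner = lrn("regr.xgboost", nrounds = 100, tree_method = "hist", device = "cuda")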

Dictionary

This mlr3::Learner can be instantiated via the dictionary mlr3::mlr_learners or with the associated sugar function mlr3::lrn():

mlr_learners$get("regr.xgboost")
lrn("regr.xgboost")

Meta Information

  • Task type: “regr”

  • Predict Types: “response”

  • Feature Types: “logical”, “integer”, “numeric”

  • Required Packages: mlr3, mlr3learners, xgboost

Parameters

| Id | Type | Default | Levels | Range |
|----|------|---------|--------|-------|
| alpha | numeric | 0 | - | \([0, \infty)\) |
| approxcontrib | logical | FALSE | TRUE, FALSE | - |
| base_score | numeric | 0.5 | - | \((-\infty, \infty)\) |
| booster | character | gbtree | gbtree, gblinear, dart | - |
| callbacks | untyped | list() | - | - |
| colsample_bylevel | numeric | 1 | - | \([0, 1]\) |
| colsample_bynode | numeric | 1 | - | \([0, 1]\) |
| colsample_bytree | numeric | 1 | - | \([0, 1]\) |
| device | untyped | "cpu" | - | - |
| disable_default_eval_metric | logical | FALSE | TRUE, FALSE | - |
| early_stopping_rounds | integer | NULL | - | \([1, \infty)\) |
| eta | numeric | 0.3 | - | \([0, 1]\) |
| eval_metric | untyped | "rmse" | - | - |
| feature_selector | character | cyclic | cyclic, shuffle, random, greedy, thrifty | - |
| feval | untyped | NULL | - | - |
| gamma | numeric | 0 | - | \([0, \infty)\) |
| grow_policy | character | depthwise | depthwise, lossguide | - |
| interaction_constraints | untyped | - | - | - |
| iterationrange | untyped | - | - | - |
| lambda | numeric | 1 | - | \([0, \infty)\) |
| lambda_bias | numeric | 0 | - | \([0, \infty)\) |
| max_bin | integer | 256 | - | \([2, \infty)\) |
| max_delta_step | numeric | 0 | - | \([0, \infty)\) |
| max_depth | integer | 6 | - | \([0, \infty)\) |
| max_leaves | integer | 0 | - | \([0, \infty)\) |
| maximize | logical | NULL | TRUE, FALSE | - |
| min_child_weight | numeric | 1 | - | \([0, \infty)\) |
| missing | numeric | NA | - | \((-\infty, \infty)\) |
| monotone_constraints | untyped | 0 | - | - |
| normalize_type | character | tree | tree, forest | - |
| nrounds | integer | - | - | \([1, \infty)\) |
| nthread | integer | 1 | - | \([1, \infty)\) |
| ntreelimit | integer | NULL | - | \([1, \infty)\) |
| num_parallel_tree | integer | 1 | - | \([1, \infty)\) |
| objective | untyped | "reg:squarederror" | - | - |
| one_drop | logical | FALSE | TRUE, FALSE | - |
| outputmargin | logical | FALSE | TRUE, FALSE | - |
| predcontrib | logical | FALSE | TRUE, FALSE | - |
| predinteraction | logical | FALSE | TRUE, FALSE | - |
| predleaf | logical | FALSE | TRUE, FALSE | - |
| print_every_n | integer | 1 | - | \([1, \infty)\) |
| process_type | character | default | default, update | - |
| rate_drop | numeric | 0 | - | \([0, 1]\) |
| refresh_leaf | logical | TRUE | TRUE, FALSE | - |
| reshape | logical | FALSE | TRUE, FALSE | - |
| sampling_method | character | uniform | uniform, gradient_based | - |
| sample_type | character | uniform | uniform, weighted | - |
| save_name | untyped | NULL | - | - |
| save_period | integer | NULL | - | \([0, \infty)\) |
| scale_pos_weight | numeric | 1 | - | \((-\infty, \infty)\) |
| seed_per_iteration | logical | FALSE | TRUE, FALSE | - |
| skip_drop | numeric | 0 | - | \([0, 1]\) |
| strict_shape | logical | FALSE | TRUE, FALSE | - |
| subsample | numeric | 1 | - | \([0, 1]\) |
| top_k | integer | 0 | - | \([0, \infty)\) |
| training | logical | FALSE | TRUE, FALSE | - |
| tree_method | character | auto | auto, exact, approx, hist, gpu_hist | - |
| tweedie_variance_power | numeric | 1.5 | - | \([1, 2]\) |
| updater | untyped | - | - | - |
| verbose | integer | 1 | - | \([0, 2]\) |
| watchlist | untyped | NULL | - | - |
| xgb_model | untyped | NULL | - | - |

Early Stopping and Validation

To monitor validation performance during training, set the $validate field of the Learner. For information on how to configure the validation set, see the Validation section of mlr3::Learner. This validation data can also be used for early stopping, which is enabled by setting the early_stopping_rounds parameter. The final validation scores (or, with early stopping, those of the best iteration) can be accessed via $internal_valid_scores, and the optimal nrounds via $internal_tuned_values.
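
When this learner is wrapped in a mlr3pipelines GraphLearner (see the note above on watchlist), configure validation through set_validate() so that the held-out data also passes through the preprocessing steps. A minimal sketch, assuming mlr3pipelines is loaded; the scaling step and the 0.3 ratio are illustrative:

library(mlr3pipelines)

# Pipeline: scale the features, then boost with early stopping
glrn = as_learner(po("scale") %>>% lrn("regr.xgboost",
  nrounds = 100, early_stopping_rounds = 10))

# Hold out 30 percent of the training data for validation;
# ids names the pipeline step that should receive the validation data
set_validate(glrn, validate = 0.3, ids = "regr.xgboost")

glrn$train(tsk("mtcars"))
glrn$internal_tuned_values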

Initial parameter values

  • nrounds:

    • Actual default: no default.

    • Adjusted default: 1.

    • Reason for change: Without a default, construction of the learner would error. The value 1 is merely a placeholder to work around this; nrounds needs to be tuned by the user.

  • nthread:

    • Actual value: Undefined, triggering auto-detection of the number of CPUs.

    • Adjusted value: 1.

    • Reason for change: Conflicting with parallelization via future.

  • verbose:

    • Actual default: 1.

    • Adjusted default: 0.

    • Reason for change: Reduce verbosity.
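
All of these adjusted values can be overridden at construction. An illustrative sketch (the values themselves carry no recommendation):

# Override the adjusted defaults where xgboost's own behavior is preferred
learner = lrn("regr.xgboost", nrounds = 500, nthread = 4, verbose = 1)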

References

Chen, Tianqi, Guestrin, Carlos (2016). “Xgboost: A scalable tree boosting system.” In Proceedings of the 22nd ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 785-794. ACM. doi:10.1145/2939672.2939785.

See also

Other Learner: mlr_learners_classif.cv_glmnet, mlr_learners_classif.glmnet, mlr_learners_classif.kknn, mlr_learners_classif.lda, mlr_learners_classif.log_reg, mlr_learners_classif.multinom, mlr_learners_classif.naive_bayes, mlr_learners_classif.nnet, mlr_learners_classif.qda, mlr_learners_classif.ranger, mlr_learners_classif.svm, mlr_learners_classif.xgboost, mlr_learners_regr.cv_glmnet, mlr_learners_regr.glmnet, mlr_learners_regr.kknn, mlr_learners_regr.km, mlr_learners_regr.lm, mlr_learners_regr.nnet, mlr_learners_regr.ranger, mlr_learners_regr.svm

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrXgboost

Active bindings

internal_valid_scores

(named list() or NULL) The validation scores extracted from model$evaluation_log. If early stopping is activated, these are the validation scores of the model at the optimal nrounds; otherwise, those of the final model.

internal_tuned_values

(named list() or NULL) If early stopping is activated, a named list with nrounds, extracted from the model's $best_iteration; otherwise NULL.

validate

(numeric(1) or character(1) or NULL) How to construct the internal validation data. This parameter can be either NULL, a ratio, "test", or "predefined".

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerRegrXgboost$new()

Method importance()

The importance scores are calculated with xgboost::xgb.importance().

Usage

LearnerRegrXgboost$importance()

Returns

Named numeric().


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrXgboost$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

if (FALSE) {
if (requireNamespace("xgboost", quietly = TRUE)) {
# Define the Learner and set parameter values
learner = lrn("regr.xgboost")
print(learner)

# Define a Task
task = tsk("mtcars")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# print the model
print(learner$model)

# importance method
if("importance" %in% learner$properties) print(learner$importance)

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
}
}

if (FALSE) {
# Train learner with early stopping on the mtcars data set
task = tsk("mtcars")

# Use 30 percent of the data for validation and enable early stopping
learner = lrn("regr.xgboost",
  nrounds = 100,
  early_stopping_rounds = 10,
  validate = 0.3
)

# Train learner with early stopping
learner$train(task)

# Inspect optimal nrounds and validation performance
learner$internal_tuned_values
learner$internal_valid_scores
}