eXtreme Gradient Boosting regression. Calls xgboost::xgb.train() from package xgboost.

Custom mlr3 defaults

  • nrounds:

    • Actual default: no default

    • Adjusted default: 1

    • Reason for change: Without a default, construction of the learner would error. The value 1 is only a placeholder to work around this; nrounds needs to be tuned by the user (see the sketch after this list).

  • verbose:

    • Actual default: 1

    • Adjusted default: 0

    • Reason for change: Reduce verbosity.

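Both adjusted defaults can be overridden when constructing the learner. A minimal sketch, assuming mlr3 and mlr3learners are installed; nrounds = 100 and verbose = 1 are arbitrary illustrative values, not recommendations:

library(mlr3)
library(mlr3learners)

# Override the placeholder defaults set by mlr3 (values are illustrative only)
learner = lrn("regr.xgboost", nrounds = 100, verbose = 1)
learner$param_set$values
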
Dictionary

This Learner can be instantiated via the dictionary mlr_learners or with the associated sugar function lrn():

mlr_learners$get("regr.xgboost")
lrn("regr.xgboost")

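The instantiated learner then follows the usual mlr3 train/predict workflow. A short sketch, assuming the built-in "mtcars" task shipped with mlr3; the in-sample prediction and nrounds = 50 are for illustration only:

library(mlr3)
library(mlr3learners)

task = tsk("mtcars")                        # built-in example regression task
learner = lrn("regr.xgboost", nrounds = 50) # illustrative nrounds, not a tuned value
learner$train(task)
prediction = learner$predict(task)          # in-sample prediction for brevity
prediction$score(msr("regr.rmse"))
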
References

Chen T, Guestrin C (2016). “XGBoost: A Scalable Tree Boosting System.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785--794. ACM. doi:10.1145/2939672.2939785.

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrXgboost

Methods

Public methods

Inherited methods

Method new()

Creates a new instance of this R6 class.

Usage

LearnerRegrXgboost$new()


Method importance()

The importance scores are calculated with xgboost::xgb.importance().

Usage

LearnerRegrXgboost$importance()

Returns

Named numeric().

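A minimal sketch of retrieving the scores, assuming the built-in "mtcars" task from mlr3 and an arbitrary nrounds = 20; importance() requires a trained model:

library(mlr3)
library(mlr3learners)

learner = lrn("regr.xgboost", nrounds = 20) # illustrative nrounds
learner$train(tsk("mtcars"))                # importance() errors on an untrained learner
learner$importance()                        # named numeric vector of importance scores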

Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrXgboost$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

if (requireNamespace("xgboost")) {
  learner = mlr3::lrn("regr.xgboost")
  print(learner)

  # available parameters:
  learner$param_set$ids()
}
#> <LearnerRegrXgboost:regr.xgboost>
#> * Model: -
#> * Parameters: nrounds=1, verbose=0
#> * Packages: xgboost
#> * Predict Type: response
#> * Feature types: logical, integer, numeric
#> * Properties: importance, missings, weights
#>  [1] "booster"                 "watchlist"
#>  [3] "eta"                     "gamma"
#>  [5] "max_depth"               "min_child_weight"
#>  [7] "subsample"               "colsample_bytree"
#>  [9] "colsample_bylevel"       "colsample_bynode"
#> [11] "num_parallel_tree"       "lambda"
#> [13] "lambda_bias"             "alpha"
#> [15] "objective"               "eval_metric"
#> [17] "base_score"              "max_delta_step"
#> [19] "missing"                 "monotone_constraints"
#> [21] "tweedie_variance_power"  "nthread"
#> [23] "nrounds"                 "feval"
#> [25] "verbose"                 "print_every_n"
#> [27] "early_stopping_rounds"   "maximize"
#> [29] "sample_type"             "normalize_type"
#> [31] "rate_drop"               "skip_drop"
#> [33] "one_drop"                "tree_method"
#> [35] "grow_policy"             "max_leaves"
#> [37] "max_bin"                 "callbacks"
#> [39] "sketch_eps"              "scale_pos_weight"
#> [41] "updater"                 "refresh_leaf"
#> [43] "feature_selector"        "top_k"
#> [45] "predictor"               "save_period"
#> [47] "save_name"               "xgb_model"
#> [49] "interaction_constraints" "outputmargin"
#> [51] "ntreelimit"              "predleaf"
#> [53] "predcontrib"             "approxcontrib"
#> [55] "predinteraction"         "reshape"
#> [57] "training"