eXtreme Gradient Boosting regression. Calls xgboost::xgb.train() from package xgboost.

We changed the following defaults for this learner:

  • Verbosity is reduced by setting verbose to 0.

  • Number of boosting iterations nrounds is set to 1.
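These defaults make construction cheap but are rarely useful for an actual fit; in practice you override them when constructing the learner. A minimal sketch (the hyperparameter values below are illustrative, not recommendations):

    # override defaults at construction time
    learner = mlr3::lrn("regr.xgboost", nrounds = 100, eta = 0.1, max_depth = 4)

    # parameters can also be changed after construction
    learner$param_set$values$nrounds = 200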

Format

R6::R6Class() inheriting from mlr3::LearnerRegr.

Construction

LearnerRegrXgboost$new()
mlr3::mlr_learners$get("regr.xgboost")
mlr3::lrn("regr.xgboost")

References

Chen T, Guestrin C (2016). “XGBoost: A Scalable Tree Boosting System.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794. ACM. doi:10.1145/2939672.2939785.

See also

Dictionary of Learners: mlr3::mlr_learners
Examples

learner = mlr3::lrn("regr.xgboost")
print(learner)
#> <LearnerRegrXgboost:regr.xgboost>
#> * Model: -
#> * Parameters: nrounds=1, verbose=0
#> * Packages: xgboost
#> * Predict Type: response
#> * Feature types: integer, numeric
#> * Properties: importance, missings, weights
# available parameters:
learner$param_set$ids()
#>  [1] "booster"                "watchlist"              "eta"
#>  [4] "gamma"                  "max_depth"              "min_child_weight"
#>  [7] "subsample"              "colsample_bytree"       "colsample_bylevel"
#> [10] "colsample_bynode"       "num_parallel_tree"      "lambda"
#> [13] "lambda_bias"            "alpha"                  "objective"
#> [16] "eval_metric"            "base_score"             "max_delta_step"
#> [19] "missing"                "monotone_constraints"   "tweedie_variance_power"
#> [22] "nthread"                "nrounds"                "feval"
#> [25] "verbose"                "print_every_n"          "early_stopping_rounds"
#> [28] "maximize"               "sample_type"            "normalize_type"
#> [31] "rate_drop"              "skip_drop"              "one_drop"
#> [34] "tree_method"            "grow_policy"            "max_leaves"
#> [37] "max_bin"                "callbacks"              "sketch_eps"
#> [40] "scale_pos_weight"       "updater"                "refresh_leaf"
#> [43] "feature_selector"       "top_k"                  "predictor"
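A train/predict round trip follows the usual mlr3 pattern. A minimal sketch on a built-in regression task (the split ratio and nrounds are illustrative):

    library(mlr3)

    task = tsk("mtcars")                              # built-in regression task
    learner = lrn("regr.xgboost", nrounds = 50)

    # simple random holdout split
    train_ids = sample(task$nrow, 0.8 * task$nrow)
    test_ids = setdiff(seq_len(task$nrow), train_ids)

    learner$train(task, row_ids = train_ids)
    prediction = learner$predict(task, row_ids = test_ids)
    prediction$score(msr("regr.rmse"))                # root mean squared error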