R/LearnerSurvXgboost.R
mlr_learners_surv.xgboost.Rd
eXtreme Gradient Boosting regression.
Calls xgboost::xgb.train() from package xgboost.
nrounds:
Actual default: no default
Adjusted default: 1
Reason for change: Without a default, construction of the learner would error.
The value 1 is only a placeholder to work around this; nrounds needs to be
tuned by the user (see the sketch after this list).
verbose:
Actual default: 1
Adjusted default: 0
Reason for change: Reduce verbosity.
objective:
Actual default: reg:squarederror
Adjusted default: survival:cox
Reason for change: This is the only available objective for survival.
eval_metric:
Actual default: no default
Adjusted default: cox-nloglik
Reason for change: The only sensible metric for this objective.
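For illustration, a minimal sketch of overriding these adjusted defaults at construction time. The values nrounds = 100, eta = 0.1 and max_depth = 3 are arbitrary placeholders, not recommendations, and the package loaded alongside mlr3 is assumed to be whichever one registers surv.xgboost in your installation (mlr3proba or mlr3extralearners, depending on version):

library(mlr3)
library(mlr3proba)  # or mlr3extralearners, depending on where surv.xgboost lives

# nrounds = 100 is an arbitrary placeholder, not a tuned value
learner = lrn("surv.xgboost", nrounds = 100, eta = 0.1, max_depth = 3)

# inspect the currently set hyperparameter values
learner$param_set$values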
This Learner can be instantiated via the dictionary mlr_learners or with the associated sugar function lrn():
mlr_learners$get("surv.xgboost") lrn("surv.xgboost")
Chen, Tianqi, Guestrin, Carlos (2016). “Xgboost: A scalable tree boosting system.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785--794. ACM. doi:10.1145/2939672.2939785.
mlr3::Learner -> mlr3proba::LearnerSurv -> LearnerSurvXgboost
new(): Creates a new instance of this R6 class.
LearnerSurvXgboost$new()
importance(): The importance scores are calculated with xgboost::xgb.importance().
LearnerSurvXgboost$importance()
Returns a named numeric().
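A minimal sketch of retrieving the scores from a trained learner (same hypothetical setup as above; the scores are those reported by xgboost::xgb.importance()):

task = tsk("rats")
task$select(c("litter", "rx"))               # factor features are not supported

learner = lrn("surv.xgboost", nrounds = 20)  # small, arbitrary nrounds
learner$train(task)

# named numeric vector, one score per feature used by the boosted model
learner$importance()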
clone(): The objects of this class are cloneable with this method.
LearnerSurvXgboost$clone(deep = FALSE)
deep: Whether to make a deep clone.
if (requireNamespace("xgboost")) { learner = mlr3::lrn("surv.xgboost") print(learner) # available parameters: learner$param_set$ids() }#> <LearnerSurvXgboost:surv.xgboost> #> * Model: - #> * Parameters: nrounds=1, verbose=0, eval_metric=cox-nloglik #> * Packages: xgboost #> * Predict Type: crank #> * Feature types: logical, integer, numeric #> * Properties: importance, missings, weights#> [1] "booster" "watchlist" "eta" #> [4] "gamma" "max_depth" "min_child_weight" #> [7] "subsample" "colsample_bytree" "colsample_bylevel" #> [10] "colsample_bynode" "num_parallel_tree" "lambda" #> [13] "lambda_bias" "alpha" "objective" #> [16] "eval_metric" "base_score" "max_delta_step" #> [19] "missing" "monotone_constraints" "tweedie_variance_power" #> [22] "nthread" "nrounds" "feval" #> [25] "verbose" "print_every_n" "early_stopping_rounds" #> [28] "maximize" "sample_type" "normalize_type" #> [31] "rate_drop" "skip_drop" "one_drop" #> [34] "tree_method" "grow_policy" "max_leaves" #> [37] "max_bin" "callbacks" "sketch_eps" #> [40] "scale_pos_weight" "updater" "refresh_leaf" #> [43] "feature_selector" "top_k" "predictor"