Random classification forest. Calls ranger::ranger() from package ranger.

Custom mlr3 parameters

  • mtry:

    • This hyperparameter can alternatively be set via our hyperparameter mtry.ratio as mtry = max(ceiling(mtry.ratio * n_features), 1). Note that mtry and mtry.ratio are mutually exclusive (see the example below).

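A minimal sketch of how mtry.ratio is translated (the task "sonar" and the ratio 0.5 are illustrative choices, not defaults):

library(mlr3)
library(mlr3learners)

task = tsk("sonar")                                # 60 features
learner = lrn("classif.ranger", mtry.ratio = 0.5)  # do not set mtry at the same time
learner$train(task)
# resolved internally as mtry = max(ceiling(0.5 * 60), 1) = 30
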
Initial parameter values

  • num.threads:

    • Actual default: NULL, triggering auto-detection of the number of CPUs.

    • Adjusted value: 1.

    • Reason for change: Conflicts with parallelization via the future package (see the example below).

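If you do not parallelize via future, ranger's internal multithreading can be restored by overriding this value; the thread count of 4 below is an assumption, not a recommendation:

learner = lrn("classif.ranger")
learner$param_set$values$num.threads = 4  # alternatively: mlr3::set_threads(learner, n = 4)
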
Dictionary

This mlr3::Learner can be instantiated via the dictionary mlr3::mlr_learners or with the associated sugar function mlr3::lrn():

mlr_learners$get("classif.ranger")
lrn("classif.ranger")

Meta Information

  • Task type: “classif”

  • Predict Types: “response”, “prob”

  • Feature Types: “logical”, “integer”, “numeric”, “character”, “factor”, “ordered”

  • Required Packages: mlr3, mlr3learners, ranger

Parameters

Id | Type | Default | Levels | Range
alpha | numeric | 0.5 | - | \((-\infty, \infty)\)
always.split.variables | untyped | - | - | -
class.weights | untyped | NULL | - | -
holdout | logical | FALSE | TRUE, FALSE | -
importance | character | - | none, impurity, impurity_corrected, permutation | -
keep.inbag | logical | FALSE | TRUE, FALSE | -
max.depth | integer | NULL | - | \([0, \infty)\)
min.bucket | integer | 1 | - | \([1, \infty)\)
min.node.size | integer | NULL | - | \([1, \infty)\)
minprop | numeric | 0.1 | - | \((-\infty, \infty)\)
mtry | integer | - | - | \([1, \infty)\)
mtry.ratio | numeric | - | - | \([0, 1]\)
num.random.splits | integer | 1 | - | \([1, \infty)\)
node.stats | logical | FALSE | TRUE, FALSE | -
num.threads | integer | 1 | - | \([1, \infty)\)
num.trees | integer | 500 | - | \([1, \infty)\)
oob.error | logical | TRUE | TRUE, FALSE | -
regularization.factor | untyped | 1 | - | -
regularization.usedepth | logical | FALSE | TRUE, FALSE | -
replace | logical | TRUE | TRUE, FALSE | -
respect.unordered.factors | character | ignore | ignore, order, partition | -
sample.fraction | numeric | - | - | \([0, 1]\)
save.memory | logical | FALSE | TRUE, FALSE | -
scale.permutation.importance | logical | FALSE | TRUE, FALSE | -
se.method | character | infjack | jack, infjack | -
seed | integer | NULL | - | \((-\infty, \infty)\)
split.select.weights | untyped | NULL | - | -
splitrule | character | gini | gini, extratrees, hellinger | -
verbose | logical | TRUE | TRUE, FALSE | -
write.forest | logical | TRUE | TRUE, FALSE | -

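The hyperparameters above can be set at construction via lrn() or changed later through the learner's param_set; the values below are purely illustrative:

learner = lrn("classif.ranger", num.trees = 1000, splitrule = "extratrees")
learner$param_set$values$max.depth = 10
learner$param_set$values
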
References

Wright, Marvin N., Ziegler, Andreas (2017). “ranger: A Fast Implementation of Random Forests for High Dimensional Data in C++ and R.” Journal of Statistical Software, 77(1), 1–17. doi:10.18637/jss.v077.i01.

Breiman, Leo (2001). “Random Forests.” Machine Learning, 45(1), 5–32. ISSN 1573-0565, doi:10.1023/A:1010933404324.

See also

Other Learner: mlr_learners_classif.cv_glmnet, mlr_learners_classif.glmnet, mlr_learners_classif.kknn, mlr_learners_classif.lda, mlr_learners_classif.log_reg, mlr_learners_classif.multinom, mlr_learners_classif.naive_bayes, mlr_learners_classif.nnet, mlr_learners_classif.qda, mlr_learners_classif.svm, mlr_learners_classif.xgboost, mlr_learners_regr.cv_glmnet, mlr_learners_regr.glmnet, mlr_learners_regr.kknn, mlr_learners_regr.km, mlr_learners_regr.lm, mlr_learners_regr.nnet, mlr_learners_regr.ranger, mlr_learners_regr.svm, mlr_learners_regr.xgboost

Super classes

mlr3::Learner -> mlr3::LearnerClassif -> LearnerClassifRanger

Methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerClassifRanger$new()

Method importance()

The importance scores are extracted from the model slot variable.importance. The parameter importance must be set to "impurity", "impurity_corrected", or "permutation".

Usage

LearnerClassifRanger$importance()

Returns

Named numeric().
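
A usage sketch: an importance mode has to be requested at construction, otherwise there are no scores to extract; the mode "permutation" and the task "sonar" are illustrative choices:

learner = lrn("classif.ranger", importance = "permutation")
learner$train(tsk("sonar"))
head(sort(learner$importance(), decreasing = TRUE))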


Method oob_error()

The out-of-bag error, extracted from the model slot prediction.error.

Usage

LearnerClassifRanger$oob_error()

Returns

numeric(1).
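
A usage sketch, assuming the default oob.error = TRUE; the task "sonar" is an illustrative choice:

learner = lrn("classif.ranger")
learner$train(tsk("sonar"))
learner$oob_error()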


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerClassifRanger$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

if (requireNamespace("ranger", quietly = TRUE)) {
# Define the Learner and set parameter values
learner = lrn("classif.ranger")
print(learner)

# Define a Task
task = tsk("sonar")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# print the model
print(learner$model)

# importance method
if("importance" %in% learner$properties) print(learner$importance)

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
}
#> <LearnerClassifRanger:classif.ranger>: Random Forest
#> * Model: -
#> * Parameters: num.threads=1
#> * Packages: mlr3, mlr3learners, ranger
#> * Predict Types:  [response], prob
#> * Feature Types: logical, integer, numeric, character, factor, ordered
#> * Properties: hotstart_backward, importance, multiclass, oob_error,
#>   twoclass, weights
#> Ranger result
#> 
#> Call:
#>  ranger::ranger(dependent.variable.name = task$target_names, data = task$data(),      probability = self$predict_type == "prob", case.weights = task$weights$weight,      num.threads = 1L) 
#> 
#> Type:                             Classification 
#> Number of trees:                  500 
#> Sample size:                      139 
#> Number of independent variables:  60 
#> Mtry:                             7 
#> Target node size:                 1 
#> Variable importance mode:         none 
#> Splitrule:                        gini 
#> OOB prediction error:             19.42 % 
#> function () 
#> .__LearnerClassifRanger__importance(self = self, private = private, 
#>     super = super)
#> <environment: 0x555e22707568>
#> classif.ce 
#>   0.173913