Kriging regression. Calls DiceKriging::km() from package DiceKriging.

  • The predict type hyperparameter "type" defaults to "SK" (simple kriging).

  • The additional hyperparameter nugget.stability is used to overwrite the hyperparameter nugget with nugget.stability * var(y) before training to improve numerical stability. We recommend a value of 1e-8.

  • The additional hyperparameter jitter can be set to add N(0, [jitter])-distributed noise to the data before prediction to avoid perfect interpolation. We recommend a value of 1e-12.
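The two stability hyperparameters above can be set directly at construction time. A minimal sketch, using the recommended values from the bullets (assumes mlr3 and mlr3learners are loaded):

```r
library(mlr3)
library(mlr3learners)

# nugget.stability rescales the nugget to 1e-8 * var(y) before training;
# jitter adds tiny N(0, 1e-12) noise before prediction so the model does
# not interpolate the training points perfectly.
learner = lrn("regr.km",
  nugget.stability = 1e-8,
  jitter = 1e-12
)
```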

Dictionary

This Learner can be instantiated via the dictionary mlr_learners or with the associated sugar function lrn():

mlr_learners$get("regr.km")
lrn("regr.km")

Meta Information

  • Task type: “regr”

  • Predict Types: “response”, “se”

  • Feature Types: “logical”, “integer”, “numeric”

  • Required Packages: mlr3, mlr3learners, DiceKriging
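Because the learner supports the "se" predict type, standard errors can be requested alongside the mean response. A sketch, assuming the DiceKriging package is installed:

```r
library(mlr3)
library(mlr3learners)

# Request standard-error predictions at construction
learner = lrn("regr.km", predict_type = "se")
learner$train(tsk("mtcars"))

# The prediction object now carries both $response and $se
prediction = learner$predict(tsk("mtcars"))
head(prediction$se)
```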

Parameters

| Id               | Type      | Default   | Levels                                    | Range    |
|------------------|-----------|-----------|-------------------------------------------|----------|
| bias.correct     | logical   | FALSE     | TRUE, FALSE                               | -        |
| checkNames       | logical   | TRUE      | TRUE, FALSE                               | -        |
| coef.cov         | untyped   | -         |                                           |          |
| coef.trend       | untyped   | -         |                                           |          |
| coef.var         | untyped   | -         |                                           |          |
| control          | untyped   | -         |                                           |          |
| cov.compute      | logical   | TRUE      | TRUE, FALSE                               | -        |
| covtype          | character | matern5_2 | gauss, matern5_2, matern3_2, exp, powexp  | -        |
| estim.method     | character | MLE       | MLE, LOO                                  | -        |
| gr               | logical   | TRUE      | TRUE, FALSE                               | -        |
| iso              | logical   | FALSE     | TRUE, FALSE                               | -        |
| jitter           | numeric   | 0         |                                           | [0, ∞)   |
| kernel           | untyped   | -         |                                           |          |
| knots            | untyped   | -         |                                           |          |
| light.return     | logical   | FALSE     | TRUE, FALSE                               | -        |
| lower            | untyped   | -         |                                           |          |
| multistart       | integer   | 1         |                                           | (-∞, ∞)  |
| noise.var        | untyped   | -         |                                           |          |
| nugget           | numeric   | -         |                                           | (-∞, ∞)  |
| nugget.estim     | logical   | FALSE     | TRUE, FALSE                               | -        |
| nugget.stability | numeric   | 0         |                                           | [0, ∞)   |
| optim.method     | character | BFGS      | BFGS, gen                                 | -        |
| parinit          | untyped   | -         |                                           |          |
| penalty          | untyped   | -         |                                           |          |
| scaling          | logical   | FALSE     | TRUE, FALSE                               | -        |
| se.compute       | logical   | TRUE      | TRUE, FALSE                               | -        |
| type             | character | SK        | SK, UK                                    | -        |
| upper            | untyped   | -         |                                           |          |
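Parameters from this table can also be changed after construction through the learner's param_set. A sketch switching the covariance kernel and estimation method (assumes mlr3 and mlr3learners are loaded):

```r
library(mlr3)
library(mlr3learners)

learner = lrn("regr.km")

# Switch to a Gaussian covariance kernel and leave-one-out estimation
learner$param_set$set_values(
  covtype = "gauss",
  estim.method = "LOO"
)
```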

References

Roustant O, Ginsbourger D, Deville Y (2012). “DiceKriging, DiceOptim: Two R Packages for the Analysis of Computer Experiments by Kriging-Based Metamodeling and Optimization.” Journal of Statistical Software, 51(1), 1--55. doi:10.18637/jss.v051.i01.

See also

Other Learner: mlr_learners_classif.cv_glmnet, mlr_learners_classif.glmnet, mlr_learners_classif.kknn, mlr_learners_classif.lda, mlr_learners_classif.log_reg, mlr_learners_classif.multinom, mlr_learners_classif.naive_bayes, mlr_learners_classif.nnet, mlr_learners_classif.qda, mlr_learners_classif.ranger, mlr_learners_classif.svm, mlr_learners_classif.xgboost, mlr_learners_regr.cv_glmnet, mlr_learners_regr.glmnet, mlr_learners_regr.kknn, mlr_learners_regr.lm, mlr_learners_regr.nnet, mlr_learners_regr.ranger, mlr_learners_regr.svm, mlr_learners_regr.xgboost

Super classes

mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrKM

Methods

Inherited methods


Method new()

Creates a new instance of this R6 class.

Usage

LearnerRegrKM$new()


Method clone()

The objects of this class are cloneable with this method.

Usage

LearnerRegrKM$clone(deep = FALSE)

Arguments

deep

Whether to make a deep clone.

Examples

if (requireNamespace("DiceKriging", quietly = TRUE)) {
# Define the Learner and set parameter values
learner = lrn("regr.km")
print(learner)

# Define a Task
task = tsk("mtcars")

# Create train and test set
ids = partition(task)

# Train the learner on the training ids
learner$train(task, row_ids = ids$train)

# print the model
print(learner$model)

# importance method
if ("importance" %in% learner$properties) print(learner$importance())

# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)

# Score the predictions
predictions$score()
}
#> <LearnerRegrKM:regr.km>: Kriging
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, mlr3learners, DiceKriging
#> * Predict Types:  [response], se
#> * Feature Types: logical, integer, numeric
#> * Properties: -
#> 
#> optimisation start
#> ------------------
#> * estimation method   : MLE 
#> * optimisation method : BFGS 
#> * analytical gradient : used
#> * trend model : ~1
#> * covariance model : 
#>   - type :  matern5_2 
#>   - nugget : NO
#>   - parameters lower bounds :  1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 
#>   - parameters upper bounds :  2 14 8 792.6 4.34 4 566 16.6 2 7.474 
#>   - best initial criterion value(s) :  -57.8379 
#> 
#> N = 10, M = 5 machine precision = 2.22045e-16
#> At X0, 0 variables are exactly at the bounds
#> At iterate     0  f=       57.838  |proj g|=       1.5217
#> At iterate     1  f =       55.424  |proj g|=        1.2472
#> At iterate     2  f =       55.134  |proj g|=        1.0806
#> At iterate     3  f =       54.545  |proj g|=       0.13157
#> At iterate     4  f =       54.505  |proj g|=       0.11458
#> At iterate     5  f =       54.467  |proj g|=      0.065399
#> At iterate     6  f =       54.461  |proj g|=      0.053595
#> At iterate     7  f =       54.446  |proj g|=      0.030217
#> At iterate     8  f =       54.438  |proj g|=      0.014875
#> At iterate     9  f =       54.433  |proj g|=      0.013621
#> At iterate    10  f =       54.427  |proj g|=      0.038064
#> At iterate    11  f =       54.425  |proj g|=      0.026275
#> At iterate    12  f =       54.424  |proj g|=     0.0097322
#> At iterate    13  f =       54.424  |proj g|=    0.00087228
#> At iterate    14  f =       54.424  |proj g|=    0.00087242
#> At iterate    15  f =       54.424  |proj g|=     0.0024683
#> At iterate    16  f =       54.424  |proj g|=     0.0049985
#> At iterate    17  f =       54.424  |proj g|=     0.0093887
#> At iterate    18  f =       54.424  |proj g|=      0.016286
#> At iterate    19  f =       54.424  |proj g|=      0.024972
#> At iterate    20  f =       54.424  |proj g|=       0.03234
#> At iterate    21  f =       54.423  |proj g|=      0.028491
#> At iterate    22  f =       54.423  |proj g|=     0.0058527
#> At iterate    23  f =       54.423  |proj g|=     0.0011432
#> At iterate    24  f =       54.423  |proj g|=     0.0011481
#> At iterate    25  f =       54.423  |proj g|=     0.0017131
#> At iterate    26  f =       54.423  |proj g|=     0.0046563
#> At iterate    27  f =       54.423  |proj g|=     0.0085276
#> At iterate    28  f =       54.423  |proj g|=      0.014486
#> At iterate    29  f =       54.422  |proj g|=      0.042184
#> At iterate    30  f =       54.421  |proj g|=      0.033722
#> At iterate    31  f =       54.417  |proj g|=      0.019092
#> At iterate    32  f =       54.417  |proj g|=      0.016637
#> At iterate    33  f =       54.416  |proj g|=     0.0047237
#> At iterate    34  f =       54.413  |proj g|=      0.018784
#> At iterate    35  f =       54.412  |proj g|=     0.0065728
#> At iterate    36  f =       54.402  |proj g|=      0.050257
#> At iterate    37  f =       54.385  |proj g|=       0.11675
#> At iterate    38  f =       54.377  |proj g|=       0.06177
#> At iterate    39  f =       54.357  |proj g|=      0.055719
#> At iterate    40  f =       54.345  |proj g|=      0.073742
#> At iterate    41  f =       54.339  |proj g|=      0.042044
#> At iterate    42  f =       54.336  |proj g|=      0.012098
#> At iterate    43  f =       54.335  |proj g|=     0.0040635
#> At iterate    44  f =       54.335  |proj g|=     0.0027344
#> At iterate    45  f =       54.335  |proj g|=     0.0037714
#> At iterate    46  f =       54.335  |proj g|=     0.0067015
#> At iterate    47  f =       54.335  |proj g|=     0.0065048
#> At iterate    48  f =       54.335  |proj g|=     0.0016658
#> At iterate    49  f =       54.335  |proj g|=    0.00037274
#> At iterate    50  f =       54.335  |proj g|=     0.0001953
#> At iterate    51  f =       54.335  |proj g|=    0.00032893
#> At iterate    52  f =       54.335  |proj g|=    1.0887e-05
#> 
#> iterations 52
#> function evaluations 57
#> segments explored during Cauchy searches 60
#> BFGS updates skipped 0
#> active bounds at final generalized Cauchy point 7
#> norm of the final projected gradient 1.08874e-05
#> final function value 54.3352
#> 
#> F = 54.3352
#> final  value 54.335217 
#> converged
#> 
#> Call:
#> DiceKriging::km(design = data, response = task$truth(), control = pv$control)
#> 
#> Trend  coeff.:
#>                Estimate
#>  (Intercept)    19.0276
#> 
#> Covar. type  : matern5_2 
#> Covar. coeff.:
#>                Estimate
#>    theta(am)     2.0000
#>  theta(carb)    14.0000
#>   theta(cyl)     8.0000
#>  theta(disp)   792.6000
#>  theta(drat)     4.3400
#>  theta(gear)     4.0000
#>    theta(hp)    58.7707
#>  theta(qsec)     2.2783
#>    theta(vs)     2.0000
#>    theta(wt)     2.1218
#> 
#> Variance estimate: 31.53133
#> regr.mse 
#> 13.68941