Kriging regression.
Calls DiceKriging::km()
from package DiceKriging.
The predict type hyperparameter "type" defaults to "SK" (simple kriging).
The additional hyperparameter nugget.stability is used to overwrite the hyperparameter nugget with nugget.stability * var(y) before training to improve numerical stability. We recommend a value of 1e-8.
The additional hyperparameter jitter can be set to add N(0, [jitter])-distributed noise to the data before prediction to avoid perfect interpolation. We recommend a value of 1e-12.
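As a minimal sketch (assuming the mlr3 and mlr3learners packages are installed), these two hyperparameters could be set at construction time with the recommended values:

library(mlr3)
library(mlr3learners)

# Kriging learner with the recommended stabilisation settings
learner = lrn("regr.km",
  nugget.stability = 1e-8, # nugget is overwritten with 1e-8 * var(y) before training
  jitter = 1e-12           # N(0, 1e-12)-distributed noise is added before prediction
)
learner$param_set$values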
Dictionary
This mlr3::Learner can be instantiated via the dictionary mlr3::mlr_learners or with the associated sugar function mlr3::lrn()
:
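For example (both calls below are equivalent; loading mlr3learners registers "regr.km" in the dictionary):

library(mlr3)
library(mlr3learners)

# via the sugar function
learner = lrn("regr.km")
# via the dictionary
learner = mlr_learners$get("regr.km")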
Meta Information
Task type: “regr”
Predict Types: “response”, “se”
Feature Types: “logical”, “integer”, “numeric”
Required Packages: mlr3, mlr3learners, DiceKriging
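Because "se" is among the supported predict types, the learner can return pointwise standard errors in addition to the mean response. A minimal sketch (the mtcars task is used purely for illustration):

learner = lrn("regr.km", predict_type = "se")
task = tsk("mtcars")
learner$train(task)
prediction = learner$predict(task)
head(prediction$se) # standard errors, alongside prediction$response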
Parameters
Id | Type | Default | Levels | Range |
bias.correct | logical | FALSE | TRUE, FALSE | - |
checkNames | logical | TRUE | TRUE, FALSE | - |
coef.cov | untyped | NULL | - | - |
coef.trend | untyped | NULL | - | - |
coef.var | untyped | NULL | - | - |
control | untyped | NULL | - | - |
cov.compute | logical | TRUE | TRUE, FALSE | - |
covtype | character | matern5_2 | gauss, matern5_2, matern3_2, exp, powexp | - |
estim.method | character | MLE | MLE, LOO | - |
gr | logical | TRUE | TRUE, FALSE | - |
iso | logical | FALSE | TRUE, FALSE | - |
jitter | numeric | 0 | - | \([0, \infty)\) |
kernel | untyped | NULL | - | - |
knots | untyped | NULL | - | - |
light.return | logical | FALSE | TRUE, FALSE | - |
lower | untyped | NULL | - | - |
multistart | integer | 1 | - | \((-\infty, \infty)\) |
noise.var | untyped | NULL | - | - |
nugget | numeric | - | - | \((-\infty, \infty)\) |
nugget.estim | logical | FALSE | TRUE, FALSE | - |
nugget.stability | numeric | 0 | - | \([0, \infty)\) |
optim.method | character | BFGS | BFGS, gen | - |
parinit | untyped | NULL | - | - |
penalty | untyped | NULL | - | - |
scaling | logical | FALSE | TRUE, FALSE | - |
se.compute | logical | TRUE | TRUE, FALSE | - |
type | character | SK | SK, UK | - |
upper | untyped | NULL | - | - |
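As a sketch of how entries from this table are used, parameters can be set at construction or changed later through the learner's param_set (covtype and nugget.estim below are chosen purely as examples):

learner = lrn("regr.km", covtype = "gauss")
learner$param_set$values$nugget.estim = TRUE # estimate the nugget from the data
learner$param_set$values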
References
Roustant O, Ginsbourger D, Deville Y (2012). “DiceKriging, DiceOptim: Two R Packages for the Analysis of Computer Experiments by Kriging-Based Metamodeling and Optimization.” Journal of Statistical Software, 51(1), 1-55. doi:10.18637/jss.v051.i01.
See also
Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter2/data_and_basic_modeling.html#sec-learners
Package mlr3extralearners for more learners.
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
mlr3pipelines to combine learners with pre- and postprocessing steps.
Extension packages for additional task types:
mlr3proba for probabilistic supervised regression and survival analysis.
mlr3cluster for unsupervised clustering.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Other Learner:
mlr_learners_classif.cv_glmnet,
mlr_learners_classif.glmnet,
mlr_learners_classif.kknn,
mlr_learners_classif.lda,
mlr_learners_classif.log_reg,
mlr_learners_classif.multinom,
mlr_learners_classif.naive_bayes,
mlr_learners_classif.nnet,
mlr_learners_classif.qda,
mlr_learners_classif.ranger,
mlr_learners_classif.svm,
mlr_learners_classif.xgboost,
mlr_learners_regr.cv_glmnet,
mlr_learners_regr.glmnet,
mlr_learners_regr.kknn,
mlr_learners_regr.lm,
mlr_learners_regr.nnet,
mlr_learners_regr.ranger,
mlr_learners_regr.svm,
mlr_learners_regr.xgboost
Super classes
mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrKM
Examples
if (requireNamespace("DiceKriging", quietly = TRUE)) {
# Define the Learner and set parameter values
learner = lrn("regr.km")
print(learner)
# Define a Task
task = tsk("mtcars")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
# print the model
print(learner$model)
# importance method
if("importance" %in% learner$properties) print(learner$importance)
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
}
#> <LearnerRegrKM:regr.km>: Kriging
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, mlr3learners, DiceKriging
#> * Predict Types: [response], se
#> * Feature Types: logical, integer, numeric
#> * Properties: -
#>
#> optimisation start
#> ------------------
#> * estimation method : MLE
#> * optimisation method : BFGS
#> * analytical gradient : used
#> * trend model : ~1
#> * covariance model :
#> - type : matern5_2
#> - nugget : NO
#> - parameters lower bounds : 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10
#> - parameters upper bounds : 2 14 8 792.6 4.34 4 566 16.6 2 7.474
#> - best initial criterion value(s) : -57.8379
#>
#> N = 10, M = 5 machine precision = 2.22045e-16
#> At X0, 0 variables are exactly at the bounds
#> At iterate 0 f= 57.838 |proj g|= 1.5217
#> At iterate 1 f = 55.424 |proj g|= 1.2472
#> At iterate 2 f = 55.134 |proj g|= 1.0806
#> At iterate 3 f = 54.545 |proj g|= 0.13157
#> At iterate 4 f = 54.505 |proj g|= 0.11458
#> At iterate 5 f = 54.467 |proj g|= 0.065399
#> At iterate 6 f = 54.461 |proj g|= 0.053595
#> At iterate 7 f = 54.446 |proj g|= 0.030217
#> At iterate 8 f = 54.438 |proj g|= 0.014875
#> At iterate 9 f = 54.433 |proj g|= 0.013621
#> At iterate 10 f = 54.427 |proj g|= 0.038064
#> At iterate 11 f = 54.425 |proj g|= 0.026275
#> At iterate 12 f = 54.424 |proj g|= 0.0097322
#> At iterate 13 f = 54.424 |proj g|= 0.00087228
#> At iterate 14 f = 54.424 |proj g|= 0.00087242
#> At iterate 15 f = 54.424 |proj g|= 0.0024683
#> At iterate 16 f = 54.424 |proj g|= 0.0049985
#> At iterate 17 f = 54.424 |proj g|= 0.0093887
#> At iterate 18 f = 54.424 |proj g|= 0.016286
#> At iterate 19 f = 54.424 |proj g|= 0.024972
#> At iterate 20 f = 54.424 |proj g|= 0.03234
#> At iterate 21 f = 54.423 |proj g|= 0.028491
#> At iterate 22 f = 54.423 |proj g|= 0.0058527
#> At iterate 23 f = 54.423 |proj g|= 0.0011432
#> At iterate 24 f = 54.423 |proj g|= 0.0011481
#> At iterate 25 f = 54.423 |proj g|= 0.0017131
#> At iterate 26 f = 54.423 |proj g|= 0.0046563
#> At iterate 27 f = 54.423 |proj g|= 0.0085276
#> At iterate 28 f = 54.423 |proj g|= 0.014486
#> At iterate 29 f = 54.422 |proj g|= 0.042184
#> At iterate 30 f = 54.421 |proj g|= 0.033722
#> At iterate 31 f = 54.417 |proj g|= 0.019092
#> At iterate 32 f = 54.417 |proj g|= 0.016637
#> At iterate 33 f = 54.416 |proj g|= 0.0047237
#> At iterate 34 f = 54.413 |proj g|= 0.018784
#> At iterate 35 f = 54.412 |proj g|= 0.0065728
#> At iterate 36 f = 54.402 |proj g|= 0.050257
#> At iterate 37 f = 54.385 |proj g|= 0.11675
#> At iterate 38 f = 54.377 |proj g|= 0.06177
#> At iterate 39 f = 54.357 |proj g|= 0.055719
#> At iterate 40 f = 54.345 |proj g|= 0.073742
#> At iterate 41 f = 54.339 |proj g|= 0.042044
#> At iterate 42 f = 54.336 |proj g|= 0.012098
#> At iterate 43 f = 54.335 |proj g|= 0.0040635
#> At iterate 44 f = 54.335 |proj g|= 0.0027344
#> At iterate 45 f = 54.335 |proj g|= 0.0037714
#> At iterate 46 f = 54.335 |proj g|= 0.0067015
#> At iterate 47 f = 54.335 |proj g|= 0.0065048
#> At iterate 48 f = 54.335 |proj g|= 0.0016658
#> At iterate 49 f = 54.335 |proj g|= 0.00037274
#> At iterate 50 f = 54.335 |proj g|= 0.0001953
#> At iterate 51 f = 54.335 |proj g|= 0.00032893
#> At iterate 52 f = 54.335 |proj g|= 1.0887e-05
#>
#> iterations 52
#> function evaluations 57
#> segments explored during Cauchy searches 60
#> BFGS updates skipped 0
#> active bounds at final generalized Cauchy point 7
#> norm of the final projected gradient 1.08874e-05
#> final function value 54.3352
#>
#> F = 54.3352
#> final value 54.335217
#> converged
#>
#> Call:
#> DiceKriging::km(design = data, response = task$truth(), control = pv$control)
#>
#> Trend coeff.:
#> Estimate
#> (Intercept) 19.0276
#>
#> Covar. type : matern5_2
#> Covar. coeff.:
#> Estimate
#> theta(am) 2.0000
#> theta(carb) 14.0000
#> theta(cyl) 8.0000
#> theta(disp) 792.6000
#> theta(drat) 4.3400
#> theta(gear) 4.0000
#> theta(hp) 58.7707
#> theta(qsec) 2.2783
#> theta(vs) 2.0000
#> theta(wt) 2.1218
#>
#> Variance estimate: 31.53133
#> regr.mse
#> 13.68941