Kriging regression.
Calls DiceKriging::km() from package DiceKriging.
The predict type hyperparameter "type" defaults to "SK" (simple kriging).
The additional hyperparameter nugget.stability is used to overwrite the hyperparameter nugget with nugget.stability * var(y) before training to improve numerical stability. We recommend a value of 1e-8.
The additional hyperparameter jitter can be set to add N(0, [jitter])-distributed noise to the data before prediction to avoid perfect interpolation. We recommend a value of 1e-12.
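For example, both stability-related hyperparameters can be set directly when constructing the learner (a minimal sketch using the recommended values from above):

learner = lrn("regr.km",
  nugget.stability = 1e-8, # overwrites nugget with 1e-8 * var(y) before training
  jitter = 1e-12           # adds N(0, 1e-12)-distributed noise before prediction
)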
Dictionary
This mlr3::Learner can be instantiated via the dictionary mlr3::mlr_learners or with the associated sugar function mlr3::lrn()
:
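mlr_learners$get("regr.km")
lrn("regr.km")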
Meta Information
Task type: “regr”
Predict Types: “response”, “se”
Feature Types: “logical”, “integer”, “numeric”
Required Packages: mlr3, mlr3learners, DiceKriging
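Since the learner supports the "se" predict type, the kriging standard error can be requested alongside the mean response (a short sketch, not part of the original page):

learner = lrn("regr.km", predict_type = "se")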
Parameters
Id | Type | Default | Levels | Range
bias.correct | logical | FALSE | TRUE, FALSE | -
checkNames | logical | TRUE | TRUE, FALSE | -
coef.cov | untyped | NULL | - | -
coef.trend | untyped | NULL | - | -
coef.var | untyped | NULL | - | -
control | untyped | NULL | - | -
cov.compute | logical | TRUE | TRUE, FALSE | -
covtype | character | matern5_2 | gauss, matern5_2, matern3_2, exp, powexp | -
estim.method | character | MLE | MLE, LOO | -
gr | logical | TRUE | TRUE, FALSE | -
iso | logical | FALSE | TRUE, FALSE | -
jitter | numeric | 0 | - | \([0, \infty)\)
kernel | untyped | NULL | - | -
knots | untyped | NULL | - | -
light.return | logical | FALSE | TRUE, FALSE | -
lower | untyped | NULL | - | -
multistart | integer | 1 | - | \((-\infty, \infty)\)
noise.var | untyped | NULL | - | -
nugget | numeric | - | - | \((-\infty, \infty)\)
nugget.estim | logical | FALSE | TRUE, FALSE | -
nugget.stability | numeric | 0 | - | \([0, \infty)\)
optim.method | character | BFGS | BFGS, gen | -
parinit | untyped | NULL | - | -
penalty | untyped | NULL | - | -
scaling | logical | FALSE | TRUE, FALSE | -
se.compute | logical | TRUE | TRUE, FALSE | -
type | character | SK | SK, UK | -
upper | untyped | NULL | - | -
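The parameters above can also be changed on an existing learner object via its ParamSet; a sketch with illustrative values only:

learner = lrn("regr.km")
learner$param_set$values = list(
  covtype = "gauss",   # switch the covariance kernel from the matern5_2 default
  nugget.estim = TRUE  # estimate the nugget effect instead of fixing it
)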
References
Roustant O, Ginsbourger D, Deville Y (2012). “DiceKriging, DiceOptim: Two R Packages for the Analysis of Computer Experiments by Kriging-Based Metamodeling and Optimization.” Journal of Statistical Software, 51(1), 1–55. doi:10.18637/jss.v051.i01.
See also
Chapter in the mlr3book: https://mlr3book.mlr-org.com/chapters/chapter2/data_and_basic_modeling.html#sec-learners
Package mlr3extralearners for more learners.
as.data.table(mlr_learners) for a table of available Learners in the running session (depending on the loaded packages).
mlr3pipelines to combine learners with pre- and postprocessing steps.
Extension packages for additional task types:
mlr3proba for probabilistic supervised regression and survival analysis.
mlr3cluster for unsupervised clustering.
mlr3tuning for tuning of hyperparameters, mlr3tuningspaces for established default tuning spaces.
Other Learners:
mlr_learners_classif.cv_glmnet, mlr_learners_classif.glmnet, mlr_learners_classif.kknn, mlr_learners_classif.lda, mlr_learners_classif.log_reg, mlr_learners_classif.multinom, mlr_learners_classif.naive_bayes, mlr_learners_classif.nnet, mlr_learners_classif.qda, mlr_learners_classif.ranger, mlr_learners_classif.svm, mlr_learners_classif.xgboost, mlr_learners_regr.cv_glmnet, mlr_learners_regr.glmnet, mlr_learners_regr.kknn, mlr_learners_regr.lm, mlr_learners_regr.nnet, mlr_learners_regr.ranger, mlr_learners_regr.svm, mlr_learners_regr.xgboost
Super classes
mlr3::Learner -> mlr3::LearnerRegr -> LearnerRegrKM
Examples
if (requireNamespace("DiceKriging", quietly = TRUE)) {
# Define the Learner
learner = lrn("regr.km")
print(learner)
# Define a Task
task = tsk("mtcars")
# Create train and test set
ids = partition(task)
# Train the learner on the training ids
learner$train(task, row_ids = ids$train)
# Print the model
print(learner$model)
# Print variable importance, if the learner supports it
if ("importance" %in% learner$properties) print(learner$importance())
# Make predictions for the test rows
predictions = learner$predict(task, row_ids = ids$test)
# Score the predictions
predictions$score()
}
#> <LearnerRegrKM:regr.km>: Kriging
#> * Model: -
#> * Parameters: list()
#> * Packages: mlr3, mlr3learners, DiceKriging
#> * Predict Types: [response], se
#> * Feature Types: logical, integer, numeric
#> * Properties: -
#>
#> optimisation start
#> ------------------
#> * estimation method : MLE
#> * optimisation method : BFGS
#> * analytical gradient : used
#> * trend model : ~1
#> * covariance model :
#> - type : matern5_2
#> - nugget : NO
#> - parameters lower bounds : 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10 1e-10
#> - parameters upper bounds : 2 14 8 801.8 4.34 4 566 16.8 2 7.618
#> - best initial criterion value(s) : -58.39364
#>
#> N = 10, M = 5 machine precision = 2.22045e-16
#> At X0, 0 variables are exactly at the bounds
#> At iterate 0 f= 58.394 |proj g|= 0.65458
#> At iterate 1 f = 58.174 |proj g|= 0.78365
#> At iterate 2 f = 58.053 |proj g|= 0.20068
#> At iterate 3 f = 58.003 |proj g|= 0.17351
#> At iterate 4 f = 57.483 |proj g|= 0.14905
#> At iterate 5 f = 57.466 |proj g|= 0.070431
#> At iterate 6 f = 57.453 |proj g|= 0.12724
#> At iterate 7 f = 57.414 |proj g|= 0.063327
#> At iterate 8 f = 57.38 |proj g|= 0.058924
#> At iterate 9 f = 57.199 |proj g|= 0.20806
#> At iterate 10 f = 57.171 |proj g|= 0.16029
#> At iterate 11 f = 57.161 |proj g|= 0.051045
#> At iterate 12 f = 57.155 |proj g|= 0.098034
#> At iterate 13 f = 57.149 |proj g|= 0.076632
#> At iterate 14 f = 57.146 |proj g|= 0.023702
#> At iterate 15 f = 57.143 |proj g|= 0.072994
#> At iterate 16 f = 57.136 |proj g|= 0.15658
#> At iterate 17 f = 57.117 |proj g|= 0.26162
#> At iterate 18 f = 57.077 |proj g|= 0.33895
#> At iterate 19 f = 57.018 |proj g|= 0.17888
#> At iterate 20 f = 57.009 |proj g|= 0.12259
#> At iterate 21 f = 57.008 |proj g|= 0.04601
#> At iterate 22 f = 57.007 |proj g|= 0.0161
#> At iterate 23 f = 57.007 |proj g|= 0.016285
#> At iterate 24 f = 57.006 |proj g|= 0.04721
#> At iterate 25 f = 57.004 |proj g|= 0.10892
#> At iterate 26 f = 56.999 |proj g|= 0.22179
#> At iterate 27 f = 56.99 |proj g|= 0.29589
#> At iterate 28 f = 56.985 |proj g|= 0.18398
#> At iterate 29 f = 56.98 |proj g|= 0.01533
#> At iterate 30 f = 56.98 |proj g|= 0.02337
#> At iterate 31 f = 56.979 |proj g|= 0.068303
#> At iterate 32 f = 56.976 |proj g|= 0.12392
#> At iterate 33 f = 56.969 |proj g|= 0.20846
#> At iterate 34 f = 56.953 |proj g|= 0.31837
#> At iterate 35 f = 56.907 |proj g|= 0.39657
#> At iterate 36 f = 56.785 |proj g|= 0.5768
#> At iterate 37 f = 56.616 |proj g|= 0.69008
#> At iterate 38 f = 56.357 |proj g|= 0.50184
#> At iterate 39 f = 56.286 |proj g|= 0.22707
#> At iterate 40 f = 56.242 |proj g|= 0.50681
#> At iterate 41 f = 56.14 |proj g|= 1.1696
#> At iterate 42 f = 56.018 |proj g|= 1.5842
#> At iterate 43 f = 55.983 |proj g|= 0.73268
#> At iterate 44 f = 55.902 |proj g|= 0.44156
#> At iterate 45 f = 55.855 |proj g|= 0.40828
#> At iterate 46 f = 55.842 |proj g|= 0.037859
#> At iterate 47 f = 55.838 |proj g|= 0.080262
#> At iterate 48 f = 55.833 |proj g|= 0.1956
#> At iterate 49 f = 55.82 |proj g|= 0.27206
#> At iterate 50 f = 55.802 |proj g|= 0.21758
#> At iterate 51 f = 55.796 |proj g|= 0.050006
#> At iterate 52 f = 55.795 |proj g|= 0.061304
#> At iterate 53 f = 55.793 |proj g|= 0.11254
#> At iterate 54 f = 55.787 |proj g|= 0.25647
#> At iterate 55 f = 55.772 |proj g|= 0.46121
#> At iterate 56 f = 55.743 |proj g|= 0.77069
#> At iterate 57 f = 55.718 |proj g|= 0.58626
#> At iterate 58 f = 55.685 |proj g|= 0.06202
#> At iterate 59 f = 55.684 |proj g|= 0.02268
#> At iterate 60 f = 55.684 |proj g|= 0.083878
#> At iterate 61 f = 55.683 |proj g|= 0.024401
#> At iterate 62 f = 55.682 |proj g|= 0.0016686
#> At iterate 63 f = 55.682 |proj g|= 0.0016702
#> At iterate 64 f = 55.682 |proj g|= 0.0016726
#> At iterate 65 f = 55.682 |proj g|= 0.0065732
#> At iterate 66 f = 55.682 |proj g|= 0.0067642
#> At iterate 67 f = 55.682 |proj g|= 0.008156
#> At iterate 68 f = 55.682 |proj g|= 0.011045
#> At iterate 69 f = 55.682 |proj g|= 0.044371
#> At iterate 70 f = 55.681 |proj g|= 0.043548
#> At iterate 71 f = 55.677 |proj g|= 0.04273
#> At iterate 72 f = 55.668 |proj g|= 0.2078
#> At iterate 73 f = 55.651 |proj g|= 0.28109
#> At iterate 74 f = 55.603 |proj g|= 0.58369
#> At iterate 75 f = 55.541 |proj g|= 0.3966
#> At iterate 76 f = 55.418 |proj g|= 0.539
#> At iterate 77 f = 55.381 |proj g|= 0.51935
#> At iterate 78 f = 55.361 |proj g|= 0.053448
#> At iterate 79 f = 55.36 |proj g|= 0.005173
#> At iterate 80 f = 55.36 |proj g|= 0.00028756
#> At iterate 81 f = 55.36 |proj g|= 8.3708e-05
#>
#> iterations 81
#> function evaluations 90
#> segments explored during Cauchy searches 84
#> BFGS updates skipped 0
#> active bounds at final generalized Cauchy point 7
#> norm of the final projected gradient 8.37075e-05
#> final function value 55.3604
#>
#> F = 55.3604
#> final value 55.360421
#> converged
#>
#> Call:
#> DiceKriging::km(design = data, response = task$truth(), control = pv$control)
#>
#> Trend coeff.:
#> Estimate
#> (Intercept) 20.5212
#>
#> Covar. type : matern5_2
#> Covar. coeff.:
#> Estimate
#> theta(am) 0.0000
#> theta(carb) 14.0000
#> theta(cyl) 8.0000
#> theta(disp) 801.8000
#> theta(drat) 1.0804
#> theta(gear) 4.0000
#> theta(hp) 566.0000
#> theta(qsec) 1.3130
#> theta(vs) 2.0000
#> theta(wt) 3.2168
#>
#> Variance estimate: 31.82454
#> regr.mse
#> 16.72725