I have recently been learning nested resampling with the mlr3 package. According to the mlr3 book, the goal of nested resampling is to obtain an unbiased performance estimate for a learner. I ran the following test:
# loading packages
library(mlr3)
library(paradox)
library(mlr3tuning)
# setting tune_grid
tune_grid <- ParamSet$new(
  list(
    ParamInt$new("mtry", lower = 1, upper = 15),
    ParamInt$new("num.trees", lower = 50, upper = 200)
  )
)
# setting AutoTuner
at <- AutoTuner$new(
  learner = lrn("classif.ranger", predict_type = "prob"),
  resampling = rsmp("cv", folds = 5),
  measure = msr("classif.auc"),
  search_space = tune_grid,
  tuner = tnr("grid_search", resolution = 3),
  terminator = trm("none"),
  store_tuning_instance = TRUE
)
# nested resampling
set.seed(100)
resampling_outer <- rsmp("cv", folds = 3) # outer resampling
rr <- resample(task_train, at, resampling_outer, store_models = TRUE)
> lapply(rr$learners, function(x) x$tuning_result)
[[1]]
mtry num.trees learner_param_vals x_domain classif.auc
1: 1 200 <list[2]> <list[2]> 0.7584991
[[2]]
mtry num.trees learner_param_vals x_domain classif.auc
1: 1 200 <list[2]> <list[2]> 0.7637077
[[3]]
mtry num.trees learner_param_vals x_domain classif.auc
1: 1 125 <list[2]> <list[2]> 0.7645588
> rr$aggregate(msr("classif.auc"))
classif.auc
0.7624477
The results show that the 3 hyperparameter configurations selected by the 3 inner resamplings are not guaranteed to be the same. This is similar to this post (which obtained 3 different values of cp from the inner resampling): mlr3 resample autotuner - not show tune parameters?.
My questions are:
- I used to think the aggregated result `rr$aggregate()` was the mean of the 3 models' scores, but it is not: (0.7584991 + 0.7637077 + 0.7645588) / 3 = 0.7622552, not 0.7624477. Am I misunderstanding the aggregated result?
- How should I interpret the aggregated performance of 3 models that have different optimal hyperparameters from the inner resampling? Of what is it an unbiased estimate?
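As context for the first question, here is a minimal, self-contained sketch (using a built-in task and a plain learner instead of the AutoTuner, purely for speed) of where the aggregated number comes from: `rr$score()` returns the performance on each outer test fold, and `rr$aggregate()` macro-averages those outer-fold scores, which are not the same numbers as the inner tuning AUCs shown by `tuning_result`:

```r
library(mlr3)

set.seed(100)
task <- tsk("sonar")  # built-in task, used here only for illustration
rr2 <- resample(
  task,
  lrn("classif.rpart", predict_type = "prob"),
  rsmp("cv", folds = 3)
)

# per-fold AUC on the 3 outer test sets
outer_scores <- rr2$score(msr("classif.auc"))$classif.auc

# aggregate() is the mean of these outer-fold scores
agg <- rr2$aggregate(msr("classif.auc"))
all.equal(mean(outer_scores), unname(agg))
```

If the same identity held for my `rr` object, the mean of the three per-fold outer scores (not the three inner tuning AUCs) would reproduce 0.7624477, which is what I would like confirmed.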
Thanks!