
I would like to repeat the hyperparameter tuning (alpha and/or lambda) of glmnet in mlr3 to avoid variability with smaller data sets.

In caret, I can do this with "repeatedcv".

Because I really like the mlr3 family of packages, I would like to use them for my analysis. However, I am not sure about the correct way to do this step in mlr3.

Example data

# libraries
library(caret)
library(mlr3verse)
library(mlbench)

# get example data
data(PimaIndiansDiabetes, package="mlbench")
data <- PimaIndiansDiabetes

# get small training data
train.data <- data[1:60,]

Created on 2021-03-18 by the reprex package (v1.0.0)

caret approach using "cv" and "repeatedcv" (tuning alpha and lambda)


trControlCv <- trainControl("cv",
             number = 5,
             classProbs = TRUE,
             savePredictions = TRUE,
             summaryFunction = twoClassSummary)

# use "repeatedcv" to avoid variability in smaller data sets
trControlRCv <- trainControl("repeatedcv",
             number = 5,
             repeats= 20,
             classProbs = TRUE,
             savePredictions = TRUE,
             summaryFunction = twoClassSummary)

# train and extract coefficients with "cv" and different set.seed
set.seed(2323)
model <- train(
  diabetes ~., data = train.data, method = "glmnet",
  trControl = trControlCv,
  tuneLength = 10,
  metric="ROC"
)

coef(model$finalModel, model$finalModel$lambdaOpt) -> coef1

set.seed(23)
model <- train(
  diabetes ~., data = train.data, method = "glmnet",
  trControl = trControlCv,
  tuneLength = 10,
  metric="ROC"
)

coef(model$finalModel, model$finalModel$lambdaOpt) -> coef2


# train and extract coefficients with "repeatedcv" and different set.seed
set.seed(13)

model <- train(
  diabetes ~., data = train.data, method = "glmnet",
  trControl = trControlRCv,
  tuneLength = 10,
  metric="ROC"
)

coef(model$finalModel, model$finalModel$lambdaOpt) -> coef3


set.seed(55)
model <- train(
  diabetes ~., data = train.data, method = "glmnet",
  trControl = trControlRCv,
  tuneLength = 10,
  metric="ROC"
)

coef(model$finalModel, model$finalModel$lambdaOpt) -> coef4


Showing different coefficients with cross-validation and identical coefficients with repeated cross-validation

# with "cv" I get different coefficients
identical(coef1, coef2)
#> [1] FALSE

# with "repeatedcv" I get the same coefficients
identical(coef3,coef4)
#> [1] TRUE
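Why repeats help can be seen in a small base-R sketch (illustrative only, not part of the question's pipeline): averaging an estimate over 20 repeats shrinks its variance by roughly a factor of 20, which is why the "repeatedcv" tuning above is stable across seeds while the single "cv" run is not.

```r
set.seed(1)
# one noisy performance estimate per run, like a single "cv"
single   <- replicate(1000, rnorm(1))
# the mean of 20 such estimates per run, like "repeatedcv" with repeats = 20
repeated <- replicate(1000, mean(rnorm(20)))
c(var(single), var(repeated))
```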


First mlr3 approach using cv.glmnet (internal tuning of lambda)

# create elastic net regression
glmnet_lrn = lrn("classif.cv_glmnet", predict_type = "prob")

# define train task
train.task <- TaskClassif$new("train.data", train.data, target = "diabetes")

# create learner 
learner = as_learner(glmnet_lrn)

# train the learner with different set.seed
set.seed(2323)
learner$train(train.task)
coef(learner$model, s = "lambda.min") -> coef1

set.seed(23)
learner$train(train.task)
coef(learner$model, s = "lambda.min") -> coef2


Showing different coefficients with cross-validation

# compare coefficients
coef1
#> 9 x 1 sparse Matrix of class "dgCMatrix"
#>                        1
#> (Intercept) -3.323460895
#> age          0.005065928
#> glucose      0.019727881
#> insulin      .          
#> mass         .          
#> pedigree     .          
#> pregnant     0.001290570
#> pressure     .          
#> triceps      0.020529162
coef2
#> 9 x 1 sparse Matrix of class "dgCMatrix"
#>                        1
#> (Intercept) -3.146190752
#> age          0.003840963
#> glucose      0.019015433
#> insulin      .          
#> mass         .          
#> pedigree     .          
#> pregnant     .          
#> pressure     .          
#> triceps      0.018841557
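One way to reduce this seed-to-seed variability in mlr3 is to tune over a repeated-CV resampling rather than rely on cv.glmnet's single internal cross-validation. A minimal sketch of the resampling object (assuming mlr3 is loaded; folds and repeats chosen to mirror caret's number = 5, repeats = 20 above):

```r
library(mlr3)

# repeated 5-fold CV with 20 repeats: 100 resampling iterations in total,
# analogous to trainControl("repeatedcv", number = 5, repeats = 20)
rcv = rsmp("repeated_cv", folds = 5, repeats = 20)
rcv$iters
```

This resampling object can then be passed to an AutoTuner, as in the second approach below.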


Update 1: the progress I have made

Based on the comment below and this comment, I could use rsmp and AutoTuner.

That answer suggests tuning glmnet rather than cv.glmnet (which was not available in mlr3 at the time).

Second mlr3 approach using glmnet (repeated tuning of alpha and lambda)

# define train task
train.task <- TaskClassif$new("train.data", train.data, target = "diabetes")

# create elastic net regression
glmnet_lrn = lrn("classif.glmnet", predict_type = "prob")

# turn to learner
learner = as_learner(glmnet_lrn)

# make search space
search_space = ps(
  alpha = p_dbl(lower = 0, upper = 1),
  s = p_dbl(lower = 1, upper = 1)
)

# set terminator
terminator = trm("evals", n_evals = 20)

# set tuner
tuner = tnr("grid_search", resolution = 3)

# tune the learner
at = AutoTuner$new(
  learner = learner,
  resampling = rsmp("repeated_cv"),
  measure = msr("classif.ce"),
  search_space = search_space,
  terminator = terminator,
  tuner = tuner)

at
#> <AutoTuner:classif.glmnet.tuned>
#> * Model: -
#> * Parameters: list()
#> * Packages: glmnet
#> * Predict Type: prob
#> * Feature types: logical, integer, numeric
#> * Properties: multiclass, twoclass, weights

Open question

How do I show that my second approach is valid and that I get the same or similar coefficients with different seeds? I.e., how do I extract the coefficients of the final model from the AutoTuner?

set.seed(23)
at$train(train.task) -> tune1

set.seed(2323) 
at$train(train.task) -> tune2



1 Answer


Repeated hyperparameter tuning (alpha and lambda) of glmnet can be done using the second mlr3 approach described above. The coefficients can be extracted with stats::coef, using the tuning results stored in the AutoTuner:

coef(tune1$model$learner$model, alpha = tune1$tuning_result$alpha, s = tune1$tuning_result$s)
# 9 x 1 sparse Matrix of class "dgCMatrix"
# 1
# (Intercept) -1.6359082102
# age          0.0075541841
# glucose      0.0044351365
# insulin      0.0005821515
# mass         0.0077104934
# pedigree     0.0911233031
# pregnant     0.0164721202
# pressure     0.0007055435
# triceps      0.0056942014
coef(tune2$model$learner$model, alpha = tune2$tuning_result$alpha, s = tune2$tuning_result$s)
# 9 x 1 sparse Matrix of class "dgCMatrix"
# 1
# (Intercept) -1.6359082102
# age          0.0075541841
# glucose      0.0044351365
# insulin      0.0005821515
# mass         0.0077104934
# pedigree     0.0911233031
# pregnant     0.0164721202
# pressure     0.0007055435
# triceps      0.0056942014
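To close the loop with the caret check at the top of the question, the two extracted coefficient matrices can also be compared programmatically (a sketch that assumes the tune1 and tune2 AutoTuner objects trained above):

```r
# both seeds select the same model, mirroring the "repeatedcv" result in caret
identical(
  coef(tune1$model$learner$model, alpha = tune1$tuning_result$alpha, s = tune1$tuning_result$s),
  coef(tune2$model$learner$model, alpha = tune2$tuning_result$alpha, s = tune2$tuning_result$s)
)
```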
Answered 2021-03-21T22:22:45.760