
I am using the mlr package in R to compare two learners on a binary classification task: a random forest and a lasso classifier. I would like to extract the feature importances from the best classifier, in this case the random forest, similar to caret::varImp(). I came across getBMRFeatSelResults(), getFeatureImportance(), and generateFeatureImportanceData(), but none of them seem to do the trick. Below is the code I use to run the benchmark experiment with nested resampling. Ideally, I would like the mean decrease in Gini. Thank you.

library(easypackages)

libraries("mlr","purrr","glmnet","parallelMap","parallel")

data = read.table("data_past.txt", header = TRUE)

set.seed(123)

task = makeClassifTask(id = "past_history", data = data, target = "DIAG", positive = "BD")

ps_rf = makeParamSet(makeIntegerParam("mtry", lower = 4, upper = 16),makeDiscreteParam("ntree", values = 1000))

ps_lasso = makeParamSet(makeNumericParam("s", lower = .01, upper = 1),makeDiscreteParam("alpha", values = 1))

ctrl_rf = makeTuneControlRandom(maxit = 10L)

ctrl_lasso = makeTuneControlRandom(maxit = 100L)

inner = makeResampleDesc("RepCV", folds = 10, reps = 3, stratify = TRUE)

lrn_rf = makeLearner("classif.randomForest", predict.type = "prob", fix.factors.prediction = TRUE)

lrn_rf = makeTuneWrapper(lrn_rf, resampling = inner, par.set = ps_rf, control = ctrl_rf, measures = auc, show.info = FALSE)

lrn_lasso = makeLearner("classif.glmnet", predict.type = "prob", fix.factors.prediction = TRUE)

lrn_lasso = makeTuneWrapper(learner = lrn_lasso, resampling = inner, control = ctrl_lasso,  par.set = ps_lasso, measures = auc, show.info = FALSE)

outer = makeResampleDesc("CV", iters = 10, stratify = TRUE)

lrns = list(lrn_rf, lrn_lasso)

parallelStartMulticore(36)

res = benchmark(lrns, task, outer, measures = list(auc, ppv, npv, fpr, tpr, mmce), show.info = FALSE, models = TRUE)

saveRDS(res, file = "res.rds")

parallelStop()

models <- getBMRModels(res, drop = TRUE)
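For reference, here is a sketch (not part of the original question) of how per-fold importances might be pulled out of the benchmark result. It assumes the benchmark was run with models = TRUE, that mlr lists the tuned random-forest wrapper under the id "classif.randomForest.tuned" (mlr appends ".tuned" to learners wrapped by makeTuneWrapper), and that getFeatureImportance() dispatches through the wrapper to the underlying randomForest fit; all of these are assumptions worth checking against your mlr version.

```r
# Sketch: per-outer-fold feature importance from the benchmark result `res`.
# With a single task, drop = TRUE leaves a list indexed by learner id;
# the assumed id for the tuned RF wrapper is "classif.randomForest.tuned".
rf_models <- getBMRModels(res, drop = TRUE)[["classif.randomForest.tuned"]]

# One importance table per outer CV fold; getFeatureImportance() returns
# an object whose $res slot holds the importance values.
imp_per_fold <- lapply(rf_models, function(mod) {
  getFeatureImportance(mod)$res
})
```

If this runs, the fold-wise tables can then be aggregated (e.g. averaged) by hand, but note the caveat in the answer below about interpreting importances computed inside CV.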

1 Answer


Since you are talking about CV and wanting to

extract the feature importances from the best classifier

it is not clear what you want to do. There is no single "best model" in CV, and importance is usually not measured within CV.

CV is meant to estimate/compare predictive performance, not to compute/interpret feature importance.
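In that spirit, a minimal sketch (an addition, not from the original answer) of the usual pattern: keep the nested CV purely for performance estimation, then fit one tuned model on the full task and interpret that. It assumes `lrn_rf` and `task` from the question; getLearnerModel(..., more.unwrap = TRUE) strips the tuning wrapper, and type = 2 in randomForest::importance() corresponds to MeanDecreaseGini.

```r
# Sketch: use the nested CV only for performance estimation; fit a single
# final model on the whole task for interpretation. `lrn_rf` is the
# TuneWrapper and `task` the classification task from the question.
final_mod <- train(lrn_rf, task)

# Unwrap down to the underlying randomForest fit.
rf_fit <- getLearnerModel(final_mod, more.unwrap = TRUE)

# Mean decrease in Gini (type = 2), analogous to caret::varImp().
randomForest::importance(rf_fit, type = 2)
```

Note this gives the importance of one model trained on all the data, which is usually what is wanted for interpretation, rather than a fold-averaged quantity.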

There is an answer to a similar question that may help.

I came across getBMRFeatSelResults(), getFeatureImportance(), and generateFeatureImportanceData(), but none of them seem to do the trick.

When making a statement like this, it would help to explain in detail why these functions do not do what you want, rather than just stating the fact. :)

answered 2019-12-12T15:17:42.843