
This is a quick question, just to make sure I'm not doing this the wrong way. I want to use auc as my measure in mlr, and I'm also using leave-one-out (LOO) resampling due to the small sample size. Of course, under the LOO cross-validation scheme the test set always contains a single instance, so auc can't be calculated per iteration. We can, of course, calculate it afterwards from the pooled predictions; the problem arises when we want to use it as the measure in the inner loop of nested cross-validation. Something like this (you must define your own binaryTask):

require(mlr)    
#for example purposes we will decide which one is better, vanilla LDA or
#vanilla SVM, in the task specified below
bls = list(makeLearner("classif.lda"), makeLearner("classif.svm"))
#modelMultiplexer allows us to search whole parameter spaces between models
#as if the models themselves were parameters
lrn = makeModelMultiplexer(bls)
#to calculate AUC we need some continuous output, so we set 
#predictType to probabilities
lrn = setPredictType(lrn, "prob")
lrn = setId(lrn, "Model Multiplexer")
#here we could pass the parameters to be tested to both SVM and LDA,
#let's not pass anything so we test the vanilla classifiers instead
ps = makeModelMultiplexerParamSet(lrn)
#finally, the resample strategy, Leave-One-Out ("LOO") in our case
rdesc = makeResampleDesc("LOO")
#parameter space search strategy, in our case we only have one parameter:
#the model. So, a simple grid search will do the trick
ctrl = makeTuneControlGrid()
#The inner CV loop where we choose the best model in the validation data
tune = makeTuneWrapper(lrn, rdesc, par.set = ps, control = ctrl, measures = auc, show.info = FALSE)
#The outer CV loop where we obtain the performance of the selected model
#on the test data. mlr is a great interface; we could have passed a list
#of classifiers and tasks here instead and done it all in one go
#(beware your memory limitations!)
res = benchmark(tune, binaryTask, rdesc, measures = auc)
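
For completeness, the post-hoc calculation mentioned above might look like this (a minimal sketch on the same binaryTask, without the inner tuning loop):

#post-hoc AUC: run plain LOO, then score the pooled predictions
lda = makeLearner("classif.lda", predict.type = "prob")
loo = resample(lda, binaryTask, makeResampleDesc("LOO"), show.info = FALSE)
performance(loo$pred, measures = auc)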

You simply can't use auc like that in both loops. How can we make mlr evaluate the measure over all the test samples jointly, instead of on a single resample at a time?


1 Answer


You can use a different resampling strategy for the inner loop and then use auc:

library(mlr)

#tune rpart's complexity parameter cp over [0, 1]
ps = makeParamSet(
  makeNumericParam(id = "cp", default = 0.01, lower = 0, upper = 1)
)
#random search with 10 iterations over the parameter set
ctrl = makeTuneControlRandom(maxit = 10)
#inner loop: subsampling (by default 30 iterations with a 2/3 training split),
#so every inner test set is large enough to compute AUC
inner = makeResampleDesc("Subsample")
lrn = makeLearner("classif.rpart", predict.type = "prob")
tune = makeTuneWrapper(lrn, resampling = inner, par.set = ps, control = ctrl, measures = auc)

#outer loop: LOO; AUC is computed afterwards on the pooled predictions
outer = makeResampleDesc("LOO")
r = resample(tune, bc.task, resampling = outer, extract = getTuneResult)
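
The extract = getTuneResult argument keeps the inner tuning result of each outer iteration, so you can inspect the hyperparameters that were chosen; a quick sketch:

#cp value selected in each outer LOO iteration
sapply(r$extract, function(tr) tr$x$cp)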

You can also just take the resampling result and compute any performance measure on the pooled predictions, e.g. performance(r$pred, auc).
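
If you want mlr itself to report AUC over all pooled LOO test predictions (which is what the question asks for), you can change the measure's aggregation instead; a sketch, assuming your mlr version ships the test.join aggregation:

#test.join first joins the predictions of all resampling iterations into one
#set and only then computes the measure, so AUC works even under LOO
auc.pooled = setAggregation(auc, test.join)
r = resample(tune, bc.task, resampling = outer, extract = getTuneResult, measures = auc.pooled)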

answered 2015-10-21T00:12:06.650