I am using the Bioconductor package CMA to perform internal Monte Carlo cross-validation (MCCV) of SVM classifiers on a microarray dataset. CMA internally uses the e1071 R package for the SVM work.
The dataset has 387 variables (attributes) for 45 samples (observations), each belonging to one of two classes (labels 0 or 1; the ratio is about 1:1). All data are numeric and there are no NAs. I am attempting a 1000-iteration MCCV that selects 15 variables for the SVM using the limma statistic for differential gene-expression analysis. During the MCCV, part of the 45-sample set is used to train an SVM classifier, which is then tested on the remaining part, and I am trying different values for the size of the training portion. CMA also performs an inner-loop validation (by default, 3-fold cross-validation within the training set) to fine-tune the classifier for the cross-validation against the test set. All of this is done inside the CMA package.
Sometimes, for smaller training-set sizes, CMA throws an error in the console and halts execution of the rest of the classification code:
[snip]
tuning iteration 575
tuning iteration 576
tuning iteration 577
Error in predict.svm(ret, xhold, decision.values = TRUE) : Model is empty!
This happens even when I use a test other than limma for variable selection, or when I use two variables instead of 15 for classifier generation. The R code I use should ensure that the training sets always contain members of both classes (see the diagnostic sketch after the code below). I would appreciate any insight into this.
Below is the R code I use, with Mac OS X 10.6.6, R 2.12.1, Biobase 2.10.0, CMA 1.8.1, limma 3.6.9 and WilcoxCV 1.0.2. The data file hy3ExpHsaMir.txt can be downloaded from http://rapidshare.com/files/447062901/hy3ExpHsaMir.txt.
Everything works fine until g reaches 9 in the for(g in 0:10) loop that varies the training/test-set sizes: at g = 9, ntest = 3 + 9*3 = 30, so only 15 of the 45 samples are used for training.
# exp is the expression table, a matrix; 'classes' is list of known classes
exp <- as.matrix(read.table(file='hy3ExpHsaMir.txt', sep='\t', row.names=1, header=T, check.names=F))
#best is to use 0 and 1 as class labels (instead of 'p', 'g', etc.) with 1 for 'positive' items (positive for recurrence, or for disease, etc.)
classes <- c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
yesPredVal = 1 # class label for 'positive' items in 'classes'
library(CMA)
library(WilcoxCV)
myNumFun <- function(x, y){round(y(as.numeric(x), na.rm=T), 4)}
set.seed(631)
out = ''
out2 = '\nEffect of varying the training-set size:\nTraining-set size\tSuccessful iterations\tMean acc.\tSD acc.\tMean sens.\tSD sens.\tMean spec.\tSD spec.\tMean PPV\tSD PPV\tMean NPV\tSD NPV\tTotal genes in the classifiers\n'
niter = 1000
diffTest = 'limma'
diffGeneNum = 15
svmCost <- c(0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50)
for(g in 0:10){ # varying the training/test-set sizes
    ntest = 3+g*3 # test-set size
    result <- matrix(nrow=0, ncol=7)
    colnames(result) <- c('trainSetSize', 'iteration', 'acc', 'sens', 'spec', 'ppv', 'npv')
    diffGenes <- numeric()

    # generate training and test sets
    lsets <- GenerateLearningsets(n=ncol(exp), y=classes, method=c('MCCV'),
        niter=niter, ntrain=ncol(exp)-ntest)

    # actual prediction work: gene selection, tuning and SVM classification per iteration
    svm <- classification(t(exp), factor(classes), learningsets=lsets,
        genesellist=list(method=diffTest), classifier=svmCMA, nbgene=diffGeneNum,
        tuninglist=list(grids=list(cost=svmCost)), probability=TRUE)
    svm <- join(svm) # concatenate the per-iteration outputs into one object

    # genes in classifiers
    svmGenes <- GeneSelection(t(exp), classes, learningsets=lsets, method=diffTest)

    actualIters = 0
    for(h in 1:niter){
        m <- ntest*(h-1) # offset of iteration h in the concatenated predictions
        # valid SVM classification requires min. 2 classes
        if(1 < length(unique(classes[-lsets@learnmatrix[h,]]))){
            actualIters = actualIters+1
            tp <- tn <- fp <- fn <- 0
            # tally the confusion-matrix counts over this iteration's test set
            for(i in 1:ntest){
                pred <- svm@yhat[m+i]
                known <- svm@y[m+i]
                if(pred == known){
                    if(pred == yesPredVal){tp <- tp+1}
                    else{tn <- tn+1}
                }else{
                    if(pred == yesPredVal){fp <- fp+1}
                    else{fn <- fn+1}
                }
            }
            result <- rbind(result, c(ncol(exp)-ntest, h, (tp+tn)/(tp+tn+fp+fn),
                tp/(tp+fn), tn/(tn+fp), tp/(tp+fp), tn/(tn+fn)))
            diffGenes <- c(diffGenes, toplist(svmGenes, k=diffGeneNum, iter=h, show=F)$index)
        } # end if valid SVM
    } # end for h

    # output accuracy, etc.
    out = paste(out, 'SVM MCCV using ', niter, ' attempted iterations and ', actualIters,
        ' successful iterations, with ', ncol(exp)-ntest, ' of ', ncol(exp),
        ' total samples used for training:\nThe means (ranges; SDs) of prediction accuracy, sensitivity, specificity, PPV and NPV in fractions are ',
        myNumFun(result[, 'acc'], mean), ' (', myNumFun(result[, 'acc'], min), '-', myNumFun(result[, 'acc'], max), '; ', myNumFun(result[, 'acc'], sd), '), ',
        myNumFun(result[, 'sens'], mean), ' (', myNumFun(result[, 'sens'], min), '-', myNumFun(result[, 'sens'], max), '; ', myNumFun(result[, 'sens'], sd), '), ',
        myNumFun(result[, 'spec'], mean), ' (', myNumFun(result[, 'spec'], min), '-', myNumFun(result[, 'spec'], max), '; ', myNumFun(result[, 'spec'], sd), '), ',
        myNumFun(result[, 'ppv'], mean), ' (', myNumFun(result[, 'ppv'], min), '-', myNumFun(result[, 'ppv'], max), '; ', myNumFun(result[, 'ppv'], sd), '), and ',
        myNumFun(result[, 'npv'], mean), ' (', myNumFun(result[, 'npv'], min), '-', myNumFun(result[, 'npv'], max), '; ', myNumFun(result[, 'npv'], sd), '), respectively.\n', sep='')

    # output classifier genes and their occurrence frequencies across iterations
    diffGenesUnq <- unique(diffGenes)
    out = paste(out, 'A total of ', length(diffGenesUnq), ' genes occur in the ', actualIters,
        ' classifiers, with occurrence frequencies in fractions:\n', sep='')
    for(i in 1:length(diffGenesUnq)){
        out = paste(out, rownames(exp)[diffGenesUnq[i]], '\t',
            round(sum(diffGenes == diffGenesUnq[i])/actualIters, 3), '\n', sep='')
    }

    # output split-size effect
    out2 = paste(out2, ncol(exp)-ntest, '\t', actualIters, '\t',
        myNumFun(result[, 'acc'], mean), '\t', myNumFun(result[, 'acc'], sd), '\t',
        myNumFun(result[, 'sens'], mean), '\t', myNumFun(result[, 'sens'], sd), '\t',
        myNumFun(result[, 'spec'], mean), '\t', myNumFun(result[, 'spec'], sd), '\t',
        myNumFun(result[, 'ppv'], mean), '\t', myNumFun(result[, 'ppv'], sd), '\t',
        myNumFun(result[, 'npv'], mean), '\t', myNumFun(result[, 'npv'], sd), '\t',
        length(diffGenesUnq), '\n', sep='')
} # end for g
cat(out, out2, sep='')
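This is the diagnostic sketch mentioned above, which I use to back up the claim that the training sets always contain both classes. It is run after GenerateLearningsets and assumes, as my code above does, that each row of lsets@learnmatrix holds the training indices of one MCCV iteration; splitClasses and badSplits are just illustrative names:

# for each MCCV iteration, count the distinct classes in the training split
splitClasses <- apply(lsets@learnmatrix, 1, function(idx) length(unique(classes[idx])))
badSplits <- which(splitClasses < 2) # iterations whose training set has only one class
length(badSplits) # 0 would mean every training split contains both classes

Note that this only checks the outer MCCV training splits; it says nothing about the inner 3-fold tuning splits that CMA generates internally.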
Output of traceback():

20: stop("Model is empty!")
19: predict.svm(ret, xhold, decision.values = TRUE)
18: predict(ret, xhold, decision.values = TRUE)
17: na.action(predict(ret, xhold, decision.values = TRUE))
16: svm.default(cost = 0.1, kernel = "linear", type = "C-classification", ...
15: svm(cost = 0.1, kernel = "linear", type = "C-classification", ...
14: do.call("svm", args = ll)
13: function (X, y, f, learnind, probability, models = FALSE, ...) ...
12: function (X, y, f, learnind, probability, models = FALSE, ...) ...
11: do.call(classifier, args = c(list(X = X, y = y, learnind = learnmatrix[i, ...
10: classification(X = c(83.5832768669369, 83.146333099001, 94.253534443549, ...
9: classification(X = c(83.5832768669369, 83.146333099001, 94.253534443549, ...
8: do.call("classification", args = c(list(X = Xi, y = yi, learningsets = lsi, ...
7: tune(grids = list(cost = c(0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50 ...
6: tune(grids = list(cost = c(0.1, 0.2, 0.5, 1, 2, 5, 10, 20, 50 ...
5: do.call("tune", args = c(tuninglist, ll))
4: classification(X, y = as.numeric(y) - 1, learningsets = learningsets, ...
3: classification(X, y = as.numeric(y) - 1, learningsets = learningsets, ...
2: classification(t(exp), factor(classes), learningsets = lsets, ...
1: classification(t(exp), factor(classes), learningsets = lsets, ...
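As a stopgap (not a fix for the underlying problem), I could wrap the classification() call in tryCatch() so that a tuning failure for one split size does not halt the whole for(g in 0:10) loop. A minimal sketch under that assumption:

# sketch: skip a split size whose tuning/classification fails, instead of aborting
svm <- tryCatch(
    classification(t(exp), factor(classes), learningsets=lsets,
        genesellist=list(method=diffTest), classifier=svmCMA, nbgene=diffGeneNum,
        tuninglist=list(grids=list(cost=svmCost)), probability=TRUE),
    error=function(e){
        message('classification() failed for ntest=', ntest, ': ', conditionMessage(e))
        NULL
    })
if(is.null(svm)) next # move on to the next value of g

This of course discards all results for the failing value of g, so it only keeps the run going; it does not explain why the model comes up empty.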