Suppose my data set is a 100 x 3 matrix of categorical variables, and I want to do binary classification on the response variable. Let's make up a data set with the following code:
set.seed(2013)
y <- as.factor(round(runif(n=100,min=0,max=1),0))
var1 <- rep(c("red","blue","yellow","green"),each=25)
var2 <- rep(c("shortest","short","tall","tallest"),25)
df <- data.frame(y,var1,var2)
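As a quick sanity check (this snippet is my addition, not part of the runs below), the structure of the simulated data can be confirmed with:

```r
# Inspect the simulated data: 100 rows, binary y, 4 levels each in var1/var2
str(df)
table(df$y)               # class balance of the response
table(df$var1, df$var2)   # which color/height combinations occur
```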
The data look like this:
> head(df)
  y var1     var2
1 0  red shortest
2 1  red    short
3 1  red     tall
4 1  red  tallest
5 0  red shortest
6 1  red    short
I tried random forest and adaboost on these data in two different ways. The first way is to use the data as-is:
> library(randomForest)
> randomForest(y~var1+var2,data=df,ntree=500)
Call:
 randomForest(formula = y ~ var1 + var2, data = df, ntree = 500)
Type of random forest: classification
Number of trees: 500
No. of variables tried at each split: 1
OOB estimate of error rate: 44%
Confusion matrix:
   0  1 class.error
0 29 22   0.4313725
1 22 27   0.4489796
----------------------------------------------------
> library(ada)
> ada(y~var1+var2,data=df)
Call:
ada(y ~ var1 + var2, data = df)
Loss: exponential Method: discrete Iteration: 50
Final Confusion Matrix for Data:
          Final Prediction
True value  0  1
         0 34 17
         1 16 33
Train Error: 0.33
Out-Of-Bag Error: 0.33 iteration= 11
Additional Estimates of number of iterations:
train.err1 train.kap1
10 16
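To see how the factor-coded predictors are actually used in this first approach, the variable importance can be inspected (this check is my addition; the forest is refit as above):

```r
# Refit the factor-coded forest and look at mean decrease in Gini
library(randomForest)
rf <- randomForest(y ~ var1 + var2, data = df, ntree = 500)
importance(rf)  # one importance row per variable, not per level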
The second way is to convert the data set to wide format and treat each category as its own variable. My reason for doing this is that my actual data set has more than 500 levels in var1 and var2, so any single tree split can only divide those 500 categories into 2 partitions, which loses a lot of information. To convert the data:
id <- 1:100
library(reshape2)
tmp1 <- dcast(melt(cbind(id,df),id.vars=c("id","y")),id+y~var1,fun.aggregate=length)
tmp2 <- dcast(melt(cbind(id,df),id.vars=c("id","y")),id+y~var2,fun.aggregate=length)
df2 <- merge(tmp1,tmp2,by=c("id","y"))
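An alternative way to get a one-column-per-level encoding, which I also considered (the names ctr and df2_alt are just illustrative, and this yields 0/1 indicators rather than counts), is base R's model.matrix with full dummy coding:

```r
# One-hot encode var1 and var2; contrasts = FALSE keeps a column for every level
ctr <- lapply(df[c("var1", "var2")], contrasts, contrasts = FALSE)
X   <- model.matrix(~ var1 + var2, data = df, contrasts.arg = ctr)
df2_alt <- data.frame(y = df$y, X[, -1])  # drop the intercept column
```

The resulting df2_alt can then be passed to randomForest(y ~ ., data = df2_alt) in the same way.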
The new data look like this:
> head(df2)
   id y blue green red yellow short shortest tall tallest
1   1 0    0     0   2      0     0        2    0       0
2  10 1    0     0   2      0     2        0    0       0
3 100 0    0     2   0      0     0        0    0       2
4  11 0    0     0   2      0     0        0    2       0
5  12 0    0     0   2      0     0        0    0       2
6  13 1    0     0   2      0     0        2    0       0
I applied random forest and adaboost to this new data set:
> library(randomForest)
> randomForest(y~blue+green+red+yellow+short+shortest+tall+tallest,data=df2,ntree=500)
Call:
 randomForest(formula = y ~ blue + green + red + yellow + short + shortest + tall + tallest, data = df2, ntree = 500)
Type of random forest: classification
Number of trees: 500
No. of variables tried at each split: 2
OOB estimate of error rate: 39%
Confusion matrix:
   0  1 class.error
0 32 19   0.3725490
1 20 29   0.4081633
----------------------------------------------------
> library(ada)
> ada(y~blue+green+red+yellow+short+shortest+tall+tallest,data=df2)
Call:
ada(y ~ blue + green + red + yellow + short + shortest + tall +
tallest, data = df2)
Loss: exponential Method: discrete Iteration: 50
Final Confusion Matrix for Data:
          Final Prediction
True value  0  1
         0 36 15
         1 20 29
Train Error: 0.35
Out-Of-Bag Error: 0.33 iteration= 26
Additional Estimates of number of iterations:
train.err1 train.kap1
5 10
The results of the two approaches are different, and the difference becomes more pronounced as more levels are introduced in each variable, i.e. in var1 and var2. My questions are: since we are using exactly the same data, why are the results different? How should we interpret the results of the two approaches, and which is more reliable?