I am using some name features together with other features to predict y (a binary class). The name features are substrings of the name. I am using Python scikit-learn.
Here is a small sample of X:
[{'substring=ry': True, 'substring=lo': True, 'substring=oui': True, 'firstLetter-firstName': u'm', 'substring=mar': True, 'avg-length': 5.0, 'substring=bl': True, 'lastLetter-lastName': u'n', 'substring=mary': True, 'substring=lou': True, 'metaphone=MR': True, 'location': u'richmond & wolfe, quebec, canada', 'substring=ma': True, 'substring=ui': True, 'substring=in': True, 'substring=ary': True, 'substring=loui': True, 'firstLetter-lastName': u'b', 'lastLetter-firstName': u'y', 'first-name': u'mary', 'substring=ou': True, 'last-name': u'blouin', 'substring=blo': True, 'substring=uin': True, 'metaphone=PLN': True, 'substring=ar': True, 'name-entity': 2, 'substring=blou': True, 'substring=ouin': True}]
I then vectorize X with a DictVectorizer, which turns it into this:
(0, 0) 5.0
(0, 6798) 1.0
(0, 9944) 1.0
(0, 9961) 1.0
(0, 11454) 1.0
(0, 28287) 1.0
(0, 28307) 1.0
(0, 28483) 1.0
(0, 33376) 1.0
(0, 34053) 1.0
(0, 36901) 2.0
(0, 39167) 1.0
(0, 39452) 1.0
(0, 40797) 1.0
(0, 40843) 1.0
(0, 40853) 1.0
(0, 51489) 1.0
(0, 54903) 1.0
(0, 55050) 1.0
(0, 55058) 1.0
(0, 55680) 1.0
(0, 55835) 1.0
(0, 55856) 1.0
(0, 60698) 1.0
(0, 60752) 1.0
(0, 60759) 1.0
(0, 64391) 1.0
(0, 68278) 1.0
(0, 68318) 1.0
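For reference, here is a minimal, self-contained sketch (with made-up keys, not my real data) of what I understand DictVectorizer to be doing above: string-valued entries become one-hot "key=value" columns, booleans and numbers are passed through as numeric values, and get_feature_names() gives the column order of the sparse matrix.

from sklearn.feature_extraction import DictVectorizer

# Toy dicts shaped like my real features (illustrative only)
toy = [
    {'first-name': u'mary', 'substring=ma': True, 'avg-length': 5.0, 'name-entity': 2},
    {'first-name': u'john', 'substring=jo': True, 'avg-length': 4.0, 'name-entity': 1},
]
dv_toy = DictVectorizer()
X_toy = dv_toy.fit_transform(toy)

# Columns in order; string values were expanded to "key=value" features
print(dv_toy.get_feature_names())
# ['avg-length', 'first-name=john', 'first-name=mary', 'name-entity', 'substring=jo', 'substring=ma']
print(X_toy.toarray())
# [[ 5.  0.  1.  2.  0.  1.]
#  [ 4.  1.  0.  1.  1.  0.]]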
The problem is that I have completely lost track of what the columns of the new X represent. And since I need to produce a decision tree graph, all I can get is output like this, which I cannot interpret:
digraph Tree {
0 [label="X[0] <= 4.5000\ngini = 0.5\nsamples = 25000", shape="box"] ;
1 [label="X[39167] <= 0.5000\ngini = 0.0734231704267\nsamples = 891", shape="box"] ;
0 -> 1 ;
2 [label="X[36901] <= 2.5000\ngini = 0.0575468244736\nsamples = 702", shape="box"] ;
1 -> 2 ;
3 [label="X[58147] <= 0.5000\ngini = 0.0359355212331\nsamples = 442", shape="box"] ;
2 -> 3 ;
4 [label="X[9977] <= 0.5000\ngini = 0.0316694756485\nsamples = 396", shape="box"] ;
3 -> 4 ;
5 [label="X[29713] <= 0.5000\ngini = 0.0275525222406\nsamples = 352", shape="box"] ;
4 -> 5 ;
6 [label="X[9788] <= 0.5000\ngini = 0.0244412457957\nsamples = 319", shape="box"] ;
5 -> 6 ;
7 [label="X[46929] <= 0.5000\ngini = 0.0226406785428\nsamples = 300", shape="box"] ;
6 -> 7 ;
8 [label="X[45465] <= 0.5000\ngini = 0.0209286264458\nsamples = 282", shape="box"] ;
7 -> 8 ;
9 [label="X[45718] <= 0.5000\ngini = 0.0194016759597\nsamples = 266", shape="box"] ;
8 -> 9 ;
10 [label="X[28311] <= 0.5000\ngini = 0.0178698827564\nsamples = 250", shape="box"] ;
9 -> 10 ;...
My Python code:
from sklearn.feature_extraction import DictVectorizer
from sklearn import cross_validation
from sklearn import tree

# Decision tree with automatic class weighting for the imbalanced binary target
classifierUsed2 = tree.DecisionTreeClassifier(class_weight="auto")

# Vectorize the list of feature dicts into a sparse matrix
dv = DictVectorizer()
newX = dv.fit_transform(all_dict)

# Split (testTrainSplit is the test-set fraction, defined elsewhere), fit, and predict
X_train, X_test, y_train, y_test = cross_validation.train_test_split(newX, y, test_size=testTrainSplit)
classifierUsed2.fit(X_train, y_train)
y_train_predictions = classifierUsed2.predict(X_train)
y_test_predictions = classifierUsed2.predict(X_test)

# Export the fitted tree to Graphviz DOT format
tree.export_graphviz(classifierUsed2, out_file='graph.dot')
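Is something like the following the right way to get the names back? This is only a sketch of what I am considering, assuming the fitted dv from above is still in scope: the X[...] numbers in the .dot file should be column indices into newX, so dv.get_feature_names() should map them back, and export_graphviz also takes a feature_names argument so the node labels would be readable from the start.

# Column index -> original feature name (index taken from the .dot output)
feature_names = dv.get_feature_names()
print(feature_names[39167])

# Or export the tree with readable node labels in the first place
tree.export_graphviz(classifierUsed2, out_file='graph.dot',
                     feature_names=feature_names)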