Apologies in advance if this is a really basic question/mistake; I've only just started learning Python.
I'm working through the Kaggle biological response tutorial and I'm getting this error:
C:\Anaconda\lib\site-packages\sklearn\cross_validation.py:65: DeprecationWarning: The indices parameter is deprecated and will be removed (assumed True) in 0.17
  stacklevel=1)
Results: 0.458614231133
Does anyone know what this means? I've googled it to death and can't find an answer.
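As far as I can tell, the warning can be reproduced with just one call (this is a stripped-down guess on my part, assuming scikit-learn 0.16, since the message says the parameter goes away in 0.17):

from sklearn import cross_validation

# Stripped-down reproduction on made-up sizes: passing the indices argument
# at all seems to be what triggers the DeprecationWarning.
cv = cross_validation.KFold(10, n_folds=5, indices=False)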
The script I'm running is:
from sklearn.ensemble import RandomForestClassifier
from sklearn import cross_validation
import logloss
import numpy as np
def main():
    #read in data, parse into training and target sets
    dataset = np.genfromtxt(open('train.csv','r'), delimiter=',', dtype='f8')[1:]
    target = np.array([x[0] for x in dataset])
    train = np.array([x[1:] for x in dataset])

    #In this case we'll use a random forest, but this could be any classifier
    cfr = RandomForestClassifier(n_estimators=100)

    #Simple K-Fold cross validation. 5 folds.
    #(Note: in older scikit-learn versions the "n_folds" argument is named "k".)
    cv = cross_validation.KFold(len(train), n_folds=5, indices=False)

    #iterate through the training and test cross validation segments and
    #run the classifier on each one, aggregating the results into a list
    results = []
    for traincv, testcv in cv:
        probas = cfr.fit(train[traincv], target[traincv]).predict_proba(train[testcv])
        results.append( logloss.llfun(target[testcv], [x[1] for x in probas]) )

    #print out the mean of the cross-validated results
    print "Results: " + str( np.array(results).mean() )

if __name__=="__main__":
    main()
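If I'm reading the warning right, dropping indices=False should silence it without changing the results, because the folds come back as integer index arrays either way and NumPy fancy indexing handles those just like boolean masks. A minimal sketch of what I mean, with random made-up data standing in for train and target:

from sklearn.ensemble import RandomForestClassifier
from sklearn import cross_validation
import numpy as np

# Made-up stand-ins for the real train/target arrays, purely for illustration.
X = np.random.rand(20, 3)
y = np.random.randint(0, 2, 20)

cfr = RandomForestClassifier(n_estimators=10)
# Same 5-fold split as before, but without the deprecated indices argument;
# traincv/testcv are then integer index arrays.
cv = cross_validation.KFold(len(X), n_folds=5)
for traincv, testcv in cv:
    probas = cfr.fit(X[traincv], y[traincv]).predict_proba(X[testcv])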
And here is logloss.py, which I believe is what the script is calling:
__author__ = 'nickd'
import scipy as sp
def llfun(act, pred):
    # clip predictions away from 0 and 1 so the log is always defined
    epsilon = 1e-15
    pred = sp.maximum(epsilon, pred)
    pred = sp.minimum(1-epsilon, pred)
    # standard binary log loss, averaged over the samples
    ll = sum(act*sp.log(pred) + sp.subtract(1,act)*sp.log(sp.subtract(1,pred)))
    ll = ll * -1.0/len(act)
    return ll
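For reference, a tiny made-up check of llfun against scikit-learn's built-in metric (I'm assuming here that sklearn.metrics.log_loss accepts a 1-D array of positive-class probabilities in the binary case, so the two numbers should roughly agree):

import numpy as np
from sklearn.metrics import log_loss
import logloss

# Purely illustrative labels and predicted probabilities of the positive class.
act = np.array([1, 0, 1, 1, 0])
pred = np.array([0.9, 0.2, 0.8, 0.6, 0.3])

print logloss.llfun(act, pred)   # hand-rolled log loss from logloss.py
print log_loss(act, pred)        # scikit-learn's built-in metric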
Again, really sorry if this is basic stuff. I've honestly never done this before.