I'm trying to use sklearn's grid search with a model created by xgboost. To do this, I'm creating a custom scorer based on NDCG evaluation. I can successfully use Snippet 1 below, but it's messy/hacky and I'd rather use good old sklearn to simplify the code. I tried implementing GridSearchCV, and the results are completely off: for the same X and y sets I get NDCG@k = 0.8 with Snippet 1 versus 0.5 with Snippet 2. Clearly there's something I'm not doing right here...
The following snippets return very different results:

Snippet 1:
    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.cross_validation import StratifiedKFold  # pre-0.18 API, matching grid_search below

    kf = StratifiedKFold(y, n_folds=5, shuffle=True, random_state=42)
    max_depth = [6]
    learning_rate = [0.22]
    n_estimators = [43]
    reg_alpha = [0.1]
    reg_lambda = [10]
    for md in max_depth:
        for lr in learning_rate:
            for ne in n_estimators:
                for ra in reg_alpha:
                    for rl in reg_lambda:
                        xgb = XGBClassifier(objective='multi:softprob',
                                            max_depth=md,
                                            learning_rate=lr,
                                            n_estimators=ne,
                                            reg_alpha=ra,
                                            reg_lambda=rl,
                                            subsample=0.6, colsample_bytree=0.6, seed=0)
                        print([md, lr, ne])
                        score = []
                        for train_index, test_index in kf:
                            X_train, X_test = X[train_index], X[test_index]
                            y_train, y_test = y[train_index], y[test_index]
                            xgb.fit(X_train, y_train)
                            y_pred = xgb.predict_proba(X_test)
                            score.append(ndcg_scorer(y_test, y_pred))  # ndcg_scorer: my custom NDCG metric
                        print('all scores: %s' % score)
                        print('average score: %s' % np.mean(score))
Snippet 2:
    from sklearn.grid_search import GridSearchCV
    from sklearn.metrics import make_scorer

    params = {
        'max_depth': [6],
        'learning_rate': [0.22],
        'n_estimators': [43],
        'reg_alpha': [0.1],
        'reg_lambda': [10],
        'subsample': [0.6],
        'colsample_bytree': [0.6]
    }
    xgb = XGBClassifier(objective='multi:softprob', seed=0)
    scorer = make_scorer(ndcg_scorer, needs_proba=True)
    gs = GridSearchCV(xgb, params, cv=5, scoring=scorer, verbose=10, refit=False)
    gs.fit(X, y)
    gs.best_score_
While Snippet 1 gives me the expected result, the score Snippet 2 returns is inconsistent with ndcg_scorer.
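One difference between the two snippets worth ruling out: with `cv=5`, GridSearchCV builds its own unshuffled StratifiedKFold, while Snippet 1 shuffles with `random_state=42`, so the two paths never score on the same folds. A way to check is to pass the very same fold object to both. Below is a minimal sketch of that comparison; it assumes the modern `sklearn.model_selection` API, substitutes `GradientBoostingClassifier` for `XGBClassifier`, uses a toy probability-based metric in place of the custom `ndcg_scorer`, and passes the scorer as a plain `(estimator, X, y)` callable to sidestep `make_scorer` version differences. All of these are stand-ins, not the original setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, GridSearchCV

# Toy stand-in for the custom ndcg_scorer: any metric consuming
# class probabilities works for checking fold consistency.
def proba_metric(y_true, y_proba):
    # mean probability assigned to the true class (hypothetical metric)
    return float(np.mean(y_proba[np.arange(len(y_true)), y_true]))

# Plain-callable scorer, avoiding make_scorer API differences across versions
def scorer(estimator, X_test, y_test):
    return proba_metric(y_test, estimator.predict_proba(X_test))

X, y = make_classification(n_samples=200, n_classes=3, n_informative=6,
                           random_state=0)

# The same shuffled fold object is used by BOTH paths
kf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)

# Snippet-1 style: explicit loop over the folds
clf = GradientBoostingClassifier(n_estimators=20, random_state=0)
manual = []
for tr, te in kf.split(X, y):
    clf.fit(X[tr], y[tr])
    manual.append(proba_metric(y[te], clf.predict_proba(X[te])))
manual_mean = np.mean(manual)

# Snippet-2 style, but with cv=kf instead of cv=5, so the splits match
gs = GridSearchCV(GradientBoostingClassifier(random_state=0),
                  {'n_estimators': [20]}, cv=kf, scoring=scorer, refit=False)
gs.fit(X, y)

print('manual mean:', manual_mean)
print('grid search:', gs.best_score_)
```

With identical folds and a deterministic estimator, the two means should agree; if they still diverge in the real setup, the scorer wrapper (rather than the fold split) is the place to look next.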