I am trying to use GridSearchCV to tune the parameters of a LightGBM model, but I am not familiar with how to save every prediction result from each GridSearchCV iteration. Unfortunately, I only know how to save the results for one specific set of parameters.
Here is the code:
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

param = {
    'bagging_freq': 5,
    'bagging_fraction': 0.4,
    'boost_from_average': 'false',
    'boost': 'gbdt',
    'feature_fraction': 0.05,
    'learning_rate': 0.01,
    'max_depth': -1,
    'metric': 'auc',
    'min_data_in_leaf': 80,
    'min_sum_hessian_in_leaf': 10.0,
    'num_leaves': 13,
    'num_threads': 8,
    'tree_learner': 'serial',
    'objective': 'binary',
    'verbosity': 1
}
features = [c for c in train_df.columns if c not in ['ID_code', 'target']]
target = train_df['target']
# shuffle=True so that random_state takes effect
folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=44000)
oof = np.zeros(len(train_df))
predictions = np.zeros(len(test_df))
for fold_, (trn_idx, val_idx) in enumerate(folds.split(train_df.values, target.values)):
    print("Fold {}".format(fold_))
    trn_data = lgb.Dataset(train_df.iloc[trn_idx][features], label=target.iloc[trn_idx])
    val_data = lgb.Dataset(train_df.iloc[val_idx][features], label=target.iloc[val_idx])
    num_round = 1000000
    clf = lgb.train(param, trn_data, num_round, valid_sets=[trn_data, val_data],
                    verbose_eval=1000, early_stopping_rounds=3000)
    oof[val_idx] = clf.predict(train_df.iloc[val_idx][features], num_iteration=clf.best_iteration)
    predictions += clf.predict(test_df[features], num_iteration=clf.best_iteration) / folds.n_splits
print("CV score: {:<8.5f}".format(roc_auc_score(target, oof)))
print('Saving the Result File')
res = pd.DataFrame({"ID_code": test_df.ID_code.values})
res["target"] = predictions
res.to_csv('result_10fold{}.csv'.format(num_sub), index=False)  # num_sub is defined elsewhere
Here is the data:
train_df.head(3)
ID_code target var_0 var_1 ... var_199
0 train_0 0 8.9255 -6.7863 -9.2834
1 train_1 1 11.5006 -4.1473 7.0433
2 train_2 0 8.6093 -2.7457 -9.0837
test_df.head(3)
ID_code var_0 var_1 ... var_199
0 test_0 9.4292 11.4327 -2.3805
1 test_1 5.0930 11.4607 -9.2834
2 test_2 7.8928 10.5825 -9.0837
I want to save the predictions from every GridSearchCV iteration. I have searched several similar questions, as well as other material on using GridSearchCV with LightGBM, but I still cannot write the code correctly.
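What I have in mind, roughly, is to replace GridSearchCV with a manual loop over sklearn's ParameterGrid, writing one out-of-fold prediction file per parameter combination, although I am not sure this is the right approach. Here is a small self-contained sketch of the idea (synthetic data instead of my real DataFrames, GradientBoostingClassifier standing in for LightGBM so it runs anywhere, and file names like oof_candidate_0.csv are just placeholders):

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import ParameterGrid, StratifiedKFold

# Synthetic stand-in for train_df[features] / train_df['target']
X, y = make_classification(n_samples=200, random_state=0)

# Candidate parameter grid (2 x 2 = 4 combinations)
grid = {'learning_rate': [0.01, 0.1], 'max_depth': [2, 3]}
folds = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)

results = []
for i, params in enumerate(ParameterGrid(grid)):
    # Out-of-fold predictions for this parameter combination
    oof = np.zeros(len(y))
    for trn_idx, val_idx in folds.split(X, y):
        clf = GradientBoostingClassifier(random_state=0, **params)
        clf.fit(X[trn_idx], y[trn_idx])
        oof[val_idx] = clf.predict_proba(X[val_idx])[:, 1]
    results.append({**params, 'auc': roc_auc_score(y, oof)})
    # One prediction file per candidate, so nothing is overwritten
    pd.DataFrame({'oof': oof}).to_csv('oof_candidate_{}.csv'.format(i), index=False)

print(pd.DataFrame(results))
```

The point of the manual loop is that every candidate's predictions survive on disk, instead of only the refit best estimator's output as with GridSearchCV.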
So, if you don't mind, could someone help me with this and point me to a tutorial on it?
Thank you very much.