I would like to know whether my current procedure is correct or whether I might have data leakage. After importing the dataset I split it 80/20.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=0, stratify=y)
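(Side note: to convince myself that the stratified split keeps the class balance, I sometimes print the class proportions of both parts; a minimal sketch, assuming y behaves like an array/Series of labels:)

import pandas as pd

# class proportions should be (almost) identical in train and test thanks to stratify=y
print(pd.Series(y_train).value_counts(normalize=True))
print(pd.Series(y_test).value_counts(normalize=True))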
Then, after defining the CatBoostClassifier, I run a grid search with cross-validation on my training set.
from catboost import CatBoostClassifier

clf = CatBoostClassifier(leaf_estimation_iterations=1, border_count=254,
                         scale_pos_weight=1.67)

grid = {'learning_rate': [0.001, 0.003, 0.006, 0.01, 0.03, 0.06, 0.1, 0.3, 0.6, 0.9],
        'depth': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
        'l2_leaf_reg': [1, 3, 5, 7, 9, 11, 13, 15],
        'iterations': [50, 150, 250, 350, 450, 600, 800, 1000]}

clf.grid_search(grid, X=X_train, y=y_train, cv=10)
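(If it helps, I could also capture the return value of grid_search to see which parameter combination was selected; a minimal sketch of what I mean, where search_result is just a name I made up, and my understanding is that the returned dict has a 'params' entry and that with the default refit=True the classifier is re-trained on those parameters:)

# same call as above, just keeping the returned dict (hypothetical variable name)
search_result = clf.grid_search(grid, X=X_train, y=y_train, cv=10)

print(search_result['params'])  # best parameter combination found by the search
# if refit=True (the default), clf should now already be trained with these parameters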
Now I want to evaluate my model. Can I use the whole dataset to perform k-fold cross-validation for the evaluation (as in the code below)?
from sklearn.model_selection import RepeatedStratifiedKFold, cross_validate

kf = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
scoring = ['accuracy', 'f1', 'roc_auc', 'recall', 'precision']
scores = cross_validate(
    clf, X, y, scoring=scoring, cv=kf, return_train_score=True)
print("Accuracy TEST: %0.2f (+/- %0.2f) Accuracy TRAIN: %0.2f (+/- %0.2f)" %
(scores['test_accuracy'].mean(), scores['test_accuracy'].std() * 2, scores['train_accuracy'].mean(), scores['train_accuracy'].std() * 2))
print("F1 TEST: %0.2f (+/- %0.2f) F1 TRAIN : %0.2f (+/- %0.2f) " %
(scores['test_f1'].mean(), scores['test_f1'].std() * 2, scores['train_f1'].mean(), scores['train_f1'].std() * 2))
print("AUROC TEST: %0.2f (+/- %0.2f) AUROC TRAIN : %0.2f (+/- %0.2f)" %
(scores['test_roc_auc'].mean(), scores['test_roc_auc'].std() * 2, scores['train_roc_auc'].mean(), scores['train_roc_auc'].std() * 2))
print("recall TEST: %0.2f (+/- %0.2f) recall TRAIN: %0.2f (+/- %0.2f)" %
(scores['test_recall'].mean(), scores['test_recall'].std() * 2, scores['train_recall'].mean(), scores['train_recall'].std() * 2))
print("Precision TEST: %0.2f (+/- %0.2f) Precision TRAIN: %0.2f (+/- %0.2f)" %
(scores['test_precision'].mean(), scores['test_precision'].std() * 2, scores['train_precision'].mean(), scores['train_precision'].std() * 2))
Or should I perform the k-fold cross-validation only on the training set as well?
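(For reference, the alternative I have in mind would look roughly like this; only a sketch, not tested, with scores_cv being a name I made up and accuracy/ROC AUC just the first metrics I would check on the hold-out set:)

from sklearn.metrics import accuracy_score, roc_auc_score

# cross-validate only on the 80% training portion
scores_cv = cross_validate(
    clf, X_train, y_train, scoring=scoring, cv=kf, return_train_score=True)

# ...and keep the 20% hold-out for one final check
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))
print(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))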