I am working on a multi-class classification problem in which the 3 classes (1, 2, 3) are perfectly balanced (70 instances per class, giving a (210, 8) DataFrame).
Right now the data is ordered by class: the first 70 instances are class 1, the next 70 are class 2, and the last 70 are class 3. I know that with this ordering a plain sequential split would give a high score on the training set but a poor score on the test set, because the test set would then contain classes the model has never seen. So I used stratify=y in train_test_split. Here is my code:
# SPLITTING
import numpy as np
import optuna
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import confusion_matrix, classification_report

# `pipe` is the pipeline defined earlier; its KNN step is named 'model'
train_x, test_x, train_y, test_y = train_test_split(data2, y, test_size=0.2,
                                                    random_state=69, stratify=y)

# baseline cross-validated macro-F1 on the training split
cross_val_model = cross_val_score(pipe, train_x, train_y, cv=5,
                                  n_jobs=-1, scoring='f1_macro')
s_score = cross_val_model.mean()
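# (Sanity check, not in my original run: confirm the stratified split kept the
#  1:1:1 class balance -- expected 56 per class in train and 14 per class in test.)
print(np.unique(train_y, return_counts=True))
print(np.unique(test_y, return_counts=True))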
def objective(trial):
    # search space for the KNN step of the pipeline
    model__n_neighbors = trial.suggest_int('model__n_neighbors', 1, 20)
    model__metric = trial.suggest_categorical('model__metric',
                                              ['euclidean', 'manhattan', 'minkowski'])
    model__weights = trial.suggest_categorical('model__weights', ['uniform', 'distance'])
    params = {'model__n_neighbors': model__n_neighbors,
              'model__metric': model__metric,
              'model__weights': model__weights}
    pipe.set_params(**params)
    # objective to maximize: 5-fold CV macro-F1 on the training split
    return np.mean(cross_val_score(pipe, train_x, train_y, cv=5,
                                   n_jobs=-1, scoring='f1_macro'))
knn_study = optuna.create_study(direction='maximize')
knn_study.optimize(objective, n_trials=10)
knn_study.best_params
optuna_gave_score = knn_study.best_value

# refit on the full training split with the best hyperparameters, then evaluate
pipe.set_params(**knn_study.best_params)
pipe.fit(train_x, train_y)
pred = pipe.predict(test_x)
c_matrix = confusion_matrix(test_y, pred)
c_report = classification_report(test_y, pred)
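For reference, the cross-validated estimate can also be compared with the held-out score directly; a minimal sketch (f1_score would be an extra import from sklearn.metrics, not part of the code above):

from sklearn.metrics import f1_score
test_f1 = f1_score(test_y, pred, average='macro')   # macro-F1 on the held-out test set
print(optuna_gave_score, test_f1)                   # CV estimate vs. test score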
Now the problem is that I get a perfect score on everything on the test set, while the cross-validated macro-F1 score is only 0.898. Here are my confusion matrix and classification report:
[[14  0  0]
 [ 0 14  0]
 [ 0  0 14]]
Classification report:
precision recall f1-score support
1 1.00 1.00 1.00 14
2 1.00 1.00 1.00 14
3 1.00 1.00 1.00 14
accuracy 1.00 42
macro avg 1.00 1.00 1.00 42
weighted avg 1.00 1.00 1.00 42
Am I overfitting, or what is going on?