I am working on imbalanced data for classification, and I previously tried oversampling the training data with the Synthetic Minority Oversampling Technique (SMOTE). This time, however, I also need to use Leave One Group Out (LOGO) cross-validation, because I want to hold out one subject on each CV iteration.
I am not sure I can explain it well, but as far as I understand, to combine SMOTE with k-fold CV we should apply SMOTE inside each fold of the loop, as I saw in the code from another post. Below is an example of applying SMOTE within k-fold CV.
from sklearn.model_selection import KFold
from imblearn.over_sampling import SMOTE
from sklearn.metrics import f1_score

kf = KFold(n_splits=5)
for fold, (train_index, test_index) in enumerate(kf.split(X), 1):
    X_train = X[train_index]
    y_train = y[train_index]
    X_test = X[test_index]
    y_test = y[test_index]
    sm = SMOTE()
    # oversample only the training fold, never the test fold
    X_train_oversampled, y_train_oversampled = sm.fit_resample(X_train, y_train)
    model = ...  # classification model example
    model.fit(X_train_oversampled, y_train_oversampled)
    y_pred = model.predict(X_test)
    print(f'For fold {fold}:')
    print(f'Accuracy: {model.score(X_test, y_test)}')
    print(f'f-score: {f1_score(y_test, y_pred)}')
Without SMOTE, I tried the following for LOGO CV. But this way I am training on a severely imbalanced dataset.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

y = np.array(df.loc[:, df.columns == 'label'])
groups = df["cow_id"].values  # because I want to leave out all data from one cow ID on each run

logo = LeaveOneGroupOut()
logo.get_n_splits(X, y, groups)
cv = logo.split(X, y, groups)

scores = []
for train_index, test_index in cv:
    print("Train Index: ", train_index, "\n")
    print("Test Index: ", test_index)
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    model.fit(X_train, y_train.ravel())
    scores.append(model.score(X_test, y_test.ravel()))
How should I implement SMOTE inside a leave-one-group-out CV loop? I am confused about how to define the list of groups for the synthetic training data.