I would like to understand k-fold more clearly, and in particular how to pick the best model after applying it as a cross-validation method.
According to this source: https://machinelearningmastery.com/k-fold-cross-validation/
the steps for k-fold are:
1. Shuffle the dataset randomly.
2. Split the dataset into k groups.
3. For each unique group:
   - Take the group as a hold-out or test data set
   - Take the remaining groups as a training data set
   - Fit a model on the training set and evaluate it on the test set
   - Retain the evaluation score and discard the model
4. Summarize the skill of the model using the sample of model evaluation scores
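
If I understand these steps correctly, "retain the evaluation score and discard the model" would look roughly like the sketch below, but I am not sure this is what is meant. Here build_model() is only a placeholder for a function that returns a fresh, compiled model, and data/labels are the arrays from my script further down:

import numpy as np
from sklearn.model_selection import KFold

kf = KFold(n_splits=4, shuffle=True)
scores = []                                  # the retained evaluation scores
for train_index, test_index in kf.split(data):
    X_train, X_test = data[train_index], data[test_index]
    y_train, y_test = labels[train_index], labels[test_index]
    model = build_model()                    # hypothetical: a brand-new, untrained model for this fold
    model.fit(X_train, y_train, epochs=15, batch_size=32, verbose=0)
    _, acc = model.evaluate(X_test, y_test, verbose=0)   # assumes a single metric was compiled
    scores.append(acc)                       # retain the evaluation score...
    del model                                # ...and discard the model
print("Accuracy: %0.2f (+/- %0.2f)" % (np.mean(scores), np.std(scores) * 2))
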
However, I have a question about this process: what does "retain the evaluation score and discard the model" actually mean, and how do you do it?
From my research I believe it may be related to the sklearn function cross_val_score(), but when I tried to use it by passing my model to it, it raised the following error:
Traceback (most recent call last):
  File "D:\ProgramData\Miniconda3\envs\Env_DLexp1\lib\site-packages\joblib\parallel.py", line 797, in dispatch_one_batch
    tasks = self._ready_batches.get(block=False)
_queue.Empty

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\temporary.py", line 187, in <module>
scores = cross_val_score(model, X_test, y_test, cv=kf,scoring="accuracy")
File "D:\ProgramData\Miniconda3\envs\Env_DLexp1\lib\site-packages\sklearn\model_selection\_validation.py", line 390, in cross_val_score
error_score=error_score)
File "D:\ProgramData\Miniconda3\envs\Env_DLexp1\lib\site-packages\sklearn\model_selection\_validation.py", line 236, in cross_validate
for train, test in cv.split(X, y, groups))
File "D:\ProgramData\Miniconda3\envs\Env_DLexp1\lib\site-packages\joblib\parallel.py", line 1004, in __call__
if self.dispatch_one_batch(iterator):
File "D:\ProgramData\Miniconda3\envs\Env_DLexp1\lib\site-packages\joblib\parallel.py", line 808, in dispatch_one_batch
islice = list(itertools.islice(iterator, big_batch_size))
File "D:\ProgramData\Miniconda3\envs\Env_DLexp1\lib\site-packages\sklearn\model_selection\_validation.py", line 236, in <genexpr>
for train, test in cv.split(X, y, groups))
File "D:\ProgramData\Miniconda3\envs\Env_DLexp1\lib\site-packages\sklearn\base.py", line 67, in clone
% (repr(estimator), type(estimator)))
TypeError: Cannot clone object '<keras.engine.sequential.Sequential object at 0x00000267F9C851C8>' (type <class 'keras.engine.sequential.Sequential'>): it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' methods.
According to the documentation https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html, the first argument of cross_val_score() must be an estimator, which they define as an "estimator object implementing 'fit'. The object to use to fit the data."
So I cannot make sense of the exception.
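
If I read the docs correctly, the intended usage with a plain scikit-learn estimator would be something like this toy example with dummy data (not my real pipeline):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X_dummy = np.random.rand(100, 5)
y_dummy = np.random.randint(0, 2, size=100)

kf = KFold(n_splits=4, shuffle=True)
clf = LogisticRegression()        # a plain sklearn estimator: implements fit() and get_params()
scores = cross_val_score(clf, X_dummy, y_dummy, cv=kf, scoring="accuracy")
print(scores)

So my guess is that my Keras Sequential model simply does not expose the fit()/get_params() interface that sklearn expects, but I do not know how to bridge that gap.
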
Here is the relevant part of my code:
model = Sequential()
model.add(Embedding(max_words, embedding_dim, input_length=maxlen))
model.add(Conv1D(filters=32, kernel_size=8, activation='relu'))
model.add(BatchNormalization(weights=None, epsilon=1e-06, momentum=0.9))
model.add(MaxPooling1D(pool_size=2))
model.add(Flatten())
model.add(Dense(10, activation='relu'))
model.add(Dense(4, activation='softmax'))
print(model.summary())
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
kf = KFold(n_splits=4, random_state=None, shuffle=True)
print(kf)
for train_index, test_index in kf.split(data):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = data[train_index], data[test_index]
    y_train, y_test = labels[train_index], labels[test_index]
    Adam = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9, beta_2=0.999, amsgrad=False)
    model.compile(optimizer=Adam,
                  loss='sparse_categorical_crossentropy',
                  metrics=['sparse_categorical_accuracy'])
    history = model.fit(X_train, y_train,
                        epochs=15,
                        batch_size=32,
                        verbose=1,
                        callbacks=callbacks_list,
                        validation_data=(X_test, y_test)
                        )
scores = cross_val_score(model, X_test, y_test, cv=kf,scoring="accuracy")
print(scores)
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std() * 2))
I would appreciate any help you can give me. Please bear in mind that I am not a data scientist or a developer.