Good question. To run through them in the order you posted them:
- First, I understand that over-sampling, under-sampling, and combined sampling are procedures applied to the training set, not to the test/validation set. Please correct me here if I am wrong.
That is correct. You certainly do not want to test on data that does not represent the actual, live, "production" dataset (whether that is your test or your validation data). You really should only apply the sampling to the training data. Note that if you use a technique like cross-fold validation, you should apply the sampling to each fold separately, as shown in this IEEE paper.
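If you ever do this by hand rather than through an imblearn Pipeline, a minimal sketch of per-fold resampling could look like the code below; SMOTE, knn, and the digits data are just illustrative choices here, the point is only that fit_resample sees the training portion of each fold and nothing else:
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import load_digits
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
scores = []

for train_idx, val_idx in StratifiedKFold(n_splits=5).split(X, y):
    # Resample only the training portion of this fold...
    X_res, y_res = SMOTE().fit_resample(X[train_idx], y[train_idx])
    clf = KNeighborsClassifier().fit(X_res, y_res)
    # ...and score on the untouched validation portion.
    scores.append(clf.score(X[val_idx], y[val_idx]))

print(np.mean(scores))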
- I looked through the imblearn Pipeline code, but I could not find a predict method there.
I assume you found the imblearn.pipeline source code, so if you did, what you want to do is take a look at the fit_predict method:
@if_delegate_has_method(delegate="_final_estimator")
def fit_predict(self, X, y=None, **fit_params):
    """Apply `fit_predict` of last step in pipeline after transforms.

    Applies fit_transforms of a pipeline to the data, followed by the
    fit_predict method of the final estimator in the pipeline. Valid
    only if the final estimator implements fit_predict.

    Parameters
    ----------
    X : iterable
        Training data. Must fulfill input requirements of first step of
        the pipeline.

    y : iterable, default=None
        Training targets. Must fulfill label requirements for all steps
        of the pipeline.

    **fit_params : dict of string -> object
        Parameters passed to the ``fit`` method of each step, where
        each parameter name is prefixed such that parameter ``p`` for step
        ``s`` has key ``s__p``.

    Returns
    -------
    y_pred : ndarray of shape (n_samples,)
        The predicted target.
    """
    Xt, yt, fit_params = self._fit(X, y, **fit_params)
    with _print_elapsed_time('Pipeline',
                             self._log_message(len(self.steps) - 1)):
        y_pred = self.steps[-1][-1].fit_predict(Xt, yt, **fit_params)
    return y_pred
Here we can see that the pipeline ends up using the .predict method of the final estimator in the pipeline, which in your posted example is scikit-learn's knn:
def predict(self, X):
    """Predict the class labels for the provided data.

    Parameters
    ----------
    X : array-like of shape (n_queries, n_features), \
            or (n_queries, n_indexed) if metric == 'precomputed'
        Test samples.

    Returns
    -------
    y : ndarray of shape (n_queries,) or (n_queries, n_outputs)
        Class labels for each data sample.
    """
    X = check_array(X, accept_sparse='csr')

    neigh_dist, neigh_ind = self.kneighbors(X)
    classes_ = self.classes_
    _y = self._y
    if not self.outputs_2d_:
        _y = self._y.reshape((-1, 1))
        classes_ = [self.classes_]

    n_outputs = len(classes_)
    n_queries = _num_samples(X)
    weights = _get_weights(neigh_dist, self.weights)

    y_pred = np.empty((n_queries, n_outputs), dtype=classes_[0].dtype)
    for k, classes_k in enumerate(classes_):
        if weights is None:
            mode, _ = stats.mode(_y[neigh_ind, k], axis=1)
        else:
            mode, _ = weighted_mode(_y[neigh_ind, k], weights, axis=1)

        mode = np.asarray(mode.ravel(), dtype=np.intp)
        y_pred[:, k] = classes_k.take(mode)

    if not self.outputs_2d_:
        y_pred = y_pred.ravel()

    return y_pred
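In practice this means that when you call predict on a fitted imblearn pipeline, the sampler does not touch the incoming data (resampling only happens during fit); the data goes straight to the final estimator's predict. As a minimal usage sketch, mirroring the example further down (the split and the estimator choice are just illustrative):
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.20)

pipe = Pipeline([
    ('sampling', SMOTE()),                      # only resamples during fit
    ('classification', KNeighborsClassifier())
])

pipe.fit(X_train, y_train)        # SMOTE resamples the training data, knn is fit on the result
y_pred = pipe.predict(X_test)     # X_test goes to KNeighborsClassifier.predict unresampled
print(pipe.score(X_test, y_test))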
- I also want to make sure that this correct behaviour holds when the pipeline is inside a GridSearchCV.
Assuming the two assumptions above are correct, I take this to mean you would like a complete, minimal, reproducible example of this working inside GridSearchCV. There is extensive documentation on this from scikit-learn, but an example I created using knn is below:
import pandas as pd, numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, train_test_split

param_grid = [
    {
        'classification__n_neighbors': [1, 3, 5, 7, 10],
    }
]

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.20)

pipe = Pipeline([
    ('sampling', SMOTE()),
    ('classification', KNeighborsClassifier())
])

grid = GridSearchCV(pipe, param_grid=param_grid)
grid.fit(X_train, y_train)
mean_scores = np.array(grid.cv_results_['mean_test_score'])

print(mean_scores)
# [0.98051926 0.98121129 0.97981998 0.98050474 0.97494193]
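Since GridSearchCV refits the best pipeline on the whole training split by default, a possible continuation of the example is to evaluate that refit pipeline on the held-out test data (the exact numbers will vary from run to run):
print(grid.best_params_)           # best n_neighbors found by the search
print(grid.score(X_test, y_test))  # accuracy of the refit best pipeline on the test split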
Your intuitions were correct, good job :)