Very good question.
You only need to fit the best model again in order to get the feature importances.
best_model.fit(X_train, Y_train)
exctracted_best_model = best_model.fitted_pipeline_.steps[-1][1]
The last line returns the best model found during cross-validation.
You can then train it with:
exctracted_best_model.fit(X_train, Y_train)
If the best model exposes the attribute you are interested in, you will be able to access it after calling exctracted_best_model.fit(X_train, Y_train).
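If you are not sure in advance which attribute your best model exposes, a quick check helps. This is a minimal sketch of my own (not part of TPOT's API), assuming exctracted_best_model has already been extracted as above:

# Minimal sketch: probe the extracted estimator for an importance-like attribute
if hasattr(exctracted_best_model, 'feature_importances_'):
    importances = exctracted_best_model.feature_importances_   # tree/boosting models (e.g. AdaBoostRegressor)
elif hasattr(exctracted_best_model, 'coef_'):
    importances = exctracted_best_model.coef_                   # linear models (e.g. LassoCV)
else:
    print('This estimator exposes neither feature_importances_ nor coef_')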
More details (in my comments) and a toy example:
from tpot import TPOTRegressor
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
digits = load_digits()
X, y = digits.data, digits.target   # keep the full dataset around for the final fit below
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    train_size=0.75, test_size=0.25)
# reduce the number of training samples for time's sake
X_train = X_train[:100, :]
y_train = y_train[:100]
# Build the TPOT search object
tpot = TPOTRegressor(cv=2, generations=5, population_size=50, verbosity=2)
# Fit the pipeline
tpot.fit(X_train, y_train)
# Get the best model
exctracted_best_model = tpot.fitted_pipeline_.steps[-1][1]
print(exctracted_best_model)
# AdaBoostRegressor(base_estimator=None, learning_rate=0.5, loss='square',
#                   n_estimators=100, random_state=None)
# Train the `exctracted_best_model` using THE WHOLE DATASET (X, y).
# You need to fit on the whole dataset in order to get feature importances computed
# from all of your data.
exctracted_best_model.fit(X, y)  # X, y IMPORTANT

# Access its feature importances
exctracted_best_model.feature_importances_

# Plot them with a bar plot
# (In my original run I fitted the model on X_train, y_train rather than the whole dataset
#  for time's sake, so the importances were estimated from that subsample only; use
#  `exctracted_best_model.fit(X, y)` as above to estimate them from all the data.)
positions = range(exctracted_best_model.feature_importances_.shape[0])
plt.bar(positions, exctracted_best_model.feature_importances_)
plt.show()
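As a small follow-up of my own (not part of the original example): if you want to see which features dominate rather than eyeballing the bar plot, you can sort the importances, e.g.:

# Optional sketch: print the 10 most important features (pixel indices for the digits data)
import numpy as np
importances = exctracted_best_model.feature_importances_
top10 = np.argsort(importances)[::-1][:10]
for idx in top10:
    print(f'pixel {idx}: importance {importances[idx]:.4f}')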
IMPORTANT NOTE: In the example above, the best model found by the pipeline search was AdaBoostRegressor(base_estimator=None, learning_rate=0.5, loss='square'). This model does have a feature_importances_ attribute. In cases where the best model does not have a feature_importances_ attribute, the exact same code will not work: you will need to read the docs and check which attributes the returned best model provides. For example, if the best model were LassoCV, you would use the coef_ attribute instead.
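To make that concrete, here is a hedged sketch of my own (LassoCV is only an illustration, not something TPOT returned in this run):

# Hypothetical sketch: if the extracted best model had been a LassoCV, read `coef_` instead
from sklearn.linear_model import LassoCV
lasso = LassoCV(cv=3).fit(X_train, y_train)
print(lasso.coef_)        # one coefficient per feature
print(abs(lasso.coef_))   # the magnitudes are often used as an importance proxy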
Output: (a bar plot of the feature importances produced by the code above)