
After using a pipeline and GridSearchCV to determine the best parameters, how do I pickle/joblib this process so I can re-use it later? I see how to do this when it's a single classifier:

from sklearn.externals import joblib
joblib.dump(clf, 'filename.pkl') 

But how do I save the whole pipeline with the best parameters after performing and completing a gridsearch?

I tried:

  • joblib.dump(grid, 'output.pkl') - but that dumped every gridsearch attempt (many files)
  • joblib.dump(pipeline, 'output.pkl') - but I don't think that contains the best parameters

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

X_train = df['Keyword']
y_train = df['Ad Group']

pipeline = Pipeline([
  ('tfidf', TfidfVectorizer()),
  ('sgd', SGDClassifier())
  ])

parameters = {'tfidf__ngram_range': [(1, 1), (1, 2)],
              'tfidf__use_idf': (True, False),
              'tfidf__max_df': [0.25, 0.5, 0.75, 1.0],
              'tfidf__max_features': [10, 50, 100, 250, 500, 1000, None],
              'tfidf__stop_words': ('english', None),
              'tfidf__smooth_idf': (True, False),
              'tfidf__norm': ('l1', 'l2', None),
              }

grid = GridSearchCV(pipeline, parameters, cv=2, verbose=1)
grid.fit(X_train, y_train)

# This was the best combination of tuning parameters discovered
##best_params = {'tfidf__max_features': None, 'tfidf__use_idf': False,
##               'tfidf__smooth_idf': False, 'tfidf__ngram_range': (1, 2),
##               'tfidf__max_df': 1.0, 'tfidf__stop_words': 'english',
##               'tfidf__norm': 'l2'}

1 Answer

import joblib
joblib.dump(grid.best_estimator_, 'filename.pkl')

If you want to dump the object into a single file, use:

joblib.dump(grid.best_estimator_, 'filename.pkl', compress = 1)
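
The dumped object is the refit pipeline (TfidfVectorizer + SGDClassifier) with the winning parameters already set, so it can be loaded back and used directly. A minimal sketch of reloading it for prediction; `new_keywords` is a made-up example list, not from the original question:

import joblib

# Load the fitted pipeline that was dumped above
model = joblib.load('filename.pkl')

# Hypothetical new data; the pipeline vectorizes the raw text itself,
# so strings can be passed straight to predict()
new_keywords = ['cheap running shoes', 'wireless headphones']
print(model.predict(new_keywords))

Dumping grid.best_estimator_ rather than the whole grid is what avoids the "many files" problem: only the single best pipeline is serialized, not every parameter combination that was tried.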
Answered 2015-12-08T21:16:41.807