
I am currently trying to optimize the hyperparameters of a gradient boosting model with the hyperopt library. When I was working on my own machine, I used the Trials class and could save and reload my results with the pickle library, which let me keep every parameter set I tested. My code looked something like this:

import os
import pickle as pkl

import xgboost as xgb
from hyperopt import Trials, SparkTrials, STATUS_OK, tpe, fmin
from LearningUtils.LearningUtils import build_train_test, get_train_test, mean_error, rmse, mae
from LearningUtils.constants import MAX_EVALS, CV, XGBOOST_OPTIM_SPACE, PARALELISM
from sklearn.model_selection import cross_val_score

if os.path.isfile(PATH_TO_TRIALS):  # we reload the past results
    with open(PATH_TO_TRIALS, 'rb') as trials_file:
        trials = pkl.load(trials_file)
else:  # we create a new trials object
    trials = Trials()
    
# classic hyperparameters optimization  
def objective(space):
    regressor = xgb.XGBRegressor(n_estimators = space['n_estimators'],
                            max_depth = int(space['max_depth']),
                            learning_rate = space['learning_rate'],
                            gamma = space['gamma'],
                            min_child_weight = space['min_child_weight'],
                            subsample = space['subsample'],
                            colsample_bytree = space['colsample_bytree'],
                            verbosity=0
                            )
    regressor.fit(X_train, Y_train)
    # Applying k-Fold Cross Validation
    accuracies = cross_val_score(estimator=regressor, X=X_train, y=Y_train, cv=5)
    CrossValMean = accuracies.mean()
    return {'loss':1-CrossValMean, 'status': STATUS_OK}

best = fmin(fn=objective,
            space=XGBOOST_OPTIM_SPACE,
            algo=tpe.suggest,
            max_evals=MAX_EVALS,
            trials=trials,
           return_argmin=False)

# Save the trials
pkl.dump(trials, open(PATH_TO_TRIALS, "wb"))

Now I would like to run this code on a remote server with more CPUs, to parallelize the search and save time.

I saw that this should be as simple as using hyperopt's SparkTrials class instead of Trials. However, a SparkTrials object cannot be saved with pickle. Do you have any idea how to save and reload the trial results stored in a SparkTrials object?
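For reference, a minimal sketch of what that switch would look like (the parallelism value is a placeholder; objective, XGBOOST_OPTIM_SPACE, and MAX_EVALS are the same names as in the snippet above):

from hyperopt import SparkTrials, fmin, tpe

# SparkTrials runs trials concurrently on Spark workers;
# parallelism caps how many trials run at the same time (placeholder value here)
spark_trials = SparkTrials(parallelism=8)

best = fmin(fn=objective,
            space=XGBOOST_OPTIM_SPACE,
            algo=tpe.suggest,
            max_evals=MAX_EVALS,
            trials=spark_trials)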


1 Answer


This may be a bit late, but after messing around with it for a while, I found a hacky solution:

spark_trials = SparkTrials()
pickling_trials = dict()

# copy every attribute except the live Spark handles, which cannot be pickled
for k, v in spark_trials.__dict__.items():
    if k not in ['_spark_context', '_spark']:
        pickling_trials[k] = v

pickle.dump(pickling_trials, open('pickling_trials.hyperopt', 'wb'))

The _spark_context and _spark attributes of the SparkTrials instance are the culprits that make the object unpicklable. It turns out you do not need them in order to reuse the object: if you rerun the optimization, a new Spark context is created anyway, so you can restore the trials like this:

new_sparktrials = SparkTrials()

# put the pickled attributes back onto a fresh SparkTrials object
for att, v in pickling_trials.items():
    setattr(new_sparktrials, att, v)

best = fmin(loss_func,
            space=search_space,
            algo=tpe.suggest,
            max_evals=1000,
            trials=new_sparktrials)
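If you do this regularly, the two snippets above can be wrapped into small save/load helpers. This is just a sketch along the same lines (the helper names and file path are made up, and I have not tested it against every hyperopt version):

import pickle
from hyperopt import SparkTrials

UNPICKLABLE_ATTRS = ['_spark_context', '_spark']

def save_spark_trials(trials, path):
    # keep everything except the live Spark handles
    state = {k: v for k, v in trials.__dict__.items() if k not in UNPICKLABLE_ATTRS}
    with open(path, 'wb') as f:
        pickle.dump(state, f)

def load_spark_trials(path):
    # a fresh SparkTrials brings its own Spark context; restore the rest of the state
    new_trials = SparkTrials()
    with open(path, 'rb') as f:
        state = pickle.load(f)
    for att, v in state.items():
        setattr(new_trials, att, v)
    return new_trials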

voilà :)

Answered 2020-12-20