
I'm using Hyperopt to run hyperparameter optimization on a neural network. While doing so, after some iterations I get a MemoryError exception.

So far I've tried clearing all variables after use (assigning None or empty lists to them — is there a better way?) and printing all locals(), dirs() and globals() along with their sizes, but their count never grows and the sizes are quite small.
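One thing worth noting about checking sizes this way: `sys.getsizeof` is shallow. It reports only the container's own bytes, not the objects it references, so a small-looking dict or list can still be keeping megabytes alive. A minimal illustration:

```python
import sys

# sys.getsizeof is shallow: it counts only the container itself,
# not the objects it references, so a list holding a huge inner
# list still reports a tiny size for the outer container.
big = [[0.0] * 1_000_000]      # ~8 MB of floats held one level down

print(sys.getsizeof(big))      # small: just the outer list's own bytes
print(sys.getsizeof(big[0]))   # large: the inner list actually holding the data
```

This is why scanning locals()/globals() with getsizeof can look clean even while something (here, the Trials object) is accumulating large objects by reference.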

The structure looks like this:

from hyperopt import fmin, tpe, Trials

def create_model(params):
    ## load data from temp files
    ## pre-process data accordingly
    ## train NN with cross-validation, clearing Keras' session every time
    ## save stats and clear all variables (assigning None or empty lists to them)

def Optimize():
    for model in models:  # I have multiple models
        ## load data
        ## save data to temp files
        trials = Trials()
        best_run = fmin(create_model,
                        space,
                        algo=tpe.suggest,
                        max_evals=100,
                        trials=trials)

After X iterations (sometimes it completes the first 100 and moves on to the second model) it throws a MemoryError. My guess is that some variables remain in memory and I'm failing to clear them, but I can't detect which ones.

Edit:

Traceback (most recent call last):
  File "Main.py", line 32, in <module>
    optimal = Optimize(training_sets)
  File "/home/User1/Optimizer/optimization2.py", line 394, in Optimize
    trials=trials)
  File "/usr/local/lib/python3.5/dist-packages/hyperopt/fmin.py", line 307, in fmin
    return_argmin=return_argmin,
  File "/usr/local/lib/python3.5/dist-packages/hyperopt/base.py", line 635, in fmin
    return_argmin=return_argmin)
  File "/usr/local/lib/python3.5/dist-packages/hyperopt/fmin.py", line 320, in fmin
    rval.exhaust()
  File "/usr/local/lib/python3.5/dist-packages/hyperopt/fmin.py", line 199, in exhaust
    self.run(self.max_evals - n_done, block_until_done=self.async)
  File "/usr/local/lib/python3.5/dist-packages/hyperopt/fmin.py", line 173, in run
    self.serial_evaluate()
  File "/usr/local/lib/python3.5/dist-packages/hyperopt/fmin.py", line 92, in serial_evaluate
    result = self.domain.evaluate(spec, ctrl)
  File "/usr/local/lib/python3.5/dist-packages/hyperopt/base.py", line 840, in evaluate
    rval = self.fn(pyll_rval)
  File "/home/User1/Optimizer/optimization2.py", line 184, in create_model
    x_train, x_test = x[train_indices], x[val_indices]
MemoryError

1 Answer


It took me a few days to figure this out, so I'll answer my own question to save some time for anyone who runs into this.

Normally, when using Hyperopt with Keras, the suggested return of the create_model function is something like this:

return {'loss': -acc, 'status': STATUS_OK, 'model': model}

But with large models and many evaluations, you don't want to return every model and keep it in memory; all you need is the set of hyperparameters that gave the lowest loss.

By simply removing the model from the returned dictionary, the problem of memory growing with each evaluation is solved:

return {'loss': -acc, 'status': STATUS_OK}
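To see why this matters, here is a minimal pure-Python illustration (the large float list is a hypothetical stand-in for a trained Keras model; the result lists stand in for what Trials accumulates per evaluation):

```python
import sys

# Hypothetical stand-in for a trained Keras model: a large list of floats.
def make_big_model():
    return [0.0] * 100_000

# Trials-style result stores: one keeps the model per evaluation, one does not.
with_model = [{'loss': -0.9, 'status': 'ok', 'model': make_big_model()}
              for _ in range(10)]
without_model = [{'loss': -0.9, 'status': 'ok'} for _ in range(10)]

def rough_size(results):
    # Shallow size of each dict plus one level of its values; illustrative only.
    total = 0
    for r in results:
        total += sys.getsizeof(r)
        for v in r.values():
            total += sys.getsizeof(v)
    return total

print(rough_size(with_model) > 100 * rough_size(without_model))  # prints True
```

With 100 evaluations per model, each result dict that holds a live model object keeps the whole network in memory for the duration of fmin; dropping the 'model' key leaves only a few scalars per trial.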
answered 2019-04-17T05:48:19.563