I'm trying to tune some parameters, and the search space is very large. So far I have 5 dimensions, and it will probably grow to around 10 or so. The problem is that I think I could get a significant speedup if I could figure out how to multiprocess it, but I can't find any good way to do that. I'm using hyperopt, but I can't figure out how to get it to use more than 1 core. Here is my code without all the irrelevant stuff:
from numpy import random
from pandas import DataFrame
from hyperopt import fmin, tpe, hp, Trials
def calc_result(x):
    huge_df = DataFrame(random.randn(100000, 5), columns=['A', 'B', 'C', 'D', 'E'])
    total = 0
    # Assume that I MUST iterate
    for idx, row in huge_df.iterrows():
        # Assume there is no way to optimize here
        curr_sum = row['A'] * x['adjustment_1'] + \
                   row['B'] * x['adjustment_2'] + \
                   row['C'] * x['adjustment_3'] + \
                   row['D'] * x['adjustment_4'] + \
                   row['E'] * x['adjustment_5']
        total += curr_sum
    # In real life I want the total as high as possible, but for the minimizer it has to be a negative value
    total_as_neg = total * -1
    print(total_as_neg)
    return total_as_neg
space = {'adjustment_1': hp.quniform('adjustment_1', 0, 1, 0.001),
         'adjustment_2': hp.quniform('adjustment_2', 0, 1, 0.001),
         'adjustment_3': hp.quniform('adjustment_3', 0, 1, 0.001),
         'adjustment_4': hp.quniform('adjustment_4', 0, 1, 0.001),
         'adjustment_5': hp.quniform('adjustment_5', 0, 1, 0.001)}
trials = Trials()
best = fmin(fn=calc_result,
            space=space,
            algo=tpe.suggest,
            max_evals=20000,
            trials=trials)
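
To be concrete about what I mean by "multiprocess": splitting the row loop inside calc_result across a multiprocessing Pool is easy enough to sketch (rough, untested sketch below; the helper names and chunk sizes are just things I made up), but that only speeds up a single evaluation, and what I really want is for the search itself to use several cores at once.

from multiprocessing import Pool

def _chunk_total(args):
    # Same per-row math as calc_result, applied to one chunk of rows
    chunk, x = args
    total = 0
    for idx, row in chunk.iterrows():
        total += row['A'] * x['adjustment_1'] + \
                 row['B'] * x['adjustment_2'] + \
                 row['C'] * x['adjustment_3'] + \
                 row['D'] * x['adjustment_4'] + \
                 row['E'] * x['adjustment_5']
    return total

def calc_result_parallel(x, n_workers=4):
    huge_df = DataFrame(random.randn(100000, 5), columns=['A', 'B', 'C', 'D', 'E'])
    # Split the rows into roughly equal chunks, one per worker
    step = len(huge_df) // n_workers + 1
    chunks = [huge_df.iloc[i:i + step] for i in range(0, len(huge_df), step)]
    with Pool(n_workers) as pool:
        totals = pool.map(_chunk_total, [(chunk, x) for chunk in chunks])
    return -sum(totals)

(On Windows or macOS this would also need the usual if __name__ == '__main__': guard around the fmin call, since the Pool spawns new processes that re-import the module.)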
So far I have 4 cores, but I can basically get as many as I need. How can I get hyperopt to use more than 1 core, or is there another library that can multiprocess this?
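
From what I can tell, the only parallelism hyperopt ships itself is MongoTrials, which hands trial evaluation off to separate worker processes through a MongoDB instance. That feels pretty heavyweight for my case, but for completeness, it would look roughly like this (the connection string, database name and exp_key below are placeholders I made up):

from hyperopt import fmin, tpe
from hyperopt.mongoexp import MongoTrials

# Requires a running MongoDB server that all workers can reach
trials = MongoTrials('mongo://localhost:27017/my_db/jobs', exp_key='adjustments_run_1')

best = fmin(fn=calc_result,
            space=space,
            algo=tpe.suggest,
            max_evals=20000,
            trials=trials)

and each core (or machine) then runs a worker such as

hyperopt-mongo-worker --mongo=localhost:27017/my_db --poll-interval=0.1

so I'm hoping there is something simpler than standing up MongoDB just for this.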