Two things:
- I can't reproduce the error you are seeing unless I first pickle/unpickle the custom function.
- There's no need to pickle/unpickle the custom function before passing it to the solver.
It seems to work for me. Python 3.6.12 and scipy 1.5.2:
>>> from scipy.optimize import rosen, differential_evolution
>>> bounds = [(0,2), (0, 2)]
>>>
>>> def Ros_custom(X):
... x = X[0]
... y = X[1]
... a = 1. - x
... b = y - x*x
... return a*a + b*b*100
...
>>> result = differential_evolution(Ros_custom, bounds, updating='deferred',workers=-1)
>>> result.x, result.fun
(array([1., 1.]), 0.0)
>>>
>>> result
fun: 0.0
message: 'Optimization terminated successfully.'
nfev: 4953
nit: 164
success: True
x: array([1., 1.])
>>>
I can even nest a function inside of the custom objective:
>>> def foo(a,b):
... return a*a + b*b*100
...
>>> def custom(X):
... x,y = X[0],X[1]
... return foo(1.-x, y-x*x)
...
>>> result = differential_evolution(custom, bounds, updating='deferred',workers=-1)
>>> result
fun: 0.0
message: 'Optimization terminated successfully.'
nfev: 4593
nit: 152
success: True
x: array([1., 1.])
So, at least for me, the code works as expected.
You should not need to serialize/deserialize the function before scipy uses it. Yes, the function needs to be picklable, but scipy does that for you. Basically, what happens under the hood is that your function gets serialized by multiprocessing, passed around as a string, distributed to the processors, and then unpickled and used on the target processors.
Like this, for 4 sets of inputs, one run per processor:
>>> import multiprocessing as mp
>>> res = mp.Pool().map(custom, [(0,1), (1,2), (4,9), (3,4)])
>>> list(res)
[101.0, 100.0, 4909.0, 2504.0]
>>>
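To make that serialization step concrete, here is a minimal sketch (standard library only, continuing the same session) of the round trip that happens behind the scenes: the function is reduced to bytes and rebuilt before being called.
>>> import pickle
>>> payload = pickle.dumps(custom)     # serialize the function (by reference to __main__.custom)
>>> restored = pickle.loads(payload)   # rebuild it on the "other side"
>>> restored((3, 4))
2504.0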
Older versions of multiprocessing had difficulty serializing functions defined in the interpreter, and often needed the code to be executed inside a __main__ block. If you are on Windows, this is still often the case... and you might also need to call mp.freeze_support(), depending on how the code in scipy is implemented.
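If that applies to you, a guarded script would look roughly like this (a sketch: the if __name__ == '__main__': guard and the freeze_support() call are only strictly needed on platforms that spawn fresh processes, such as Windows):
import multiprocessing as mp
from scipy.optimize import differential_evolution

def custom(X):
    x, y = X[0], X[1]
    a, b = 1. - x, y - x*x
    return a*a + b*b*100

if __name__ == '__main__':
    mp.freeze_support()  # harmless no-op unless running as a frozen Windows executable
    bounds = [(0, 2), (0, 2)]
    result = differential_evolution(custom, bounds,
                                    updating='deferred', workers=-1)
    print(result.x, result.fun)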
I tend to like dill (I'm the author) because it can serialize a much broader range of objects than pickle.
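As a quick illustration of the difference, a lambda defined on the fly is something plain pickle refuses, while dill serializes it by value (a small sketch):
>>> import dill
>>> f = lambda x: x*x              # pickle.dumps(f) would raise a PicklingError here
>>> dill.loads(dill.dumps(f))(3)   # dill round-trips the lambda itself
9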
However, since scipy uses multiprocessing, which uses pickle... I often choose to use mystic (I'm the author), which uses multiprocess (I'm the author), which uses dill. Very roughly equivalent code, but all of it using dill instead of pickle:
>>> from mystic.solvers import diffev2
>>> from pathos.pools import ProcessPool
>>> diffev2(custom, bounds, npop=40, ftol=1e-10, map=ProcessPool().map)
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 42
Function evaluations: 1720
array([1.00000394, 1.00000836])
With mystic, you get some additional nice features, such as a monitor:
>>> from mystic.monitors import VerboseMonitor
>>> mon = VerboseMonitor(5,5)
>>> diffev2(custom, bounds, npop=40, ftol=1e-10, itermon=mon, map=ProcessPool().map)
Generation 0 has ChiSquare: 0.065448
Generation 0 has fit parameters:
[0.769543181527466, 0.5810893880113548]
Generation 5 has ChiSquare: 0.065448
Generation 5 has fit parameters:
[0.588156685059123, -0.08325052939774935]
Generation 10 has ChiSquare: 0.060129
Generation 10 has fit parameters:
[0.8387858177101133, 0.6850849855634057]
Generation 15 has ChiSquare: 0.001492
Generation 15 has fit parameters:
[1.0904350077743412, 1.2027007403275813]
Generation 20 has ChiSquare: 0.001469
Generation 20 has fit parameters:
[0.9716429877952866, 0.9466681129902448]
Generation 25 has ChiSquare: 0.000114
Generation 25 has fit parameters:
[0.9784047411865372, 0.9554056558210251]
Generation 30 has ChiSquare: 0.000000
Generation 30 has fit parameters:
[0.996105436348129, 0.9934091068974504]
Generation 35 has ChiSquare: 0.000000
Generation 35 has fit parameters:
[0.996589586891175, 0.9938925277204567]
Generation 40 has ChiSquare: 0.000000
Generation 40 has fit parameters:
[1.0003791956048833, 1.0007133195321427]
Generation 45 has ChiSquare: 0.000000
Generation 45 has fit parameters:
[1.0000170425596364, 1.0000396089375592]
Generation 50 has ChiSquare: 0.000000
Generation 50 has fit parameters:
[0.9999013984263114, 0.9998041148375927]
STOP("VTRChangeOverGeneration with {'ftol': 1e-10, 'gtol': 1e-06, 'generations': 30, 'target': 0.0}")
Optimization terminated successfully.
Current function value: 0.000000
Iterations: 54
Function evaluations: 2200
array([0.99999186, 0.99998338])
>>>
All of the above runs in parallel.
So, in summary, the code should work as-is (and without any pre-pickling), except perhaps if you are on Windows, where you might need to use freeze_support and run the code inside a __main__ block.