I have a simple optimization problem where, with one particular set of data, scipy.optimize.minimize ignores the tol argument. As I understand it from the documentation, tol sets the "tolerance for termination", i.e. the maximum error accepted for the objective function (am I wrong about this?). In the working example below, however, with tol set to 0.1 or some other small number, the optimization still ends with the message "Optimization terminated successfully." even though the objective function is > tol. Is this a bug in the Scipy method, or am I misunderstanding something here?
The optimization problem: I need a linear combination of var1 and var2, two time series, scaled by the parameters Bta and Btd. I need the mean of the linear combination, np.mean(Bta*var1 + Btd*var2), to be close to a target value Target, a scalar. So I simply minimize the absolute difference between that mean and Target. The constraints are that the scaling coefficients must be greater than 0, and that the ratio of the scaled means, np.mean(Btd*var2)/np.mean(Bta*var1), should approximate gi/(1-gi), where gi is a scalar in the interval [0, 1].
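To restate the problem concretely, this is the quantity I minimize and the ratio condition written out in plain numpy (only a sketch; the helper names objective_gap and ratio_gap are just for illustration, and the pairing of Bta with var1 and Btd with var2 is the one used in MyModel in the full example below):

import numpy as np

def objective_gap(Bta, Btd, var1, var2, Target):
    # |mean of the scaled combination - Target|: this is what I minimize
    return abs(np.mean(Bta*var1 + Btd*var2) - Target)

def ratio_gap(Bta, Btd, var1, var2, gi):
    # distance of the ratio of scaled means from gi/(1-gi); I want this close to 0
    return abs(np.mean(Btd*var2)/np.mean(Bta*var1) - gi/(1-gi))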
Reproducible code:
import numpy as np
import scipy.optimize as opt
# The data that exactly reproduce the error:
time = np.arange(1979,2011)
var2=np.array([ 88.95705521, 74.5398773 , 72.08588957, 65.64417178,
50. , 72.39263804, 77.3006135 , 72.08588957,
64.41717791, 96.62576687, 69.93865031, 84.96932515,
86.50306748, 82.20858896, 80.98159509, 73.00613497,
66.25766871, 67.48466258, 79.75460123, 65.64417178,
70.24539877, 84.66257669, 76.3803681 , 83.74233129,
83.74233129, 78.2208589 , 88.03680982, 87.73006135,
100. , 71.16564417, 73.6196319 , 85.58282209])
var1=np.array([300. , 420.89552239, 333.58208955, 355.97014925,
376.11940299, 510.44776119, 420.89552239, 434.32835821,
333.58208955, 394.02985075, 523.88059701, 411.94029851,
353.73134328, 434.32835821, 355.97014925, 398.50746269,
476.86567164, 371.64179104, 445.52238806, 544.02985075,
416.41791045, 427.6119403 , 541.79104478, 579.85074627,
429.85074627, 414.17910448, 420.89552239, 528.35820896,
577.6119403 , 490.29850746, 600. , 454.47761194])
X=np.transpose([var1, var2])
# Global parameters
Target = 3.0
gi = 0.7
# This model is a simple linear combination of the two time series.
def MyModel(modelparams, X, gi):
    Bta, Btd = modelparams
    Eta = Bta*X[:,0]
    Etd = Btd*X[:,1]
    Etot = Eta + Etd
    return Etot, Eta, Etd
# Objective function
def Obj(modelparams):
    Bta, Btd = modelparams
    Etot, Eta, Etd = MyModel([Bta, Btd], X, gi)
    return abs(np.mean(Etot) - Target)
# Ratio constraint
def Ratio(modelparams):
    Bta, Btd = modelparams
    Etot, Eta, Etd = MyModel([Bta, Btd], X, gi)
    A = np.mean(Etd)/np.mean(Eta)
    B = gi/(1-gi)
    # The epsilon comes in to loosen only this constraint a bit
    epsilon = 0.1
    return -abs(abs(A-B) - epsilon)
# This is my solution to make the parameters different from zero.
# The ineq-type constraint makes them >= 0.
def TDPos(modelparams):
    Bta, Btd = modelparams
    return Btd - 10**(-5)

def TAPos(modelparams):
    Bta, Btd = modelparams
    return Bta - 10**(-5)
constraints = [{'type': 'ineq', 'fun': Ratio},
               {'type': 'ineq', 'fun': TDPos},
               {'type': 'ineq', 'fun': TAPos}]
# Bounds on the model parameters
bounds = ((0, None), (0, None))
# Minimize
modelparams0 = [Target/np.nanmean(var1), Target/np.nanmean(var2)]
result = opt.minimize(Obj, modelparams0,
                      tol=0.1,
                      method='SLSQP',
                      options={'maxiter': 40000},  # ,'ftol': 0.1},
                      bounds=bounds,
                      constraints=constraints)
print(result)
This prints:
     fun: 3.0
     jac: array([439.92537314, 77.31019938])
 message: 'Optimization terminated successfully.'
    nfev: 20
     nit: 4
    njev: 4
  status: 0
 success: True
       x: array([0., 0.])
My problem: fun: 3.0 > tol: 0.1, which is not what I want.
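To be explicit about the check I am doing, this is just reading fields off the result object from the run above and comparing them with the tol I passed (run after the minimize call; nothing new is introduced here):

tol_used = 0.1                   # the value passed as tol above
print(result.fun)                # 3.0
print(result.success)            # True
print(result.fun > tol_used)     # True: the objective is still above tol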
TL;DR: scipy.optimize.minimize ignores the stopping parameter tol. Why?
EDIT: On top of that, the optimal solution [0, 0] ignores the two ineq constraints that are meant to keep both parameters > 10**(-5). Is this part of the same problem?
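For reference, this is how I checked the constraints at the returned point (again, just evaluating the constraint functions from the example above at result.x = [0, 0]):

print(TDPos(result.x))   # -1e-05, negative, so this ineq constraint is not satisfied
print(TAPos(result.x))   # -1e-05, same here
print(Ratio(result.x))   # nan, because both scaled means are zero at [0, 0] (0/0)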