
I have a reasonably simple constrained optimization problem but get different answers depending on how I do it. Let's get the imports and a pretty-print function out of the way first:

import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint, SR1

def print_res( res, label ):
    print("\n\n ***** ", label, " ***** \n")
    print(res.message)
    print("obj func value at solution", obj_func(res.x))
    print("starting values: ", x0)
    print("ending values:   ", res.x.astype(int) )
    print("% diff", (100.*(res.x-x0)/x0).astype(int) )
    print("target achieved?",target,res.x.sum())

The sample data is very simple:

n = 5
x0 = np.arange(1,6) * 10_000
target = x0.sum() + 5_000   # increase sum from 150,000 to 155,000

Here's the constrained optimization setup (including jacobians). In words, the objective function I want to minimize is just the sum of squared percentage changes from the initial values to the final values. The linear equality constraint simply requires x.sum() to equal a constant.

def obj_func(x):
    return ( ( ( x - x0 ) / x0 ) ** 2 ).sum()

def obj_jac(x):
    return 2. * ( x - x0 ) / x0 ** 2

def constr_func(x):
    return x.sum() - target

def constr_jac(x):
    return np.ones(n)
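
For reference, this problem is simple enough to solve in closed form with a single Lagrange multiplier, which is handy later for judging which solver results are actually optimal: setting the gradient of the Lagrangian to zero gives x_i = x0_i + (lam/2) * x0_i**2, with lam chosen so the sum hits the target. A quick sketch of that check:

# closed-form optimum: 2*(x - x0)/x0**2 = lam for every i, together with x.sum() = target
half_lam = (target - x0.sum()) / (x0 ** 2).sum()
x_exact  = x0 + half_lam * x0 ** 2
print(x_exact)             # approx [10090.9, 20363.6, 30818.2, 41454.5, 52272.7]
print(obj_func(x_exact))   # approx 0.004545...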

And for comparison, I've re-factored it as an unconstrained minimization by using the equality constraint to replace x[0] with a function of x[1:]. Note that the unconstrained function is passed x0[1:], whereas the constrained function is passed x0.

def unconstr_func(x):
    x_one       = target - x.sum()
    first_term  = ( ( x_one - x0[0] ) / x0[0] ) ** 2
    second_term = ( ( ( x - x0[1:] ) / x0[1:] ) ** 2 ).sum()
    return first_term + second_term
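
As a quick spot check that the two formulations agree, both can be evaluated at an arbitrary point that satisfies the constraint (just a rescaled x0 here):

x_test = x0 * (target / x0.sum())    # sums to target by construction
assert np.isclose( obj_func(x_test), unconstr_func(x_test[1:]) )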

I then try to minimize in three ways:

  1. Unconstrained with 'Nelder-Mead'
  2. Constrained with 'trust-constr' (w/ & w/o jacobian)
  3. Constrained with 'SLSQP' (w/ & w/o jacobian)

Code:

##### (1) unconstrained

res0 = minimize( unconstr_func, x0[1:], method='Nelder-Mead')   # OK, but weird note
res0.x = np.hstack( [target - res0.x.sum(), res0.x] )
print_res( res0, 'unconstrained' )    

##### (2a) constrained -- trust-constr w/ jacobian

nonlin_con = NonlinearConstraint( constr_func, 0., 0., constr_jac )
resTCjac = minimize( obj_func, x0, method='trust-constr',
                     jac='2-point', hess=SR1(), constraints = nonlin_con )
print_res( resTCjac, 'trust-const w/ jacobian' )

##### (2b) constrained -- trust-constr w/o jacobian

nonlin_con = NonlinearConstraint( constr_func, 0., 0. )    
resTC = minimize( obj_func, x0, method='trust-constr',
                  jac='2-point', hess=SR1(), constraints = nonlin_con )    
print_res( resTC, 'trust-const w/o jacobian' )

##### (3a) constrained -- SLSQP w/ jacobian

eq_cons = { 'type': 'eq', 'fun' : constr_func, 'jac' : constr_jac }
resSQjac = minimize( obj_func, x0, method='SLSQP',
                     jac = obj_jac, constraints = eq_cons )    
print_res( resSQjac, 'SLSQP w/ jacobian' )

##### (3b) constrained -- SLSQP w/o jacobian

eq_cons = { 'type': 'eq', 'fun' : constr_func }    
resSQ = minimize( obj_func, x0, method='SLSQP',
                  jac = obj_jac, constraints = eq_cons )
print_res( resSQ, 'SLSQP w/o jacobian' )

Here is some simplified output (and of course you can run the code to get the full output):

starting values:  [10000 20000 30000 40000 50000]

***** (1) unconstrained  *****
Optimization terminated successfully.
obj func value at solution 0.0045454545454545305
ending values:    [10090 20363 30818 41454 52272]

***** (2a) trust-const w/ jacobian  *****
The maximum number of function evaluations is exceeded.
obj func value at solution 0.014635854609684874
ending values:    [10999 21000 31000 41000 51000]

***** (2b) trust-const w/o jacobian  *****
`gtol` termination condition is satisfied.
obj func value at solution 0.0045454545462939935
ending values:    [10090 20363 30818 41454 52272]

***** (3a) SLSQP w/ jacobian  *****
Optimization terminated successfully.
obj func value at solution 0.014636111111111114
ending values:    [11000 21000 31000 41000 51000]    

***** (3b) SLSQP w/o jacobian  *****   
Optimization terminated successfully.
obj func value at solution 0.014636111111111114
ending values:    [11000 21000 31000 41000 51000]

Notes:

  1. (1) & (2b) are plausible solutions in that they achieve significantly lower objective function values and intuitively we'd expect the variables with larger starting values to move more (both absolutely and in percentage terms) than the smaller ones.

  2. Adding the jacobian to 'trust-constr' causes it to get the wrong answer (or at least a worse one) and also to exceed the maximum number of function evaluations. Maybe the jacobian is wrong, but the function is so simple that I'm pretty sure it's correct (there's a quick gradient check right after these notes).

  3. 'SLSQP' doesn't seem to work w/ or w/o the jacobian supplied, but works very fast and claims to terminate successfully. This seems very worrisome in that getting the wrong answer and claiming to have terminated successfully is pretty much the worst possible outcome.

  4. Initially I used much smaller starting values and targets (just 1/1,000 of what I have above) and in that case all 5 approaches above work fine and give the same answers. My sample data is still extremely small, and it seems kind of bizarre for it to handle starting values on the order of 10 to 50 but not 10,000 to 50,000.

  5. FWIW, note that the 3 incorrect results all hit the target by adding 1,000 to each initial value -- this satisfies the constraint but comes nowhere near minimizing the objective function (b/c variables with higher initial values should be increased more than lower ones to minimize the sum of squared percentage differences).
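
Regarding the jacobian question in note 2, a quick way to sanity-check obj_jac against finite differences is scipy's check_grad (evaluated away from x0, where the gradient isn't identically zero):

from scipy.optimize import check_grad

# compares the analytic gradient to a finite-difference estimate; a value many orders
# of magnitude smaller than the gradient entries (~1e-6 here) means obj_jac is consistent
print( check_grad( obj_func, obj_jac, x0 * 1.01 ) )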

So my question is really just what is happening here and why do only (1) and (2b) seem to work?

More generally, I'd like to find a good python-based approach to this and similar optimization problems, and will consider answers using other packages besides scipy, although the best answer would ideally also address what is going on with scipy here (e.g. is this user error or a bug I should post to GitHub?).


2 Answers


Here is how to solve this problem using nlopt, a library for nonlinear optimization which I've been pretty impressed with.

First, the objective function and its gradient are both defined using the same function:

def obj_func(x, grad):
    if grad.size > 0:
        grad[:] = obj_jac(x)
    return ( ( ( x/x0 - 1 )) ** 2 ).sum()

def obj_jac(x):
    return 2. * ( x - x0 ) / x0 ** 2

def constr_func(x, grad):
    if grad.size > 0:
        grad[:] = constr_jac(x)
    return x.sum() - target

def constr_jac(x):
    return np.ones(n)
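
The runs below also rely on an nlopt-style unconstr_func (nlopt always calls the objective as f(x, grad), even for the derivative-free algorithms) and on a print_res variant that takes the solution and objective value directly rather than a scipy result object. I'm assuming something roughly like this:

import nlopt   # in addition to the numpy setup from the question

def unconstr_func(x, grad):
    # same refactored objective as in the question, with the (x, grad) signature
    # nlopt expects; grad is left untouched since LN_NELDERMEAD never uses it
    x_one       = target - x.sum()
    first_term  = ( ( x_one - x0[0] ) / x0[0] ) ** 2
    second_term = ( ( ( x - x0[1:] ) / x0[1:] ) ** 2 ).sum()
    return first_term + second_term

def print_res(xopt, fval, label):
    # reads the module-level `opt` to report nlopt's return code
    print("\n\n ***** ", label, " ***** \n")
    print("obj func value at solution", fval)
    print("result: ", opt.last_optimize_result())
    print("starting values: ", x0.astype(float))
    print("ending values:   ", xopt.astype(int))
    print("% diff", (100.*(xopt-x0)/x0).astype(int))
    print("target achieved?", target, xopt.sum())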

Then, run the minimizations using Nelder-Mead and SLSQP:

opt = nlopt.opt(nlopt.LN_NELDERMEAD,len(x0)-1)
opt.set_min_objective(unconstr_func)
opt.set_ftol_abs(1e-15)
xopt = opt.optimize(x0[1:].copy())
xopt = np.hstack([target - xopt.sum(), xopt])
fval = opt.last_optimum_value()
print_res(xopt,fval,"Nelder-Mead");

opt = nlopt.opt(nlopt.LD_SLSQP,len(x0))
opt.set_min_objective(obj_func)
opt.add_equality_constraint(constr_func)
opt.set_ftol_abs(1e-15)
xopt = opt.optimize(x0.copy())
fval = opt.last_optimum_value()
print_res(xopt,fval,"SLSQP w/ jacobian");

Here are the results:

 *****  Nelder-Mead  ***** 

obj func value at solution 0.00454545454546
result:  3
starting values:  [ 10000.  20000.  30000.  40000.  50000.]
ending values:    [10090 20363 30818 41454 52272]
% diff [0 1 2 3 4]
target achieved? 155000.0 155000.0


 *****  SLSQP w/ jacobian  ***** 

obj func value at solution 0.00454545454545
result:  3
starting values:  [ 10000.  20000.  30000.  40000.  50000.]
ending values:    [10090 20363 30818 41454 52272]
% diff [0 1 2 3 4]
target achieved? 155000.0 155000.0

While testing this out, I think I found the issue with your original attempts. If I set the absolute tolerance on the function to 1e-8, which is what the scipy functions default to, I get:

 *****  Nelder-Mead  ***** 

obj func value at solution 0.0045454580693
result:  3
starting values:  [ 10000.  20000.  30000.  40000.  50000.]
ending values:    [10090 20363 30816 41454 52274]
% diff [0 1 2 3 4]
target achieved? 155000.0 155000.0


 *****  SLSQP w/ jacobian  ***** 

obj func value at solution 0.0146361108503
result:  3
starting values:  [ 10000.  20000.  30000.  40000.  50000.]
ending values:    [10999 21000 31000 41000 51000]
% diff [9 5 3 2 2]
target achieved? 155000.0 155000.0

This is exactly what you were seeing. So my guess is that during SLSQP the minimizer ends up somewhere in the likelihood space where the next jump changes the objective by less than 1e-8 from the previous value, so it declares success and stops.
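
Concretely, the only change relative to the runs above is the tolerance line on the opt object, e.g. for the SLSQP case:

opt.set_ftol_abs(1e-8)   # instead of the 1e-15 used above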

answered 2019-04-08T16:34:43.480

Here is a partial answer to my own question, posted to keep the question itself from getting even bigger, but I'd still love to see a more comprehensive and explanatory answer. These fixes are based on comments from two other people, but neither of them fully wrote out the code, and I thought it made sense to spell it out explicitly, so here it is:

Fixing 2a (trust-constr with jacobian)

The key here regarding the jacobian and hessian seems to be to specify either both or neither (but not the jacobian only). @SubhaneilLahiri commented to this effect, and there was also a warning message to that effect which I initially didn't notice:

UserWarning: delta_grad == 0.0. Check if the approximated function is linear. If the function is linear better results can be obtained by defining the Hessian as zero instead of using quasi-Newton approximations.

So I fixed it by defining a hessian function for the constraint:

def constr_hess(x,v):
    return np.zeros([n,n])

and adding it to the constraint:

nonlin_con = NonlinearConstraint( constr_func, 0., 0., constr_jac, constr_hess )
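
Re-running 2a with this constraint object (the minimize call itself presumably stays exactly as before):

resTCjac = minimize( obj_func, x0, method='trust-constr',
                     jac='2-point', hess=SR1(), constraints = nonlin_con )
print_res( resTCjac, 'trust-const w/ jacobian + constraint hessian' )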

Fixing 3a and 3b (SLSQP)

As suggested by @user545424, this just seemed to be a matter of tightening the tolerance, so I added options={'ftol':1e-15} to the minimize call:

resSQjac = minimize( obj_func, x0, method='SLSQP',
                     options={'ftol':1e-15},
                     jac = obj_jac, constraints = eq_cons )
answered 2019-04-18T17:43:17.653