
I am using scipy.optimize.minimize to minimize a simple log-likelihood function. The Hessian matrix doesn't seem to behave well.

import numpy as np
import scipy.optimize as op

def lnlike(theta, n, bhat, fhat, sigb, sigf):
    S, b, f = theta
    mu = f*S + b
    scb2 = ((b-bhat)/sigb)**2
    scf2 = ((f-fhat)/sigf)**2
    return n*np.log(mu) - mu - 0.5*(scb2+scf2)
nll = lambda *args: -lnlike(*args)

myargs = (21.0, 20.0, 0.5, 6.0, 0.1)  # (n, bhat, fhat, sigb, sigf)

If the initial guess is at the minimum, the iterations don't go anywhere. That's fine as far as the parameter values are concerned, but it also never touches the Hessian (it stays the identity), so I can't use it for uncertainty estimation.

x0 = [2.0, 20.0, 0.5]  # initial guess is at the minimum
result = op.minimize(nll, x0, args=myargs)
print result

   status: 0
  success: True
     njev: 1
     nfev: 5
 hess_inv: array([[1, 0, 0],
       [0, 1, 0],
       [0, 0, 1]])
      fun: -42.934971192191881
        x: array([  2. ,  20. ,   0.5])
  message: 'Optimization terminated successfully.'
      jac: array([  0.00000000e+00,   0.00000000e+00,   9.53674316e-07])

If I change the initial guess slightly, it seems to return a sensible hess_inv.

x0 = [2.01, 20.0, 0.5]
result = op.minimize(nll, x0, args=myargs)
print result
print np.sqrt(result.hess_inv[0,0])

   status: 0
  success: True
     njev: 15
     nfev: 75
 hess_inv: array([[  2.16004477e+02,  -7.60588367e+01,  -2.94846112e-02],
       [ -7.60588367e+01,   3.55748024e+01,   2.74064505e-03],
       [ -2.94846112e-02,   2.74064505e-03,   9.98030944e-03]])
      fun: -42.934971191969964
        x: array([  1.99984604,  19.9999814 ,   0.5000001 ])
  message: 'Optimization terminated successfully.'
      jac: array([ -2.38418579e-06,  -5.24520874e-06,   1.90734863e-06])
14.697090757

However, hess_inv is very sensitive to the initial guess.

x0 = [2.02, 20.0, 0.5]
result = op.minimize(nll, x0, args=myargs)
print result
print np.sqrt(result.hess_inv[0,0])

   status: 0
  success: True
     njev: 16
     nfev: 80
 hess_inv: array([[  1.82153214e+02,  -6.03482772e+01,  -2.97458789e-02],
       [ -6.03482772e+01,   3.30771459e+01,  -2.53811809e-03],
       [ -2.97458789e-02,  -2.53811809e-03,   9.99052952e-03]])
      fun: -42.934971192188634
        x: array([  1.9999702 ,  20.00000354,   0.50000001])
  message: 'Optimization terminated successfully.'
      jac: array([ -9.53674316e-07,  -4.76837158e-07,  -4.76837158e-07])
13.4964148462

Changing the initial guess a bit more:

x0 = [2.03, 20.0, 0.5]
result = op.minimize(nll, x0, args=myargs)
print result
print np.sqrt(result.hess_inv[0,0])

   status: 0
  success: True
     njev: 14
     nfev: 70
 hess_inv: array([[  2.30479371e+02,  -7.36087027e+01,  -3.79639119e-02],
       [ -7.36087027e+01,   3.55785937e+01,   3.54182478e-03],
       [ -3.79639119e-02,   3.54182478e-03,   9.97664441e-03]])
      fun: -42.93497119204827
        x: array([  1.99975148,  20.00006366,   0.50000009])
  message: 'Optimization terminated successfully.'
      jac: array([ -9.53674316e-07,  -9.53674316e-07,   4.29153442e-06])
15.1815470484

What am I missing? Is this a bug or a feature?


2 Answers


The way I understand the optimizer, the Hessian is approximated via finite differences. In your case this doesn't seem to be the best idea. Perhaps using SymPy (within IPython) produces more useful results:

import sympy as sy
import numpy as np
import scipy.optimize as sopt

from IPython.display import display # nice printing

sy.init_printing()  # LaTeX like printing for IPython

def lnlike(theta, n, bhat, fhat, sigb, sigf):
    S, b, f = theta
    mu = f*S + b
    scb2 = ((b-bhat)/sigb)**2
    scf2 = ((f-fhat)/sigf)**2
    return n*sy.log(mu) - mu - (scb2+scf2) / 2

# declare symbols:
th_S, th_b, th_f = sy.symbols("theta_S, theta_b, theta_f", real=True)
theta = (th_S, th_b, th_f)
n, bhat, fhat = sy.symbols(r"n, \hat{b}, \hat{f}", real=True)
sigb, sigf = sy.symbols("sigma_b, sigma_f", real=True)

# symbolic optimization function (negative log-likelihood):
lf = -lnlike(theta, n, bhat, fhat, sigb, sigf)
# Gradient:
dlf = sy.Matrix([lf.diff(th) for th in theta])
# Hessian:
Hlf = sy.Matrix([dlf.T.diff(th) for th in theta])

print("Symbolic Hessian:")
display(Hlf)

# Make numpy functions:
margs = {n:21, bhat:20, fhat:.5, sigb:6, sigf:.1} # parameters
lf_a, dlf_a, Hlf_a = lf.subs(margs), dlf.subs(margs), Hlf.subs(margs)
lf_lam =  sy.lambdify(theta,  lf_a, modules="numpy")
dlf_lam = sy.lambdify(theta, dlf_a, modules="numpy")
Hlf_lam = sy.lambdify(theta, Hlf_a, modules="numpy")

nlf = lambda xx: np.array(lf_lam(xx[0], xx[1], xx[2]))  # function
ndlf = lambda xx: np.array(dlf_lam(xx[0], xx[1], xx[2])).flatten()  # gradient
nHlf = lambda xx: np.array(Hlf_lam(xx[0], xx[1], xx[2]))  # Hessian

x0 = [2.02, 20.0, 0.5]
rs = sopt.minimize(nlf, x0, jac=ndlf, hess=nHlf, method='Newton-CG')
print(rs)
print("Hessian:")
print(nHlf(rs.x))
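
As a small follow-up sketch (not part of the answer above; it reuses the rs and nHlf objects defined there): once the exact Hessian is available, the uncertainty estimates the question was after can be read off the inverse Hessian at the optimum, under the usual Gaussian (Laplace) approximation to the likelihood.

# covariance from the exact Hessian of the negative log-likelihood at the optimum
cov = np.linalg.inv(nHlf(rs.x))
print("sigma(S), sigma(b), sigma(f):", np.sqrt(np.diag(cov)))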
answered 2015-04-04T10:39:29.167

If you're using a quasi-Newton method, which from the documentation it appears you are:

Quasi-Newton methods build up a guess at the inverse Hessian by applying a sequence of low-rank updates to a completely naive starting guess (typically a multiple of the identity). The low-rank updates used are, in some sense, the "least-change" updates that make a given equation hold, and what "least change" means varies with the quasi-Newton method chosen. If you start at, or very close to, the minimizer, the optimizer figures this out very quickly and never builds up much information in its approximation to the inverse Hessian.
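
To make that mechanism concrete, here is a minimal sketch of the BFGS-style inverse-Hessian update (the function name and the step/gradient numbers below are illustrative, not scipy internals): each iteration contributes one rank-two "least-change" correction built from the step s and the gradient change y, so if the optimizer converges after essentially no steps, no corrections are ever applied and hess_inv stays at the identity it started from.

import numpy as np

def bfgs_inv_hessian_update(H, s, y):
    # One BFGS update of the inverse-Hessian approximation H, using the
    # step s = x_new - x_old and the gradient change y = g_new - g_old.
    rho = 1.0 / np.dot(y, s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V.dot(H).dot(V.T) + rho * np.outer(s, s)

H = np.eye(3)                        # naive starting guess: the identity
s = np.array([-1e-4, 2e-5, 1e-7])    # an illustrative (tiny) step
y = np.array([-6e-5, 1e-5, 5e-6])    # illustrative gradient change over that step
print(bfgs_inv_hessian_update(H, s, y))
# A couple of tiny steps like this carry almost no curvature information;
# starting exactly at the minimum, the update is never applied at all.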

answered 2015-04-04T14:17:21.493