
I have an optimization problem that the Nelder-Mead method solves, but that I would also like to solve using BFGS or Newton-Raphson, or something that takes a gradient function, for more speed and, hopefully, more precise estimates. I wrote such a gradient function following (I thought) the example in the optim/optimx documentation, but when I use it with BFGS my starting values either don't move (optim()), or else the function outright doesn't run (optimx() returns Error: Gradient function might be wrong - check it!). I'm sorry there's a bit of code involved in reproducing this, but here goes:

This is the function for which I want parameter estimates (it is for smoothing old-age mortality, where x is age, starting at age 80):

    KannistoMu <- function(pars, x = .5:30.5){
      a <- pars["a"]
      b <- pars["b"]
      (a * exp(b * x)) / (1 + a * exp(b * x))
    }
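
In formula form, this is the Kannisto logistic mortality model:

$$\mu(x) = \frac{a\,e^{bx}}{1 + a\,e^{bx}}$$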

This is a log-likelihood function for estimating it from observed rates (defined as deaths, .Dx, over exposure, .Exp):

    KannistoLik1 <- function(pars, .Dx, .Exp, .x. = .5:30.5){
      mu <- KannistoMu(exp(pars), x = .x.)
      # take negative and minimize it (default optimizer behavior)
      -sum(.Dx * log(mu) - .Exp * mu, na.rm = TRUE) 
    }
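
Written out, this is the negative of the Poisson log-likelihood kernel for the death counts (up to an additive constant not involving the parameters), summed over ages:

$$-\ell(a,b) = -\sum_x \big( D_x \log \mu(x) - E_x\,\mu(x) \big)$$

where $D_x$ is .Dx and $E_x$ is .Exp.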

You see exp(pars) in there because I give log(pars) to the optimizer, in order to constrain the final a and b to be positive.

Example data (1962 Japanese females, if anyone is curious):

    .Dx <- structure(c(10036.12, 9629.12, 8810.11, 8556.1, 7593.1, 6975.08, 
      6045.08, 4980.06, 4246.06, 3334.04, 2416.03, 1676.02, 1327.02, 
      980.02, 709, 432, 350, 217, 134, 56, 24, 21, 10, 8, 3, 1, 2, 
      1, 0, 0, 0), .Names = c("80", "81", "82", "83", "84", "85", "86", 
      "87", "88", "89", "90", "91", "92", "93", "94", "95", "96", "97", 
      "98", "99", "100", "101", "102", "103", "104", "105", "106", 
      "107", "108", "109", "110"))
    .Exp <- structure(c(85476.0333333333, 74002.0866666667, 63027.5183333333, 
      53756.8983333333, 44270.9, 36749.85, 29024.9333333333, 21811.07, 
      16912.315, 11917.9583333333, 7899.33833333333, 5417.67, 3743.67833333333, 
      2722.435, 1758.95, 1043.985, 705.49, 443.818333333333, 223.828333333333, 
      93.8233333333333, 53.1566666666667, 27.3333333333333, 16.1666666666667, 
      10.5, 4.33333333333333, 3.16666666666667, 3, 2.16666666666667, 
      1.5, 0, 1), .Names = c("80", "81", "82", "83", "84", "85", "86", 
      "87", "88", "89", "90", "91", "92", "93", "94", "95", "96", "97", 
      "98", "99", "100", "101", "102", "103", "104", "105", "106", 
      "107", "108", "109", "110"))

The following works for the Nelder-Mead method:

    NMab <- optim(log(c(a = .1, b = .1)), 
      fn = KannistoLik1, method = "Nelder-Mead",
      .Dx = .Dx, .Exp = .Exp)
    exp(NMab$par) 
    # these are reasonable estimates
       a         b 
    0.1243144 0.1163926

This is the gradient function I came up with:

    Kannisto.gr <- function(pars, .Dx, .Exp, x = .5:30.5){
      a <- exp(pars["a"])
      b <- exp(pars["b"])
      d.a <- (a * exp(b * x) * .Exp + (-a * exp(b * x) - 1) * .Dx) /
        (a ^ 3 * exp(2 * b * x) + 2 * a ^ 2 * exp(b * x) + a)
      d.b <- (a * x * exp(b * x) * .Exp + (-a * x * exp(b * x) - x) * .Dx) /
        (a ^ 2 * exp(2 * b * x) + 2 * a * exp(b * x) + 1)
      -colSums(cbind(a = d.a, b = d.b), na.rm = TRUE)
    }

The output is a vector of length 2, the change with respect to the parameters a and b. I also have an uglier version arrived at by exploiting the output of deriv(), which returns the same answer and which I don't post (just to confirm that the derivatives are correct).
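
For reference, the per-age quantities d.a and d.b in the code are the derivatives of $E_x\,\mu - D_x \log \mu$ with respect to a and b:

$$\frac{\partial}{\partial a} = \frac{a e^{bx} E_x - (a e^{bx} + 1) D_x}{a^3 e^{2bx} + 2 a^2 e^{bx} + a}, \qquad \frac{\partial}{\partial b} = \frac{a x e^{bx} E_x - (a x e^{bx} + x) D_x}{a^2 e^{2bx} + 2 a e^{bx} + 1}$$

Note these are derivatives with respect to a and b themselves, not with respect to the logged parameters the optimizer actually sees.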

If I supply it to optim() as follows, with BFGS as the method, the estimates do not move from the starting values:

    BFGSab <- optim(log(c(a = .1, b = .1)), 
      fn = KannistoLik1, gr = Kannisto.gr, method = "BFGS",
      .Dx = .Dx, .Exp = .Exp)
    # estimates do not change from starting values:
    exp(BFGSab$par) 
      a   b 
    0.1 0.1

When I look at the $counts element of the output, it says KannistoLik1() was called 31 times and Kannisto.gr() just once. $convergence is 0, so I guess it thinks it converged (if I give less reasonable starts, they stay put as well). I reduced the tolerance, etc., and nothing changes. When I try the same call in optimx() (not shown), I receive the warning mentioned above, and no object is returned. I get the same results when supplying gr = Kannisto.gr with method "CG". With the "L-BFGS-B" method I get the starting values back as estimates, but it also reports that both the function and the gradient were called 21 times, and there is an error message: "ERROR: ABNORMAL_TERMINATION_IN_LNSRCH"

I'm hoping there is some minor detail in the way the gradient function is written that will solve this, since this last warning and the optimx behavior bluntly hint that the function is simply not right (I think). I also tried the maxNR() maximizer from the maxLik package and observed similar behavior (starting values don't move). Can anyone give me a pointer? Much obliged.

[Edit] @Vincent suggested I compare with the output of a numerical approximation:

    library(numDeriv)
    grad( function(u) KannistoLik1( c(a=u[1], b=u[2]), .Dx, .Exp ), log(c(.1,.1)) )
    [1] -14477.40  -7458.34
    Kannisto.gr(log(c(a=.1,b=.1)), .Dx, .Exp)
     a        b 
    144774.0  74583.4 

So the sign is flipped, and it's off by a factor of 10? I changed the gradient function to follow suit:

    Kannisto.gr2 <- function(pars, .Dx, .Exp, x = .5:30.5){
      a <- exp(pars["a"])
      b <- exp(pars["b"])
      d.a <- (a * exp(b * x) * .Exp + (-a * exp(b * x) - 1) * .Dx) /
        (a ^ 3 * exp(2 * b * x) + 2 * a ^ 2 * exp(b * x) + a)
      d.b <- (a * x * exp(b * x) * .Exp + (-a * x * exp(b * x) - x) * .Dx) /
        (a ^ 2 * exp(2 * b * x) + 2 * a * exp(b * x) + 1)
      colSums(cbind(a=d.a,b=d.b), na.rm = TRUE) / 10
    }
    Kannisto.gr2(log(c(a=.1,b=.1)), .Dx, .Exp)
    # same as numerical:
      a         b 
    -14477.40  -7458.34 

Trying it in the optimizer:

    BFGSab <- optim(log(c(a = .1, b = .1)), 
      fn = KannistoLik1, gr = Kannisto.gr2, method = "BFGS",
      .Dx = .Dx, .Exp = .Exp)
    # not reasonable results:
    exp(BFGSab$par) 
      a   b 
    Inf Inf 
    # and in fact, when not exp()'d, they look oddly familiar:
    BFGSab$par
      a         b 
    -14477.40  -7458.34 
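
In hindsight, the flipped sign and the factor of 10 have a common explanation: Kannisto.gr differentiates with respect to a and b, but the optimizer works on log(a) and log(b), so by the chain rule each component needs an extra factor of a and b respectively; at the starting values a = b = 0.1 that is exactly a division by 10, and dropping the stray leading minus fixes the sign. A fixed /10 only equals the true gradient at that one point, which is presumably why BFGS ran off to Inf. A sketch of a gradient that is exact everywhere on the log scale (Kannisto.gr.log is a name introduced here for illustration, not from the original code):

    Kannisto.gr.log <- function(pars, .Dx, .Exp, x = .5:30.5){
      a <- exp(pars["a"])
      b <- exp(pars["b"])
      d.a <- (a * exp(b * x) * .Exp + (-a * exp(b * x) - 1) * .Dx) /
        (a ^ 3 * exp(2 * b * x) + 2 * a ^ 2 * exp(b * x) + a)
      d.b <- (a * x * exp(b * x) * .Exp + (-a * x * exp(b * x) - x) * .Dx) /
        (a ^ 2 * exp(2 * b * x) + 2 * a * exp(b * x) + 1)
      # chain rule: multiply by a and b to get derivatives w.r.t. log(a), log(b)
      colSums(cbind(a = a * d.a, b = b * d.b), na.rm = TRUE)
    }
    Kannisto.gr.log(log(c(a = .1, b = .1)), .Dx, .Exp)
    # agrees with the numDeriv values above, with no ad hoc scaling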

Following Vincent's answer, I rescaled the gradient function and used abs() instead of exp() to keep the parameters positive. The latest, better-performing objective and gradient functions:

    KannistoLik2 <- function(pars, .Dx, .Exp, .x. = .5:30.5){
      mu <- KannistoMu(abs(pars), x = .x.)
      # take negative and minimize it (default optimizer behavior)
      -sum(.Dx * log(mu) - .Exp * mu, na.rm = TRUE) 
    }

    # gradient, to be down-scaled in `optim()` call
    Kannisto.gr3 <- function(pars, .Dx, .Exp, x = .5:30.5){
      a <- abs(pars["a"])
      b <- abs(pars["b"])
      d.a <- (a * exp(b * x) * .Exp + (-a * exp(b * x) - 1) * .Dx) /
        (a ^ 3 * exp(2 * b * x) + 2 * a ^ 2 * exp(b * x) + a)
      d.b <- (a * x * exp(b * x) * .Exp + (-a * x * exp(b * x) - x) * .Dx) /
        (a ^ 2 * exp(2 * b * x) + 2 * a * exp(b * x) + 1)
      colSums(cbind(a = d.a, b = d.b), na.rm = TRUE) 
    }

    # try it out:
    BFGSab2 <- optim(
      c(a = .1, b = .1), 
      fn = KannistoLik2, 
      gr = function(...) Kannisto.gr3(...) * 1e-7, 
      method = "BFGS",
      .Dx = .Dx, .Exp = .Exp
    )
    # reasonable:
    BFGSab2$par
            a         b 
    0.1243249 0.1163924 

    # better (a lower negative log-likelihood) than the Nelder-Mead fit:
    KannistoLik2(exp(NMab$par), .Dx = .Dx, .Exp = .Exp) >
      KannistoLik2(BFGSab2$par, .Dx = .Dx, .Exp = .Exp)
    [1] TRUE
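
As one more sanity check (a sketch, using numDeriv as above; not in the original post), the exact gradient of KannistoLik2 should be near zero at the solution BFGS found:

    library(numDeriv)
    grad(function(u) KannistoLik2(c(a = u[1], b = u[2]), .Dx, .Exp), BFGSab2$par)
    # both components should be close to 0 at a genuine optimum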

This was solved much faster than I expected, and I learned more than a few tricks. Thanks, Vincent!


1 Answer


To check if the gradient is correct, you can compare it with a numeric approximation:

    library(numDeriv)
    grad( function(u) KannistoLik1( c(a=u[1], b=u[2]), .Dx, .Exp ), c(1,1) )
    Kannisto.gr(c(a=1,b=1), .Dx, .Exp)

The signs are wrong: the algorithm does not see any improvement when it moves in this direction, and therefore does not move.

You can use some computer algebra system (here, Maxima) to do the computations for you:

    display2d: false;
    f(a,b,x) := a * exp(b*x) / ( 1 + a * exp(b*x) );
    l(a,b,d,e,x) := - d * log(f(a,b,x)) + e * f(a,b,x);
    factor(diff(l(exp(a),exp(b),d,e,x),a));
    factor(diff(l(exp(a),exp(b),d,e,x),b));

I just copy and paste the result into R:

    f_gradient <- function(u, .Dx, .Exp, .x.=.5:30.5) {
      a <- u[1]
      b <- u[2]
      x <- .x.
      d <- .Dx
      e <- .Exp
      c(
        sum( (e*exp(exp(b)*x+a)-d*exp(exp(b)*x+a)-d)/(exp(exp(b)*x+a)+1)^2 ),
        sum( exp(b)*x*(e*exp(exp(b)*x+a)-d*exp(exp(b)*x+a)-d)/(exp(exp(b)*x+a)+1)^2 )
      )
    }

    library(numDeriv)
    grad( function(u) KannistoLik1( c(a=u[1], b=u[2]), .Dx, .Exp ), c(1,1) )
    f_gradient(c(a=1,b=1), .Dx, .Exp)  # Identical

If you blindly put the gradient in the optimization, there is a numeric instability problem: the solution given is (Inf,Inf)... To prevent it, you can rescale the gradient (a better workaround would be to use a less explosive transformation than the exponential, to ensure that the parameters remain positive).

    BFGSab <- optim(
      log(c(a = .1, b = .1)), 
      fn = KannistoLik1, 
      gr = function(...) f_gradient(...) * 1e-3, 
      method = "BFGS",
      .Dx = .Dx, .Exp = .Exp
    )
    exp(BFGSab$par) # Less precise than Nelder-Mead
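
For completeness, one such less explosive transformation (a sketch, not part of the original answer) is the softplus log(1 + exp(u)): it also keeps the parameters positive, but grows only linearly for large u, so it is much harder to blow up:

    # sketch: softplus reparameterization as a gentler alternative to exp()
    softplus <- function(u) log1p(exp(u))
    KannistoLik.sp <- function(pars, .Dx, .Exp, .x. = .5:30.5){
      mu <- KannistoMu(softplus(pars), x = .x.)
      -sum(.Dx * log(mu) - .Exp * mu, na.rm = TRUE)
    }
    # usable with optim() exactly as KannistoLik1 above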
Answered 2012-07-24