66

I am using scipy.optimize to minimize a function of 12 arguments.

I started the optimization a while ago and am still waiting for results.

Is there a way to force scipy.optimize to display its progress (like how much has already been done, what the current best point is)?


7 Answers

42

As mg007 suggested, some of the scipy.optimize routines allow for a callback function (unfortunately, leastsq does not permit this at the moment). Below is an example using the "fmin_bfgs" routine, where I use a callback function to display the current value of the arguments and the value of the objective function at each iteration.

import numpy as np
from scipy.optimize import fmin_bfgs

Nfeval = 1

def rosen(X): #Rosenbrock function
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

def callbackF(Xi):
    global Nfeval
    print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(Nfeval, Xi[0], Xi[1], Xi[2], rosen(Xi)))
    Nfeval += 1

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen, 
              x0, 
              callback=callbackF, 
              maxiter=2000, 
              full_output=True, 
              retall=False)

The output looks like this:

Iter    X1          X2          X3         f(X)      
   1    1.031582    1.062553    1.130971    0.005550
   2    1.031100    1.063194    1.130732    0.004973
   3    1.027805    1.055917    1.114717    0.003927
   4    1.020343    1.040319    1.081299    0.002193
   5    1.005098    1.009236    1.016252    0.000739
   6    1.004867    1.009274    1.017836    0.000197
   7    1.001201    1.002372    1.004708    0.000007
   8    1.000124    1.000249    1.000483    0.000000
   9    0.999999    0.999999    0.999998    0.000000
  10    0.999997    0.999995    0.999989    0.000000
  11    0.999997    0.999995    0.999989    0.000000
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 11
         Function evaluations: 85
         Gradient evaluations: 17

At least this way you can watch as the optimizer tracks down the minimum.

Answered 2013-08-27T16:53:32.460
14

Following @joel's example, there is a neat and efficient way to do a similar thing. The following example shows how we can get rid of the global variable, the call_back function, and re-evaluating the target function multiple times.

import numpy as np
from scipy.optimize import fmin_bfgs

def rosen(X, info): #Rosenbrock function
    res = (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
          (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

    # display information every 100th evaluation
    if info['Nfeval'] % 100 == 0:
        print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(info['Nfeval'], X[0], X[1], X[2], res))
    info['Nfeval'] += 1
    return res

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen, 
              x0, 
              args=({'Nfeval':0},), 
              maxiter=1000, 
              full_output=True, 
              retall=False,
              )

This produces similar output:

Iter    X1          X2          X3         f(X)     
   0    1.100000    1.100000    1.100000    2.440000
 100    1.000000    0.999999    0.999998    0.000000
 200    1.000000    0.999999    0.999998    0.000000
 300    1.000000    0.999999    0.999998    0.000000
 400    1.000000    0.999999    0.999998    0.000000
 500    1.000000    0.999999    0.999998    0.000000
Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: 0.000000
         Iterations: 12
         Function evaluations: 502
         Gradient evaluations: 98

However, there is no free lunch here: I used the number of function evaluations rather than the number of algorithmic iterations as the counter. Some algorithms may evaluate the target function multiple times within a single iteration.
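
If you would rather see one line per algorithmic iteration, a small variation is to keep the evaluation counter in the objective and increment a separate iteration counter in a callback. This is just a sketch; the info dict and its keys are illustrative names, not part of scipy's API:

import numpy as np
from scipy.optimize import fmin_bfgs

def rosen(X, info):  # Rosenbrock function, counting every evaluation
    res = (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
          (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2
    info['Nfeval'] += 1
    # last_f is the most recently evaluated value; it may come from a
    # finite-difference gradient step rather than exactly from Xi
    info['last_f'] = res
    return res

info = {'Nfeval': 0, 'Niter': 0, 'last_f': None}

def callbackF(Xi):  # called once per iteration, so Niter counts iterations
    info['Niter'] += 1
    print('iter {0:3d}   ({1:4d} evals)   f(X) = {2: 3.6f}'.format(
        info['Niter'], info['Nfeval'], info['last_f']))

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
fmin_bfgs(rosen, x0, args=(info,), callback=callbackF, maxiter=1000)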

Answered 2017-10-25T07:34:57.337
10

Try using:

options={'disp': True} 

to force scipy.optimize.minimize to print intermediate results.
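
For example (a minimal sketch, reusing scipy's built-in rosen as the objective):

from scipy.optimize import minimize, rosen

# disp=True asks the solver to print its built-in progress report
res = minimize(rosen, [1.1, 1.1, 1.1], method='BFGS', options={'disp': True})
print(res.x)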

Answered 2017-11-08T03:17:32.033
9

Many of the optimizers in scipy indeed lack verbose output (the 'trust-constr' method of scipy.optimize.minimize is an exception). I faced a similar issue and solved it by creating a wrapper around the objective function and using the callback function. No additional function evaluations are performed here, so this should be an efficient solution.

import numpy as np

class Simulator:
    def __init__(self, function):
        self.f = function  # actual objective function
        self.num_calls = 0  # how many times f has been called
        self.callback_count = 0  # number of times callback has been called, also measures iteration count
        self.list_calls_inp = []  # input of all calls
        self.list_calls_res = []  # result of all calls
        self.decreasing_list_calls_inp = []  # input of calls that resulted in decrease
        self.decreasing_list_calls_res = []  # result of calls that resulted in decrease
        self.list_callback_inp = []  # only appends inputs on callback, as such they correspond to the iterations
        self.list_callback_res = []  # only appends results on callback, as such they correspond to the iterations

    def simulate(self, x, *args):
        """Executes the actual simulation and returns the result, while
        updating the lists too. Pass to optimizer without arguments or
        parentheses."""
        result = self.f(x, *args)  # the actual evaluation of the function
        if not self.num_calls:  # first call is stored in all lists
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
            self.list_callback_inp.append(x)
            self.list_callback_res.append(result)
        elif result < self.decreasing_list_calls_res[-1]:
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
        self.list_calls_inp.append(x)
        self.list_calls_res.append(result)
        self.num_calls += 1
        return result

    def callback(self, xk, *_):
        """Callback function that can be used by optimizers of scipy.optimize.
        The third argument "*_" makes sure that it still works when the
        optimizer calls the callback function with more than one argument. Pass
        to optimizer without arguments or parentheses."""
        s1 = ""
        xk = np.atleast_1d(xk)
        # search backwards in input list for input corresponding to xk
        for i, x in reversed(list(enumerate(self.list_calls_inp))):
            x = np.atleast_1d(x)
            if np.allclose(x, xk):
                break

        for comp in xk:
            s1 += f"{comp:10.5e}\t"
        s1 += f"{self.list_calls_res[i]:10.5e}"

        self.list_callback_inp.append(xk)
        self.list_callback_res.append(self.list_calls_res[i])

        if not self.callback_count:
            s0 = ""
            for j, _ in enumerate(xk):
                tmp = f"Comp-{j+1}"
                s0 += f"{tmp:10s}\t"
            s0 += "Objective"
            print(s0)
        print(s1)
        self.callback_count += 1

A simple test can be defined:

from scipy.optimize import minimize, rosen
ros_sim = Simulator(rosen)
minimize(ros_sim.simulate, [0, 0], method='BFGS', callback=ros_sim.callback, options={"disp": True})

print(f"Number of calls to Simulator instance {ros_sim.num_calls}")

resulting in:

Comp-1          Comp-2          Objective
1.76348e-01     -1.31390e-07    7.75116e-01
2.85778e-01     4.49433e-02     6.44992e-01
3.14130e-01     9.14198e-02     4.75685e-01
4.26061e-01     1.66413e-01     3.52251e-01
5.47657e-01     2.69948e-01     2.94496e-01
5.59299e-01     3.00400e-01     2.09631e-01
6.49988e-01     4.12880e-01     1.31733e-01
7.29661e-01     5.21348e-01     8.53096e-02
7.97441e-01     6.39950e-01     4.26607e-02
8.43948e-01     7.08872e-01     2.54921e-02
8.73649e-01     7.56823e-01     2.01121e-02
9.05079e-01     8.12892e-01     1.29502e-02
9.38085e-01     8.78276e-01     4.13206e-03
9.73116e-01     9.44072e-01     1.55308e-03
9.86552e-01     9.73498e-01     1.85366e-04
9.99529e-01     9.98598e-01     2.14298e-05
9.99114e-01     9.98178e-01     1.04837e-06
9.99913e-01     9.99825e-01     7.61051e-09
9.99995e-01     9.99989e-01     2.83979e-11
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 19
         Function evaluations: 96
         Gradient evaluations: 24
Number of calls to Simulator instance 96

Of course this is only a template; it can be adjusted to your needs. It does not provide all the information about the state of the optimizer (as in e.g. MATLAB's Optimization Toolbox), but at least you have some idea of the progress of the optimization.

A similar approach can be found here, without using the callback function. In my approach, the callback function is used to print the output exactly when the optimizer has finished an iteration, and not on every single function call.

Answered 2019-12-13T21:26:10.043
3

Which minimization function are you using?

Most of the functions have progress reporting built in, including multiple levels of reports showing exactly the data you want, via the disp flag (for example, see scipy.optimize.fmin_l_bfgs_b).
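
For instance, with fmin_l_bfgs_b the disp argument sets the verbosity of that built-in reporting. A minimal sketch, reusing scipy's built-in rosen (approx_grad=True because no gradient is supplied):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b, rosen

x0 = np.array([1.1, 1.1, 1.1])
# disp > 0 enables the routine's own progress output
xopt, fopt, info = fmin_l_bfgs_b(rosen, x0, approx_grad=True, disp=1)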

Answered 2013-08-27T20:38:53.460
2

It is also possible to include a simple print() statement in the function to be minimized. If you import the function, you can create a wrapper (a sketch of this is shown after the code below).

import numpy as np
from scipy.optimize import minimize


def rosen(X): #Rosenbrock function
    print(X)
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(rosen, 
         x0)
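
A minimal sketch of that wrapper idea, using scipy's built-in rosen so the imported function itself stays untouched:

import numpy as np
from scipy.optimize import minimize, rosen

def verbose_rosen(X):
    print(X)  # report every point at which the objective is evaluated
    return rosen(X)

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(verbose_rosen, x0)
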
Answered 2020-04-23T09:16:38.753
1

Below is a solution that worked for me:

from scipy import optimize

def f_(x):   # the Rosenbrock function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def conjugate_gradient(x0, f):
    all_x_i = [x0[0]]
    all_y_i = [x0[1]]
    all_f_i = [f(x0)]
    def store(X):
        x, y = X
        all_x_i.append(x)
        all_y_i.append(y)
        all_f_i.append(f(X))
    optimize.minimize(f, x0, method="CG", callback=store, options={"gtol": 1e-12})
    return all_x_i, all_y_i, all_f_i

For example:

conjugate_gradient([2, -1], f_)
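
The returned lists can then be inspected to follow the progress, e.g. (a small usage sketch):

all_x_i, all_y_i, all_f_i = conjugate_gradient([2, -1], f_)
for k in range(len(all_f_i)):
    # one line per recorded iteration: current point and objective value
    print(f"iter {k:3d}: x = ({all_x_i[k]: .6f}, {all_y_i[k]: .6f}), f(x) = {all_f_i[k]: .6e}")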

Source

Answered 2019-02-20T16:32:21.707