
I am solving a linear system of equations. Currently I am using

   numpy.linalg.solve

It returns the solution of the linear system, but I want enough control to be able to step through the iterations and inspect the intermediate result.

I am considering another option:

    scipy.optimize.minimize

The documentation describes a callback function that is called after every iteration and receives the current parameters. I am not sure whether that means I can get the current solution vector. Put simply, I want to access x after every iteration while solving Ax = b.
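To make this concrete, here is the kind of thing I have in mind (the least-squares objective and the matrix below are just my illustration, since minimize needs a scalar function):

```python
import numpy as np
from scipy.optimize import minimize

# An example system; any small A and b would do.
A = np.array([[2.0, 3.0], [4.0, 9.0]])
b = np.array([5.0, 5.0])

def objective(x):
    # Scalar objective: squared residual norm ||Ax - b||^2
    r = A.dot(x) - b
    return r.dot(r)

def report(xk):
    # minimize calls this after every iteration with the current x
    print('x =', xk)

result = minimize(objective, x0=[0.0, 0.0], callback=report)
print(result.x)  # should be close to [5, -1.6667]
```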

I am wondering whether somebody has worked with this and can explain!

Thanks


1 Answer


The callback keyword argument to scipy.optimize.minimize specifies a function that is called at each iteration with the current estimate of the minimizing argument.

But minimize is intended for a scalar function (one returning a single value), so I don't see how it applies directly to your example. You may want to try scipy.optimize.fsolve instead. It doesn't accept the callback keyword, though. To get around that, you can wrap your linear equation in a callable object (one that returns Ax - b) and simply observe the argument that is passed to it on each call.

class Ab:
    def __init__(self, A, b):
        self.A = A
        self.b = b

    def __call__(self, x):
        print('x =', x)  # report the current iterate
        return self.A.dot(x) - self.b

Then use it like this:

>>> import numpy as np
>>> from scipy import optimize
>>> A = np.array([[2, 3], [4, 9]], float)
>>> b = np.array([5, 5])
>>> f = Ab(A, b)
>>> optimize.fsolve(f, [0, 0])
x = [0 0]
x = [ 0.  0.]
x = [ 0.  0.]
x = [  1.49011612e-08   0.00000000e+00]
x = [  0.00000000e+00   1.49011612e-08]
x = [ 5.         -1.66666667]
x = [ 5.         -1.66666667]
array([ 5.        , -1.66666667])

fsolve reported taking only 5 iterations, so it appears the first two calls may not actually count toward convergence.
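As a side note (this assumes your real goal is just to watch x while solving Ax = b iteratively, which the question doesn't say outright): the iterative solvers in scipy.sparse.linalg take a callback directly. For example, scipy.sparse.linalg.cg calls its callback with the current iterate after each iteration; the symmetric positive-definite matrix below is my own example, since cg requires one:

```python
import numpy as np
from scipy.sparse.linalg import cg

# A symmetric positive-definite system (cg requires SPD matrices)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

def report(xk):
    # cg calls this after each iteration with the current solution estimate
    print('x =', xk)

x, info = cg(A, b, callback=report)
print(x)  # info == 0 means it converged
```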

Answered 2013-02-21T18:00:52.230