
I'm trying to implement the ILU preconditioner in this GMRES code I wrote (in order to solve the linear system Ax = b). I'm testing with an easy tridiagonal SPD matrix of dimension 25x25. As you can see, I'm computing the preconditioner with the spilu method. The code runs without errors, but the solution is clearly wrong: at the end of the code I print the norm of b and the norm of the product A*x, and they are not nearly the same. The code runs fine without the preconditioner and converges in 13 iterations for the same matrix.

This is the code:

import numpy as np
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg
import scipy.linalg
import matplotlib.pyplot as plt

'Size controller'

matrixSize = 25

'Building a tri-diagonal matrix'

def Atridiag(val_0, val_sup, val_inf, mSize):
    cen     = np.ones((1, mSize))*val_0
    sup     = np.ones((1, mSize-1))*val_sup
    inf     = np.ones((1, mSize-1))*val_inf
    diag_cen  = np.diagflat(cen, 0)
    diag_sup  = np.diagflat(sup, 1)
    diag_inf  = np.diagflat(inf, -1)
    return diag_cen + diag_sup + diag_inf

A = Atridiag(2, -1, -1, matrixSize)

A = sp.sparse.csc_matrix(A)

'Plot matrix sparsity'

plt.clf()
plt.spy(A, marker ='.', markersize=2)
plt.show()


'random b and x0 vectors'

b = np.matrix(np.ones((matrixSize, 1)))
x = np.matrix(np.ones((matrixSize, 1)))

'Incomplete LU'

M = sp.sparse.linalg.spilu(A)
M1 = lambda x: M.solve(x)
M2 = sp.sparse.linalg.LinearOperator((matrixSize, matrixSize), M1)


'Initial Data'

nmax_iter = 30
rstart = 2
tol = 1e-7
e = np.zeros((nmax_iter + 1, 1))
rr = 1

'Starting GMRES'

for rs in range (0, rstart+1):

    'first check on residual'

    if rr < tol :
        break

    else:

        r0 = (b - A.dot(x))
        betha = np.linalg.norm(r0)
        e[0] = betha
        H = np.zeros((nmax_iter + 1, nmax_iter))
        V = np.zeros((matrixSize, nmax_iter+1))
        V[:, 0:1] = r0/betha

    for k in range (1, nmax_iter+1):

'Applying the preconditioner'

        t = A.dot(V[:, k-1])
        V[:, k] = M2.matvec(t)

'Gram-Schmidt orthogonalization'

        for j in range (k):
            H[j, k-1] = np.dot(V[:, k].T, V[:, j])
            V[:, k] = V[:, k] - (np.dot(H[j, k-1], V[:, j]))

        H[k, k-1] = np.linalg.norm(V[:, k])
        V[:, k] = V[:, k] / H[k, k-1] 

        'QR Decomposition'

        n=k
        Q = np.zeros((n+1, n))
        R = np.zeros((n, n))
        R[0, 0] = np.linalg.norm(H[0:n+2, 0])
        Q[:, 0] = H[0:n+1, 0] / R[0,0]
        for j in range (0, n+1):
            t = H[0:n+1, j-1]
            for i in range (0, j-1):
                R[i, j-1] = np.dot(Q[:, i], t)
                t = t - np.dot(R[i, j-1], Q[:, i])
            R[j-1, j-1] = np.linalg.norm(t)
            Q[:, j-1] = t / R[j-1, j-1]

        g = np.dot(Q.T, e[0:k+1]) 

        Z = np.dot(np.linalg.inv(R), g)

        Res = e[0:n] - np.dot(H[0:n, 0:n], Z[0:n])
        rr = np.linalg.norm(Res)

        'second check on residual'

        if rr < tol:
            break

    'Updating the solution'    

    x = x + np.dot(V[:, 0:k], Z)



print(sp.linalg.norm(b))
print(sp.linalg.norm(np.dot(A.todense(),x)))
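
For reference, this is what I am comparing against: as far as I understand, SciPy's built-in GMRES takes the same preconditioner through its M argument (a minimal, self-contained sketch, separate from my code above):

import numpy as np
import scipy.sparse
import scipy.sparse.linalg

n = 25
A = scipy.sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

ilu = scipy.sparse.linalg.spilu(A)
M = scipy.sparse.linalg.LinearOperator((n, n), ilu.solve)

x, info = scipy.sparse.linalg.gmres(A, b, M=M)
print(info)                          # 0 means it converged
print(np.linalg.norm(b - A.dot(x)))  # residual norm, should be small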

I really hope somebody can figure it out!


1 Answer


Maybe it's too late, but for future reference:

You forgot to multiply by the preconditioner when updating x:

x = x + M2.dot(np.dot(V[:, 0:k], Z))    # M2.matvec() works the same


With that fix, the algorithm converges in 1 iteration.
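
For clarity, this is how the end of the restart loop looks with the fix in place (same names as in the question; for a vector argument, a LinearOperator's .dot falls through to .matvec):

'Updating the solution'

x = x + M2.matvec(np.dot(V[:, 0:k], Z))   # map the correction back through the preconditioner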


Other suggestions (both are combined in the sketch after this list):

  • You can do it directly: M2 = sp.sparse.linalg.LinearOperator((matrixSize,matrixSize),M.solve)
  • Finally, to compare A*x with b, it is better to print the difference (the residual), since you will get a more precise result: print(sp.linalg.norm(b - np.dot(A.todense(),x)))
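
Putting both suggestions together in a self-contained sketch (the matrix is rebuilt with sp.sparse.diags for brevity, and the exact spsolve is only there to illustrate the residual check):

import numpy as np
import scipy as sp
import scipy.sparse
import scipy.sparse.linalg
import scipy.linalg

matrixSize = 25
A = sp.sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1],
                    shape=(matrixSize, matrixSize), format='csc')
b = np.ones(matrixSize)

# Suggestion 1: pass the bound method directly, no lambda needed
M = sp.sparse.linalg.spilu(A)
M2 = sp.sparse.linalg.LinearOperator((matrixSize, matrixSize), M.solve)

# Suggestion 2: judge accuracy by the residual norm rather than by
# comparing ||b|| and ||A*x|| separately (exact solve shown here)
x = sp.sparse.linalg.spsolve(A, b)
print(sp.linalg.norm(b - A.dot(x)))   # small residual => accurate solution
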
answered Jul 19, 2018 at 14:18