7. EndDo

If there are only $n_z$ nonzero components in the vector $z$ and an average of $\nu$ nonzero elements per column, then the total cost per step will be $2 \, n_z \, \nu$ on the average. Note that the computation of $d_{k+1}$ via (10.39) involves the inner product of two sparse vectors, which is often implemented by expanding one of the vectors into a full vector and computing the inner product of the sparse vector with this full vector. As mentioned before, in the symmetric case ILUS yields the Incomplete Cholesky factorization. Here, the work can be halved since the generation of $y_k$ is not necessary.
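The expand-and-multiply strategy for the sparse-sparse inner product can be sketched as follows. This is a minimal illustration, not the book's code; the vectors are assumed to be stored as index/value arrays, and `sparse_dot` is a hypothetical helper name:

```python
import numpy as np

def sparse_dot(idx_a, val_a, idx_b, val_b, n):
    """Inner product of two sparse length-n vectors given as
    (indices, values) pairs, by expanding the first into a full
    work array and gathering at the second vector's indices."""
    work = np.zeros(n)              # full work vector; a real code
    work[idx_a] = val_a             # would reuse and re-zero it
    return float(np.dot(val_b, work[idx_b]))

# x = (0, 0, 2, 0, 3), y = (1, 0, 2, 0, 0)  ->  x . y = 4
print(sparse_dot([2, 4], [2.0, 3.0], [0, 2], [1.0, 2.0], 5))  # 4.0
```

The cost is proportional to the number of nonzeros touched, consistent with the per-step estimate above.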


Also note that a simple iterative procedure such as MR or GMRES(m) can be used to solve the triangular systems in sparse-sparse mode. Similar techniques will be seen in Section 10.5. Experience shows that these alternatives are not much better than the Neumann series approach [53].
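For a unit lower triangular $L$, the Neumann series approach writes $L = I - E$ with $E$ strictly lower triangular, so that $L^{-1} = I + E + E^2 + \cdots$, and truncates the series. A minimal dense sketch (an illustration under these assumptions, not the text's implementation; the function name is hypothetical):

```python
import numpy as np

def neumann_tri_solve(L, b, p):
    """Approximate solve of L x = b, L unit lower triangular, via
    x ~ (I + E + ... + E^p) b  with  E = I - L, in Horner form."""
    E = np.eye(L.shape[0]) - L      # strictly lower triangular part
    x = b.copy()
    for _ in range(p):              # x <- b + E x accumulates the series
        x = b + E @ x
    return x

L = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
b = np.array([1.0, 1.0, 1.0])
x = neumann_tri_solve(L, b, p=3)    # E is nilpotent: exact for p >= n-1
print(np.allclose(L @ x, b))        # True
```

Because $E$ is strictly triangular, the series is finite and the truncated solve becomes exact once $p \ge n-1$; in practice a small $p$ gives the approximate solve used for preconditioning.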

10.5 APPROXIMATE INVERSE PRECONDITIONERS

The Incomplete LU factorization techniques were developed originally for $M$-matrices which arise from the discretization of Partial Differential Equations of elliptic type, usually in one variable. For the common situation where $A$ is indefinite, standard ILU factorizations may face several difficulties, and the best known is the fatal breakdown due to the encounter of a zero pivot. However, there are other problems that are just as serious. Consider an incomplete factorization of the form

    A = LU + E                                            (10.41)

where $E$ is the error. The preconditioned matrices associated with the different forms of preconditioning are similar to

    L^{-1} A U^{-1} = I + L^{-1} E U^{-1}                 (10.42)

What is sometimes missed is the fact that the error matrix $E$ in (10.41) is not as important as the "preconditioned" error matrix $L^{-1} E U^{-1}$ shown in (10.42) above. When the matrix $A$ is diagonally dominant, then $L$ and $U$ are well conditioned, and the size of $L^{-1} E U^{-1}$ remains confined within reasonable limits, typically with a nice clustering of its eigenvalues around the origin. On the other hand, when the original matrix is not diagonally dominant, $L^{-1}$ or $U^{-1}$ may have very large norms, causing the error $L^{-1} E U^{-1}$ to be very large and thus adding large perturbations to the identity matrix. It can be observed experimentally that ILU preconditioners can be very poor in these situations which often arise when the matrices are indefinite, or have large nonsymmetric parts.
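This distinction between $E$ and $L^{-1} E U^{-1}$ can be checked on tiny examples. The sketch below uses a plain dense transcription of zero-fill ILU (`ilu0` is an illustrative helper written for this purpose, not the book's code) to form the error $E = A - LU$ of (10.41) and the preconditioned error of (10.42):

```python
import numpy as np

def ilu0(A):
    """Dense transcription of ILU(0): eliminate as in Gaussian
    elimination, but update only positions where A is nonzero."""
    n = A.shape[0]
    LU = A.copy()
    pattern = A != 0
    for i in range(1, n):
        for k in range(i):
            if pattern[i, k]:
                LU[i, k] /= LU[k, k]
                for j in range(k + 1, n):
                    if pattern[i, j]:
                        LU[i, j] -= LU[i, k] * LU[k, j]
    return np.tril(LU, -1) + np.eye(n), np.triu(LU)

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 0.0],
              [1.0, 0.0, 2.0]])
L, U = ilu0(A)
E = A - L @ U                      # error matrix of (10.41)
# E vanishes on the nonzero pattern of A; the preconditioned error
# L^{-1} E U^{-1} of (10.42) is what governs preconditioner quality.
print(np.linalg.norm(E, 'fro'),
      np.linalg.norm(np.linalg.inv(L) @ E @ np.linalg.inv(U), 'fro'))
```

For this well-conditioned, diagonally dominant $A$ both norms stay small; repeating the experiment with an indefinite matrix lets the inverse factors, and hence $L^{-1} E U^{-1}$, grow.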

One possible remedy is to try to find a preconditioner that does not require solving a linear system. For example, the original system can be preconditioned by a matrix $M$ which is a direct approximation to the inverse of $A$.

10.5.1 APPROXIMATING THE INVERSE OF A SPARSE MATRIX

A simple technique for finding approximate inverses of arbitrary sparse matrices is to attempt to find a sparse matrix $M$ which minimizes the Frobenius norm of the residual matrix $I - AM$,

    F(M) = || I - AM ||_F^2                               (10.43)
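As an illustration of the objective (10.43), the sketch below evaluates $F(M)$ on a small dense example and takes one descent step in the direction of the residual $R = I - AM$, with the step length that minimizes $\|R - \alpha AR\|_F$. This is only a sketch of a descent iteration of this kind, without the dropping that would be needed to keep $M$ sparse; the function names are hypothetical:

```python
import numpy as np

def F(A, M):
    """Frobenius-norm objective (10.43): || I - A M ||_F^2."""
    R = np.eye(A.shape[0]) - A @ M
    return np.linalg.norm(R, 'fro') ** 2

def mr_step(A, M):
    """One minimal-residual descent step on F along R = I - A M,
    with alpha minimizing || R - alpha * A R ||_F."""
    R = np.eye(A.shape[0]) - A @ M
    AR = A @ R
    alpha = np.trace(R.T @ AR) / np.trace(AR.T @ AR)
    return M + alpha * R

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
M = np.eye(2) / 4.0                # crude initial guess: inverse of diag(A)
M2 = mr_step(A, M)
print(F(A, M), '->', F(A, M2))     # the objective decreases
```

Each step reduces $F$, driving $AM$ toward the identity; a practical algorithm would also drop small entries of $M$ after each update to preserve sparsity.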


A matrix $M$ whose value $F(M)$ is small would be a right-approximate inverse of $A$. Similarly, a left-approximate inverse can be defined by using the objective function

    || I - MA ||_F^2                                      (10.44)