Then a little calculation yields

\[
L_{k+1} A_{k+1} U_{k+1} =
\begin{pmatrix}
L_k A_k U_k & L_k \left( v_k - A_k z_k \right) \\
\left( w_k^T - y_k^T A_k \right) U_k & \delta_{k+1}
\end{pmatrix} .
\]

If one of the residuals $v_k - A_k z_k$ or $w_k^T - y_k^T A_k$ is zero, then it is clear that the term $\delta_{k+1}$ in the above relation becomes $\alpha_{k+1} - w_k^T z_k$ (respectively, $\alpha_{k+1} - y_k^T v_k$) and it must be nonzero since the matrix on the left-hand side is nonsingular.
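As a quick numerical illustration (a NumPy sketch of my own, not from the text: the stand-in factors $L_k$, $U_k$ and the perturbation sizes are arbitrary choices), one can verify the block structure of the relation above when $z_k$ and $y_k$ only approximately solve $A_k z_k = v_k$ and $y_k^T A_k = w_k^T$:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4
Ak = rng.standard_normal((k, k)) + k * np.eye(k)        # a nonsingular A_k
vk, wk = rng.standard_normal(k), rng.standard_normal(k)
alpha = 5.0                                             # alpha_{k+1}
Lk = np.linalg.inv(np.tril(Ak))                         # arbitrary stand-in factors
Uk = np.linalg.inv(np.triu(Ak))

# approximate solves: z_k ~ A_k^{-1} v_k  and  y_k^T ~ w_k^T A_k^{-1}
zk = np.linalg.solve(Ak, vk) + 1e-4 * rng.standard_normal(k)
yk = np.linalg.solve(Ak.T, wk) + 1e-4 * rng.standard_normal(k)

# bordered matrices A_{k+1}, L_{k+1}, U_{k+1}
Ak1 = np.block([[Ak, vk[:, None]], [wk[None, :], np.array([[alpha]])]])
Lk1 = np.block([[Lk, np.zeros((k, 1))], [-yk[None, :], np.array([[1.0]])]])
Uk1 = np.block([[Uk, -zk[:, None]], [np.zeros((1, k)), np.array([[1.0]])]])

P = Lk1 @ Ak1 @ Uk1
delta = alpha - wk @ zk - yk @ vk + yk @ (Ak @ zk)
assert np.allclose(P[:k, :k], Lk @ Ak @ Uk)             # (1,1) block
assert np.allclose(P[:k, k], Lk @ (vk - Ak @ zk))       # last column
assert np.allclose(P[k, :k], (wk - Ak.T @ yk) @ Uk)     # last row
assert np.allclose(P[k, k], delta)                      # (2,2) entry
print("block relation verified")
```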

Incidentally, this relation shows the structure of the last matrix $L_n A U_n$. The components $1$ to $k$ of column $k+1$ consist of the vector $L_k (v_k - A_k z_k)$, the components $1$ to $k$ of row $k+1$ make up the vector $(w_k^T - y_k^T A_k) U_k$, and the diagonal elements are the $\delta_i$'s. Consider now the expression for $\delta_{k+1}$ from (10.70).

\[
\begin{aligned}
\delta_{k+1} &= \alpha_{k+1} - w_k^T z_k - y_k^T v_k + y_k^T A_k z_k \\
             &= \alpha_{k+1} - w_k^T A_k^{-1} v_k
                + \left( w_k^T - y_k^T A_k \right) A_k^{-1} \left( v_k - A_k z_k \right) .
\end{aligned}
\]

This perturbation formula is of second order in the sense that $\delta_{k+1}$ deviates from its exact value $\alpha_{k+1} - w_k^T A_k^{-1} v_k$ by a term involving the product of the two residuals $v_k - A_k z_k$ and $w_k^T - y_k^T A_k$. It guarantees that $\delta_{k+1}$ is nonzero whenever
\[
\left| \left( w_k^T - y_k^T A_k \right) A_k^{-1} \left( v_k - A_k z_k \right) \right|
< \left| \alpha_{k+1} - w_k^T A_k^{-1} v_k \right| .
\]
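The second-order claim is easy to check numerically. In the sketch below (my own construction, not from the text), the solves are perturbed by errors $e_z$, $e_y$ of size $\varepsilon$; expanding the first line of the formula above shows the deviation equals exactly $e_y^T A_k\, e_z = O(\varepsilon^2)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)   # a nonsingular A_k
v, w = rng.standard_normal(n), rng.standard_normal(n)
alpha = 3.0
z_star = np.linalg.solve(A, v)                    # exact solve  A z = v
y_star = np.linalg.solve(A.T, w)                  # exact solve  y^T A = w^T
delta_exact = alpha - w @ z_star                  # alpha - w^T A^{-1} v

for eps in (1e-2, 1e-3, 1e-4):
    ez, ey = eps * rng.standard_normal(n), eps * rng.standard_normal(n)
    z, y = z_star + ez, y_star + ey               # perturbed solves
    delta = alpha - w @ z - y @ v + y @ (A @ z)   # computed delta_{k+1}
    # the deviation from the exact value is exactly e_y^T A e_z: order eps^2
    assert np.isclose(delta - delta_exact, ey @ A @ ez)
    print(eps, abs(delta - delta_exact))
```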

IMPROVING A PRECONDITIONER

When a computed ILU factorization results in unsatisfactory convergence, it is difficult to improve it by modifying the $L$ and $U$ factors. One solution would be to discard this factorization and attempt to recompute a fresh one, possibly with more fill-in. Clearly, this may be a wasteful process. A better alternative is to use approximate inverse techniques.

Assume that a (sparse) matrix $M$ is a preconditioner to the original matrix $A$, so the preconditioned matrix is
\[
C = M^{-1} A .
\]

A sparse matrix $S$ is sought to approximate the inverse of $C$. This matrix is then to be used as a preconditioner to $C$. Unfortunately, the matrix $C$ is usually dense. However, observe that all that is needed is a matrix $S$ such that
\[
A S \approx M .
\]

Recall that the columns of $A$ and $M$ are sparse. One approach is to compute a least-squares approximation in the Frobenius norm sense. This approach was used already in Section 10.5.1 when $M$ is the identity matrix. Then the columns of $S$ were obtained by approximately solving the linear systems $A s_i \approx e_i$. The same idea can be applied here. Now, the

systems
\[
A s_i = m_i
\]

must be solved instead, where $m_i$ is the $i$-th column of $M$, which is sparse. Thus, the coefficient matrix and the right-hand side are sparse, as before.
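A minimal sketch of this column-by-column least-squares process (the function name, the sparsity-pattern choice, and the toy matrices are my own illustrative assumptions, not from the text):

```python
import numpy as np
import scipy.sparse as sp

def approx_inverse_columns(A, M, pattern):
    """For each i, minimize ||A s_i - m_i||_2 with the nonzeros of s_i
    restricted to the (hypothetical) index set pattern[i]."""
    n = A.shape[0]
    S = sp.lil_matrix((n, n))
    for i in range(n):
        J = pattern[i]                         # allowed nonzero rows of s_i
        Asub = A[:, J].toarray()               # small dense n x |J| subproblem
        mi = M[:, i].toarray().ravel()         # sparse right-hand side m_i
        s, *_ = np.linalg.lstsq(Asub, mi, rcond=None)
        for t, j in enumerate(J):
            S[j, i] = s[t]
    return S.tocsr()

# toy usage: with M = I this reduces to the Section 10.5.1 setting A S ~ I
n = 5
A = sp.csr_matrix(np.diag([2.0] * n) + np.diag([-1.0] * (n - 1), 1)
                  + np.diag([-1.0] * (n - 1), -1))
M = sp.identity(n, format="csr")
pattern = [sorted({max(0, i - 1), i, min(n - 1, i + 1)}) for i in range(n)]
S = approx_inverse_columns(A, M, pattern)
print("residual:", np.linalg.norm((A @ S - M).toarray()))
```

Since each column is an independent least-squares problem, the columns can be computed in parallel, which is one of the attractions of this family of methods.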

BLOCK PRECONDITIONERS

Block preconditioning is a popular technique for block-tridiagonal matrices arising from

the discretization of elliptic problems. It can also be generalized to other sparse matrices.

We begin with a discussion of the block-tridiagonal case.
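For concreteness, here is a sketch (with my own choice of a small model problem) of the block-tridiagonal matrix produced by the 5-point discretization of the Laplacian on an $m \times m$ grid: the diagonal blocks are tridiagonal and the off-diagonal blocks are $-I$.

```python
import numpy as np
import scipy.sparse as sp

m = 4                                                      # grid points per direction
T = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(m, m))  # tridiagonal diagonal block
E = sp.diags([-1.0, -1.0], [-1, 1], shape=(m, m))          # couples adjacent block rows
A = sp.kron(sp.identity(m), T) + sp.kron(E, sp.identity(m))

assert A.shape == (m * m, m * m)
assert np.allclose(A.toarray(), A.toarray().T)             # symmetric
assert (A.diagonal() == 4).all()                           # diagonals of the T blocks
print(A.toarray()[:m, :2 * m])                             # first block row: [T, -I, 0, ...]
```

Block preconditioners for such matrices operate on these $m \times m$ blocks rather than on individual entries.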