derived for some common forms of the preconditioner in the special situation where $A$ is symmetric. Write $A$ in the form
\[
A = D_0 - E - E^T
\tag{9.4}
\]
in which $-E$ is the strict lower triangular part of $A$ and $D_0$ its diagonal. In many cases, the preconditioner $M$ can be written in the form

\[
M = (D - E)\, D^{-1} (D - E^T)
\tag{9.5}
\]
in which $-E$ is the same as above and $D$ is some diagonal, not necessarily equal to $D_0$. For example, in the SSOR preconditioner with $\omega = 1$, $D = D_0$. Also, for certain types of matrices, the IC(0) preconditioner can be expressed in this manner, where $D$ can be obtained by a recurrence formula.
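As a quick numerical sanity check (not from the text), the splitting (9.4) and the form (9.5) can be verified on a small symmetric positive definite example; the matrix below is hypothetical, and $D = D_0$ as in SSOR with $\omega = 1$:

```python
import numpy as np

# A small symmetric positive definite test matrix (hypothetical example).
A = np.array([[4.0, -1.0,  0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0,  4.0]])

# Split A = D0 - E - E^T: -E is the strict lower part, D0 the diagonal.
D0 = np.diag(np.diag(A))
E = -np.tril(A, k=-1)
assert np.allclose(A, D0 - E - E.T)

# SSOR preconditioner with omega = 1: M = (D - E) D^{-1} (D - E^T), D = D0.
D = D0
M = (D - E) @ np.linalg.inv(D) @ (D - E.T)

# M is symmetric, and expanding the product gives M = A + E D^{-1} E^T.
assert np.allclose(M, M.T)
assert np.allclose(M, A + E @ np.linalg.inv(D) @ E.T)
```

The last identity shows in what sense $M$ approximates $A$: they differ only by the term $E D^{-1} E^T$.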

Eisenstat's implementation consists of applying the Conjugate Gradient algorithm to the linear system
\[
\hat{A} \hat{u} = (D - E)^{-1} b
\tag{9.6}
\]
with
\[
\hat{A} \equiv (D - E)^{-1} A (D - E^T)^{-1}, \qquad \hat{u} = (D - E^T) u .
\tag{9.7}
\]
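The change of variables in (9.6)-(9.7) can be checked numerically; here is a minimal dense sketch (toy data, with $D$ taken equal to the diagonal of $A$):

```python
import numpy as np

# Verify that solving the transformed system (9.6) and undoing the
# change of variable (9.7) recovers the solution of A u = b.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))
E = -np.tril(A, k=-1)
L = D - E                      # D - E
U = D - E.T                    # D - E^T

A_hat = np.linalg.solve(L, A) @ np.linalg.inv(U)   # (D-E)^{-1} A (D-E^T)^{-1}
b_hat = np.linalg.solve(L, b)                      # (D-E)^{-1} b

u_hat = np.linalg.solve(A_hat, b_hat)  # solve A_hat u_hat = b_hat
u = np.linalg.solve(U, u_hat)          # u = (D - E^T)^{-1} u_hat
assert np.allclose(A @ u, b)

# A_hat is symmetric: (D - E^T)^T = D - E when A is symmetric.
assert np.allclose(A_hat, A_hat.T)
```

Note that $\hat{A}$ is again symmetric, since $(D - E^T)^T = D - E$, so CG is applicable to (9.6).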


This does not quite correspond to a preconditioning with the matrix (9.5). In order to produce the same iterates as Algorithm 9.1, the matrix $\hat{A}$ must be further preconditioned with the diagonal matrix $D^{-1}$. Thus, the preconditioned CG algorithm, Algorithm 9.1, is actually applied to the system (9.6) in which the preconditioning operation is $M^{-1} = D$. Alternatively, we can initially scale the rows and columns of the linear system and preconditioning to transform the diagonal to the identity. See Exercise 6.

Now note that
\[
\begin{aligned}
\hat{A} &= (D - E)^{-1} (D_0 - E - E^T)(D - E^T)^{-1} \\
&= (D - E)^{-1} \left( D_1 + (D - E) + (D - E^T) \right) (D - E^T)^{-1} \\
&= (D - E)^{-1} D_1 (D - E^T)^{-1} + (D - E)^{-1} + (D - E^T)^{-1},
\end{aligned}
\]
in which $D_1 \equiv D_0 - 2D$. As a result,
\[
\hat{A} v = (D - E)^{-1} \left[ v + D_1 (D - E^T)^{-1} v \right] + (D - E^T)^{-1} v .
\]

Thus, the vector $w = \hat{A} v$ can be computed by the following procedure:
\[
\begin{aligned}
z &:= (D - E^T)^{-1} v \\
w &:= (D - E)^{-1} (v + D_1 z) \\
w &:= w + z .
\end{aligned}
\]
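The three-step procedure above costs essentially two triangular solves and one diagonal product per application of $\hat{A}$. A minimal sketch (toy data, not from the text, with $D = D_0$ so that $D_1 = -D$):

```python
import numpy as np
from scipy.linalg import solve_triangular

# The three-step product w = A_hat v, using two triangular solves.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
v = np.array([1.0, -2.0, 0.5])

d = np.diag(A)                 # D = D0 here, stored as a vector
E = -np.tril(A, k=-1)
d1 = d - 2.0 * d               # D1 = D0 - 2D (so D1 = -D in this case)

L = np.diag(d) - E             # D - E,   lower triangular
U = np.diag(d) - E.T           # D - E^T, upper triangular

z = solve_triangular(U, v, lower=False)          # z := (D - E^T)^{-1} v
w = solve_triangular(L, v + d1 * z, lower=True)  # w := (D - E)^{-1}(v + D1 z)
w = w + z                                        # w := w + z

# Check against forming A_hat = (D-E)^{-1} A (D-E^T)^{-1} explicitly.
A_hat = np.linalg.solve(L, A) @ np.linalg.inv(U)
assert np.allclose(w, A_hat @ v)
```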

One product with the diagonal $D$ can be saved if the matrices $D^{-1}E$ and $D^{-1}E^T$ are stored. Indeed, by setting $\hat{D}_1 = D^{-1} D_1$ and $\hat{v} = D^{-1} v$, the above procedure can be reformulated as follows.

ALGORITHM 9.3: Computation of $w = \hat{A} v$
1. $\hat{v} := D^{-1} v$
2. $z := (I - D^{-1} E^T)^{-1} \hat{v}$
3. $w := (I - D^{-1} E)^{-1} (\hat{v} + \hat{D}_1 z)$
4. $w := w + z$.
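The scaled variant can be sketched the same way; the example below (toy data, again with $D = D_0$) stores the row-scaled factors $I - D^{-1}E$ and $I - D^{-1}E^T$ and reproduces the unscaled product:

```python
import numpy as np
from scipy.linalg import solve_triangular

# Steps 1-4 of the scaled procedure, compared against A_hat v.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
v = np.array([1.0, -2.0, 0.5])

d = np.diag(A)
E = -np.tril(A, k=-1)
d1_hat = (d - 2.0 * d) / d           # D_hat_1 = D^{-1}(D0 - 2D)

I = np.eye(3)
L_hat = I - E / d[:, None]           # I - D^{-1}E  (scale rows of E by 1/d)
U_hat = I - E.T / d[:, None]         # I - D^{-1}E^T

v_hat = v / d                                                # 1.
z = solve_triangular(U_hat, v_hat, lower=False)              # 2.
w = solve_triangular(L_hat, v_hat + d1_hat * z, lower=True)  # 3.
w = w + z                                                    # 4.

# Same result as applying A_hat = (D-E)^{-1} A (D-E^T)^{-1} directly.
A_hat = np.linalg.solve(np.diag(d) - E, A) @ np.linalg.inv(np.diag(d) - E.T)
assert np.allclose(w, A_hat @ v)
```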

Note that the matrices $D^{-1}E$ and $D^{-1}E^T$ are not the transpose of one another, so we actually need to increase the storage requirement for this formulation if these matrices are stored. However, there is a more economical variant which works with the matrix $D^{-1/2} E D^{-1/2}$ and its transpose. This is left as Exercise 7.

Denoting by $Nz(X)$ the number of nonzero elements of a sparse matrix $X$, the total