ALGORITHM 9.2  Split Preconditioner Conjugate Gradient

1. Compute $r_0 := b - Ax_0$; $\hat{r}_0 = L^{-1} r_0$; and $p_0 := L^{-T}\hat{r}_0$.
2. For $j = 0, 1, \ldots$, until convergence Do:
3.     $\alpha_j := (\hat{r}_j, \hat{r}_j)/(Ap_j, p_j)$
4.     $x_{j+1} := x_j + \alpha_j p_j$
5.     $\hat{r}_{j+1} := \hat{r}_j - \alpha_j L^{-1} A p_j$
6.     $\beta_j := (\hat{r}_{j+1}, \hat{r}_{j+1})/(\hat{r}_j, \hat{r}_j)$
7.     $p_{j+1} := L^{-T}\hat{r}_{j+1} + \beta_j p_j$
8. EndDo

The iterates produced by the above algorithm and Algorithm 9.1 are identical, provided

the same initial guess is used.
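This equivalence is easy to check numerically. The following NumPy sketch (the small dense SPD test matrix and the Jacobi preconditioner $M = LL^T$ are assumptions chosen purely for illustration) runs the left preconditioned CG of Algorithm 9.1 next to the split preconditioner version above and compares all iterates:

```python
import numpy as np

def pcg_left(A, b, M, x0, iters):
    """Algorithm 9.1: left preconditioned Conjugate Gradient."""
    x = x0.copy()
    r = b - A @ x
    z = np.linalg.solve(M, r)          # z_0 = M^{-1} r_0
    p = z.copy()
    xs = [x.copy()]
    for _ in range(iters):
        Ap = A @ p
        rz = r @ z                     # (r_j, z_j)
        alpha = rz / (Ap @ p)
        x = x + alpha * p
        r = r - alpha * Ap
        z = np.linalg.solve(M, r)
        beta = (r @ z) / rz
        p = z + beta * p
        xs.append(x.copy())
    return xs

def pcg_split(A, b, L, x0, iters):
    """Split preconditioner CG for M = L L^T (the algorithm above)."""
    x = x0.copy()
    r = b - A @ x
    rh = np.linalg.solve(L, r)         # r_hat_0 = L^{-1} r_0
    p = np.linalg.solve(L.T, rh)       # p_0 = L^{-T} r_hat_0
    xs = [x.copy()]
    for _ in range(iters):
        Ap = A @ p
        rr = rh @ rh                   # (r_hat_j, r_hat_j)
        alpha = rr / (Ap @ p)
        x = x + alpha * p
        rh = rh - alpha * np.linalg.solve(L, Ap)
        beta = (rh @ rh) / rr
        p = np.linalg.solve(L.T, rh) + beta * p
        xs.append(x.copy())
    return xs

rng = np.random.default_rng(0)
n = 30
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)            # SPD test matrix (an assumption)
M = np.diag(np.diag(A))                # Jacobi preconditioner (an assumption)
L = np.linalg.cholesky(M)              # M = L L^T
b = rng.standard_normal(n)
x0 = np.zeros(n)

xs1 = pcg_left(A, b, M, x0, 10)
xs2 = pcg_split(A, b, L, x0, 10)
err = max(np.linalg.norm(u - v) for u, v in zip(xs1, xs2))
print(err)                             # agreement up to roundoff
```

The two iterate sequences agree to machine precision, which is the point of the statement above: splitting the preconditioner buys symmetry of the iteration matrix but does not change the iterates.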

Consider now the right preconditioned system (9.2). The matrix $AM^{-1}$ is not Hermitian with either the standard inner product or the $M$-inner product. However, it is Hermitian with respect to the $M^{-1}$-inner product. If the CG algorithm is written with respect to the $u$-variable and for this new inner product, the following sequence of operations would be obtained, ignoring again the initial step:

1. $\alpha_j := (r_j, r_j)_{M^{-1}} / (AM^{-1}p_j, p_j)_{M^{-1}}$
2. $u_{j+1} := u_j + \alpha_j p_j$
3. $r_{j+1} := r_j - \alpha_j A M^{-1} p_j$
4. $\beta_j := (r_{j+1}, r_{j+1})_{M^{-1}} / (r_j, r_j)_{M^{-1}}$
5. $p_{j+1} := r_{j+1} + \beta_j p_j$.

Recall that the vectors $u_j$ and the vectors $x_j$ are related by $x_j = M^{-1} u_j$. Since the vectors $u_j$ are not actually needed, the update for $u_{j+1}$ in the second step can be replaced by $x_{j+1} := x_j + \alpha_j M^{-1} p_j$. Then observe that the whole algorithm can be recast in terms of $q_j = M^{-1} p_j$ and $z_j = M^{-1} r_j$.

1. $\alpha_j := (z_j, r_j)/(Aq_j, q_j)$ and $x_{j+1} := x_j + \alpha_j q_j$
2. $r_{j+1} := r_j - \alpha_j A q_j$ and $z_{j+1} := M^{-1} r_{j+1}$
3. $\beta_j := (z_{j+1}, r_{j+1})/(z_j, r_j)$
4. $q_{j+1} := z_{j+1} + \beta_j q_j$.

Notice that the same sequence of computations is obtained as with Algorithm 9.1, the left preconditioned Conjugate Gradient. The implication is that the left preconditioned CG algorithm with the $M$-inner product is mathematically equivalent to the right preconditioned CG algorithm with the $M^{-1}$-inner product.
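This equivalence can also be verified numerically. The sketch below (a small dense SPD test matrix and a Jacobi preconditioner, both assumptions for illustration) runs Algorithm 9.1 alongside CG applied to $AM^{-1}u = b$ in the $u$-variable with the $M^{-1}$-inner product, recovers $x = M^{-1}u$, and compares the iterates:

```python
import numpy as np

def pcg_left(A, b, M, x0, iters):
    """Algorithm 9.1: left preconditioned Conjugate Gradient."""
    x = x0.copy()
    r = b - A @ x
    z = np.linalg.solve(M, r)
    p = z.copy()
    xs = [x.copy()]
    for _ in range(iters):
        Ap = A @ p
        rz = r @ z
        alpha = rz / (Ap @ p)
        x = x + alpha * p
        r = r - alpha * Ap
        z = np.linalg.solve(M, r)
        beta = (r @ z) / rz
        p = z + beta * p
        xs.append(x.copy())
    return xs

def cg_right_minv(A, b, M, x0, iters):
    """CG on A M^{-1} u = b in the u-variable, using the M^{-1}-inner product.
    The x iterates are recovered through x = M^{-1} u."""
    u = M @ x0
    r = b - A @ x0                     # residual of A M^{-1} u = b at u_0
    p = r.copy()                       # initial search direction
    xs = [x0.copy()]
    for _ in range(iters):
        Minv_p = np.linalg.solve(M, p)
        Minv_r = np.linalg.solve(M, r)
        rr = Minv_r @ r                # (r_j, r_j)_{M^{-1}}
        AMp = A @ Minv_p
        alpha = rr / (AMp @ Minv_p)    # (A M^{-1} p_j, p_j)_{M^{-1}}
        u = u + alpha * p
        r = r - alpha * AMp
        beta = (np.linalg.solve(M, r) @ r) / rr
        p = r + beta * p
        xs.append(np.linalg.solve(M, u))
    return xs

rng = np.random.default_rng(0)
n = 25
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)            # SPD test matrix (an assumption)
M = np.diag(np.diag(A))                # Jacobi preconditioner (an assumption)
b = rng.standard_normal(n)
x0 = np.zeros(n)

xs_l = pcg_left(A, b, M, x0, 8)
xs_r = cg_right_minv(A, b, M, x0, 8)
diff = max(np.linalg.norm(u - v) for u, v in zip(xs_l, xs_r))
print(diff)                            # agreement up to roundoff
```

The $u$-variable version above wastes several extra applications of $M^{-1}$ per step; the recast form in $q_j$ and $z_j$ removes them, which is precisely why it coincides operation-for-operation with Algorithm 9.1.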

9.2.2  Efficient Implementations

When applying a Krylov subspace procedure to a preconditioned linear system, an operation of the form

$v \rightarrow w = M^{-1} A v$

or some similar operation is performed at each step. The most natural way to perform this operation is to multiply the vector $v$ by $A$ and then apply $M^{-1}$ to the result. However, since $A$ and $M$ are related, it is sometimes possible to devise procedures that are more economical than this straightforward approach. For example, it is often the case that

$M = A - R$

in which the number of nonzero elements in $R$ is much smaller than in $A$. In this case, the simplest scheme would be to compute $w = M^{-1}Av$ as

$w = M^{-1} A v = M^{-1}(M + R)v = v + M^{-1} R v.$
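As a sanity check of this identity, the following sketch (the dense test matrix and the choice of $M$ as its tridiagonal part are assumptions for illustration) compares the straightforward evaluation of $M^{-1}Av$ with the cheaper form $v + M^{-1}Rv$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant test matrix
M = np.triu(np.tril(A, 1), -1)                   # tridiagonal part of A as M (an assumption)
R = A - M                                        # dropped elements, so that A = M + R
v = rng.standard_normal(n)

w_direct = np.linalg.solve(M, A @ v)             # multiply by A, then apply M^{-1}
w_cheap = v + np.linalg.solve(M, R @ v)          # w = M^{-1}(M + R)v = v + M^{-1} R v
print(np.linalg.norm(w_direct - w_cheap))        # agree to roundoff
```

In practice $R$ would be stored as a sparse matrix, so the product $Rv$ costs far fewer operations than $Av$, which is the source of the savings.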

This requires that $R$ be stored explicitly. In approximate factorization techniques, $R$ is the matrix of the elements that are dropped during the incomplete factorization. An