7. EndDo
8. Compute $h_{j+1,j} = \|w\|_2$ and $v_{j+1} = w / h_{j+1,j}$
9. EndDo
10. Define $V_m := [v_1, \ldots, v_m]$, $\bar{H}_m = \{h_{i,j}\}_{1 \le i \le j+1,\ 1 \le j \le m}$
11. Compute $y_m = \operatorname{argmin}_y \|\beta e_1 - \bar{H}_m y\|_2$, and $x_m = x_0 + V_m y_m$
12. If satisfied Stop, else set $x_0 := x_m$ and GoTo 1
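The steps above can be sketched in NumPy as follows. This is a minimal illustration of left-preconditioned restarted GMRES, not a verbatim transcription of the algorithm statement; the function name `gmres_left`, the `M_solve` callable, and the restart cap are assumptions made for the sketch.

```python
import numpy as np

def gmres_left(A, b, M_solve, x0, m=20, tol=1e-10, max_restarts=50):
    """Left-preconditioned restarted GMRES (sketch).

    A       : (n, n) array
    b       : (n,) right-hand side
    M_solve : callable v -> M^{-1} v (application of the preconditioner)
    m       : restart dimension
    """
    n = b.shape[0]
    x = x0.astype(float).copy()
    for _ in range(max_restarts):                 # outer loop: "GoTo 1" on restart
        r0 = M_solve(b - A @ x)                   # preconditioned residual
        beta = np.linalg.norm(r0)
        if beta < tol:                            # note: tests the *preconditioned* norm
            break
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = r0 / beta
        k = m
        for j in range(m):                        # Arnoldi loop, modified Gram-Schmidt
            w = M_solve(A @ V[:, j])
            for i in range(j + 1):
                H[i, j] = w @ V[:, i]
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)       # step 8
            if H[j + 1, j] < 1e-14:               # lucky breakdown
                k = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(k + 1)
        e1[0] = beta
        # step 11: y_m = argmin_y || beta*e1 - H_bar y ||_2
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + V[:, :k] @ y
    return x
```

The least-squares problem in step 11 is solved here with a dense `lstsq` call for brevity; practical implementations update a QR factorization of $\bar{H}_m$ with Givens rotations instead.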

The Arnoldi loop constructs an orthogonal basis of the left preconditioned Krylov subspace

$$\operatorname{Span}\{ r_0,\; M^{-1}A r_0,\; \ldots,\; (M^{-1}A)^{m-1} r_0 \}.$$

It uses a modified Gram-Schmidt process, in which the new vector to be orthogonalized is obtained from the previous vector in the process. All residual vectors and their norms that are computed by the algorithm correspond to the preconditioned residuals, namely, $z_m = M^{-1}(b - A x_m)$, instead of the original (unpreconditioned) residuals $b - A x_m$. In addition, there is no easy access to these unpreconditioned residuals, unless they are computed explicitly, e.g., by multiplying the preconditioned residuals by $M$. This can cause some difficulties if a stopping criterion based on the actual residuals, instead of the preconditioned ones, is desired.
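As a small numerical illustration of this point, the actual residual can be recovered from the tracked preconditioned residual by one extra application of $M$. The matrices and the diagonal (Jacobi) preconditioner below are illustrative assumptions:

```python
import numpy as np

# With left preconditioning the algorithm tracks z = M^{-1}(b - A x);
# the actual residual b - A x must be recovered explicitly as M z.
rng = np.random.default_rng(1)
n = 5
A = np.eye(n) * 3 + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = rng.standard_normal(n)           # some current iterate
M = np.diag(np.diag(A))              # assumed diagonal (Jacobi) preconditioner

z = np.linalg.solve(M, b - A @ x)    # preconditioned residual, as in the algorithm
r = M @ z                            # explicit recovery of the true residual

assert np.allclose(r, b - A @ x)     # the norms of r and z generally differ
```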

Sometimes a Symmetric Positive Definite preconditioning $M$ for the nonsymmetric matrix $A$ may be available. For example, if $A$ is almost SPD, then (9.8) would not take advantage of this. It would be wiser to compute an approximate factorization to the symmetric part and use GMRES with split preconditioning. This raises the question as to whether or not a version of the preconditioned GMRES can be developed, which is similar to Algorithm 9.1, for the CG algorithm. This version would consist of using GMRES with the $M$-inner product for the system (9.8).

At step $j$ of the preconditioned GMRES algorithm, the previous $v_j$ is multiplied by $A$ to get a vector

$$w_j = A v_j. \tag{9.9}$$

Then this vector is preconditioned to get

$$z_j = M^{-1} w_j. \tag{9.10}$$

This vector must be $M$-orthogonalized against all previous $v_i$'s. If the standard Gram-Schmidt process is used, we first compute the inner products

$$h_{ij} = (z_j, v_i)_M = (M z_j, v_i) = (w_j, v_i), \qquad i = 1, \ldots, j, \tag{9.11}$$

and then modify the vector $z_j$ into the new vector

$$\hat{z}_j := z_j - \sum_{i=1}^{j} h_{ij} v_i. \tag{9.12}$$

To complete the orthonormalization step, the final $\hat{z}_j$ must be normalized. Because of the $M$-orthogonality of $\hat{z}_j$ versus all previous $v_i$'s, observe that

$$(\hat{z}_j, \hat{z}_j)_M = (z_j, \hat{z}_j)_M = (M z_j, \hat{z}_j) = (w_j, \hat{z}_j). \tag{9.13}$$
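The chain of equalities $(\hat{z}_j, \hat{z}_j)_M = (z_j, \hat{z}_j)_M = (M z_j, \hat{z}_j) = (w_j, \hat{z}_j)$ can be checked numerically on a single orthogonalization step; the diagonal SPD $M$ and the variable names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((n, n))
M = np.diag(rng.uniform(1.0, 2.0, n))        # assumed SPD (diagonal) preconditioner

# One orthogonalization step against a single M-normalized vector v1.
v1 = rng.standard_normal(n)
v1 /= np.sqrt(v1 @ M @ v1)                   # (v1, v1)_M = 1
w = A @ v1                                   # w_1 = A v_1
z = np.linalg.solve(M, w)                    # z_1 = M^{-1} w_1
h11 = (M @ z) @ v1                           # (z, v1)_M = (M z, v1) = (w, v1)
zh = z - h11 * v1                            # hat z_1, M-orthogonal to v1

lhs = zh @ M @ zh                            # (zh, zh)_M
assert np.isclose(lhs, (M @ z) @ zh)         # = (M z, zh)
assert np.isclose(lhs, w @ zh)               # = (w, zh), since M z = w
```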


Thus, the desired $M$-norm could be obtained from (9.13), and then we would set

$$h_{j+1,j} := (\hat{z}_j, w_j)^{1/2} \qquad \text{and} \qquad v_{j+1} = \hat{z}_j / h_{j+1,j}. \tag{9.14}$$

One serious difficulty with the above procedure is that the inner product $(\hat{z}_j, \hat{z}_j)_M$ as computed by (9.13) may be negative in the presence of round-off. There are two remedies. First, this $M$-norm can be computed explicitly at the expense of an additional matrix-vector multiplication with $M$. Second, the set of vectors $M v_i$ can be saved in order to accumulate inexpensively both the vector $\hat{z}_j$ and the vector $M \hat{z}_j$, via the relation

$$M \hat{z}_j = w_j - \sum_{i=1}^{j} h_{ij}\, M v_i. \tag{9.15}$$

A modified Gram-Schmidt version of this second approach can be derived easily. The
details of the algorithm are left as Exercise 12.
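The second remedy, saving the vectors $M v_i$ and updating $M \hat{z}_j$ alongside $\hat{z}_j$, can be sketched as a single modified Gram-Schmidt Arnoldi step in the $M$-inner product. The function name, argument layout, and the diagonal SPD $M$ used below are illustrative assumptions, not the exercise's prescribed form:

```python
import numpy as np

def m_arnoldi_step(A, M, V, MV):
    """One Arnoldi step in the M-inner product, modified Gram-Schmidt form.

    V  : list of M-orthonormal vectors v_1 .. v_j
    MV : list of the saved products M v_1 .. M v_j
    The final M-norm is taken from (M z, z) without an extra multiplication
    by M, since M z is accumulated alongside z.
    """
    w = A @ V[-1]                  # w_j = A v_j
    z = np.linalg.solve(M, w)      # z_j = M^{-1} w_j
    mz = w.copy()                  # running value of M z_j (initially M z_j = w_j)
    h = np.zeros(len(V) + 1)
    for i, (v, mv) in enumerate(zip(V, MV)):
        h[i] = mz @ v              # h_ij = (z, v_i)_M = (M z, v_i)
        z = z - h[i] * v           # z  := z  - h_ij v_i
        mz = mz - h[i] * mv        # M z := M z - h_ij (M v_i), keeping M z current
    h[-1] = np.sqrt(mz @ z)        # M-norm of hat z_j; may need a round-off safeguard
    return z / h[-1], mz / h[-1], h
```

Repeated calls extend an $M$-orthonormal basis, with each step returning $v_{j+1}$, the saved product $M v_{j+1}$, and the new column of $\bar{H}$.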
