
A consequence of the above proposition is that if $z_j = A^{-1} v_j$ at a certain step, i.e., if the preconditioning is "exact," then the approximation $x_j$ will be exact provided that $H_j$ is nonsingular. This is because $A z_j$ would depend linearly on the previous $v_i$'s (it is equal to $v_j$), and as a result the orthogonalization process would yield $h_{j+1,j} = 0$.

A difficulty with the theory of the new algorithm is that general convergence results, such as those seen in earlier chapters, cannot be proved. That is because the subspace of approximants is no longer a standard Krylov subspace. However, the optimality property of Proposition 9.2 can be exploited in some specific situations. For example, if within each outer iteration at least one of the vectors $z_j$ is chosen to be a steepest descent direction vector, e.g., for the function $F(x) = \|b - Ax\|_2^2$, then FGMRES is guaranteed to converge independently of $m$.
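The steepest descent direction just mentioned is easy to form explicitly. The following minimal sketch (the function name is ours, not the book's) returns the negative gradient of $F(x) = \|b - Ax\|_2^2$ up to the constant factor 2, which is irrelevant for a direction vector:

```python
import numpy as np

def steepest_descent_z(A, b, x):
    # -grad F(x) = 2 A^T (b - Ax) for F(x) = ||b - Ax||_2^2;
    # the factor 2 is dropped since only the direction matters.
    return A.T @ (b - A @ x)
```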

The additional cost of the flexible variant over the standard algorithm is only in the extra memory required to save the set of vectors $z_1, \ldots, z_m$. Yet, the added advantage of flexibility may be worth this extra cost. A few applications can benefit from this flexibility, especially in developing robust iterative methods or preconditioners on parallel computers. Thus, any iterative technique can be used as a preconditioner: block-SOR, SSOR, ADI, Multi-grid, etc. More interestingly, iterative procedures such as GMRES, CGNR, or CGS can also be used as preconditioners. Also, it may be useful to mix two or more preconditioners to solve a given problem. For example, two types of preconditioners can be applied alternatively at each FGMRES step to mix the effects of "local" and "global" couplings in the PDE context.
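To make the storage cost concrete, here is a minimal dense-matrix sketch of one FGMRES cycle. It is not the book's pseudocode; the names, the modified Gram-Schmidt loop, and the least-squares solve via `lstsq` are our choices. Each step $j$ applies its own preconditioner $M_j^{-1}$, and the vectors $z_j$ are kept in the array `Z`, which is exactly the extra memory discussed above:

```python
import numpy as np

def fgmres(A, b, preconditioners, x0=None, tol=1e-10):
    """One cycle of flexible GMRES with m = len(preconditioners) steps.
    preconditioners[j] applies M_j^{-1} to a vector; all z_j are stored."""
    n = b.size
    m = len(preconditioners)
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0
    V = np.zeros((n, m + 1))
    Z = np.zeros((n, m))              # the extra storage: z_1, ..., z_m
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / beta
    for j in range(m):
        Z[:, j] = preconditioners[j](V[:, j])   # z_j = M_j^{-1} v_j
        w = A @ Z[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = w @ V[:, i]
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < tol:                   # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    # minimize || beta*e1 - H y ||_2 and form x_m = x0 + Z_m y
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return x0 + Z[:, :m] @ y
```

Note how the preconditioner is free to change at every step; only the final combination $x_m = x_0 + Z_m y$ requires the saved $z_j$'s.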

9.4.2 DQGMRES

Recall that the DQGMRES algorithm presented in Chapter 6 uses an incomplete orthogonalization process instead of the full Arnoldi orthogonalization. At each step, the current vector is orthogonalized only against the $k$ previous ones. The vectors thus generated are "locally" orthogonal to each other, in that $(v_i, v_j) = \delta_{ij}$ for $|i - j| < k$. The matrix $\bar H_m$ becomes banded and upper Hessenberg. Therefore, the approximate solution can be updated at step $j$ from the approximate solution at step $j-1$ via the recurrence

\[
x_j = x_{j-1} + \gamma_j p_j, \qquad
p_j = \frac{1}{r_{jj}} \Big[ v_j - \sum_{i=j-k+1}^{j-1} r_{ij}\, p_i \Big]
\tag{9.27}
\]

in which the scalars $\gamma_j$ and $r_{ij}$ are obtained recursively from the Hessenberg matrix $\bar H_j$.
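In code, the short recurrence for $p_j$ only needs the $k-1$ most recent direction vectors. The following small sketch illustrates this (the names and the window layout are ours, not the book's); the caller then updates $x_j = x_{j-1} + \gamma_j p_j$:

```python
import numpy as np

def dqgmres_direction(v_j, r_col, p_window):
    # Computes p_j = ( v_j - sum_{i=j-k+1}^{j-1} r_ij p_i ) / r_jj.
    # p_window holds the k-1 most recent direction vectors p_i (oldest
    # first); r_col holds the matching scalars r_ij followed by r_jj.
    *r_prev, r_jj = r_col
    p = v_j.astype(float)
    for r_ij, p_i in zip(r_prev, p_window):
        p -= r_ij * p_i
    return p / r_jj
```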

An advantage of DQGMRES is that it is also flexible. The principle is the same as in FGMRES. In both cases the vectors $z_j = M_j^{-1} v_j$ must be computed. In the case of FGMRES, these vectors must be saved and this requires extra storage. For DQGMRES, it can be observed that the preconditioned vectors $z_j$ only affect the update of the vector $p_j$ in the preconditioned version of the update formula (9.27), yielding

\[
p_j = \frac{1}{r_{jj}} \Big[ M_j^{-1} v_j - \sum_{i=j-k+1}^{j-1} r_{ij}\, p_i \Big].
\]

As a result, $M_j^{-1} v_j$ can be discarded immediately after it is used to update $p_j$. The same memory locations can store this vector and the vector $p_j$. This contrasts with FGMRES, which requires $m$ additional vectors of storage.
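This memory-saving observation can be shown directly in code: the buffer that receives $z_j = M_j^{-1} v_j$ is overwritten in place until it holds $p_j$, so $z_j$ never needs separate long-term storage. A sketch (function names are ours; `Minv_j` is assumed to return a fresh floating-point array so the in-place updates are safe):

```python
import numpy as np

def flexible_dqgmres_direction(v_j, Minv_j, r_col, p_window):
    buf = Minv_j(v_j)                 # buf now holds z_j = M_j^{-1} v_j
    *r_prev, r_jj = r_col
    for r_ij, p_i in zip(r_prev, p_window):
        buf -= r_ij * p_i             # buf accumulates z_j - sum r_ij p_i
    buf /= r_jj                       # buf now holds p_j; z_j is discarded
    return buf
```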

9.5 PRECONDITIONED CG FOR THE NORMAL EQUATIONS

There are several versions of the preconditioned Conjugate Gradient method applied to the normal equations. Two versions come from the NR/NE options, and three other variations from the right, left, or split preconditioning options. Here, we consider only the left preconditioned variants.
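As a concrete sketch of the left-preconditioned NR approach (this is not the book's algorithm, just the standard PCG loop applied to $A^{\mathsf T} A x = A^{\mathsf T} b$ with an SPD preconditioner $M$, applied here as `Minv`), note that $(p, A^{\mathsf T} A p) = \|Ap\|_2^2$, so $A^{\mathsf T} A$ is never formed explicitly:

```python
import numpy as np

def left_pcgnr(A, b, Minv, x0=None, tol=1e-10, maxiter=100):
    n = A.shape[1]
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                   # residual of the original system
    rt = A.T @ r                    # residual of the normal equations
    z = Minv(rt)
    p = z.copy()
    rz = rt @ z
    for _ in range(maxiter):
        w = A @ p
        alpha = rz / (w @ w)        # (p, A^T A p) = ||Ap||^2
        x += alpha * p
        r -= alpha * w
        rt = A.T @ r
        if np.linalg.norm(rt) < tol:
            break
        z = Minv(rt)
        rz_new = rt @ z
        beta = rz_new / rz
        rz = rz_new
        p = z + beta * p
    return x
```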

The left preconditioned CGNR algorithm is easily derived from Algorithm 9.1. Denote by $r_j = b - A x_j$ the residual for the original system and by $\tilde r_j = A^{\mathsf T} r_j$ the residual for the normal equations.