to Meijerink and van der Vorst [149]. Eisenstat's implementation was developed in [80] and is often referred to as Eisenstat's trick. A number of other similar ideas are described in [153]. Several flexible variants of nonsymmetric Krylov subspace methods have been developed by several authors simultaneously; see, e.g., [18], [181], and [211]. There does not seem to exist a similar technique for left preconditioned variants of the Krylov subspace methods. This is because the preconditioned operator $M_j^{-1} A$ now changes at each step. Similarly, no flexible variants have been developed for the

BCG-based methods, because the short recurrences of these algorithms rely on the preconditioned

operator being constant.

The CGW algorithm can be useful in some instances, such as when the symmetric part of $A$ can

be inverted easily, e.g., using fast Poisson solvers. Otherwise, its weakness is that linear systems with

the symmetric part must be solved exactly. Inner-outer variations that do not require exact solutions

have been described by Golub and Overton [109].

PRECONDITIONING TECHNIQUES

Roughly speaking, a preconditioner is any form of implicit or explicit modification of an original linear system which makes it "easier" to solve by a given iterative method. For example, scaling all rows of a linear system to make the diagonal elements equal to one is an explicit form of preconditioning. The resulting system can be solved by a Krylov subspace method and may require fewer steps to converge than with the original system (although this is not guaranteed). As another example, solving the linear system

$$ M^{-1} A x = M^{-1} b, $$

where $M^{-1}$ is some complicated mapping that may involve FFT transforms, integral calculations, and subsidiary linear system solutions, may be another form of preconditioning. Here, it is unlikely that the matrices $M$ and $M^{-1} A$ can be computed explicitly. Instead, the iterative processes operate with $A$ and with $M^{-1}$ whenever needed. In practice, the preconditioning operation $M^{-1}$ should be inexpensive to apply to an arbitrary vector.
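The point that $M^{-1}$ need only be applied to vectors, rather than formed as a matrix, can be illustrated with a small hypothetical sketch: a preconditioned Richardson iteration $x \leftarrow x + M^{-1}(b - Ax)$ in which $M$ is simply the diagonal of $A$ (Jacobi preconditioning). The matrix and right-hand side below are made up for the illustration.

```python
import numpy as np

# Made-up example system: A is diagonally dominant, so the simple
# iteration below converges.  Neither M nor M^{-1}A is ever formed:
# the process only applies A and M^{-1} to vectors.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

def apply_A(v):
    return A @ v              # the operator A acting on a vector

def apply_Minv(v):
    return v / np.diag(A)     # M = diag(A): cheap to apply to any vector

# Preconditioned Richardson iteration: x <- x + M^{-1} (b - A x).
x = np.zeros_like(b)
for _ in range(50):
    x = x + apply_Minv(b - apply_A(x))

residual = np.max(np.abs(A @ x - b))   # tiny after 50 sweeps
```

A more realistic $M^{-1}$ (an incomplete factorization, an FFT-based solver) would replace `apply_Minv`; the outer iteration is unchanged.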

One of the simplest ways of defining a preconditioner is to perform an incomplete factorization of the original matrix $A$. This entails a decomposition of the form $A = LU - R$ where $L$ and $U$ have the same nonzero structure as the lower and upper parts of $A$ respectively, and $R$ is the residual or error of the factorization. This incomplete factorization known as ILU(0) is rather easy and inexpensive to compute. On the other hand, it often leads to a crude approximation which may result in the Krylov subspace accelerator requiring many iterations to converge. To remedy this, several alternative incomplete factorizations have been developed by allowing more fill-in in $L$ and $U$. In general, the more

accurate ILU factorizations require fewer iterations to converge, but the preprocessing cost

to compute the factors is higher. However, if only because of the improved robustness,

these trade-offs generally favor the more accurate factorizations. This is especially true

when several systems with the same matrix must be solved because the preprocessing cost

can be amortized.
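As a sketch of the ILU(0) idea, the factorization can be written as Gaussian elimination in which every update falling outside the nonzero pattern of $A$ is dropped. The matrix below is invented for the example, and the dense storage is only for clarity; real implementations work with sparse formats.

```python
import numpy as np

def ilu0(A):
    """ILU(0) sketch: Gaussian elimination restricted to the nonzero
    pattern of A; any fill-in outside that pattern is dropped."""
    n = A.shape[0]
    lu = A.astype(float).copy()
    pattern = A != 0
    for i in range(1, n):
        for k in range(i):
            if pattern[i, k]:
                lu[i, k] /= lu[k, k]              # multiplier goes into L
                for j in range(k + 1, n):
                    if pattern[i, j]:             # keep only existing nonzeros
                        lu[i, j] -= lu[i, k] * lu[k, j]
    L = np.tril(lu, -1) + np.eye(n)               # unit lower triangular
    U = np.triu(lu)
    return L, U

# Made-up test matrix whose exact LU factorization would create fill-in.
A = np.array([[4.0, 1.0, 0.0, 1.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [1.0, 0.0, 1.0, 4.0]])
L, U = ilu0(A)
R = L @ U - A   # zero on the pattern of A, nonzero where fill was dropped
```

The computed factors match $A$ exactly on its nonzero pattern; the residual matrix $R = LU - A$ lives only at the positions where fill-in was discarded.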

This chapter considers the most common preconditioners used for solving large sparse linear systems and compares their performance. It begins with the simplest preconditioners (SOR and SSOR) and then discusses the more accurate variants such as ILUT.

JACOBI, SOR, AND SSOR PRECONDITIONERS

As was seen in Chapter 4, a fixed-point iteration for solving a linear system

$$ A x = b $$

takes the general form

$$ x_{k+1} = M^{-1} N x_k + M^{-1} b, $$

where $M$ and $N$ realize the splitting of $A$ into

$$ A = M - N. $$

The above iteration is of the form

$$ x_{k+1} = G x_k + f $$
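The splitting iteration above can be sketched concretely. In this hypothetical example (the matrix is invented), $M$ is taken to be the lower triangular part of $A$, the Gauss-Seidel choice, so that applying $M^{-1}$ amounts to a cheap triangular solve.

```python
import numpy as np

# Made-up symmetric, diagonally dominant system for the illustration.
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

M = np.tril(A)   # Gauss-Seidel splitting: lower triangular part of A
N = M - A        # so that A = M - N

# Fixed-point iteration x_{k+1} = M^{-1} (N x_k + b).  Because M is
# triangular, applying M^{-1} is a forward substitution
# (np.linalg.solve is used here only for brevity).
x = np.zeros_like(b)
for _ in range(50):
    x = np.linalg.solve(M, N @ x + b)
```

Other choices of $M$ (the diagonal of $A$ for Jacobi, a weighted combination for SOR) fit the same template; only the triangular or diagonal solve changes.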