10. Assume that $\|b - Ax\|_2^2$ is to be minimized, in which $A$ is $n \times m$ with $n > m$. Let $x_*$ be the minimizer and $r = b - Ax_*$. What is the minimizer of $\|(b + \alpha r) - Ax\|_2^2$, where $\alpha$ is an arbitrary scalar?
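The answer can be verified numerically: at the least-squares minimizer $x_*$ of $\|b - Ax\|_2^2$, the residual $r = b - Ax_*$ satisfies $A^T r = 0$, so replacing $b$ by $b + \alpha r$ leaves the normal equations $A^T A x = A^T b$ unchanged, and the minimizer is again $x_*$. A pure-Python sketch (the $3 \times 2$ matrix, right-hand side, and $\alpha$ below are arbitrary choices for illustration):

```python
# Numerical check: the least-squares minimizer of ||b - A x||_2 is unchanged
# when b is replaced by b + alpha*r, where r = b - A x* is the residual.

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def lstsq_2col(A, b):
    """Solve the 2x2 normal equations A^T A x = A^T b by Cramer's rule."""
    a11 = sum(A[i][0] * A[i][0] for i in range(3))
    a12 = sum(A[i][0] * A[i][1] for i in range(3))
    a22 = sum(A[i][1] * A[i][1] for i in range(3))
    b1 = sum(A[i][0] * b[i] for i in range(3))
    b2 = sum(A[i][1] * b[i] for i in range(3))
    det = a11 * a22 - a12 * a12
    return [(b1 * a22 - b2 * a12) / det, (a11 * b2 - a12 * b1) / det]

A = [[1.0, 2.0], [3.0, 4.0], [5.0, 7.0]]   # arbitrary 3x2, full column rank
b = [1.0, -1.0, 2.0]
x_star = lstsq_2col(A, b)
Ax = matvec(A, x_star)
r = [b[i] - Ax[i] for i in range(3)]       # residual: A^T r = 0

alpha = 3.7                                 # arbitrary scalar
b_new = [b[i] + alpha * r[i] for i in range(3)]
x_new = lstsq_2col(A, b_new)

# The two minimizers coincide (up to rounding).
print(max(abs(x_star[i] - x_new[i]) for i in range(2)))
```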

NOTES AND REFERENCES. Methods based on the normal equations have been among the first to

be used for solving nonsymmetric linear systems [130, 58] by iterative methods. The work by Björck

and Elfving [27], and Sameh et al. [131, 37, 36] revived these techniques by showing that they have

some advantages from the implementation point of view, and that they can offer good performance

for a broad class of problems. In addition, they are also attractive for parallel computers. In [174], a

few preconditioning ideas for normal equations were described and these will be covered in Chapter

10. It would be helpful to be able to determine whether or not it is preferable to use the normal

equations approach rather than the “direct equations” for a given system, but this may require an

eigenvalue/singular value analysis.

It is sometimes argued that the normal equations approach is always better, because it has a

robust quality which outweighs the additional cost due to the slowness of the method in the generic

elliptic case. Unfortunately, this is not true. Although variants of the Kaczmarz and Cimmino algo-

rithms deserve a place in any robust iterative solution package, they cannot be viewed as a panacea. In

most realistic examples arising from Partial Differential Equations, the normal equations route gives

rise to much slower convergence than the Krylov subspace approach for the direct equations. For

ill-conditioned problems, these methods will simply fail to converge, unless a good preconditioner is

available.


PRECONDITIONED ITERATIONS

Lack of robustness is a widely recognized weakness of iterative solvers, relative to direct solvers. This drawback hampers the acceptance of iterative methods in industrial applications despite their intrinsic appeal for very large linear systems. Both the efficiency and robustness of iterative techniques can be improved by using preconditioning. A term introduced in Chapter 4, preconditioning is simply a means of transforming the original linear system into one which has the same solution, but which is likely to be easier to solve with an iterative solver. In general, the reliability of iterative techniques, when dealing with various applications, depends much more on the quality of the preconditioner than on the particular Krylov subspace accelerators used. We will cover some of these preconditioners in detail in the next chapter. This chapter discusses the preconditioned versions of the Krylov subspace algorithms already seen, using a generic preconditioner.


PRECONDITIONED CONJUGATE GRADIENT

Consider a matrix $A$ that is symmetric and positive definite and assume that a preconditioner $M$ is available. The preconditioner $M$ is a matrix which approximates $A$ in some yet-undefined sense. It is assumed that $M$ is also Symmetric Positive Definite. From a practical point of view, the only requirement for $M$ is that it is inexpensive to solve linear systems $Mx = b$. This is because the preconditioned algorithms will all require a linear system solution with the matrix $M$ at each step. Then, for example, the following preconditioned system could be solved:

$$M^{-1} A x = M^{-1} b$$

or

$$A M^{-1} u = b, \qquad x = M^{-1} u.$$

Note that these two systems are no longer symmetric in general. The next section considers strategies for preserving symmetry. Then, efficient implementations will be described for particular forms of the preconditioners.
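The loss of symmetry is easy to see concretely. Taking, purely as an example, $M$ to be the diagonal of $A$ (a Jacobi preconditioner), the left-preconditioned matrix $M^{-1}A$ scales each row of $A$ differently, so it is no longer symmetric even though $A$ is:

```python
# Left preconditioning with a Jacobi preconditioner M = diag(A) (example choice):
# M^{-1} A x = M^{-1} b has the same solution as A x = b, but
# M^{-1} A is in general no longer symmetric even when A is.

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]                       # symmetric positive definite example
M_inv = [1.0 / A[i][i] for i in range(3)]   # inverse of diag(A)

# Form M^{-1} A row by row: row i of A is scaled by 1/a_ii.
MA = [[M_inv[i] * A[i][j] for j in range(3)] for i in range(3)]

symmetric = all(abs(MA[i][j] - MA[j][i]) < 1e-12
                for i in range(3) for j in range(3))
print("M^-1 A symmetric:", symmetric)  # prints False: (MA)_{01}=1/4 but (MA)_{10}=1/3
```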

Preserving Symmetry

¡

When is available in the form of an incomplete Cholesky factorization, i.e., when

˜

¡ %

then a simple way to preserve symmetry is to “split” the preconditioner between left and

right, i.e., to solve

´ ˜ d – ( u'd ´ ˜ d ud µ W' £

¦‘

f

%f %

which involves a Symmetric Positive De¬nite matrix.
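The symmetry of the split form follows from $(L^{-1} A L^{-T})^T = L^{-1} A^T L^{-T} = L^{-1} A L^{-T}$ when $A$ is symmetric. A small sketch, using an arbitrary $2 \times 2$ symmetric $A$ and an arbitrary nonsingular lower triangular $L$ (not an actual incomplete Cholesky factor, just a stand-in):

```python
# The split-preconditioned operator S = L^{-1} A L^{-T} stays symmetric
# whenever A is symmetric; L is an arbitrary nonsingular lower triangular factor.

A = [[4.0, 1.0], [1.0, 3.0]]          # symmetric positive definite (example)
L = [[2.0, 0.0], [0.5, 1.5]]          # arbitrary lower triangular stand-in

def forward_solve(L, b):
    """Solve L y = b for 2x2 lower triangular L."""
    y0 = b[0] / L[0][0]
    y1 = (b[1] - L[1][0] * y0) / L[1][1]
    return [y0, y1]

def back_solve_T(L, b):
    """Solve L^T y = b, i.e. an upper triangular solve with L^T (2x2)."""
    y1 = b[1] / L[1][1]
    y0 = (b[0] - L[1][0] * y1) / L[0][0]
    return [y0, y1]

# Build S = L^{-1} A L^{-T} column by column: S e_j = L^{-1} (A (L^{-T} e_j)).
S = [[0.0, 0.0], [0.0, 0.0]]
for j in range(2):
    e = [1.0 if i == j else 0.0 for i in range(2)]
    v = back_solve_T(L, e)                                          # L^{-T} e_j
    w = [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]   # A v
    col = forward_solve(L, w)                                       # L^{-1} A v
    for i in range(2):
        S[i][j] = col[i]

print("symmetric:", abs(S[0][1] - S[1][0]) < 1e-12)  # prints symmetric: True
```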

However, it is not necessary to split the preconditioner in this manner in order to preserve symmetry. Observe that $M^{-1}A$ is self-adjoint for the $M$-inner product,

$$(x, y)_M \equiv (Mx, y) = (x, My),$$

since

$$(M^{-1}Ax, y)_M = (Ax, y) = (x, Ay) = (x, M M^{-1}Ay) = (x, M^{-1}Ay)_M.$$

Therefore, an alternative is to replace the usual Euclidean inner product in the Conjugate Gradient algorithm by the $M$-inner product.
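When CG is applied to $M^{-1}A$ with the $M$-inner product, all the $M$-inner products collapse to ordinary ones involving $z = M^{-1}r$, so the only extra work per iteration is one solve with $M$. A minimal sketch of the resulting preconditioned CG iteration; the Jacobi (diagonal) preconditioner and the test matrix are arbitrary example choices:

```python
# Preconditioned Conjugate Gradient: CG on M^{-1} A under the M-inner product.
# The M-inner products reduce to one extra solve z = M^{-1} r per iteration.
# M is taken here (as an example) to be the Jacobi preconditioner diag(A).

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def pcg(A, b, tol=1e-12, maxit=100):
    n = len(b)
    M_inv = [1.0 / A[i][i] for i in range(n)]      # Jacobi preconditioner
    x = [0.0] * n
    r = b[:]                                       # r = b - A*0
    z = [M_inv[i] * r[i] for i in range(n)]        # z = M^{-1} r
    p = z[:]
    rz = dot(r, z)                                 # (r, z) = (r, r)_M-related scalar
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]    # the one solve with M
        rz_new = dot(r, z)
        beta = rz_new / rz
        p = [z[i] + beta * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]                              # SPD example
b = [1.0, 2.0, 3.0]
x = pcg(A, b)
print(x)  # residual ||b - A x|| is tiny (exact convergence in at most n steps)
```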
