
can be solved. However, when A is not diagonally dominant, the ILU factorization process may encounter a zero pivot. Even when this does not happen, the resulting preconditioner may be of poor quality. An incomplete factorization routine with pivoting, such as ILUTP, may constitute a good choice. ILUTP can be used to precondition either the original equations or the normal equations shown above. This section explores a few other options available for preconditioning the normal equations.

JACOBI, SOR, AND VARIANTS

There are several ways to exploit the relaxation schemes for the Normal Equations seen in Chapter 8 as preconditioners for the CG method applied to either (8.1) or (8.3). Consider (8.3), for example, which requires a procedure delivering an approximation to (AA^T)^{-1} v for any vector v. One such procedure is to perform one step of SSOR to solve the system (AA^T) w = v. Denote by M^{-1} the linear operator that transforms v into the vector resulting from this procedure; then the usual Conjugate Gradient method applied to (8.3) can be recast in the same form as Algorithm 8.5. This algorithm is known as CGNE/SSOR. Similarly, it is possible to incorporate the SSOR preconditioning in Algorithm 8.4, which is associated with the Normal Equations (8.1), by defining M^{-1} to be the linear transformation that maps a vector v into the vector resulting from the forward sweep of Algorithm 8.2 followed by a backward sweep. We will refer to this algorithm as CGNR/SSOR.
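The forward-then-backward sweep just described can be sketched as follows. This is a minimal dense-matrix illustration in NumPy (the function name ssor_apply and the test matrices are our own, not from the text); B stands for AA^T or A^T A as appropriate:

```python
import numpy as np

def ssor_apply(B, v, omega=1.0, w=None):
    """One symmetric SOR step (forward then backward sweep) on B w = v.

    Starting from w = 0 this realizes the preconditioning operator
    M^{-1} v described in the text; B is assumed symmetric positive
    definite and 0 < omega < 2.
    """
    n = B.shape[0]
    w = np.zeros(n) if w is None else w.copy()
    # Forward SOR sweep.
    for i in range(n):
        s = v[i] - B[i, :] @ w + B[i, i] * w[i]   # v_i - sum_{j != i} B_ij w_j
        w[i] = (1 - omega) * w[i] + omega * s / B[i, i]
    # Backward SOR sweep.
    for i in range(n - 1, -1, -1):
        s = v[i] - B[i, :] @ w + B[i, i] * w[i]
        w[i] = (1 - omega) * w[i] + omega * s / B[i, i]
    return w
```

A single call from w = 0 plays the role of M^{-1}; repeating the sweep converges to B^{-1} v when B is symmetric positive definite, which is one way to see that the operator is a sensible approximation.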

The CGNE/SSOR and CGNR/SSOR algorithms will not break down if A is nonsingular, since then the matrices AA^T and A^T A are Symmetric Positive Definite, as are the preconditioning matrices M. There are several variations to these algorithms. The standard alternatives based on the same formulation (8.1) are either to use the preconditioner on the right, solving the system A^T A M^{-1} u = A^T b with x = M^{-1} u, or to split the preconditioner into a forward SOR sweep on the left and a backward SOR sweep on the right of the matrix A^T A. Similar options can also be written for the Normal Equations (8.3), again with three different ways of preconditioning. Thus, at least six different algorithms can be defined.
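The algebraic equivalence of the left and right variants can be checked on a small example. The sketch below is our own illustration; it uses a simple Jacobi diagonal as a stand-in for M^{-1}, and verifies that left preconditioning, M^{-1} B x = M^{-1} b, and right preconditioning, B M^{-1} u = b with x = M^{-1} u, deliver the same solution:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 10))
B = A @ A.T                        # a normal-equations matrix of the (8.3) type
b = rng.standard_normal(6)

Minv = np.diag(1.0 / np.diag(B))   # Jacobi stand-in for M^{-1} (illustrative)

x_left = np.linalg.solve(Minv @ B, Minv @ b)   # left-preconditioned system
u = np.linalg.solve(B @ Minv, b)               # right-preconditioned system
x_right = Minv @ u                             # recover x = M^{-1} u
assert np.allclose(x_left, x_right)
assert np.allclose(B @ x_right, b)
```

The two variants solve the same problem but iterate on differently conditioned operators, which is why the choice matters in practice even though the fixed points coincide.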

IC(0) FOR THE NORMAL EQUATIONS

The Incomplete Cholesky IC(0) factorization can be used to precondition the Normal Equations (8.1) or (8.3). This approach may seem attractive because of the success of incomplete factorization preconditioners. However, a major problem is that the Incomplete Cholesky factorization is not guaranteed to exist for an arbitrary Symmetric Positive Definite matrix B. All the results that guarantee existence rely on some form of diagonal dominance. One of the first ideas suggested to handle this difficulty was to use an Incomplete Cholesky factorization on the "shifted" matrix B + αI. We refer to IC(0) applied to B = A^T A as ICNR(0), and likewise IC(0) applied to B = AA^T as ICNE(0). Shifted variants correspond to applying IC(0) to the shifted matrix B + αI.
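As a concrete illustration of the shifted factorization, here is a minimal dense IC(0) sketch (our own code, with an assumed breakdown check on nonpositive pivots). On a matrix whose nonzero pattern is full it reduces to the exact Cholesky factorization:

```python
import numpy as np

def ic0(B):
    """No-fill incomplete Cholesky: L entries are kept only where B is nonzero.

    Raises on a nonpositive pivot, the breakdown mode discussed in the text.
    """
    n = B.shape[0]
    L = np.zeros_like(B)
    for j in range(n):
        d = B[j, j] - L[j, :j] @ L[j, :j]
        if d <= 0:
            raise ValueError("IC(0) breakdown: nonpositive pivot")
        L[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            if B[i, j] != 0:          # restrict fill to the pattern of B
                L[i, j] = (B[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

# Shifted variant: factor B + alpha*I instead of B.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 9))
B = A @ A.T
alpha = 0.5
L_shifted = ic0(B + alpha * np.eye(5))
```

Increasing the shift α makes B + αI more diagonally dominant, so the factorization is more likely to exist, at the price of a preconditioner that is a poorer approximation to B itself.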

§

[Figure: Iteration count as a function of the shift α; the α-axis ranges from 0.1 to 1 and the iteration counts from about 150 to 220.]

One issue often debated is how to find good values for the shift α. There is no easy and well-founded solution to this problem for irregularly structured symmetric sparse matrices. One idea is to select the smallest possible α that makes the shifted matrix diagonally dominant. However, this shift tends to be too large in general because IC(0) may exist for much smaller values of α. Another approach is to determine the smallest α for which the IC(0) factorization exists. Unfortunately, this is not a viable alternative. As is often observed,