
Again, the scalar $\delta_i$ is to be selected so that the $i$-th component of the residual vector for (8.9) becomes zero, which yields

\[ \left( A^T b - A^T A (x + \delta_i e_i),\; e_i \right) = 0 . \]

With $r \equiv b - Ax$, this becomes $\left( A^T ( r - \delta_i A e_i ),\; e_i \right) = 0$, which yields

\[ \delta_i = \frac{(r, A e_i)}{\| A e_i \|_2^2} . \]

Then the following algorithm is obtained.

ALGORITHM 8.2: Gauss-Seidel-NR

1. Choose an initial $x$, compute $r := b - Ax$.
2. For $i = 1, \ldots, n$ Do:
3.    $\delta_i = (r, A e_i) / \| A e_i \|_2^2$
4.    $x_i := x_i + \delta_i$
5.    $r := r - \delta_i A e_i$
6. EndDo
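As an illustration only, the sweep above can be sketched in dense form as follows; the function name is hypothetical, and a practical implementation would access the columns $A e_i$ through a sparse column data structure rather than a dense array:

```python
import numpy as np

def gauss_seidel_nr_sweep(A, b, x, n_sweeps=1):
    """Forward Gauss-Seidel sweeps on the normal equations A^T A x = A^T b,
    following the structure of Algorithm 8.2: only the columns A e_i of A
    and the residual r = b - A x are used."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.array(x, dtype=float)
    r = b - A @ x                          # step 1: initial residual
    for _ in range(n_sweeps):
        for i in range(A.shape[1]):        # step 2
            col = A[:, i]                  # A e_i, the i-th column of A
            denom = float(col @ col)       # ||A e_i||_2^2
            if denom == 0.0:
                continue                   # skip an identically zero column
            delta = float(r @ col) / denom # step 3
            x[i] += delta                  # step 4
            r -= delta * col               # step 5: update the true residual
    return x, r
```

Note that the residual of the original system, not of the normal equations, is what the sweep maintains, which is what makes the residual-based stopping test mentioned below inexpensive.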

In contrast with Algorithm 8.1, the column data structure of $A$ is now needed for the implementation instead of its row data structure. Here, the right-hand side $b$ can be overwritten by the residual vector $r$, so the storage requirement is essentially the same as in the previous case. In the NE version, the scalar $\beta_i - (a_i, x)$ is just the $i$-th component of the current residual vector $r = b - Ax$. As a result, stopping criteria can be built for both algorithms based on either the residual vector or the variation in the solution. Note that the matrices $A^T A$ and $A A^T$ can be dense or generally much less sparse than $A$, yet the cost of the above implementations depends only on the nonzero structure of $A$. This is a significant

advantage of relaxation-type preconditioners over incomplete factorization preconditioners

when using Conjugate Gradient methods to solve the normal equations.
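To make the sparsity remark concrete, here is a small synthetic example (the $50 \times 50$ matrix is purely illustrative): a single dense row in $A$ already fills $A^T A$ completely, while $A$ itself keeps only $2n - 1$ nonzeros:

```python
import numpy as np

n = 50
A = np.eye(n)
A[0, :] = 1.0  # one dense row; A still has only 2n - 1 nonzeros

nnz_A = int(np.count_nonzero(A))          # 2n - 1 = 99 nonzeros in A
nnz_AtA = int(np.count_nonzero(A.T @ A))  # n**2 = 2500: A^T A is completely dense
print(nnz_A, nnz_AtA)
```

Since the sweeps only ever touch rows or columns of $A$, their cost tracks the 99 nonzeros of $A$, not the 2500 of $A^T A$.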

One question remains concerning the acceleration of the above relaxation schemes by under- or over-relaxation. If the usual acceleration parameter $\omega$ is introduced, then we only have to multiply the scalars $\delta_i$ in the previous algorithms by $\omega$. One serious difficulty here is to determine the optimal relaxation factor. If nothing in particular is known about the matrix $A A^T$, then the method will converge for any $\omega$ lying strictly between $0$ and $2$, as was seen in Chapter 4, because the matrix is positive definite. Moreover, another

unanswered question is how convergence can be affected by various reorderings of the

rows. For general sparse matrices, the answer is not known.
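A quick numerical check of this convergence interval, on a hypothetical $2 \times 2$ example with the sweep written inline so the sketch is self-contained, might look like:

```python
import numpy as np

def sor_nr(A, b, omega, n_sweeps=200):
    """Gauss-Seidel-NR sweeps with every scalar delta_i multiplied by omega;
    omega = 1 recovers the unaccelerated algorithm."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1])
    r = b - A @ x
    for _ in range(n_sweeps):
        for i in range(A.shape[1]):
            col = A[:, i]                             # i-th column A e_i
            delta = omega * float(r @ col) / float(col @ col)
            x[i] += delta
            r -= delta * col
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([5.0, 5.0])
# any omega strictly between 0 and 2 should converge (A^T A is positive definite)
residuals = [float(np.linalg.norm(b - A @ sor_nr(A, b, w))) for w in (0.5, 1.0, 1.5)]
print(residuals)
```

All three residual norms come out at roundoff level; the check says nothing, of course, about which $\omega$ in $(0, 2)$ is optimal.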

Cimmino's Method

In a Jacobi iteration for the system (8.9), the components of the new iterate satisfy the

following condition:

\[ \left( A^T b - A^T A (x + \delta_i e_i),\; e_i \right) = 0 . \]

This yields

\[ \left( b - A (x + \delta_i e_i),\; A e_i \right) = 0 \]

or

\[ \left( r - \delta_i A e_i,\; A e_i \right) = 0 \]

in which $r$ is the old residual $b - Ax$. As a result, the $i$-th component of the new iterate $x_{\text{new}}$ is given by

\[ x_{\text{new}, i} = x_i + \delta_i , \]
\[ \delta_i = \frac{(r, A e_i)}{\| A e_i \|_2^2} . \]
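For contrast with the Gauss-Seidel version, a Jacobi (Cimmino-type) sweep computes every $\delta_i$ from the same old residual before $x$ is touched; a minimal dense sketch, with an illustrative function name and no acceleration parameter:

```python
import numpy as np

def jacobi_nr_sweep(A, b, x, n_sweeps=1):
    """Jacobi sweeps on the normal equations A^T A x = A^T b:
    all delta_i come from the old residual r = b - A x,
    and x is updated only once per sweep."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.array(x, dtype=float)
    for _ in range(n_sweeps):
        r = b - A @ x                        # old residual, shared by all i
        d = (A.T @ r) / (A * A).sum(axis=0)  # d_i = (r, A e_i) / ||A e_i||_2^2
        x = x + d                            # all components updated at once
    return x
```

Because every component uses the same $r$, the sweep is fully parallel across $i$, at the price of the slower component-by-component progress discussed next.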

Here, be aware that these equations do not result in the same approximation as that produced by Algorithm 8.2, even though the modifications are given by the same formula. Indeed, the vector $x$ is not updated after each step and therefore the scalars $\delta_i$ are different for the two algorithms. This algorithm is usually described with an acceleration parameter $\omega$, i.e., all $\delta_i$'s are multiplied uniformly by a certain $\omega$. If $d$ denotes the vector with coordinates $\delta_i$, $i = 1, \ldots, n$,