When implementing a basic relaxation scheme, such as Jacobi or SOR, to solve the linear system

$$ A^T A x = A^T b, \tag{8.9} $$

or

$$ A A^T u = b, \tag{8.10} $$

it is possible to exploit the fact that the matrices $A^T A$ or $A A^T$ need not be formed explicitly. As will be seen, only a row or a column of $A$ at a time is needed at a given relaxation step. These methods are known as row projection methods since they are indeed projection methods on rows of $A$ or $A^T$. Block row projection methods can also be defined similarly.
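The point about not forming the normal-equation matrices explicitly can be illustrated with a short sketch (our own small example, not from the text; `numpy` is assumed available): applying $A^T A$ to a vector requires only two matrix-vector products, never the product matrix itself.

```python
import numpy as np

def normal_matvec(A, x):
    """Apply A^T A to x as A^T (A x); the product matrix is never formed."""
    return A.T @ (A @ x)

# Tiny illustrative matrix and vector (hypothetical data).
A = np.array([[1.0, 2.0],
              [0.0, 3.0],
              [4.0, 0.0]])
x = np.array([1.0, -1.0])

# Same result as forming A^T A explicitly, without the n x n product.
assert np.allclose(normal_matvec(A, x), (A.T @ A) @ x)
```

For a sparse $A$, the two-product form costs two sparse matrix-vector products per application, whereas $A^T A$ is typically much denser than $A$ and expensive to form and store.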

8.2.1 GAUSS-SEIDEL ON THE NORMAL EQUATIONS

It was stated above that in order to use relaxation schemes on the normal equations, only access to one column of $A$ at a time is needed for (8.9) and one row at a time for (8.10). This is now explained for (8.10) first. Starting from an approximation to the solution of (8.10), a basic relaxation-based iterative procedure modifies its components in a certain order using a succession of relaxation steps of the simple form

$$ u_{\text{new}} = u + \delta_i e_i, \tag{8.11} $$

where $e_i$ is the $i$-th column of the identity matrix. The scalar $\delta_i$ is chosen so that the $i$-th component of the residual vector for (8.10) becomes zero. Therefore,

$$ \left( b - A A^T (u + \delta_i e_i),\; e_i \right) = 0, \tag{8.12} $$

which, setting $r = b - A A^T u$, yields,

$$ \delta_i = \frac{(r, e_i)}{\| A^T e_i \|_2^2}. \tag{8.13} $$

Denote by $\beta_i$ the $i$-th component of $b$. Then a basic relaxation step consists of taking

$$ \delta_i = \frac{\beta_i - \left( A^T u,\; A^T e_i \right)}{\| A^T e_i \|_2^2}. \tag{8.14} $$

Also, (8.11) can be rewritten in terms of $x$-variables as follows:

$$ x_{\text{new}} = x + \delta_i A^T e_i. \tag{8.15} $$

The auxiliary variable $u$ has now been removed from the scene and is replaced by the original variable $x = A^T u$.
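As a sanity check on this derivation, the following sketch (our own small dense example, with `numpy` assumed available) verifies that one relaxation step of the form above does annihilate the $i$-th component of the residual $b - A A^T u$:

```python
import numpy as np

# Hypothetical small dense example; any matrix with nonzero rows works.
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
u = rng.standard_normal(n)

i = 2                          # component to relax
ai = A[i, :]                   # A^T e_i, i.e., the i-th row of A
# delta_i = (beta_i - (A^T u, A^T e_i)) / ||A^T e_i||_2^2
delta = (b[i] - (A.T @ u) @ ai) / (ai @ ai)
u_new = u + delta * np.eye(n)[:, i]

r_new = b - A @ (A.T @ u_new)  # residual of A A^T u = b after the step
assert abs(r_new[i]) < 1e-10   # the i-th component has been zeroed
```

The other components of the residual change as well, which is why a full sweep cycles over all $i$; only the relaxed component is guaranteed to vanish at each step.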

Consider the implementation of a forward Gauss-Seidel sweep based on (8.15) and (8.14) for a general sparse matrix. The evaluation of $\delta_i$ from (8.14) requires the inner product of the current approximation $x = A^T u$ with $A^T e_i$, the $i$-th row of $A$. This inner product is inexpensive to compute because $A^T e_i$ is usually sparse. If an acceleration parameter $\omega$ is used, we only need to change $\delta_i$ into $\omega \delta_i$. Therefore, a forward SOR sweep would be as follows.

ALGORITHM 8.1: Forward NE-SOR Sweep

1. Choose an initial $x$.
2. For $i = 1, 2, \ldots, n$ Do:
3.    $\delta_i = \omega \left( \beta_i - (A^T e_i, x) \right) / \| A^T e_i \|_2^2$
4.    $x := x + \delta_i A^T e_i$
5. EndDo

Note that $A^T e_i$ is a vector equal to the transpose of the $i$-th row of $A$. All that is needed is the row data structure for $A$ to implement the above algorithm. Denoting by $nz_i$ the number of nonzero elements in the $i$-th row of $A$, each step of the above sweep requires $2\,nz_i + 2$ operations in line 3, and another $2\,nz_i$ operations in line 4, bringing the total to $4\,nz_i + 2$. The total for a whole sweep becomes about $4\,nz + 2n$ operations, where $nz$ represents the total number of nonzero elements of $A$. Twice as many operations are required for the Symmetric Gauss-Seidel or the SSOR iteration. Storage consists of the right-hand side, the vector $x$, and possibly an additional vector to store the 2-norms of the rows of $A$. A better alternative would be to rescale each row by its 2-norm at the start.
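A minimal Python sketch of the sweep follows (our own illustration; the name `ne_sor_sweep` and the data layout are ours, and `numpy` is assumed available). Each row of $A$ is kept as an (indices, values) pair, mirroring the row data structure mentioned above, and the squared row 2-norms are precomputed once, in the spirit of the rescaling remark:

```python
import numpy as np

def ne_sor_sweep(rows, row_norms2, b, x, omega=1.0):
    """One forward NE-SOR sweep for A A^T u = b, carried in x = A^T u.

    rows[i] is the sparse i-th row of A as an (indices, values) pair;
    row_norms2[i] = ||A^T e_i||_2^2 is precomputed once (rows assumed nonzero).
    """
    for i, (idx, val) in enumerate(rows):
        # line 3: delta_i = omega * (beta_i - (A^T e_i, x)) / ||A^T e_i||_2^2
        dot = sum(v * x[j] for j, v in zip(idx, val))
        delta = omega * (b[i] - dot) / row_norms2[i]
        # line 4: x := x + delta_i * A^T e_i, touching only nz_i entries
        for j, v in zip(idx, val):
            x[j] += delta * v

# Tiny example: a 3x3 nonsingular matrix stored by rows (hypothetical data).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])
rows = [(np.flatnonzero(r), r[np.flatnonzero(r)]) for r in A]
norms2 = [float(v @ v) for _, v in rows]
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(3)
for _ in range(500):
    ne_sor_sweep(rows, norms2, b, x)

# Since b - A A^T u = b - A x, convergence means x solves A x = b.
assert np.linalg.norm(b - A @ x) < 1e-8
```

Note that each sweep step reads and writes only the $nz_i$ positions of the $i$-th row, matching the operation count discussed above.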

Similarly, a Gauss-Seidel sweep for (8.9) would consist of a succession of steps of the

form

$$ x_{\text{new}} = x + \delta_i e_i. \tag{8.16} $$