Let $V_i$ be the matrix

$$
V_i = \left[\, e_{m_i},\; e_{m_i+1},\; \ldots,\; e_{n_i} \,\right] ,
$$

where each $e_j$ is the $j$-th column of the $n \times n$ identity matrix.
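As a small NumPy sketch (the matrix $A$ and the index set are chosen purely for illustration), forming $V_i$ from columns of the identity and computing $V_i^T A V_i$ extracts the corresponding diagonal block of $A$:

```python
import numpy as np

n = 6
A = np.arange(1.0, n * n + 1).reshape(n, n)   # illustrative matrix

# Columns of the identity for one block of indices
# (0-based here, since NumPy indexes from 0).
idx = [2, 3, 4]
I = np.eye(n)
V_i = I[:, idx]            # V_i = [e_2, e_3, e_4]

# V_i^T A V_i picks out the diagonal block of A with these indices.
A_i = V_i.T @ A @ V_i
assert np.allclose(A_i, A[np.ix_(idx, idx)])
```

This is exactly the block that the block relaxation schemes of Chapter 4 invert at each step.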

If the block Jacobi and block Gauss-Seidel algorithms, Algorithms 4.1 and 4.2, are examined carefully, it can be observed that each individual step in the main loop (lines 2 to 5) represents an orthogonal projection process over $\mathcal{K}_i = \operatorname{span}\{V_i\}$. Indeed, the equation (4.17) is exactly (5.7) with $W = V = V_i$. This individual projection step modifies only the components corresponding to the subspace $\mathcal{K}_i$. However, the general block Jacobi iteration combines these modifications, implicitly adding them together, to obtain the next iterate $x_{k+1}$. Borrowing from the terminology of domain decomposition techniques, this will be called an additive projection procedure. Generally, an additive projection procedure can be defined for any sequence of subspaces $\mathcal{K}_i$, not just subspaces spanned by the columns of the identity matrix. The only requirement is that the subspaces $\mathcal{K}_i$ should be distinct, although they are allowed to overlap.
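The claim that one block step is an orthogonal projection can be checked numerically. In the sketch below (the SPD test matrix, right-hand side, and index set are illustrative assumptions), after the update $x_{\text{new}} = x_0 + V_i A_i^{-1} V_i^T (b - A x_0)$ the new residual is orthogonal to $\operatorname{span}\{V_i\}$, which is the Galerkin condition of (5.7) with $W = V = V_i$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # a generic SPD matrix (illustrative)
b = rng.standard_normal(n)
x0 = np.zeros(n)

idx = [1, 3, 4]                    # an arbitrary block of indices
V = np.eye(n)[:, idx]              # V_i

# One block step: x_new = x0 + V_i A_i^{-1} V_i^T (b - A x0)
r0 = b - A @ x0
y = np.linalg.solve(V.T @ A @ V, V.T @ r0)
x_new = x0 + V @ y

# Galerkin condition: the new residual is orthogonal to span{V_i}.
assert np.allclose(V.T @ (b - A @ x_new), 0)
```

Only the components of the residual "seen" by $V_i$ are annihilated; the other components are untouched, which is why the block Jacobi sweep must add the corrections from all blocks.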

Let a sequence of orthogonal systems $V_i$ be given, with the condition that $\operatorname{span}\{V_i\} \neq \operatorname{span}\{V_j\}$ for $i \neq j$, and define

$$
A_i = V_i^T A V_i .
$$

The additive projection procedure can be written as

$$
y_i = A_i^{-1} V_i^T (b - A x_k), \qquad i = 1, \ldots, p,
$$
$$
x_{k+1} = x_k + \sum_{i=1}^{p} V_i y_i , \tag{5.25}
$$

which leads to the following algorithm.

ALGORITHM 5.2: Additive Projection Procedure

1. For $k = 0, 1, \ldots,$ until convergence, Do:
2.    For $i = 1, 2, \ldots, p$ Do:
3.       Solve $A_i y_i = V_i^T (b - A x_k)$
4.    EndDo
5.    Set $x_{k+1} = x_k + \sum_{i=1}^{p} V_i y_i$
6. EndDo

Defining $r_k = b - A x_k$, the residual vector at step $k$, then clearly

$$
r_{k+1} = b - A x_{k+1}
        = r_k - \sum_{i=1}^{p} A V_i A_i^{-1} V_i^T r_k
        = \left( I - \sum_{i=1}^{p} A V_i A_i^{-1} V_i^T \right) r_k .
$$

Observe that each of the $p$ operators

$$
P_i = A V_i A_i^{-1} V_i^T
$$

represents the projector onto the subspace spanned by $A V_i$, and orthogonal to $V_i$. Often, the additive processes are used in conjunction with an acceleration parameter $\omega$, thus (5.25) is replaced by

$$
y_i = A_i^{-1} V_i^T (b - A x_k), \qquad i = 1, \ldots, p,
$$
$$
x_{k+1} = x_k + \omega \sum_{i=1}^{p} V_i y_i .
$$

Even more generally, a different parameter $\omega_i$ can be used for each projection, i.e.,

$$
x_{k+1} = x_k + \sum_{i=1}^{p} \omega_i V_i y_i .
$$
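Algorithm 5.2, including the acceleration parameter $\omega$, can be sketched as follows. The test matrix, the two-block partition, and the tolerances are illustrative assumptions (a diagonally dominant tridiagonal system, for which the block Jacobi form of the procedure converges), not prescriptions from the text:

```python
import numpy as np

def additive_projection(A, b, blocks, x0=None, omega=1.0,
                        tol=1e-10, maxiter=200):
    """Sketch of Algorithm 5.2 (additive projection procedure).

    `blocks` is a list of index lists; each V_i is formed from the
    corresponding columns of the identity matrix, i.e. block Jacobi.
    """
    n = A.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    I = np.eye(n)
    for _ in range(maxiter):
        r = b - A @ x                        # r_k = b - A x_k
        if np.linalg.norm(r) < tol:
            break
        delta = np.zeros(n)
        for idx in blocks:
            V = I[:, idx]                    # V_i
            A_i = V.T @ A @ V                # A_i = V_i^T A V_i
            y = np.linalg.solve(A_i, V.T @ r)   # line 3: A_i y_i = V_i^T r_k
            delta += V @ y                   # accumulate V_i y_i
        x = x + omega * delta                # line 5, with acceleration omega
    return x

# Small diagonally dominant test system (assumed for convergence).
n = 6
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = additive_projection(A, b, blocks=[[0, 1, 2], [3, 4, 5]])
assert np.linalg.norm(b - A @ x) < 1e-8
```

Note that the block corrections inside the inner loop all use the same residual $r_k$, so they could be computed in parallel; this is the sense in which the procedure is "additive," in contrast with the multiplicative (Gauss-Seidel-like) variant.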