Answer the same questions as in (2) for the case when SOR replaces the Gauss-Seidel iteration.

Generalize the above results to $p$-cyclic matrices, i.e., matrices of the form
$$
A = \begin{pmatrix}
D_1 &     &     &        & E_1 \\
E_2 & D_2 &     &        &     \\
    & E_3 & D_3 &        &     \\
    &     & \ddots & \ddots &  \\
    &     &     & E_p    & D_p
\end{pmatrix} .
$$

NOTES AND REFERENCES. Two good references for the material covered in this chapter are Varga [213] and Young [232]. Although relaxation-type methods were very popular up to the 1960s, they are now mostly used as preconditioners, a topic which will be seen in detail in Chapters 9 and 10. One of the main difficulties with these methods is finding an optimal relaxation factor for general matrices. Theorem 4.4 is due to Ostrowski. For details on the use of Gershgorin's theorem in eigenvalue problems, see [180]. The original idea of the ADI method is described in [162] and the results on the optimal parameters for ADI can be found in [26]. A comprehensive text on this class of techniques can be found in [220]. Not covered in this book is the related class of multigrid methods; see the reference [115] for a detailed exposition. Closely related to the multigrid approach is the Aggregation-Disaggregation technique which is popular in Markov chain modeling. A recommended book for these methods and others used in the context of Markov chain modeling is [203].


PROJECTION METHODS

Consider the linear system
$$
A x = b , \tag{5.1}
$$
where $A$ is an $n \times n$ real matrix. In this chapter, the same symbol $A$ is often used to denote the matrix and the linear mapping in $\mathbb{R}^n$ that it represents. The idea of projection techniques is to extract an approximate solution to the above problem from a subspace of $\mathbb{R}^n$. If $\mathcal{K}$ is this subspace of candidate approximants, or search subspace, and if $m$ is its dimension, then, in general, $m$ constraints must be imposed to be able to extract such an approximation. A typical way of describing these constraints is to impose $m$ (independent) orthogonality conditions. Specifically, the residual vector $b - Ax$ is constrained to be orthogonal to $m$ linearly independent vectors. This defines another subspace $\mathcal{L}$ of dimension $m$ which will be called the subspace of constraints or left subspace for reasons that will be explained below. This simple framework is common to many different mathematical methods and is known as the Petrov-Galerkin conditions.

There are two broad classes of projection methods: orthogonal and oblique. In an orthogonal projection technique, the subspace $\mathcal{L}$ is the same as $\mathcal{K}$. In an oblique projection

method, $\mathcal{L}$ is different from $\mathcal{K}$ and may be totally unrelated to it. This distinction is rather important and gives rise to different types of algorithms.
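In matrix terms, if the columns of $V$ span $\mathcal{K}$ and the columns of $W$ span $\mathcal{L}$, a single projection step computes $\tilde{x} = x_0 + V (W^T A V)^{-1} W^T r_0$, where $r_0 = b - Ax_0$. The following sketch, assuming NumPy and arbitrary random data (the dimensions, seed, and diagonal shift are illustrative choices, not from the text), performs one such step and verifies that the new residual is orthogonal to $\mathcal{L}$; taking $W = V$ gives the orthogonal case, any other $W$ an oblique one.

```python
import numpy as np

def projection_step(A, b, x0, V, W):
    """One Petrov-Galerkin step: find x in x0 + range(V) whose
    residual b - A x is orthogonal to range(W)."""
    r0 = b - A @ x0
    y = np.linalg.solve(W.T @ A @ V, W.T @ r0)
    return x0 + V @ y

rng = np.random.default_rng(1)
n, m = 8, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)  # shifted to be safely nonsingular
b = rng.standard_normal(n)
x0 = np.zeros(n)
V = rng.standard_normal((n, m))                  # basis of the search subspace K
W = rng.standard_normal((n, m))                  # basis of the left subspace L

x_orth = projection_step(A, b, x0, V, V)         # orthogonal projection: L = K
x_obl = projection_step(A, b, x0, V, W)          # oblique projection: L != K
print(np.abs(V.T @ (b - A @ x_orth)).max(),
      np.abs(W.T @ (b - A @ x_obl)).max())       # both essentially zero
```

The $m \times m$ system $W^T A V$ is the projected problem; it must be nonsingular for the step to be well defined, which holds generically for the random bases used here.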

GENERAL PROJECTION METHODS

Let $A$ be an $n \times n$ real matrix and $\mathcal{K}$ and $\mathcal{L}$ be two $m$-dimensional subspaces of $\mathbb{R}^n$. A projection technique onto the subspace $\mathcal{K}$ and orthogonal to $\mathcal{L}$ is a process which finds an approximate solution $\tilde{x}$ to (5.1) by imposing the conditions that $\tilde{x}$ belong to $\mathcal{K}$ and that the new residual vector be orthogonal to $\mathcal{L}$:
$$
\text{Find } \tilde{x} \in \mathcal{K} \text{ such that } b - A\tilde{x} \perp \mathcal{L} . \tag{5.2}
$$

If we wish to exploit the knowledge of an initial guess $x_0$ to the solution, then the approximation must be sought in the affine space $x_0 + \mathcal{K}$ instead of the homogeneous vector space $\mathcal{K}$. This requires a slight modification to the above formulation. The approximate problem should be redefined as
$$
\text{Find } \tilde{x} \in x_0 + \mathcal{K} \text{ such that } b - A\tilde{x} \perp \mathcal{L} . \tag{5.3}
$$

Note that if $\tilde{x}$ is written in the form $\tilde{x} = x_0 + \delta$, and the initial residual vector is defined