If the approximate solution $\tilde{x}$ is written in the form $\tilde{x} = x_0 + \delta$, and the initial residual vector $r_0$ is defined as

$$r_0 = b - Ax_0, \tag{5.4}$$

then the above equation becomes

$$b - A(x_0 + \delta) \perp \mathcal{L},$$

or

$$r_0 - A\delta \perp \mathcal{L}.$$
In other words, the approximate solution can be de¬ned as

$$\tilde{x} = x_0 + \delta, \qquad \delta \in \mathcal{K}, \tag{5.5}$$

$$(r_0 - A\delta,\, w) = 0, \qquad \forall\, w \in \mathcal{L}. \tag{5.6}$$

The orthogonality condition (5.6) imposed on the new residual vector is illustrated in Figure 5.1.

Figure 5.1: Interpretation of the orthogonality condition.

This is a basic projection step, in its most general form. Most standard techniques use a succession of such projections. Typically, a new projection step uses a new pair of subspaces $\mathcal{K}$ and $\mathcal{L}$ and an initial guess $x_0$ equal to the most recent approximation obtained from the previous projection step. Projection methods form a unifying framework for many of the well-known methods in scientific computing. In fact, virtually all of the basic iterative techniques seen in the previous chapter can be considered projection techniques. Whenever an approximation is defined via $m$ degrees of freedom (subspace $\mathcal{K}$) and $m$ constraints (subspace $\mathcal{L}$), a projection process results.

Example 5.1. In the simplest case, an elementary Gauss-Seidel step as defined by (4.6) is nothing but a projection step with $\mathcal{K} = \mathcal{L} = \operatorname{span}\{e_i\}$. These projection steps are cycled for $i = 1, \ldots, n$ until convergence. See Exercise 1 for an alternative way of selecting the sequence of $e_i$'s.
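As a quick numerical illustration (the matrix, right-hand side, and starting vector below are my own test data, not from the text), one component-wise Gauss-Seidel update coincides with the projection step on $\mathcal{K} = \mathcal{L} = \operatorname{span}\{e_i\}$, where the one-dimensional projected problem gives $\delta = e_i\, (r_0)_i / a_{ii}$:

```python
import numpy as np

# Illustrative data (assumed): a small SPD system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x0 = np.zeros(3)

i = 1                      # component to update
e_i = np.zeros(3)
e_i[i] = 1.0

# Projection step with K = L = span{e_i}:
# delta solves the 1-D projected problem, i.e. delta_i = (r0)_i / a_ii.
r0 = b - A @ x0
x_proj = x0 + e_i * (r0[i] / A[i, i])

# Elementary Gauss-Seidel update of component i (all other components frozen):
x_gs = x0.copy()
x_gs[i] = (b[i] - A[i, :] @ x0 + A[i, i] * x0[i]) / A[i, i]

# The two updates agree, and the new residual is orthogonal to e_i.
assert np.allclose(x_proj, x_gs)
assert abs((b - A @ x_proj)[i]) < 1e-10
```

The second assertion is exactly condition (5.6) for this one-dimensional choice of subspaces: the updated residual has a zero $i$-th component.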

Orthogonal projection methods correspond to the particular case when the two subspaces $\mathcal{K}$ and $\mathcal{L}$ are identical. The distinction is particularly important in the Hermitian case since we are guaranteed that the projected problem will be Hermitian in this situation, as will be seen shortly. In addition, a number of helpful theoretical results are true for the orthogonal case. When $\mathcal{L} = \mathcal{K}$, the Petrov-Galerkin conditions are called the Galerkin conditions.
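A small sanity check of the Hermitian claim (illustrative data, not from the text): when $\mathcal{L} = \mathcal{K}$, so that the same basis is used on both sides, the projected matrix $V^H A V$ inherits the Hermitian property of $A$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Hermitian matrix A (assumed data).
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (M + M.conj().T) / 2                 # Hermitian by construction

# Basis of a 2-dimensional subspace K; with L = K we take W = V.
V = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))

B = V.conj().T @ A @ V                   # projected matrix with W = V
assert np.allclose(B, B.conj().T)        # projected problem is Hermitian
```

With distinct subspaces ($W \neq V$) this property is lost in general, which is one reason the orthogonal case is singled out.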

Matrix Representation

Let $V = [v_1, \ldots, v_m]$, an $n \times m$ matrix whose column-vectors form a basis of $\mathcal{K}$ and, similarly, $W = [w_1, \ldots, w_m]$, an $n \times m$ matrix whose column-vectors form a basis of $\mathcal{L}$. If the approximate solution is written as

$$\tilde{x} = x_0 + Vy,$$

then the orthogonality condition leads immediately to the following system of equations for the vector $y$:

$$W^T A V\, y = W^T r_0.$$

If the assumption is made that the $m \times m$ matrix $W^T A V$ is nonsingular, the following expression for the approximate solution $\tilde{x}$ results,

$$\tilde{x} = x_0 + V\,(W^T A V)^{-1} W^T r_0.$$

In many algorithms, the matrix $W^T A V$ does not have to be formed since it is available

as a by-product of the algorithm. A prototype projection technique is represented by the

following algorithm.
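The closed-form expression above can be exercised directly. The sketch below (function name and test data are my own, for illustration) performs one general projection step and verifies the Petrov-Galerkin condition, i.e. that the new residual is orthogonal to the columns of $W$:

```python
import numpy as np

def projection_step(A, b, x0, V, W):
    """One projection step onto span(V), orthogonal to span(W)."""
    r0 = b - A @ x0
    # Solve the small m x m projected system (W^T A V) y = W^T r0.
    y = np.linalg.solve(W.T @ A @ V, W.T @ r0)
    return x0 + V @ y

rng = np.random.default_rng(1)
n, m = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)
x0 = np.zeros(n)
V = rng.standard_normal((n, m))                   # basis of K
W = rng.standard_normal((n, m))                   # basis of L

x_new = projection_step(A, b, x0, V, W)
# Petrov-Galerkin condition: the new residual is orthogonal to L.
assert np.allclose(W.T @ (b - A @ x_new), 0.0)
```

Note the sketch forms $W^T A V$ explicitly only because the bases here are arbitrary; as the text remarks, practical algorithms usually obtain this matrix as a by-product of the iteration.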
