However, since Ax* = b and (I − Q)² = I − Q,

    b̃ − B(Qx*) = P(b − AQx*) = PA(I − Q)x* = PA(I − Q)(I − Q)x*,

so that

    ||b̃ − B(Qx*)||₂ ≤ ||PA(I − Q)||₂ ||(I − Q)x*||₂ = γ ε.

Thus, the projection of the exact solution has a residual norm, with respect to the matrix B = PAQ, which is of the order of ε.

ONE-DIMENSIONAL PROJECTION PROCESSES

This section examines simple examples provided by one-dimensional projection processes.

In what follows, the vector r denotes the residual vector r = b − Ax for the current approximation x. To avoid subscripts, arrow notation is used to denote vector updates.

Thus, "x ← x + αd" means "compute x + αd and overwrite the result on the current x."

(This is known as a SAXPY operation.)
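For concreteness, a SAXPY update written in NumPy (the vectors and the scalar here are purely illustrative):

```python
import numpy as np

# SAXPY: overwrite x with x + alpha * d (the BLAS Level 1 "axpy" operation).
x = np.array([1.0, 2.0, 3.0])
d = np.array([0.5, 0.5, 0.5])
alpha = 2.0
x += alpha * d  # in-place update: x <- x + alpha * d
print(x)        # [2. 3. 4.]
```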

One-dimensional projection processes are defined when

    K = span{v}   and   L = span{w},

where v and w are two vectors. In this case, the new approximation takes the form

    x ← x + αv,

and the Petrov-Galerkin condition r − αAv ⊥ w yields

    α = (r, w) / (Av, w).
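Under these definitions, a single projection step is easy to sketch in Python/NumPy (the function name `onedim_step` and the small test problem are our own, for illustration only):

```python
import numpy as np

def onedim_step(A, b, x, v, w):
    """One one-dimensional projection step:
    alpha = (r, w) / (Av, w), then x <- x + alpha * v."""
    r = b - A @ x                            # current residual
    alpha = np.dot(r, w) / np.dot(A @ v, w)  # Petrov-Galerkin step length
    return x + alpha * v

# Example: with v = w = r this is one steepest-descent step.
A = np.array([[4.0, 1.0], [1.0, 3.0]])  # SPD test matrix (our choice)
b = np.array([1.0, 2.0])
x = np.zeros(2)
r = b - A @ x
x_new = onedim_step(A, b, x, r, r)

# The new residual is orthogonal to w = r (Petrov-Galerkin condition):
r_new = b - A @ x_new
print(abs(np.dot(r_new, r)) < 1e-12)  # True
```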

Following are three popular choices to be considered.

Steepest Descent

The steepest descent algorithm is defined for the case where the matrix A is Symmetric Positive Definite. It consists of taking at each step v = r and w = r. This yields an iteration described by the following algorithm.

ALGORITHM: Steepest Descent

1. Until convergence, Do:
2.     r ← b − Ax
3.     α ← (r, r)/(Ar, r)
4.     x ← x + αr
5. EndDo
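The iteration above can be sketched directly in Python/NumPy (a minimal illustration; the function name, stopping tolerance, and test matrix are our own choices):

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, maxiter=1000):
    """Steepest descent for SPD A: at each step v = w = r."""
    x = np.array(x0, dtype=float)
    for _ in range(maxiter):
        r = b - A @ x                            # step 2: residual
        if np.linalg.norm(r) < tol:              # convergence test
            break
        alpha = np.dot(r, r) / np.dot(A @ r, r)  # step 3: step length
        x += alpha * r                           # step 4: SAXPY update
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
print(np.linalg.norm(b - A @ x))        # final residual norm, ~ 0
```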

Each step of the above iteration minimizes

    f(x) = ||x − x*||²_A = (A(x − x*), x − x*)

over all vectors of the form x + αd, where d is the negative of the gradient direction −∇f.
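This minimization property is easy to verify numerically: along the direction d = r, the step length α produced by the algorithm is the minimizer of f(x + t d) over t. A sketch (the small SPD test problem is our own choice):

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD test matrix
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)           # exact solution
x = np.zeros(2)

def f(y):
    """Squared A-norm of the error: (A(y - x*), y - x*)."""
    e = y - x_star
    return e @ A @ e

r = b - A @ x                            # search direction d = r
alpha = (r @ r) / (r @ A @ r)            # steepest-descent step length

# f(x + t*r) is a quadratic in t; alpha should be its minimizer.
ts = np.linspace(alpha - 1.0, alpha + 1.0, 201)
vals = [f(x + t * r) for t in ts]
print(np.isclose(ts[np.argmin(vals)], alpha))  # True
```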

The negative of the gradient direction is locally the direction that yields the fastest rate of

decrease for f. Next, we prove that convergence is guaranteed when A is SPD. The result

is a consequence of the following lemma known as the Kantorovich inequality.

LEMMA (Kantorovich inequality) Let B be any Symmetric Positive Definite real matrix and λ_max, λ_min its largest and smallest eigenvalues. Then, for any x ≠ 0,

    (Bx, x)(B⁻¹x, x) / (x, x)²  ≤  (λ_min + λ_max)² / (4 λ_min λ_max).
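The inequality is straightforward to check numerically on a concrete matrix (a sketch; the random SPD construction is our own):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
B = M @ M.T + 5 * np.eye(5)          # symmetric positive definite
x = rng.standard_normal(5)

lam = np.linalg.eigvalsh(B)          # eigenvalues in ascending order
lam_min, lam_max = lam[0], lam[-1]

lhs = (x @ B @ x) * (x @ np.linalg.inv(B) @ x) / (x @ x) ** 2
rhs = (lam_min + lam_max) ** 2 / (4 * lam_min * lam_max)
print(lhs <= rhs + 1e-12)            # True: Kantorovich bound holds
```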

Proof. Clearly, it is equivalent to show that the result is true for any unit vector x. Since