two other terms cancel each other by the definition of $\alpha_j$ and the fact that $(v_j, w_j) = 1$. Consider now the inner product $(v_{j+1}, w_i)$ with $i < j$,

$$
\begin{aligned}
(v_{j+1}, w_i) &= \delta_{j+1}^{-1}\left[(A v_j, w_i) - \alpha_j (v_j, w_i) - \beta_j (v_{j-1}, w_i)\right] \\
&= \delta_{j+1}^{-1}\left[(v_j, A^T w_i) - \beta_j (v_{j-1}, w_i)\right] \\
&= \delta_{j+1}^{-1}\left[(v_j, \beta_{i+1} w_{i+1} + \alpha_i w_i + \delta_i w_{i-1}) - \beta_j (v_{j-1}, w_i)\right].
\end{aligned}
$$

For $i < j - 1$, all of the inner products in the above expression vanish, by the induction hypothesis. For $i = j - 1$, the inner product is

$$
\begin{aligned}
(v_{j+1}, w_{j-1}) &= \delta_{j+1}^{-1}\left[(v_j, \beta_j w_j + \alpha_{j-1} w_{j-1} + \delta_{j-1} w_{j-2}) - \beta_j (v_{j-1}, w_{j-1})\right] \\
&= \delta_{j+1}^{-1}\left[\beta_j (v_j, w_j) - \beta_j (v_{j-1}, w_{j-1})\right] = 0.
\end{aligned}
$$

It can be proved in exactly the same way that $(v_i, w_{j+1}) = 0$ for $i \le j$. Finally, by construction $(v_{j+1}, w_{j+1}) = 1$. This completes the induction proof. The proof of the matrix relations (7.3)–(7.5) is similar to that of the relations (6.4)–(6.6) in Arnoldi's method.
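The biorthogonality just proved, together with relation (7.5), $W_m^T A V_m = T_m$, can be verified numerically. The sketch below implements the three-term recurrences discussed above in NumPy; the function name, the demo matrix, and the tolerances are illustrative choices rather than part of the text, and no provision is made for near-breakdown beyond an exact-zero test.

```python
import numpy as np

def lanczos_biortho(A, v1, w1, m):
    """Two-sided Lanczos biorthogonalization (sketch).

    Builds V = [v_1..v_m], W = [w_1..w_m] with W^T V = I and
    W^T A V = T_m tridiagonal, via the recurrences
        vhat_{j+1} = A   v_j - alpha_j v_j - beta_j  v_{j-1},
        what_{j+1} = A^T w_j - alpha_j w_j - delta_j w_{j-1}.
    """
    n = A.shape[0]
    s = w1 @ v1
    if s == 0:
        raise ValueError("need (v1, w1) != 0")
    v1 = v1 / s                          # normalize so that (v1, w1) = 1
    V, W = np.zeros((n, m)), np.zeros((n, m))
    alpha, beta, delta = np.zeros(m), np.zeros(m), np.zeros(m)
    V[:, 0], W[:, 0] = v1, w1
    v, w = v1, w1
    v_prev, w_prev = np.zeros(n), np.zeros(n)
    beta_j = delta_j = 0.0               # beta_1 = delta_1 = 0
    for j in range(m):
        Av = A @ v
        alpha[j] = Av @ w                # alpha_j = (A v_j, w_j)
        if j == m - 1:
            break                        # T_m needs no further vectors
        vhat = Av - alpha[j] * v - beta_j * v_prev
        what = A.T @ w - alpha[j] * w - delta_j * w_prev
        inner = vhat @ what
        delta_j = np.sqrt(abs(inner))    # line 7: breakdown if this vanishes
        if delta_j == 0.0:
            raise RuntimeError("breakdown: (vhat, what) = 0")
        beta_j = inner / delta_j
        v_prev, w_prev = v, w
        v, w = vhat / delta_j, what / beta_j
        V[:, j + 1], W[:, j + 1] = v, w
        delta[j + 1], beta[j + 1] = delta_j, beta_j
    T = np.diag(alpha) + np.diag(beta[1:], 1) + np.diag(delta[1:], -1)
    return V, W, T

# demo on a mildly nonsymmetric matrix (illustrative data)
rng = np.random.default_rng(0)
A = np.diag(np.arange(1.0, 9.0)) + 0.1 * rng.standard_normal((8, 8))
V, W, T = lanczos_biortho(A, np.ones(8), np.ones(8), 5)
biortho_err = np.abs(W.T @ V - np.eye(5)).max()    # Proposition: near zero
proj_err = np.abs(W.T @ A @ V - T).max()           # relation (7.5): near zero
```

Relation (7.3) can be checked similarly on the first $m-1$ columns, since $A v_j$ for $j < m$ involves only vectors stored in $V$.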

T

The relations (7.3)–(7.5) allow us to interpret the algorithm. The matrix $T_m$ is the projection of $A$ obtained from an oblique projection process onto $\mathcal{K}_m(A, v_1)$ and orthogonally to $\mathcal{K}_m(A^T, w_1)$. Similarly, $T_m^T$ represents the projection of $A^T$ on $\mathcal{K}_m(A^T, w_1)$ and orthogonally to $\mathcal{K}_m(A, v_1)$. Thus, an interesting new feature here is that the operators $A$ and $A^T$ play a dual role because similar operations are performed with them. In fact, two linear systems are solved implicitly, one with $A$ and the other with $A^T$. If there really are two linear systems to solve, one with $A$ and the other with $A^T$, then this algorithm is suitable. Otherwise, the operations with $A^T$ are essentially wasted. Later, a number of alternative techniques developed in the literature will be introduced that avoid the use of $A^T$.

From a practical point of view, the Lanczos algorithm has a significant advantage over Arnoldi's method because it requires only a few vectors of storage, if no reorthogonalization is performed. Specifically, six vectors of length $n$ are needed, plus some storage for the tridiagonal matrix, no matter how large $m$ is.

On the other hand, there are potentially more opportunities for breakdown with the nonsymmetric Lanczos method. The algorithm will break down whenever $\delta_{j+1}$ as defined in line 7 vanishes. This is examined more carefully in the next section. In practice, the difficulties are more likely to be caused by the near occurrence of this phenomenon. A look at the algorithm indicates that the Lanczos vectors may have to be scaled by small


quantities when this happens. After a few steps the accumulated effect of these scalings may introduce excessive rounding errors.
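A one-step sketch makes the scaling hazard concrete: the scalars computed at lines 7 and 8 divide the next pair of Lanczos vectors, so a nearly vanishing $(\hat v_{j+1}, \hat w_{j+1})$ magnifies them enormously. The guard below, including the relative tolerance, is an illustrative assumption rather than part of the algorithm as stated.

```python
import numpy as np

def next_scalings(vhat, what, tol=1e-12):
    """Compute delta_{j+1} and beta_{j+1} (lines 7-8), guarding against
    near-breakdown; `tol` is an illustrative relative threshold."""
    inner = np.dot(vhat, what)
    if abs(inner) <= tol * np.linalg.norm(vhat) * np.linalg.norm(what):
        raise FloatingPointError("(near-)breakdown: (vhat, what) ~ 0")
    delta = np.sqrt(abs(inner))      # v_{j+1} = vhat / delta
    beta = inner / delta             # w_{j+1} = what / beta
    return delta, beta

# a nearly orthogonal pair: (vhat, what) = 1e-13, so delta is about 3.2e-7
# and v_{j+1} = vhat / delta is magnified by roughly 3e6
delta, beta = next_scalings(np.array([1.0, 0.0]),
                            np.array([1e-13, 1.0]), tol=1e-15)
```

With the default threshold the same pair would be rejected as a near-breakdown, which is exactly the trade-off between stopping early and accumulating rounding errors.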

Since the subspace from which the approximations are taken is identical to that of Arnoldi's method, the same bounds for the distance $\|(I - P_m)u\|_2$ are valid. However, this does not mean in any way that the approximations obtained by the two methods are likely to be similar in quality. The theoretical bounds shown in Chapter 5 indicate that the norm of the projector may play a significant role.

7.1.2 Practical Implementations