(2) Write down an alternative to formulas (6.25-6.27) derived from this approach. (3) Compare

the cost of this approach with the cost of using (6.25-6.27).

15 Obtain the formula (6.76) from (6.75).

16 Show that the determinant of the matrix $T_m$ in (6.82) is given by
$$\det(T_m) \;=\; \Big(\prod_{j=0}^{m-1} \alpha_j\Big)^{-1}.$$
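A standard identity that may help here (independent of the particular entries of the matrix in (6.82)): the leading principal minors $d_j$ of a tridiagonal matrix obey a three-term recurrence, so the determinant can be evaluated by induction on $m$:

```latex
% d_j = leading principal minor of order j, a_j = j-th diagonal entry,
% b_j, c_j = off-diagonal entries coupling rows j-1 and j.
d_j \;=\; a_j\, d_{j-1} \;-\; b_j c_j\, d_{j-2},
\qquad d_0 = 1, \quad d_{-1} = 0 .
```

Applying this recurrence to the tridiagonal matrix built from the CG coefficients, the terms involving the $\beta_j$'s cancel at each step of the induction.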

17 The Lanczos algorithm is more closely related to the implementation of Algorithm 6.18 of the Conjugate Gradient algorithm. As a result, the Lanczos coefficients $\delta_{j+1}$ and $\eta_{j+1}$ are easier to extract from this algorithm than from Algorithm 6.17. Obtain formulas for these coefficients from the coefficients generated by Algorithm 6.18, as was done in Section 6.7.3 for the standard CG algorithm.
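For the standard CG algorithm (the Section 6.7.3 case that this exercise asks to mirror for Algorithm 6.18), the relation between the CG coefficients and the Lanczos tridiagonal can be illustrated numerically. The sketch below is a plain NumPy illustration, not the book's code; it uses one common indexing convention, diagonal entries $1/\alpha_j + \beta_{j-1}/\alpha_{j-1}$ and off-diagonal entries $\sqrt{\beta_j}/\alpha_j$, which may differ from the book's by an index shift:

```python
import numpy as np

def cg_with_coeffs(A, b, m):
    """Standard CG; records the alpha_j, beta_j and the residuals r_j."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    alphas, betas, resids = [], [], [r.copy()]
    for _ in range(m):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        alphas.append(alpha)
        betas.append(beta)
        resids.append(r_new.copy())
        r = r_new
    return alphas, betas, resids

rng = np.random.default_rng(1)
n, m = 60, 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # SPD test matrix
b = rng.standard_normal(n)

alphas, betas, resids = cg_with_coeffs(A, b, m)

# Tridiagonal assembled from the CG coefficients:
# diagonal 1/alpha_j + beta_{j-1}/alpha_{j-1}, off-diagonal sqrt(beta_j)/alpha_j
T = np.zeros((m, m))
for j in range(m):
    T[j, j] = 1.0 / alphas[j] + (betas[j - 1] / alphas[j - 1] if j > 0 else 0.0)
    if j < m - 1:
        T[j, j + 1] = T[j + 1, j] = np.sqrt(betas[j]) / alphas[j]

# Reference: the projection V^T A V with Lanczos vectors v_{j+1} = (-1)^j r_j/||r_j||
V = np.column_stack([(-1) ** j * resids[j] / np.linalg.norm(resids[j])
                     for j in range(m)])
T_ref = V.T @ A @ V
```

The sign factor $(-1)^j$ makes the off-diagonal entries of the projected matrix positive, matching the assembled tridiagonal.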

18 Show that if the rotations generated in the course of the GMRES (and DQGMRES) algorithm are such that
$$|c_m| \;\ge\; c \;>\; 0 \quad \text{for all } m,$$
then GMRES, DQGMRES, and FOM will all converge.
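A sketch of why this condition suffices, assuming the standard relations from the GMRES/DQGMRES Givens elimination, namely $\gamma_{m+1} = -s_m \gamma_m$, $c_m^2 + s_m^2 = 1$, and the residual identities $\|r_m^{\mathrm{GMRES}}\| = |\gamma_{m+1}|$ and $\|r_m^{\mathrm{FOM}}\| = |\gamma_{m+1}|/|c_m|$:

```latex
|c_m| \ge c > 0
\;\Longrightarrow\;
|s_m| = \sqrt{1 - c_m^2} \le \sqrt{1 - c^2} < 1,
\qquad
|\gamma_{m+1}| = |s_m|\,|\gamma_m| \le \big(1 - c^2\big)^{m/2} |\gamma_1|
\;\longrightarrow\; 0 .
```

The GMRES residual norm therefore tends to zero geometrically, the FOM residual norm is bounded by $|\gamma_{m+1}|/c$, and the DQGMRES residual norm is bounded by $|\gamma_{m+1}|$ times a factor that grows at most polynomially in $m$, so all three converge.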

19 Show the exact expression of the residual vector in the basis $v_1, v_2, \ldots, v_{m+1}$ for either GMRES or DQGMRES. [Hint: A starting point is (6.52).]
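The expression in question can be checked numerically. The sketch below (plain NumPy, not the book's code) assumes (6.52) is the standard relation $r_m = V_{m+1}(\beta e_1 - \bar{H}_m y_m)$: it builds an Arnoldi basis, solves the GMRES least-squares problem, and compares the residual expressed in the basis $v_1, \ldots, v_{m+1}$ with $b - Ax_m$:

```python
import numpy as np

def arnoldi(A, r0, m):
    """Arnoldi process: returns V (n x (m+1)) with orthonormal columns and
    the (m+1) x m Hessenberg matrix Hbar such that A V_m = V_{m+1} Hbar."""
    n = A.shape[0]
    V = np.zeros((n, m + 1))
    Hbar = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt step
            Hbar[i, j] = V[:, i] @ w
            w -= Hbar[i, j] * V[:, i]
        Hbar[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / Hbar[j + 1, j]
    return V, Hbar

rng = np.random.default_rng(0)
n, m = 40, 8
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)
x0 = np.zeros(n)
r0 = b - A @ x0
beta = np.linalg.norm(r0)

V, Hbar = arnoldi(A, r0, m)

# GMRES: minimize || beta*e1 - Hbar*y || over y
e1 = np.zeros(m + 1)
e1[0] = beta
y, *_ = np.linalg.lstsq(Hbar, e1, rcond=None)
x_m = x0 + V[:, :m] @ y

# Residual expressed in the basis v_1, ..., v_{m+1}
r_basis = V @ (e1 - Hbar @ y)
```

Since the columns of $V_{m+1}$ are orthonormal, the residual norm also equals $\|\beta e_1 - \bar{H}_m y_m\|$, which is what GMRES minimizes.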

20 Prove that the inequality (6.56) is sharper than (6.53), in the sense that the bound on the residual norm provided by (6.56) is smaller (for $m \ge k$). [Hint: Use the Cauchy-Schwarz inequality on (6.56).]

21 Denote by $S_m$ the unit upper triangular matrix in the proof of Theorem 6.1 which is obtained from the Gram-Schmidt process (exact arithmetic assumed) applied to the incomplete orthogonalization basis $V_m$. Show that the Hessenberg matrix $\bar{H}_m^Q$ obtained in the incomplete orthogonalization process is related to the Hessenberg matrix $\bar{H}_m^G$ obtained from the (complete) Arnoldi process by
$$\bar{H}_m^G \;=\; S_{m+1}\, \bar{H}_m^Q\, S_m^{-1}.$$

NOTES AND REFERENCES. Lemma 6.1 was proved by Roland Freund [95] in a slightly different

form. Proposition 6.12 is due to Brown [43] who proved a number of other theoretical results, includ-

ing Proposition 6.11. Recently, Cullum and Greenbaum [63] discussed further relationships between

FOM and GMRES and other Krylov subspace methods.

The Conjugate Gradient method was developed independently and in different forms by Lanc-

zos [142] and Hestenes and Stiefel [120]. The method was essentially viewed as a direct solution technique and was abandoned early on because it did not compare well with other existing techniques. For example, in inexact arithmetic, the method does not terminate in $n$ steps as is predicted by the theory. This is caused by the severe loss of orthogonality of vector quantities

generated by the algorithm. As a result, research on Krylov-type methods remained dormant for

over two decades thereafter. This changed in the early 1970s when several researchers discovered

that this loss of orthogonality did not prevent convergence. The observations were made and ex-

plained for eigenvalue problems [158, 106] as well as linear systems [167]. The early to the middle

1980s saw the development of a new class of methods for solving nonsymmetric linear systems

[13, 14, 127, 172, 173, 185, 218]. The works of Faber and Manteuffel [85] and Voevodin [219]

showed that one could not find optimal methods which, like CG, are based on short-term recurrences. Many of the methods developed are mathematically equivalent, in the sense that they realize

the same projection process, with different implementations.

The Householder version of GMRES is due to Walker [221]. The Quasi-GMRES algorithm

described in Section 6.5.7 was initially described by Brown and Hindmarsh [44], although the direct

version DQGMRES was only discussed recently in [187]. The proof of Theorem 6.1 can be found in

[152] for the QMR algorithm.

The non-optimality of the Chebyshev polynomials on ellipses in the complex plane was estab-

lished by Fischer and Freund [90]. Prior to this, a 1963 paper by Clayton [59] was believed to have

established the optimality for the special case where the ellipse has real foci and $\gamma$ is real.

Until recently, little attention has been given to block Krylov methods. In addition to their at-

traction for solving linear systems with several right-hand sides [177, 196], these techniques can also

help reduce the effect of the sequential inner products in parallel environments and minimize I/O

costs in out-of-core implementations. The block-GMRES algorithm is analyzed by Simoncini and

Gallopoulos [197] and in [184]. Alternatives to GMRES which require fewer inner products have

been proposed by Sadok [188] and Jbilou [125]. Sadok investigated a GMRES-like method based

on the Hessenberg algorithm [227], while Jbilou proposed a multi-dimensional generalization of

Gastinel's method seen in Exercise 2 of Chapter 5.

KRYLOV SUBSPACE METHODS PART II

The previous chapter considered a number of Krylov subspace methods which relied on some form of orthogonalization of the Krylov vectors in order to compute an approximate solution. This chapter will describe a class of Krylov subspace methods which are instead based on a biorthogonalization algorithm due to Lanczos. These are projection methods that are intrinsically non-orthogonal. They have some appealing properties, but are harder to analyze theoretically.

LANCZOS BIORTHOGONALIZATION

The Lanczos biorthogonalization algorithm is an extension to nonsymmetric matrices of the symmetric Lanczos algorithm seen in the previous chapter. One such extension, the Arnoldi procedure, has already been seen. However, the nonsymmetric Lanczos algorithm is quite different in concept from Arnoldi's method because it relies on biorthogonal sequences instead of orthogonal sequences.

The Algorithm

The algorithm proposed by Lanczos for nonsymmetric matrices builds a pair of biorthogonal bases for the two subspaces
$$\mathcal{K}_m(A, v_1) \;=\; \mathrm{span}\{v_1, Av_1, \ldots, A^{m-1} v_1\}$$
and
$$\mathcal{K}_m(A^T, w_1) \;=\; \mathrm{span}\{w_1, A^T w_1, \ldots, (A^T)^{m-1} w_1\}.$$
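As a preview of the procedure just described, the following is a minimal NumPy sketch of a two-sided (biorthogonal) Lanczos process. It is an illustration, not the book's algorithm verbatim: it assumes the scaling convention $w_j^T v_j = 1$, makes one particular choice of the two scaling factors, and does not handle breakdown.

```python
import numpy as np

def lanczos_biortho(A, v1, w1, m):
    """Two-sided Lanczos sketch: builds V and W whose columns span
    K_m(A, v1) and K_m(A^T, w1), with W^T V = I in exact arithmetic."""
    n = A.shape[0]
    V = np.zeros((n, m))
    W = np.zeros((n, m))
    v = v1 / np.linalg.norm(v1)
    w = w1 / (w1 @ v)                  # enforce w_1^T v_1 = 1
    beta = delta = 0.0
    v_prev = np.zeros(n)
    w_prev = np.zeros(n)
    for j in range(m):
        V[:, j] = v
        W[:, j] = w
        alpha = w @ (A @ v)
        v_hat = A @ v - alpha * v - beta * v_prev
        w_hat = A.T @ w - alpha * w - delta * w_prev
        d = w_hat @ v_hat
        if abs(d) < 1e-14:             # serious breakdown, not handled here
            raise RuntimeError("Lanczos breakdown")
        delta = np.sqrt(abs(d))        # one common scaling choice
        beta = d / delta               # so that delta * beta = d
        v_prev, w_prev = v, w
        v = v_hat / delta
        w = w_hat / beta
    return V, W

rng = np.random.default_rng(2)
n, m = 30, 5
A = rng.standard_normal((n, n))
V, W = lanczos_biortho(A, rng.standard_normal(n), rng.standard_normal(n), m)
```

With this scaling, each new pair satisfies $w_{j+1}^T v_{j+1} = d/(\delta\beta) = 1$, and the three-term recurrences enforce biorthogonality against all earlier vectors in exact arithmetic.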