all of them. Similarly, ŵ_{j+1} must be orthogonalized only against the previous v_i's.

(b) Write down the algorithm completely. Show the orthogonality relations satisfied by the vectors v_i and w_i. Show also relations similar to (7.3) and (7.4).

(c) We now assume that the two sets of vectors v_i and w_i have different block sizes. Call q the block-size for the w's. Line 2 of the above formal algorithm is changed into:

2a. Orthogonalize Av_j versus w_1, . . . , w_j. Call v̂ the resulting vector.

and the rest remains unchanged. The initial vectors are again biorthogonal:

    (v_i, w_j) = δ_ij   for i ≤ p and j ≤ q.

Show that now v̂ needs only to be orthogonalized against the q + p previous w_i's instead of all of them. Show a similar result for the w's.
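As an illustration of the orthogonalization step discussed above (not part of the exercise statement), the following sketch removes the components of a candidate vector along a limited window of previous w-vectors, relying on the biorthogonality (v_i, w_j) = δ_ij of the stored pairs. The function name and the modified-Gram-Schmidt-style loop are illustrative assumptions, not the book's code:

```python
import numpy as np

def biorthogonalize(z, V, W, k):
    """Orthogonalize candidate z against the last k w-vectors.

    V, W are lists of 1-D arrays built so far, assumed to satisfy
    (v_i, w_j) = delta_ij.  After the loop, (w_i, z) = 0 for each of
    the k most recent pairs.
    """
    z = z.copy()
    for v, w in zip(V[-k:], W[-k:]):
        # coefficient (w, z); subtracting it along v zeroes (w, z)
        z -= (w @ z) * v
    return z
```

Because only the k most recent pairs are touched, this is exactly the kind of short-recurrence update whose validity the exercise asks you to establish.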

(d) Show how a block version of BCG and QMR can be developed based on the algorithm

resulting from question (c).

NOTES AND REFERENCES. At the time of this writing there is still much activity devoted to the class

of methods covered in this chapter. Two of the starting points in this direction are the papers by Son-

neveld [201] and Freund and Nachtigal [97]. The more recent BICGSTAB [210] has been developed

to cure some of the numerical problems that plague CGS. There have been a few recent additions

and variations to the basic BCG, BICGSTAB, and TFQMR techniques; see [42, 47, 113, 114, 192],

among many others. A number of variations have been developed to cope with the breakdown of

the underlying Lanczos or BCG algorithm; see, for example, [41, 20, 96, 192, 231]. Finally, block

methods have also been developed [5].

Many of the Lanczos-type algorithms developed for solving linear systems are rooted in the

theory of orthogonal polynomials and Padé approximation. Lanczos himself certainly used this viewpoint when he wrote his breakthrough papers [140, 142] in the early 1950s. The monograph by

Brezinski [38] gives an excellent coverage of the intimate relations between approximation theory

and the Lanczos-type algorithms. Freund [94] establishes these relations for quasi-minimal resid-

ual methods. A few optimality properties for the class of methods presented in this chapter can be

proved using a variable metric, i.e., an inner product which is different at each step [21]. A recent

survey by Weiss [224] presents a framework for Krylov subspace methods explaining some of these

optimality properties and the interrelationships between Krylov subspace methods. Several authors

discuss a class of techniques known as residual smoothing; see for example [191, 234, 224, 40].

These techniques can be applied to any iterative sequence x_k to build a new sequence of iterates y_k by combining y_{k−1} with the difference x_k − y_{k−1}. A remarkable result shown by Zhou and Walker

[234] is that the iterates of the QMR algorithm can be obtained from those of the BCG as a particular

case of residual smoothing.
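As a concrete illustration of residual smoothing, the sketch below implements one common variant, minimal residual smoothing, in which the weight η_k is chosen to minimize the norm of the smoothed residual s_k = s_{k−1} + η_k (r_k − s_{k−1}). The function name and array layout are illustrative assumptions, not taken from any of the cited papers:

```python
import numpy as np

def smooth_sequence(X, R):
    """Minimal residual smoothing.

    X: iterates x_k as rows; R: their residuals r_k = b - A x_k as rows.
    Returns smoothed iterates y_k whose residual norms are non-increasing.
    """
    y, s = X[0].copy(), R[0].copy()
    Y = [y.copy()]
    for x, r in zip(X[1:], R[1:]):
        d = r - s
        # eta minimizing ||s + eta * d||_2 (take 0 if d vanishes)
        eta = 0.0 if d @ d == 0 else -(s @ d) / (d @ d)
        y = y + eta * (x - y)
        s = s + eta * d  # residual of the new y, by linearity
        Y.append(y.copy())
    return np.array(Y)
```

Since η_k minimizes ‖s_{k−1} + η d‖₂ and η = 0 is always admissible, the smoothed residual norms can never increase, which is the defining property of these schemes.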

A number of projection-type methods on Krylov subspaces, other than those seen in this chapter

and the previous one, are described in [1]. The group of rank-one update methods discussed by Eirola

and Nevanlinna [79] and Deuflhard et al. [70] is closely related to Krylov subspace methods. In

fact, GMRES can be viewed as a particular example of these methods. Also of interest and not

covered in this book are the vector extrapolation techniques which are discussed, for example, in the

books Brezinski [38], Brezinski and Redivo Zaglia [39] and the articles [199] and [126]. Connections

between these methods and Krylov subspace methods have been uncovered, and are discussed by

Brezinski [38] and Sidi [195].

METHODS RELATED TO THE NORMAL EQUATIONS

There are a number of techniques for converting a non-symmetric linear system into a symmetric one. One such technique solves the equivalent linear system A^T A x = A^T b, called the normal equations. Often, this approach is avoided in practice because the coefficient matrix A^T A is much worse conditioned than A. However, the normal equations approach may be adequate in some situations. Indeed, there are even applications in which it is preferred to the usual Krylov subspace techniques. This chapter covers iterative methods which are either directly or implicitly related to the normal equations.

8.1 THE NORMAL EQUATIONS

In order to solve the linear system Ax = b when A is nonsymmetric, we can solve the equivalent system

    A^T A x = A^T b                                            (8.1)

which is Symmetric Positive Definite. This system is known as the system of the normal equations associated with the least-squares problem,

    minimize ‖b − Ax‖_2.                                       (8.2)

Note that (8.1) is typically used to solve the least-squares problem (8.2) for over-determined systems, i.e., when A is a rectangular matrix of size n × m, n ≥ m. A similar well-known alternative sets x = A^T u and solves the following equation for u:

    A A^T u = b.                                               (8.3)
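The equivalence between the normal equations (8.1) and the least-squares problem (8.2) can be checked numerically. The sketch below, with illustrative dimensions and variable names, solves the SPD system A^T A x = A^T b directly and compares it with a standard least-squares solver; recall that in finite precision this route can be inaccurate, since the condition number of A^T A is the square of that of A:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3                       # over-determined: n >= m
A = rng.standard_normal((n, m))   # full column rank with probability 1
b = rng.standard_normal(n)

# (8.1): the normal equations, an m x m SPD system
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# direct solution of the least-squares problem (8.2)
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
```

Both vectors minimize ‖b − Ax‖₂ and so agree up to rounding when A is well conditioned.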