and, using the same notation as in Proposition 6.9, referring to (6.37),
\[
\| \beta e_1 - \bar{T}_m y \|_2^2 = |\gamma_{m+1}|^2 + \| g_m - R_m y \|_2^2 ,
\]
in which $g_m - R_m y = 0$ by the minimization procedure. In addition, by (6.40) we have
\[
|\gamma_{m+1}| = |s_1 s_2 \cdots s_m| \, \beta .
\]

The result follows immediately using (7.19).
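The key identity here, that the minimal least-squares residual of the rotated Hessenberg problem has magnitude $|s_1 s_2 \cdots s_m|\,\beta$, can be checked numerically. The following is an illustrative sketch (not from the text; all names are ours): it reduces a random $(m+1)\times m$ tridiagonal matrix to triangular form with Givens rotations and compares the last entry of the rotated right-hand side $\beta e_1$ with the product of the sines.

```python
import numpy as np

# Sanity check: after applying Givens rotations Omega_1, ..., Omega_m to
# the right-hand side beta*e1, its last component has magnitude
# |s_1 s_2 ... s_m| * beta.
rng = np.random.default_rng(0)
m = 5
T = np.zeros((m + 1, m))
for j in range(m):
    for i in range(max(0, j - 1), j + 2):   # tridiagonal band
        T[i, j] = rng.standard_normal()

beta = 2.0
g = np.zeros(m + 1)
g[0] = beta
R = T.copy()
sines = []
for i in range(m):
    a, b = R[i, i], R[i + 1, i]
    r = np.hypot(a, b)
    c, s = a / r, b / r                     # rotation zeroing R[i+1, i]
    sines.append(s)
    G = np.eye(m + 1)
    G[i, i] = G[i + 1, i + 1] = c
    G[i, i + 1] = s
    G[i + 1, i] = -s
    R = G @ R
    g = G @ g

print(abs(g[-1]), abs(np.prod(sines)) * beta)   # the two values agree
```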

The following simple upper bound for $\|V_{m+1}\|_2$ can be used to estimate the residual norm:
\[
\| V_{m+1} \|_2 \le \left[ \sum_{i=1}^{m+1} \| v_i \|_2^2 \right]^{1/2} = \sqrt{m+1} .
\]
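This bound follows from $\|V_{m+1}\|_2 \le \|V_{m+1}\|_F$ together with the fact that the Lanczos vectors are normalized. A minimal NumPy check (illustrative names, not from the text):

```python
import numpy as np

# A matrix whose m+1 columns each have unit 2-norm satisfies
# ||V||_2 <= ||V||_F = sqrt(m+1).
rng = np.random.default_rng(1)
n, m = 40, 7
V = rng.standard_normal((n, m + 1))
V /= np.linalg.norm(V, axis=0)    # normalize columns, like Lanczos vectors
print(np.linalg.norm(V, 2), np.sqrt(m + 1))  # first never exceeds second
```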

Observe that ideas similar to those used for DQGMRES can be exploited to obtain

a better estimate of the residual norm. Also, note that the relation (6.57) for DQGMRES

holds. More interestingly, Theorem 6.1 is also valid and it is now restated for QMR.

THEOREM 7.1 Assume that the Lanczos algorithm does not break down on or before step $m$ and let $V_{m+1}$ be the Lanczos basis obtained at step $m$. Let $r_m^Q$ and $r_m^G$ be the residual norms obtained after $m$ steps of the QMR and GMRES algorithms, respectively. Then,
\[
\| r_m^Q \|_2 \le \kappa_2(V_{m+1}) \, \| r_m^G \|_2 .
\]

The proof of this theorem is essentially identical with that of Theorem 6.1. Note that $V_{m+1}$ is now known to be of full rank, so we need not make this assumption as in Theorem 6.1.
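The comparison factor $\kappa_2(V_{m+1})$ is directly computable. The sketch below (our illustration, not from the text) contrasts a non-orthogonal unit-column basis, as produced by Lanczos, with an orthonormal one, as in GMRES, where the factor is exactly 1:

```python
import numpy as np

# kappa_2(V) = sigma_max(V) / sigma_min(V); it equals 1 when the columns
# are orthonormal and grows as the basis loses linear independence.
rng = np.random.default_rng(2)
n, m = 30, 6
V = rng.standard_normal((n, m + 1))
V /= np.linalg.norm(V, axis=0)     # unit-norm, non-orthogonal columns
Q, _ = np.linalg.qr(V)             # orthonormal columns for comparison
print(np.linalg.cond(V))           # >= 1
print(np.linalg.cond(Q))           # ~ 1
```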

7.4   TRANSPOSE-FREE VARIANTS

Each step of the Biconjugate Gradient algorithm and QMR requires a matrix-by-vector product with both $A$ and $A^T$. However, observe that the vectors $p_i^*$ or $w_j$ generated with $A^T$ do not contribute directly to the solution. Instead, they are used only to obtain the scalars needed in the algorithm, e.g., the scalars $\alpha_j$ and $\beta_j$ for BCG. The question arises as to whether or not it is possible to bypass the use of the transpose of $A$ and still generate iterates that are related to those of the BCG algorithm. One of the motivations for this question is that, in some applications, $A$ is available only through some approximations and not explicitly. In such situations, the transpose of $A$ is usually not available. A simple example is when a CG-like algorithm is used in the context of Newton's iteration for solving $F(u) = 0$. The linear system that arises at each Newton step can be solved without having to compute the Jacobian $J(u_k)$ at the current iterate explicitly, by using the difference formula
\[
J(u_k) \, v \approx \frac{F(u_k + \epsilon v) - F(u_k)}{\epsilon} .
\]
This allows the action of this Jacobian to be computed on an arbitrary vector $v$. Unfortunately, there is no similar formula for performing operations with the transpose of $J(u_k)$.
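The difference formula is straightforward to implement. Below is a minimal sketch; the function name and the example $F$ are ours, chosen only for illustration:

```python
import numpy as np

def jacobian_action(F, u, v, eps=1e-7):
    """Approximate J(u) @ v by a forward difference of F,
    without ever forming the Jacobian explicitly."""
    return (F(u + eps * v) - F(u)) / eps

# Toy example with a known Jacobian for comparison:
# F(u) = (u1^2 - u2, u1 + u2^3).
def F(u):
    return np.array([u[0]**2 - u[1], u[0] + u[1]**3])

u = np.array([1.0, 2.0])
v = np.array([0.5, -1.0])
J = np.array([[2 * u[0], -1.0],
              [1.0, 3 * u[1]**2]])   # exact Jacobian at u
approx = jacobian_action(F, u, v)
print(approx, J @ v)                 # nearly equal
```

In a Krylov solver this function would serve as the matrix-by-vector product, which is exactly why a transpose-free method is needed: no analogous one-sided difference gives the action of $J(u_k)^T$.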