The residual vector after $m$ steps satisfies

$$ b - Ax_m \;=\; \gamma_{m+1}\, z_{m+1}, \qquad z_{m+1} \;\equiv\; V_{m+1} Q_m^T e_{m+1}. \tag{6.54} $$

The vector $z_{m+1}$ is the last column of the matrix $V_{m+1} Q_m^T$. It is an easy exercise to see that this last column can be updated from $v_{m+1}$ and $z_m$. Indeed,

$$ V_{m+1} Q_m^T \;=\; [\, V_m ,\; v_{m+1} \,]\; \Omega_1^T \cdots \Omega_{m-1}^T \Omega_m^T \;=\; [\, V_m Q_{m-1}^T ,\; v_{m+1} \,]\; \Omega_m^T , $$

where all the matrices related to the rotation are of size $(m+1) \times (m+1)$. The result is that

$$ z_{m+1} \;=\; -\, s_m\, z_m \;+\; c_m\, v_{m+1} . \tag{6.55} $$
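The recurrence for $z_{m+1}$ can be checked numerically against the explicit product $V_{m+1} Q_m^T e_{m+1}$. A minimal NumPy sketch, using random orthonormal columns as stand-ins for the $v_j$'s and arbitrary rotation angles (0-based arrays, so `s[k]`, `c[k]` play the roles of $s_{k+1}$, $c_{k+1}$ in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 5

# Random orthonormal columns standing in for the vectors v_1, ..., v_{m+1}.
V, _ = np.linalg.qr(rng.standard_normal((n, m + 1)))

# Arbitrary rotation angles; c[k], s[k] are the cosine/sine of rotation k.
theta = rng.uniform(0.0, 2.0 * np.pi, m)
c, s = np.cos(theta), np.sin(theta)

def omega(k, size):
    """Givens rotation of order `size` acting on coordinates k and k+1 (0-based)."""
    G = np.eye(size)
    G[k, k], G[k, k + 1] = c[k], s[k]
    G[k + 1, k], G[k + 1, k + 1] = -s[k], c[k]
    return G

# Recurrence: z_{k+1} = -s_k z_k + c_k v_{k+1}, starting from z_1 = v_1.
z = V[:, 0].copy()
for k in range(m):
    z = -s[k] * z + c[k] * V[:, k + 1]

# Direct computation: z_{m+1} = V_{m+1} Q_m^T e_{m+1}, with Q_m = Omega_m ... Omega_1.
Q = np.eye(m + 1)
for k in range(m):
    Q = omega(k, m + 1) @ Q
z_direct = V @ (Q.T @ np.eye(m + 1)[:, m])

print(np.allclose(z, z_direct))  # True
```

The direct computation costs a full matrix product; the recurrence touches only two vectors per step, which is the point of the update.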

The $z_i$'s can be updated at the cost of one extra vector in memory and $O(n)$ operations at each step. The norm of $z_{m+1}$ can be computed at the cost of $O(n)$ operations, and the exact residual norm for the current approximate solution can then be obtained by multiplying this norm by $|\gamma_{m+1}|$.
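To illustrate how this bookkeeping fits into an Arnoldi/Givens iteration, the sketch below runs $m$ steps with truncated orthogonalization (against only the last two vectors, so that $V_{m+1}$ is not orthogonal and $\|z_{m+1}\|$ differs from 1 in general) and checks that the exact residual norm equals $|\gamma_{m+1}|\,\|z_{m+1}\|$. The variable names and the truncation depth are illustrative assumptions, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 6
A = rng.standard_normal((n, n)) + 8.0 * np.eye(n)  # shifted so A is safely nonsingular
b = rng.standard_normal(n)

beta = np.linalg.norm(b)
V = np.zeros((n, m + 1)); V[:, 0] = b / beta
H = np.zeros((m + 1, m))
g = np.zeros(m + 1); g[0] = beta            # rotated right-hand side; g[j] holds gamma_{j+1}
cs, sn = np.zeros(m), np.zeros(m)
z = V[:, 0].copy()                          # z_1 = v_1

for j in range(m):
    w = A @ V[:, j]
    # Truncated orthogonalization: only against the last two vectors,
    # so the columns of V are NOT mutually orthogonal.
    for i in range(max(0, j - 1), j + 1):
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]
    for i in range(j):                      # apply previous rotations to column j
        H[i, j], H[i + 1, j] = (cs[i] * H[i, j] + sn[i] * H[i + 1, j],
                                -sn[i] * H[i, j] + cs[i] * H[i + 1, j])
    rad = np.hypot(H[j, j], H[j + 1, j])    # new rotation annihilates H[j+1, j]
    cs[j], sn[j] = H[j, j] / rad, H[j + 1, j] / rad
    H[j, j], H[j + 1, j] = rad, 0.0
    g[j + 1] = -sn[j] * g[j]
    g[j] = cs[j] * g[j]
    z = -sn[j] * z + cs[j] * V[:, j + 1]    # update of z at O(n) cost per step

y = np.linalg.solve(H[:m, :m], g[:m])       # triangular solve for y_m
x = V[:, :m] @ y
exact = np.linalg.norm(b - A @ x)
print(np.isclose(exact, abs(g[m]) * np.linalg.norm(z)))  # True
```

The identity only requires the Arnoldi-like relation $AV_m = V_{m+1}\bar{H}_m$, which truncated orthogonalization still provides; orthogonality of $V_{m+1}$ is not needed.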

Because this is a little expensive, it may be preferred to just "correct" the estimate provided by $|\gamma_{m+1}|$ by exploiting the above recurrence relation,

$$ \| z_{m+1} \| \;\le\; |s_m| \, \| z_m \| \;+\; |c_m| . $$

If $\zeta_m \equiv \| z_m \|$, then the following recurrence relation holds,

$$ \zeta_{m+1} \;\le\; |s_m| \, \zeta_m \;+\; |c_m| . \tag{6.56} $$

The above relation is inexpensive to update, yet provides an upper bound that is sharper

than (6.53); see Exercise 20.
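The scalar bound is cheap to maintain: starting from $\zeta_1 = \|z_1\| = 1$ and updating $\zeta$ with equality, $\zeta_{k+1} = |s_k|\,\zeta_k + |c_k|$, the triangle inequality guarantees $\zeta_k \ge \|z_k\|$ at every step, for any unit vectors $v_k$, orthogonal or not. A small sketch of this check:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 8

theta = rng.uniform(0.0, 2.0 * np.pi, m)
c, s = np.cos(theta), np.sin(theta)

# Unit vectors standing in for v_1, ..., v_{m+1}; deliberately NOT orthogonal.
W = rng.standard_normal((n, m + 1))
W /= np.linalg.norm(W, axis=0)

z = W[:, 0].copy()      # z_1 = v_1, so ||z_1|| = 1
zeta = 1.0              # zeta_1 = ||z_1||
bound_holds = True
for k in range(m):
    z = -s[k] * z + c[k] * W[:, k + 1]      # vector recurrence
    zeta = abs(s[k]) * zeta + abs(c[k])     # scalar recurrence, O(1) per step
    bound_holds &= np.linalg.norm(z) <= zeta + 1e-12

print(bound_holds)  # True
```

Maintaining $\zeta$ costs $O(1)$ per step, versus $O(n)$ for recomputing $\|z_{m+1}\|$ exactly.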

An interesting consequence of (6.55) is a relation between two successive residual

vectors:

$$ r_m \;=\; \gamma_{m+1}\, z_{m+1} \;=\; \gamma_{m+1} \left( -\, s_m\, z_m + c_m\, v_{m+1} \right) \;=\; s_m^2\, r_{m-1} \;+\; c_m\, \gamma_{m+1}\, v_{m+1} . \tag{6.57} $$

This exploits the fact that $r_{m-1} = \gamma_m z_m$ and $\gamma_{m+1} = - s_m \gamma_m$.
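The relation between two successive residuals can be verified in a small GMRES-style run; the sketch below uses full orthogonalization and illustrative names (0-based arrays, so `sn[m-1]`, `cs[m-1]`, `g[m]`, `V[:, m]` stand for $s_m$, $c_m$, $\gamma_{m+1}$, $v_{m+1}$):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 25, 5
A = rng.standard_normal((n, n)) + 8.0 * np.eye(n)  # shifted so A is safely nonsingular
b = rng.standard_normal(n)

beta = np.linalg.norm(b)
V = np.zeros((n, m + 1)); V[:, 0] = b / beta
H = np.zeros((m + 1, m))
g = np.zeros(m + 1); g[0] = beta            # rotated rhs; g[j] holds gamma_{j+1}
cs, sn = np.zeros(m), np.zeros(m)
residuals = []

for j in range(m):
    w = A @ V[:, j]
    for i in range(j + 1):                  # full modified Gram-Schmidt
        H[i, j] = V[:, i] @ w
        w -= H[i, j] * V[:, i]
    H[j + 1, j] = np.linalg.norm(w)
    V[:, j + 1] = w / H[j + 1, j]
    for i in range(j):                      # apply previous rotations to column j
        H[i, j], H[i + 1, j] = (cs[i] * H[i, j] + sn[i] * H[i + 1, j],
                                -sn[i] * H[i, j] + cs[i] * H[i + 1, j])
    rad = np.hypot(H[j, j], H[j + 1, j])    # new rotation annihilates H[j+1, j]
    cs[j], sn[j] = H[j, j] / rad, H[j + 1, j] / rad
    H[j, j], H[j + 1, j] = rad, 0.0
    g[j + 1] = -sn[j] * g[j]
    g[j] = cs[j] * g[j]
    y = np.linalg.solve(H[:j + 1, :j + 1], g[:j + 1])
    residuals.append(b - A @ (V[:, :j + 1] @ y))  # true residual at step j+1

# r_m = s_m^2 r_{m-1} + c_m gamma_{m+1} v_{m+1}
r_prev, r_m = residuals[-2], residuals[-1]
rel = sn[m - 1] ** 2 * r_prev + cs[m - 1] * g[m] * V[:, m]
print(np.allclose(r_m, rel))  # True
```

The same identity holds with truncated orthogonalization, since its derivation uses only the Arnoldi relation and the rotation algebra, not the orthogonality of the basis.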