ALGORITHM 8.5 CGNE (Craig's Method)

1. Compute $r_0 = b - Ax_0$, $p_0 = A^T r_0$
2. For $i = 0, 1, \ldots,$ until convergence Do:
3.    $\alpha_i = (r_i, r_i)/(p_i, p_i)$
4.    $x_{i+1} = x_i + \alpha_i p_i$
5.    $r_{i+1} = r_i - \alpha_i A p_i$
6.    $\beta_i = (r_{i+1}, r_{i+1})/(r_i, r_i)$
7.    $p_{i+1} = A^T r_{i+1} + \beta_i p_i$
8. EndDo
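As a sanity check, the steps above can be sketched directly in NumPy. This is a minimal illustration, not the book's code; the function name and the small test problem are assumptions.

```python
import numpy as np

def cgne(A, b, x0, m):
    """Run m steps of CGNE (Craig's method): CG applied implicitly
    to A A^T u = b, updating x = A^T u directly."""
    x = x0.copy()
    r = b - A @ x                         # r_0 = b - A x_0
    p = A.T @ r                           # p_0 = A^T r_0
    for _ in range(m):
        alpha = (r @ r) / (p @ p)         # alpha_i = (r_i, r_i)/(p_i, p_i)
        x = x + alpha * p                 # x_{i+1} = x_i + alpha_i p_i
        r_new = r - alpha * (A @ p)       # r_{i+1} = r_i - alpha_i A p_i
        beta = (r_new @ r_new) / (r @ r)  # beta_i = (r_{i+1}, r_{i+1})/(r_i, r_i)
        p = A.T @ r_new + beta * p        # p_{i+1} = A^T r_{i+1} + beta_i p_i
        r = r_new
    return x

# Illustrative use on a small, well-conditioned nonsymmetric system.
rng = np.random.default_rng(0)
n = 20
A = np.eye(n) + 0.05 * rng.standard_normal((n, n))
x_star = rng.standard_normal(n)
b = A @ x_star
x = cgne(A, b, np.zeros(n), 30)
```

Note that $A$ is only ever applied through matrix-vector products with $A$ and $A^T$, so the sketch carries over unchanged to sparse or operator-based matrices.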

We now explore the optimality properties of this algorithm, as was done for CGNR.

PROPOSITION 8.2 The approximation $u_m$ related to the variable $x_m$ by $x_m = A^T u_m$ is the actual $m$-th CG approximation for the linear system (8.3). Therefore, it minimizes the energy norm of the error on the Krylov subspace $K_m(AA^T, r_0)$. In this case, $u_m$ minimizes the function

$$f(u) \equiv \left( AA^T (u_* - u), (u_* - u) \right),$$

over all vectors $u$ in the affine Krylov subspace,

$$u_0 + K_m(AA^T, r_0) = u_0 + \mathrm{span}\left\{ r_0, AA^T r_0, \ldots, (AA^T)^{m-1} r_0 \right\},$$

in which $r_0 = b - AA^T u_0$. Notice that $r_0 = b - A x_0$. Also, observe that

$$f(u) = \left( A^T (u_* - u), A^T (u_* - u) \right) = \| x_* - x \|_2^2,$$

where $x \equiv A^T u$. Therefore, CGNE produces the approximate solution in the subspace

$$x_0 + A^T K_m(AA^T, r_0),$$

which has the smallest 2-norm of the error. In addition, note that the subspace

$A^T K_m(AA^T, r_0)$ is identical with the subspace $K_m(A^T A, A^T r_0)$ found for CGNR. Therefore, the two methods find approximations from the same subspace which achieve different optimality properties: minimal residual for CGNR and minimal error for CGNE.
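The equality of the two subspaces follows from the identity $A^T (AA^T)^j = (A^T A)^j A^T$, which makes the generating vectors of $A^T K_m(AA^T, r_0)$ and $K_m(A^T A, A^T r_0)$ coincide term by term. A quick numerical confirmation (sizes and seed are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 4
A = rng.standard_normal((n, n))
r0 = rng.standard_normal(n)

# Columns spanning A^T K_m(A A^T, r0): the vectors A^T (A A^T)^j r0.
V1 = np.column_stack([A.T @ np.linalg.matrix_power(A @ A.T, j) @ r0
                      for j in range(m)])
# Columns spanning K_m(A^T A, A^T r0): the vectors (A^T A)^j (A^T r0).
V2 = np.column_stack([np.linalg.matrix_power(A.T @ A, j) @ (A.T @ r0)
                      for j in range(m)])
# V1 and V2 agree column by column, so the two subspaces are equal.
```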

8.4   SADDLE-POINT PROBLEMS

Now consider the equivalent system

$$\begin{pmatrix} I & A \\ A^T & 0 \end{pmatrix} \begin{pmatrix} r \\ x \end{pmatrix} = \begin{pmatrix} b \\ 0 \end{pmatrix}$$

with $r = b - Ax$. This system can be derived from the necessary conditions applied to the constrained least-squares problem (8.6–8.7). Thus, the 2-norm of $b - Ax$ is minimized implicitly under the constraint $A^T r = 0$. Note that $A$ does not have to be a square matrix.

This can be extended into a more general constrained quadratic optimization problem

as follows:

minimize $f(x) \equiv \frac{1}{2} (Ax, x) - (x, b)$   (8.28)

subject to $B^T x = c.$   (8.29)

The necessary conditions for optimality yield the linear system

$$\begin{pmatrix} A & B \\ B^T & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} b \\ c \end{pmatrix} \qquad (8.30)$$

in which the names of the variables are changed to $x$ and $y$ for notational convenience. It is assumed that the column dimension of $B$ does not exceed its row dimension. The

Lagrangian for the above optimization problem is

$$L(x, y) = \frac{1}{2} (Ax, x) - (x, b) + \left( y, B^T x - c \right),$$

and the solution of (8.30) is the saddle point of the above Lagrangian. Optimization problems of the form (8.28–8.29) and the corresponding linear systems (8.30) are important and arise in many applications. Because they are intimately related to the normal equations, we discuss them briefly here.
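For a concrete instance, one can assemble a small system of the form (8.30), solve it directly, and verify that the solution satisfies both the stationarity condition $Ax + By = b$ and the constraint $B^T x = c$. All data below are illustrative assumptions:

```python
import numpy as np

# Illustrative data: A symmetric positive definite, B tall (n >= p).
rng = np.random.default_rng(2)
n, p = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = rng.standard_normal((n, p))
b = rng.standard_normal(n)
c = rng.standard_normal(p)

# Assemble the coefficient matrix of (8.30) and solve directly.
K = np.block([[A, B], [B.T, np.zeros((p, p))]])
sol = np.linalg.solve(K, np.concatenate([b, c]))
x, y = sol[:n], sol[n:]
```

The block matrix $K$ is symmetric but indefinite, which is why specialized iterations rather than plain CG are of interest for such systems.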

In the context of fluid dynamics, a well known iteration technique for solving the linear system (8.30) is Uzawa's method, which resembles a relaxed block SOR iteration.

ALGORITHM 8.6 Uzawa's Method

1. Choose $x_0$, $y_0$
2. For $k = 0, 1, \ldots,$ until convergence Do:
3.    $x_{k+1} = A^{-1}(b - B y_k)$
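A compact sketch of the full Uzawa iteration in code, assuming the standard multiplier update $y_{k+1} = y_k + \omega(B^T x_{k+1} - c)$ with relaxation parameter $\omega$; the problem data and parameter choice below are illustrative assumptions:

```python
import numpy as np

def uzawa(A, B, b, c, omega, iters):
    """Uzawa's method for the saddle-point system (8.30)."""
    x, y = np.zeros(B.shape[0]), np.zeros(B.shape[1])
    for _ in range(iters):
        x = np.linalg.solve(A, b - B @ y)  # x_{k+1} = A^{-1}(b - B y_k)
        y = y + omega * (B.T @ x - c)      # y_{k+1} = y_k + omega (B^T x_{k+1} - c)
    return x, y

# Illustrative data: A symmetric positive definite; B with orthonormal
# columns so that S = B^T A^{-1} B stays well conditioned.
rng = np.random.default_rng(3)
n, p = 10, 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
B = np.linalg.qr(rng.standard_normal((n, p)))[0]
b = rng.standard_normal(n)
c = rng.standard_normal(p)

# Eliminating x shows the y-iteration matrix is I - omega * S, so the
# method converges for 0 < omega < 2 / lambda_max(S); the choice below
# minimizes the spectral radius of I - omega * S.
eigs = np.linalg.eigvalsh(B.T @ np.linalg.solve(A, B))
omega = 2.0 / (eigs.min() + eigs.max())
x, y = uzawa(A, B, b, c, omega, 500)
```

In practice the solve with $A$ at each step is the dominant cost, and it is typically replaced by a reusable factorization or an inner iteration.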