9. $p_{i+1} = z_{i+1} + \beta_i p_i$
10. EndDo
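The CGNR loop ending above can be sketched in Python with NumPy. This is a minimal sketch, not the text's reference implementation: the variable names `r`, `z`, and `p` mirror the original residual, the normal-equations residual, and the search direction, while the tolerance and the iteration cap are illustrative assumptions. Note that $A^T A$ is never formed explicitly; only products with $A$ and $A^T$ are needed.

```python
import numpy as np

def cgnr(A, b, x0=None, tol=1e-10, maxiter=500):
    """Sketch of CGNR: CG applied to the normal equations A^T A x = A^T b."""
    n = A.shape[1]
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x            # residual of the original system
    z = A.T @ r              # residual of the normal equations
    p = z.copy()             # first search direction
    znorm2 = z @ z
    for _ in range(maxiter):
        w = A @ p
        alpha = znorm2 / (w @ w)      # alpha_i = ||z_i||^2 / ||A p_i||^2
        x += alpha * p
        r -= alpha * w
        z = A.T @ r
        znorm2_new = z @ z
        if np.sqrt(znorm2_new) < tol:
            break
        beta = znorm2_new / znorm2    # beta_i = ||z_{i+1}||^2 / ||z_i||^2
        p = z + beta * p              # p_{i+1} = z_{i+1} + beta_i p_i
        znorm2 = znorm2_new
    return x
```

On a consistent overdetermined system with full column rank, this sketch recovers the solution of the original system.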

In Chapter 6, the approximation $x_m$ produced at the $m$-th step of the Conjugate Gradient algorithm was shown to minimize the energy norm of the error over an affine Krylov subspace. In this case, $x_m$ minimizes the function
\[
f(x) \equiv \left( A^T A (x_\star - x), (x_\star - x) \right),
\]
where $x_\star$ denotes the exact solution, over all vectors $x$ in the affine Krylov subspace
\[
x_0 + \mathcal{K}_m(A^T A, A^T r_0) = x_0 + \operatorname{span}\left\{ A^T r_0, (A^T A) A^T r_0, \ldots, (A^T A)^{m-1} A^T r_0 \right\},
\]

in which $r_0 = b - A x_0$ is the initial residual with respect to the original equations $Ax = b$, and $A^T r_0$ is the residual with respect to the normal equations $A^T A x = A^T b$. However, observe that
\[
f(x) = \left( A(x_\star - x), A(x_\star - x) \right) = \| b - Ax \|_2^2 .
\]
Therefore, CGNR produces the approximate solution in the above subspace which has the smallest residual norm with respect to the original linear system $Ax = b$. The difference with the GMRES algorithm seen in Chapter 6 is the subspace in which the residual norm is minimized.
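The identity $f(x) = \|b - Ax\|_2^2$ behind this minimization property is easy to check numerically. The sketch below uses arbitrary random test data (not from the text) with a consistent right-hand side $b = A x_\star$, evaluates $f(x)$ for a trial vector $x$, and compares it with the squared residual norm.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
x_star = rng.standard_normal(4)   # play the role of the exact solution
b = A @ x_star                    # consistent right-hand side: A x_star = b
x = rng.standard_normal(4)        # arbitrary trial vector

# f(x) = (A^T A (x_star - x), x_star - x)
e = x_star - x
f_val = (A.T @ A @ e) @ e

# ||b - A x||_2^2
res2 = np.linalg.norm(b - A @ x) ** 2
print(f_val, res2)
```

The two printed numbers agree to machine precision, since $f(x) = \|A(x_\star - x)\|_2^2 = \|b - Ax\|_2^2$ whenever $Ax_\star = b$.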

Example 8.1. Table 8.1 shows the results of applying the CGNR algorithm with no preconditioning to three of the test problems described in Section 3.7.

Matrix  Iters  Kflops  Residual  Error
F2DA    300    4847    0.23E+02  0.62E+00
F3D     300    23704   0.42E+00  0.15E+00
ORS     300    5981    0.30E+02  0.60E-02

Table 8.1: A test run of CGNR with no preconditioning.

See Example 6.1 for the meaning of the column headers in the table. The method failed to converge in less than 300 steps for all three problems. Failures of this type, characterized by very slow convergence, are rather common for CGNE and CGNR applied to problems arising from partial differential equations. Preconditioning should improve performance somewhat but, as will be seen in Chapter 10, normal equations are also difficult to precondition.

8.3.2 CGNE

A similar reorganization of the CG algorithm is possible for the system (8.3) as well. Applying the CG algorithm directly to (8.3) and denoting by $z_i$ the conjugate directions, the actual CG iteration for the variable $u_i$ would be as follows:

\[
\alpha_i = \frac{(r_i, r_i)}{(A A^T z_i, z_i)} = \frac{(r_i, r_i)}{(A^T z_i, A^T z_i)}
\]
\[
u_{i+1} = u_i + \alpha_i z_i
\]
\[
r_{i+1} = r_i - \alpha_i A A^T z_i
\]
\[
\beta_i = \frac{(r_{i+1}, r_{i+1})}{(r_i, r_i)}
\]
\[
z_{i+1} = r_{i+1} + \beta_i z_i .
\]
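This iteration for $u$ is nothing more than standard CG applied to the matrix $AA^T$. The following sketch makes that concrete under illustrative assumptions (tolerance, iteration cap, random test data): it keeps the auxiliary variable `u` throughout and recovers $x = A^T u$ only at the end, and it never forms $AA^T$ explicitly.

```python
import numpy as np

def cg_on_AAT(A, b, tol=1e-12, maxiter=500):
    """Sketch: standard CG on (A A^T) u = b; a solution of A x = b is x = A^T u."""
    m = A.shape[0]
    u = np.zeros(m)
    r = b.copy()                      # r_0 = b - A A^T u_0 = b
    z = r.copy()                      # first conjugate direction
    rho = r @ r
    for _ in range(maxiter):
        Atz = A.T @ z
        alpha = rho / (Atz @ Atz)     # alpha_i = (r_i, r_i) / (A^T z_i, A^T z_i)
        u += alpha * z
        r -= alpha * (A @ Atz)        # r_{i+1} = r_i - alpha_i A A^T z_i
        rho_new = r @ r
        if np.sqrt(rho_new) < tol:
            break
        beta = rho_new / rho          # beta_i = (r_{i+1}, r_{i+1}) / (r_i, r_i)
        z = r + beta * z              # z_{i+1} = r_{i+1} + beta_i z_i
        rho = rho_new
    return A.T @ u                    # change back to the original variable
```

For an underdetermined system with full row rank, the returned $x = A^T u$ satisfies $Ax = b$.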

Notice that an iteration can be written with the original variable $x_i = x_0 + A^T (u_i - u_0)$ by introducing the vector $p_i = A^T z_i$. Then, the residual vectors for the vectors $x_i$ and $u_i$ are the same. No longer are the $z_i$ vectors needed because the $p_i$'s can be obtained as $p_{i+1} = A^T r_{i+1} + \beta_i p_i$. The resulting algorithm described below, the Conjugate Gradient for the normal equations (CGNE), is also known as Craig's method.
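The reorganized iteration can be sketched directly in the $x$ variable; in this sketch (stopping rule, iteration cap, and test data are assumptions, not from the text) the recurrence $p_{i+1} = A^T r_{i+1} + \beta_i p_i$ replaces the $z_i$ vectors as just described.

```python
import numpy as np

def cgne(A, b, tol=1e-12, maxiter=500):
    """Sketch of CGNE (Craig's method): CG on A A^T u = b, written in x = A^T u."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b.copy()                 # r_0 = b - A x_0 with x_0 = 0
    p = A.T @ r                  # p_0 = A^T r_0
    rho = r @ r
    for _ in range(maxiter):
        alpha = rho / (p @ p)    # alpha_i = (r_i, r_i) / (p_i, p_i)
        x += alpha * p
        r -= alpha * (A @ p)     # r_{i+1} = r_i - alpha_i A p_i
        rho_new = r @ r
        if np.sqrt(rho_new) < tol:
            break
        beta = rho_new / rho     # beta_i = (r_{i+1}, r_{i+1}) / (r_i, r_i)
        p = A.T @ r + beta * p   # p_{i+1} = A^T r_{i+1} + beta_i p_i
        rho = rho_new
    return x
```

The residual updated here is that of the original system, consistent with the observation above that the residuals for $x_i$ and $u_i$ coincide.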

ALGORITHM 8.5: CGNE (Craig's Method)

1. Compute $r_0 = b - A x_0$, $p_0 = A^T r_0$.