To solve the linear system Ax = b when A is nonsymmetric, one can solve the equivalent Symmetric Positive Definite system of normal equations

    A^T A x = A^T b,    (8.1)

which arises as the optimality condition of the least-squares problem

    minimize_x || b - A x ||_2.    (8.2)

An alternative is to introduce an auxiliary unknown u and solve

    A A^T u = b,    x = A^T u.    (8.3)
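The equivalence between the normal equations and the underlying least-squares problem can be checked numerically. The following is a small NumPy sketch, not part of the original text; the matrix sizes and random seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))   # rectangular, full column rank (with prob. 1)
b = rng.standard_normal(8)

# Solve the normal equations A^T A x = A^T b directly.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# Compare against a least-squares solver applied to the original system.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_ne, x_ls))   # the two solutions agree
```

Forming A^T A explicitly, as done here for illustration, is exactly what the iterative methods of this chapter avoid.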

Once the solution u is computed, the original unknown x could be obtained by multiplying u by A^T. However, most of the algorithms we will see do not invoke the u variable explicitly and work with the original variable x instead. The above system of equations can be used to solve under-determined systems, i.e., those systems involving rectangular matrices

of size n x m, with n <= m. It is related to (8.1) in the following way. Assume that A has full rank. Let x_* be any solution to the underdetermined system Ax = b. Then (8.3) represents the normal equations for the least-squares problem,

    minimize_u || x_* - A^T u ||_2.    (8.4)

Since x = A^T u by definition, (8.4) will find the solution vector x that is closest to x_* in the 2-norm sense. What is interesting is that when n < m there are infinitely many solutions x_* to the system Ax = b, but the minimizer u of (8.4) does not depend on the particular x_* used.
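This minimum-norm property can be verified numerically. In the NumPy sketch below (sizes and seed are illustrative, not from the text), solving A A^T u = b and setting x = A^T u reproduces the minimum 2-norm solution that np.linalg.lstsq returns for an underdetermined system:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 6                      # n < m: underdetermined system
A = rng.standard_normal((n, m))  # full row rank (with prob. 1)
b = rng.standard_normal(n)

# Solve A A^T u = b, then recover x = A^T u  (system (8.3)).
u = np.linalg.solve(A @ A.T, b)
x = A.T @ u

# x solves Ax = b and is the minimum 2-norm solution, which
# np.linalg.lstsq also returns for underdetermined systems.
x_min, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A @ x, b), np.allclose(x, x_min))
```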

The system (8.1) and methods derived from it are often labeled with NR (N for “Nor-

mal” and R for “Residual”) while (8.3) and related techniques are labeled with NE (N

for “Normal” and E for “Error”). If A is square and nonsingular, the coefficient matrices of these systems are both Symmetric Positive Definite, and the simpler methods for symmetric problems, such as the Conjugate Gradient algorithm, can be applied. Thus, CGNE

denotes the Conjugate Gradient method applied to the system (8.3) and CGNR the Conju-

gate Gradient method applied to (8.1).
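As an illustration, CGNR can be sketched in a few lines. The implementation below is a generic textbook variant, not code from this chapter; it applies CG to A^T A x = A^T b while referencing A only through matrix-vector products with A and A^T:

```python
import numpy as np

def cgnr(A, b, tol=1e-10, maxiter=500):
    """CG applied to the normal equations A^T A x = A^T b (CGNR),
    touching A only through products with A and A^T."""
    x = np.zeros(A.shape[1])
    r = b - A @ x              # residual of the original system
    z = A.T @ r                # residual of the normal equations
    p = z.copy()
    zz = z @ z
    for _ in range(maxiter):
        w = A @ p
        alpha = zz / (w @ w)   # (z, z) / (A p, A p)
        x += alpha * p
        r -= alpha * w
        z = A.T @ r
        zz_new = z @ z
        if np.sqrt(zz_new) < tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 20)) + 5 * np.eye(20)  # reasonably conditioned
b = rng.standard_normal(20)
x = cgnr(A, b)
print(np.allclose(A @ x, b, atol=1e-6))
```

Note that A^T A is never formed: each iteration costs one product with A and one with A^T.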

There are several alternative ways to formulate symmetric linear systems having the

same solution as the original system. For instance, the symmetric linear system

    [ I    A ] [ r ]   [ b ]
    [ A^T  0 ] [ x ] = [ 0 ]    (8.5)

with r = b - Ax, arises from the standard necessary conditions satisfied by the solution of

the constrained optimization problem,

    minimize  (1/2) || r - b ||_2^2    (8.6)

    subject to  A^T r = 0.    (8.7)

The unknown x in (8.5) is the vector of Lagrange multipliers for the above problem.
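These optimality conditions can be checked numerically: assembling the augmented matrix [ I A ; A^T 0 ] and solving recovers both the least-squares solution x and the residual r = b - Ax. A small NumPy sketch (dimensions and data are arbitrary, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 7, 4
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)

# Assemble the augmented system  [ I   A ] [r]   [b]
#                                [ A^T 0 ] [x] = [0]
K = np.block([[np.eye(n), A],
              [A.T, np.zeros((m, m))]])
rx = np.linalg.solve(K, np.concatenate([b, np.zeros(m)]))
r, x = rx[:n], rx[n:]

# The x-block is the least-squares solution; the r-block is its residual.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ls), np.allclose(r, b - A @ x))
```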

Another equivalent symmetric system is of the form

    [ 0    A ] [ Ax ]   [ b     ]
    [ A^T  0 ] [ x  ] = [ A^T b ].

The eigenvalues of the coefficient matrix for this system are +-sigma_i, where sigma_i is an arbitrary singular value of A. Indefinite systems of this sort are not easier to solve than the original nonsymmetric system in general. Although not obvious immediately, this approach is

similar in nature to the approach (8.1) and the corresponding Conjugate Gradient iterations

applied to them should behave similarly.
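The eigenvalue claim is easy to confirm numerically. In this NumPy sketch (sizes are arbitrary), the spectrum of the block matrix consists of plus and minus the singular values of A, padded with |n - m| zeros:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 5, 3
A = rng.standard_normal((n, m))

# Coefficient matrix of the second symmetric formulation.
K = np.block([[np.zeros((n, n)), A],
              [A.T, np.zeros((m, m))]])

eig = np.sort(np.linalg.eigvalsh(K))
sv = np.linalg.svd(A, compute_uv=False)       # singular values of A

# Nonzero eigenvalues come in +/- sigma_i pairs; the rest are zero.
expected = np.sort(np.concatenate([sv, -sv, np.zeros(n - m)]))
print(np.allclose(eig, expected))
```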

A general consensus is that solving the normal equations can be an inefficient approach in the case when A is poorly conditioned. Indeed, the 2-norm condition number of A^T A is given by

    cond_2(A^T A) = || A^T A ||_2  || (A^T A)^{-1} ||_2.

Now observe that || A^T A ||_2 = sigma_max(A)^2, where sigma_max(A) is the largest singular value of A which, incidentally, is also equal to the 2-norm of A. Thus, using a similar argument for the inverse (A^T A)^{-1} yields

    cond_2(A^T A) = || A ||_2^2  || A^{-1} ||_2^2 = cond_2(A)^2.    (8.8)
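The squaring of the condition number can be observed directly. The sketch below (an illustration, not from the original text) constructs A with prescribed singular values so that cond_2(A) = 10^4, and checks that cond_2(A^T A) is its square:

```python
import numpy as np

rng = np.random.default_rng(5)
# Build A = U diag(s) V^T with singular values from 1 to 1e4,
# so cond_2(A) = 1e4 by construction.
U, _ = np.linalg.qr(rng.standard_normal((40, 40)))
V, _ = np.linalg.qr(rng.standard_normal((40, 40)))
A = U @ np.diag(np.logspace(0, 4, 40)) @ V.T

c = np.linalg.cond(A)            # ~1e4
c_normal = np.linalg.cond(A.T @ A)
print(c, c_normal)               # the second is the square of the first
```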

The 2-norm condition number for A^T A is exactly the square of the condition number of A, which could cause difficulties. For example, if originally cond_2(A) = 10^8, then an iterative method may be able to perform reasonably well. However, a condition number of 10^16 can be much more difficult to handle by a standard iterative method. That is because

any progress made in one step of the iterative procedure may be annihilated by the noise

due to numerical errors. On the other hand, if the original matrix has a good 2-norm condition number, then the normal equation approach should not cause any serious difficulties.

In the extreme case when A is unitary, i.e., when A^H A = I, then the normal equations are clearly the best approach (the Conjugate Gradient method will converge in zero steps!).
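For completeness, a quick NumPy check of the unitary case (using a real orthogonal Q as a stand-in for a unitary matrix): Q^T Q = I, so the normal-equations coefficient matrix has condition number 1:

```python
import numpy as np

rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.standard_normal((10, 10)))   # Q is orthogonal

# For a unitary (here, real orthogonal) matrix, Q^T Q = I:
# the normal equations are trivial and cond_2(Q^T Q) = 1.
print(np.allclose(Q.T @ Q, np.eye(10)),
      np.isclose(np.linalg.cond(Q.T @ Q), 1.0))
```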
