coordinates, the following algorithm results.

ALGORITHM: Cimmino-NR

1. Choose an initial guess $x_0$. Set $x = x_0$, $r = b - A x_0$
2. Until convergence Do:
3.    For $i = 1, \ldots, n$ Do:
4.       $\delta_i = \omega \, (r, a_i) / \|a_i\|_2^2$
5.    EndDo
6.    $x := x + d$, where $d = \sum_{i=1}^{n} \delta_i e_i$
7.    $r := r - A d$
8. EndDo
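The numbered steps above can be sketched in NumPy. This is a minimal illustration under our own choices (the function name `jacobi_normal_eq`, the residual-norm stopping test, and the small example system are not from the text):

```python
import numpy as np

def jacobi_normal_eq(A, b, x0, omega=1.0, tol=1e-10, maxiter=1000):
    """Jacobi-type sweep: every delta_i is computed from the SAME residual r
    (steps 3-5), then x and r are updated once per sweep (steps 6-7)."""
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x
    col_norms2 = np.sum(A * A, axis=0)          # ||a_i||_2^2 for each column a_i
    for _ in range(maxiter):
        delta = omega * (A.T @ r) / col_norms2  # delta_i = omega (r, a_i) / ||a_i||_2^2
        x += delta                              # x := x + sum_i delta_i e_i
        r -= A @ delta                          # r := r - A d
        if np.linalg.norm(r) < tol:             # stopping test: our own choice
            break
    return x

# Small example: for a square nonsingular A this converges to the solution of Ax = b
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = jacobi_normal_eq(A, b, np.zeros(2))
```

Note that, unlike a Gauss-Seidel sweep, the residual is deliberately left untouched while the $\delta_i$ are computed, which is what makes the loop expressible as a single matrix-vector product.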

Notice that all the coordinates will use the same residual vector $r$ to compute the updates $\delta_i$. When $\omega = 1$, each instance of the above formulas is mathematically equivalent to performing a projection step for solving $Ax = b$ with $\mathcal{K} = \operatorname{span}\{e_i\}$ and $\mathcal{L} = \operatorname{span}\{a_i\}$. It is also mathematically equivalent to performing an orthogonal projection step for solving $A^T A x = A^T b$ with $\mathcal{K} = \operatorname{span}\{e_i\}$.
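The projection-step equivalence can be checked numerically. In this sketch (random test data and variable names are our own), the coordinate update with $\omega = 1$ makes the new residual orthogonal to the column $a_i$, and the same $\delta_i$ solves the one-dimensional least-squares problem over $\operatorname{span}\{e_i\}$:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 4))
b = rng.standard_normal(5)
x = rng.standard_normal(4)
i = 2                                            # any coordinate index

# Coordinate update from the formulas above, with omega = 1
r = b - A @ x
delta_i = (r @ A[:, i]) / (A[:, i] @ A[:, i])    # (r, a_i) / ||a_i||_2^2

# Projection step for Ax = b with K = span{e_i}, L = span{a_i}:
# the new residual must be orthogonal to a_i
r_new = b - A @ (x + delta_i * np.eye(4)[:, i])
assert abs(r_new @ A[:, i]) < 1e-12

# The same delta minimizes ||b - A(x + t e_i)||_2 over t (1-D least squares)
t_best = np.linalg.lstsq(A[:, [i]], r, rcond=None)[0][0]
assert np.isclose(delta_i, t_best)
```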

It is interesting to note that when each column is normalized by its 2-norm, i.e., if $\|a_i\|_2 = 1$ for $i = 1, \ldots, n$, then $\delta_i = \omega (r, a_i)$. In this situation, $d = \omega A^T r$ and the main loop of the algorithm takes the vector form

$$d := \omega A^T r$$
$$x := x + d$$
$$r := r - A d.$$

Each iteration is therefore equivalent to a step of the form

$$x_{\text{new}} = x + \omega \, A^T (b - A x)$$

which is nothing but the Richardson iteration applied to the normal equations (8.1). In particular, as was seen in Section 4.1, convergence is guaranteed for any $\omega$ which satisfies

$$0 < \omega < \frac{2}{\lambda_{\max}},$$

where $\lambda_{\max}$ is the largest eigenvalue of $A^T A$. In addition, the best acceleration parameter is given by

$$\omega_{\text{opt}} = \frac{2}{\lambda_{\min} + \lambda_{\max}},$$

in which, similarly, $\lambda_{\min}$ is the smallest eigenvalue of $A^T A$. If the columns are not normalized by their 2-norms, then the procedure is equivalent to a preconditioned Richardson iteration with diagonal preconditioning. The theory regarding convergence is similar but involves the preconditioned matrix $D^{-1} A^T A$, with $D = \operatorname{diag}(\|a_i\|_2^2)$, or, equivalently, the matrix obtained from $A$ by normalizing its columns.
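The Richardson connection is easy to verify numerically. In this sketch (random test matrix and names are our own), one sweep of the loop above on a column-normalized $A$ coincides with a Richardson step on the normal equations, and the spectrum of $A^T A$ gives the admissible range and the optimal value of $\omega$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
A /= np.linalg.norm(A, axis=0)          # normalize columns so ||a_i||_2 = 1
b = rng.standard_normal(6)
x = rng.standard_normal(4)
omega = 0.4

# One sweep in vector form: d = omega A^T r, then x := x + d
r = b - A @ x
x_sweep = x + omega * (A.T @ r)

# Richardson step applied to the normal equations A^T A x = A^T b
x_rich = x + omega * (A.T @ b - A.T @ (A @ x))
assert np.allclose(x_sweep, x_rich)

# Convergence range and best parameter from the spectrum of A^T A
evals = np.linalg.eigvalsh(A.T @ A)     # ascending eigenvalues
lam_min, lam_max = evals[0], evals[-1]
omega_max = 2.0 / lam_max               # convergence requires 0 < omega < omega_max
omega_opt = 2.0 / (lam_min + lam_max)   # best acceleration parameter

# The iteration matrix I - omega_opt * A^T A must be a contraction
rho = max(abs(1.0 - omega_opt * lam_min), abs(1.0 - omega_opt * lam_max))
assert 0.0 < omega_opt < omega_max and rho < 1.0
```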

The algorithm can be expressed in terms of projectors. Observe that the new residual satisfies

$$r_{\text{new}} = r - \omega \sum_{i=1}^{n} \frac{(r, a_i)}{\|a_i\|_2^2} \, a_i.$$

Each of the operators

$$P_i : r \;\longrightarrow\; \frac{(r, a_i)}{\|a_i\|_2^2} \, a_i$$

is an orthogonal projector onto $\operatorname{span}\{a_i\}$, the $i$-th column of $A$. Hence, we can write

$$r_{\text{new}} = \left( I - \omega \sum_{i=1}^{n} P_i \right) r.$$
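As a sanity check (a small NumPy experiment of our own, not from the text), the rank-one operators $P_i = a_i a_i^T / \|a_i\|_2^2$ are indeed orthogonal projectors, and applying $I - \omega \sum_i P_i$ to $r$ reproduces the componentwise residual update:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
r = rng.standard_normal(5)
omega = 0.8

# P_i = a_i a_i^T / ||a_i||_2^2, the orthogonal projector onto span{a_i}
P = [np.outer(A[:, i], A[:, i]) / (A[:, i] @ A[:, i]) for i in range(A.shape[1])]
for Pi in P:
    assert np.allclose(Pi @ Pi, Pi)     # idempotent
    assert np.allclose(Pi, Pi.T)        # symmetric, hence an orthogonal projector

# Componentwise update: r_new = r - omega * sum_i (r, a_i)/||a_i||^2 a_i
r_formula = r - omega * sum(
    (r @ A[:, i]) / (A[:, i] @ A[:, i]) * A[:, i] for i in range(A.shape[1])
)
# Projector form: r_new = (I - omega * sum_i P_i) r
r_proj = (np.eye(5) - omega * sum(P)) @ r
assert np.allclose(r_formula, r_proj)
```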

There are two important variations to the above scheme. First, because the point Jacobi iteration can be very slow, it may be preferable to work with sets of vectors instead. Let