\[
\| I - LAU \|_F^2 . \tag{10.45}
\]

In the following, only (10.43) and (10.45) are considered. The case (10.44) is very similar to the right preconditioner case (10.43). The objective function (10.43) decouples into the sum of the squares of the 2-norms of the individual columns of the residual matrix $I - AM$,
\[
F(M) = \| I - AM \|_F^2 = \sum_{j=1}^{n} \| e_j - A m_j \|_2^2 \tag{10.46}
\]

in which $e_j$ and $m_j$ are the $j$-th columns of the identity matrix and of the matrix $M$, respectively. There are two different ways to proceed in order to minimize (10.46). The function (10.43) can be minimized globally as a function of the sparse matrix $M$, e.g., by a gradient-type method. Alternatively, the individual functions

\[
f_j(m) = \| e_j - A m \|_2^2 , \qquad j = 1, 2, \ldots, n \tag{10.47}
\]

can be minimized. The second approach is appealing for parallel computers, although there is also parallelism to be exploited in the first approach. These two approaches will be discussed in turn.
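As a quick numerical illustration of this decoupling (not from the text; the random test matrix and the rough approximate inverse below are only for the check), the Frobenius-norm objective (10.46) can be verified to equal the sum of the squared 2-norms of the column residuals $e_j - A m_j$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)            # well-conditioned test matrix
M = np.linalg.inv(A) + 0.01 * rng.standard_normal((n, n))  # rough approximate inverse

R = np.eye(n) - A @ M                      # residual matrix I - AM
F_global = np.linalg.norm(R, "fro") ** 2   # F(M) = ||I - AM||_F^2

# Column-wise sum: the j-th column of R is e_j - A m_j
F_columns = sum(np.linalg.norm(np.eye(n)[:, j] - A @ M[:, j]) ** 2
                for j in range(n))

assert np.isclose(F_global, F_columns)
```

This identity is what allows the $n$ columns of $M$ to be computed independently, one least-squares problem per column.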

Global Iteration

¡

The global iteration approach consists of treating as an unknown sparse matrix and

using a descent-type method to minimize the objective function (10.43). This function is a H

s

f¯ ®

®

quadratic function on the space of matrices, viewed as objects in . The proper ¡

inner product on the space of matrices, to which the squared norm (10.46) is associated, is

u4 ˜ ` ” % ¦ ¤ 6 v0¥ £

‘)

¬a £¤¢

¡ ¥

¡

«® ®m–®

¯

In the following, an array representation of an $n^2$ vector $X$ means the $n \times n$ matrix whose column vectors are the successive $n$-vectors of $X$.
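Under this array representation, the trace inner product (10.48) is just the ordinary Euclidean inner product of the corresponding $n^2$-vectors, and its associated norm is the Frobenius norm. A small sketch (random matrices chosen here only for the check) confirms both facts:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
X = rng.standard_normal((n, n))
Y = rng.standard_normal((n, n))

inner = np.trace(Y.T @ X)                              # <X, Y> = tr(Y^T X)
vec_dot = X.flatten(order="F") @ Y.flatten(order="F")  # dot product of the n^2-vectors
assert np.isclose(inner, vec_dot)

# The associated norm <X, X>^(1/2) is the Frobenius norm of X:
assert np.isclose(np.sqrt(np.trace(X.T @ X)), np.linalg.norm(X, "fro"))
```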

In a descent algorithm, a new iterate $M_{\rm new}$ is defined by taking a step along a selected direction $G$, i.e.,
\[
M_{\rm new} = M + \alpha G
\]

in which $\alpha$ is selected to minimize the objective function $F(M_{\rm new})$. From results seen in Chapter 5, minimizing the residual norm is equivalent to imposing the condition that $R - \alpha AG$ be orthogonal to $AG$ with respect to the $\langle \cdot , \cdot \rangle$ inner product. Thus, the optimal $\alpha$ is given by
\[
\alpha = \frac{\langle R, AG \rangle}{\langle AG, AG \rangle}
       = \frac{\operatorname{tr}\left( R^T A G \right)}{\operatorname{tr}\left( (AG)^T A G \right)} \tag{10.49}
\]
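The optimality of this step length is easy to check numerically. In the sketch below (the test matrix and the diagonal initial guess are assumptions made only for the experiment), $F(\alpha') = \| R - \alpha' AG \|_F^2$ is evaluated along the search direction and compared against the $\alpha$ from the trace formula:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)
M = np.diag(1.0 / np.diag(A))      # simple initial guess: inverse of diag(A)

R = np.eye(n) - A @ M              # current residual matrix
G = R                              # search direction (here, the residual itself)
AG = A @ G
alpha = np.trace(R.T @ AG) / np.trace(AG.T @ AG)   # optimal step, as in the trace formula

F = lambda a: np.linalg.norm(R - a * AG, "fro") ** 2
# alpha must beat nearby step lengths along the same direction ...
assert F(alpha) <= F(alpha + 0.1) and F(alpha) <= F(alpha - 0.1)
# ... and improve on taking no step at all
assert F(alpha) <= F(0.0)
```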


The denominator may be computed as $\| AG \|_F^2$. The resulting matrix $M$ will tend to become denser after each descent step and it is therefore essential to apply a numerical dropping strategy to the resulting $M$. However, the descent property of the step is now lost, i.e., it is no longer guaranteed that $F(M_{\rm new}) \le F(M)$. An alternative would be to apply numerical dropping to the direction of search $G$ before taking the descent step. In this case, the amount of fill-in in the matrix $M$ cannot be controlled.

The simplest choice for the descent direction $G$ is to take it to be equal to the residual matrix $R = I - AM$, where $M$ is the new iterate. Except for the numerical dropping step, the corresponding descent algorithm is nothing but the Minimal Residual (MR) algorithm, seen in Section 5.3.2, on the $n^2 \times n^2$ linear system $AM = I$. The global Minimal Residual algorithm will have the following form.

ALGORITHM: Global Minimal Residual Descent Algorithm
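The iteration described above can be sketched as follows. This is only an illustrative implementation under stated assumptions: the function name `global_mr`, the scaled-identity initial guess $M_0 = I / \operatorname{tr}(A)$, the dense test matrix, and the keep-the-$k$-largest-per-column dropping rule are all choices made here, not prescribed by the text.

```python
import numpy as np

def global_mr(A, nsteps=10, keep=None):
    """Global Minimal Residual descent for an approximate inverse M of A.

    Each step: R = I - A M;  alpha = tr(R^T A R) / ||A R||_F^2;  M <- M + alpha R;
    then (optionally) numerical dropping applied to M, which may lose the
    descent property, as noted in the text.
    """
    n = A.shape[0]
    M = np.eye(n) / np.trace(A)            # scaled initial guess (an assumption)
    for _ in range(nsteps):
        R = np.eye(n) - A @ M              # residual matrix; search direction G = R
        AR = A @ R
        alpha = np.trace(R.T @ AR) / np.trace(AR.T @ AR)
        M = M + alpha * R
        if keep is not None:               # drop all but the `keep` largest entries per column
            for j in range(n):
                col = M[:, j]
                col[np.argsort(np.abs(col))[:-keep]] = 0.0
    return M

rng = np.random.default_rng(3)
n = 8
A = 2 * np.eye(n) + 0.1 * rng.standard_normal((n, n))   # mildly perturbed 2I
M = global_mr(A, nsteps=20)
assert np.linalg.norm(np.eye(n) - A @ M, "fro") < 1e-3
```

For a matrix this close to a multiple of the identity, the residual norm drops quickly; with dropping enabled (`keep` set), monotone decrease of $F(M)$ is no longer guaranteed.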