
z_k = A_k^{-1} v_k,                                            (10.67)

y_k = A_k^{-T} w_k,                                            (10.68)

δ_{k+1} = α_{k+1} - w_k^T z_k = α_{k+1} - y_k^T v_k.           (10.69)

Note that the formula (10.69) exploits the fact that either the system (10.67) is solved

exactly (middle expression) or the system (10.68) is solved exactly (second expression) or

both systems are solved exactly (either expression). In the realistic situation where neither

of these two systems is solved exactly, then this formula should be replaced by

δ_{k+1} = α_{k+1} - w_k^T z_k - y_k^T v_k + y_k^T A_k z_k.     (10.70)
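The relation between (10.69) and (10.70) is easy to verify numerically. The following NumPy sketch (the matrix, vectors, and variable names are made-up illustration, not from the text) checks that the two expressions in (10.69) agree for exact solves, and that when z_k alone is exact, the last two terms of (10.70) cancel and the formula reduces to the middle expression of (10.69):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A_k = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned block A_k
v_k = rng.standard_normal(n)                       # bordering column v_k
w_k = rng.standard_normal(n)                       # bordering row w_k
alpha = 2.0                                        # corner entry alpha_{k+1}

# Exact solves of (10.67) and (10.68): both expressions in (10.69) agree.
z_k = np.linalg.solve(A_k, v_k)
y_k = np.linalg.solve(A_k.T, w_k)
delta_mid = alpha - w_k @ z_k    # middle expression of (10.69)
delta_last = alpha - y_k @ v_k   # last expression of (10.69)
assert np.isclose(delta_mid, delta_last)

# Now perturb y_k so that only (10.67) is solved exactly. Since
# A_k z_k = v_k, the terms -y^T v and +y^T A z in (10.70) cancel,
# and (10.70) still returns the correct value.
y_bad = y_k + 0.1 * rng.standard_normal(n)
delta_70 = alpha - w_k @ z_k - y_bad @ v_k + y_bad @ (A_k @ z_k)
assert np.isclose(delta_70, delta_mid)
```

When neither system is solved exactly, the two expressions of (10.69) generally disagree with each other, which is precisely why (10.70) is the formula to use in practice.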

The last row/column pairs of the approximate factored inverse can be obtained by solving two sparse systems and computing a few dot products. It is interesting to note that the only difference with the ILUS factorization seen in Section 10.4.5 is that the coefficient matrices for these systems are not the triangular factors of A_k, but the matrix A_k itself.

To obtain an approximate factorization, simply exploit the fact that the matrices are sparse and then employ iterative solvers in sparse-sparse mode. In this situation, formula (10.70) should be used for δ_{k+1}. The algorithm would be as follows.

ALGORITHM: Approximate Inverse Factors

1. For k = 1, 2, ..., n Do:
2.     Solve (10.67) approximately;
3.     Solve (10.68) approximately;
4.     Compute δ_{k+1} = α_{k+1} - w_k^T z_k - y_k^T v_k + y_k^T A_k z_k
5. EndDo

A linear system must be solved with A_k in line 2 and a linear system with A_k^T in line 3. This is a good scenario for the Biconjugate Gradient algorithm or its equivalent two-sided Lanczos algorithm. In addition, the most current approximate inverse factors can be used to precondition the linear systems to be solved in steps 2 and 3. This was termed "self preconditioning" earlier. All the linear systems in the above algorithm can be solved in parallel since they are independent of one another. The diagonal D can then be obtained at the end of the process.
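The whole bordering loop can be sketched in dense form. In the sketch below, exact dense solves (np.linalg.solve) stand in for the approximate sparse-sparse iterative solves of steps 2 and 3; the function name and the layout of the inverse factors W (new row [-y_k^T, 1]) and Z (new column [-z_k; 1]) are assumptions consistent with the bordering relations, not notation from the text:

```python
import numpy as np

def approx_inverse_factors(A):
    """Bordering sketch: returns W, Z, d with W @ A @ Z = diag(d)
    when the systems (10.67)-(10.68) are solved exactly. Here dense
    solves stand in for sparse-sparse iterative solvers (e.g. BCG)."""
    n = A.shape[0]
    W = np.eye(n)    # lower inverse factor: row k is [-y_k^T, 1, 0, ...]
    Z = np.eye(n)    # upper inverse factor: column k is [-z_k; 1; 0; ...]
    d = np.empty(n)
    d[0] = A[0, 0]
    for k in range(1, n):
        Ak, v, w, alpha = A[:k, :k], A[:k, k], A[k, :k], A[k, k]
        z = np.linalg.solve(Ak, v)     # system (10.67)
        y = np.linalg.solve(Ak.T, w)   # system (10.68)
        W[k, :k] = -y
        Z[:k, k] = -z
        # formula (10.70): remains meaningful when z, y are only approximate
        d[k] = alpha - w @ z - y @ v + y @ (Ak @ z)
    return W, Z, d

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)) + 8 * np.eye(8)   # safely nonsingular example
W, Z, d = approx_inverse_factors(A)
assert np.allclose(W @ A @ Z, np.diag(d))                      # W A Z = D
assert np.allclose(Z @ np.diag(1 / d) @ W, np.linalg.inv(A))   # A^{-1} = Z D^{-1} W
```

With exact solves the two assertions hold to machine precision; with approximate solves the product W A Z is only approximately diagonal, and Z D^{-1} W serves as the factored approximate inverse preconditioner.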

This approach is particularly suitable in the symmetric case. Since there is only one factor, the amount of work is halved. In addition, there is no problem with existence in the positive definite case, as is shown in the following lemma, which states that δ_{k+1} is always positive when A is SPD, independently of the accuracy with which the system (10.67) is solved.

LEMMA. Let A be SPD. Then, the scalar δ_{k+1} as computed by (10.70) is positive.

Proof. In the symmetric case, w_k = v_k. Note that δ_{k+1} as computed by formula (10.70) is the (k+1, k+1) element of the matrix L_{k+1} A_{k+1} L_{k+1}^T. It is positive because A_{k+1} is SPD. This is independent of the accuracy for solving the system to obtain z_k.
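The lemma is easy to check numerically: for an SPD bordered matrix, δ_{k+1} computed by (10.70) stays positive no matter how badly (10.67) is solved. The small experiment below (illustrative values, not from the text) perturbs the exact solution heavily and verifies positivity:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 7
M = rng.standard_normal((n + 1, n + 1))
A_full = M @ M.T + (n + 1) * np.eye(n + 1)   # SPD bordered matrix A_{k+1}
A_k = A_full[:n, :n]                          # leading block A_k
v_k = A_full[:n, n]                           # symmetric case: w_k = v_k
alpha = A_full[n, n]                          # corner entry alpha_{k+1}

for _ in range(100):
    # deliberately poor approximate solution of (10.67)
    z = np.linalg.solve(A_k, v_k) + rng.standard_normal(n)
    # formula (10.70) with w_k = v_k and y_k = z_k (single factor):
    # this is [-z^T, 1] A_{k+1} [-z; 1], the last diagonal entry of
    # L_{k+1} A_{k+1} L_{k+1}^T, hence positive since A_{k+1} is SPD.
    delta = alpha - 2 * (v_k @ z) + z @ (A_k @ z)
    assert delta > 0
```

The assertion never fails because δ is a quadratic form of the SPD matrix A_{k+1} evaluated at the nonzero vector [-z; 1], exactly as in the proof.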

y

In the general nonsymmetric case, there is no guarantee that will be nonzero,

f

unless the systems (10.67) and (10.68) are solved accurately enough. There is no practical

y

problem here, since is computable. The only question remaining is a theoretical one:

f

y

Can be guaranteed to be nonzero if the systems are solved with enough accuracy?

f

Intuitively, if the system is solved exactly, then the matrix D must be nonzero since it is equal to the matrix D of the exact inverse factors in this case. The minimal assumption to make is that each A_k is nonsingular. Let δ*_{k+1} be the value that would be obtained if at least one of the systems (10.67) or (10.68) is solved exactly. According to equation (10.69), in this situation this value is given by

δ*_{k+1} = α_{k+1} - w_k^T A_k^{-1} v_k.                       (10.71)

5 y

¦ ¢

If is nonsingular, then . To see this refer to the de¬ning equation (10.66)

C

f f

f

and compute the product in the general case. Let and be the residuals ¢

f f

obtained for these linear systems, i.e.,

r_k = v_k - A_k z_k,    s_k = w_k - A_k^T y_k.                 (10.72)