in which $D$ is the diagonal of $A$ and $L$, $U$ are strictly lower and strictly upper triangular matrices. Then a sparse representation of $L$ and $U$ is used in which, typically, $L$ and $U^T$ are stored in the CSR format and $D$ is stored separately.
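For concreteness, such a splitting can be sketched in pure Python; the function name and the dense input are illustrative assumptions, but the layout follows the storage scheme just described (the strictly lower part and the transpose of the strictly upper part row-wise in CSR arrays, the diagonal kept separately):

```python
# Sketch: split a dense matrix A into D + L + U and store L and U^T in CSR
# (values, column indices, row pointers), with the diagonal D kept apart.
# Function and variable names are hypothetical, not from the text.

def split_to_csr(a):
    n = len(a)
    d = [a[i][i] for i in range(n)]          # diagonal D, stored separately

    def csr_of(rows):
        vals, cols, ptr = [], [], [0]
        for row in rows:
            for j, v in row:
                vals.append(v)
                cols.append(j)
            ptr.append(len(vals))
        return vals, cols, ptr

    # strictly lower part L, row by row
    lower = [[(j, a[i][j]) for j in range(i) if a[i][j] != 0] for i in range(n)]
    # strictly upper part stored as U^T: column i of U becomes row i
    upper_t = [[(j, a[j][i]) for j in range(i) if a[j][i] != 0] for i in range(n)]
    return d, csr_of(lower), csr_of(upper_t)

A = [[4.0, 1.0, 0.0],
     [2.0, 5.0, 1.0],
     [0.0, 3.0, 6.0]]
d, (lv, lc, lp), (uv, uc, up) = split_to_csr(A)
```

For a structurally symmetric matrix, `lc`/`lp` and `uc`/`up` coincide, which is the storage saving mentioned below.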

Incomplete Factorization techniques may be developed for matrices in this format without having to convert them into the CSR format. Two notable advantages of this approach are (1) the savings in storage for structurally symmetric matrices, and (2) the fact that the algorithm gives a symmetric preconditioner when the original matrix is symmetric.

Consider the sequence of matrices

$$
A_{k+1} = \begin{pmatrix} A_k & v_k \\ w_k & \alpha_{k+1} \end{pmatrix},
$$

where $A_n = A$. If $A_k$ is nonsingular and its LDU factorization

$$
A_k = L_k D_k U_k
$$

is already available, then the LDU factorization of $A_{k+1}$ is

$$
A_{k+1} =
\begin{pmatrix} L_k & 0 \\ y_k & 1 \end{pmatrix}
\begin{pmatrix} D_k & 0 \\ 0 & d_{k+1} \end{pmatrix}
\begin{pmatrix} U_k & z_k \\ 0 & 1 \end{pmatrix}
$$

in which

$$z_k = D_k^{-1} L_k^{-1} v_k, \tag{10.37}$$

$$y_k = w_k U_k^{-1} D_k^{-1}, \tag{10.38}$$

$$d_{k+1} = \alpha_{k+1} - y_k D_k z_k. \tag{10.39}$$
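As a sanity check, the bordering formulas (10.37)–(10.39) can be verified on a small dense example. The following NumPy sketch uses hypothetical $2\times 2$ factors and confirms that the bordered triple product reproduces the bordered matrix:

```python
import numpy as np

# Hypothetical 2x2 example: A_k with known LDU factors, bordered by v, w, alpha.
Lk = np.array([[1.0, 0.0], [0.5, 1.0]])      # unit lower triangular L_k
Dk = np.diag([4.0, 3.0])                      # diagonal D_k
Uk = np.array([[1.0, 0.25], [0.0, 1.0]])      # unit upper triangular U_k
Ak = Lk @ Dk @ Uk

v = np.array([1.0, 2.0])                      # new column v_k
w = np.array([2.0, 1.0])                      # new row w_k
alpha = 10.0                                  # new diagonal entry alpha_{k+1}

# Equations (10.37)-(10.39):
z = np.linalg.solve(Dk, np.linalg.solve(Lk, v))    # z_k = D_k^-1 L_k^-1 v_k
y = np.linalg.solve(Dk, np.linalg.solve(Uk.T, w))  # y_k = w_k U_k^-1 D_k^-1 (row)
d = alpha - y @ Dk @ z                             # d_{k+1}

# Assemble the bordered factors and check they reproduce A_{k+1}.
L1 = np.block([[Lk, np.zeros((2, 1))], [y.reshape(1, 2), np.ones((1, 1))]])
D1 = np.diag(np.append(np.diag(Dk), d))
U1 = np.block([[Uk, z.reshape(2, 1)], [np.zeros((1, 2)), np.ones((1, 1))]])
A1 = np.block([[Ak, v.reshape(2, 1)], [w.reshape(1, 2), np.array([[alpha]])]])
assert np.allclose(L1 @ D1 @ U1, A1)
```

Note that the two triangular solves are with unit triangular matrices, and $d_{k+1}$ costs one scaled dot product, exactly as stated next.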

Hence, the last row/column pairs of the factorization can be obtained by solving two unit lower triangular systems and computing a scaled dot product. This can be exploited for sparse matrices provided an appropriate data structure is used to take advantage of the sparsity of the matrices $L_k$, $U_k$, as well as the vectors $v_k$, $w_k$, $y_k$, and $z_k$. A convenient data structure for this is to store each row/column pair as a single row in sparse mode.

All these pairs are stored in sequence. The diagonal elements are stored separately. This is called the Unsymmetric Sparse Skyline (USS) format. Each step of the ILU factorization based on this approach will consist of two approximate sparse linear system solutions and a sparse dot product. The question that arises is: How can a sparse triangular system be solved inexpensively? It would seem natural to solve the triangular systems (10.37) and (10.38) exactly and then drop small terms at the end, using a numerical dropping strategy. However, the total cost of computing the ILU factorization with this strategy would be $O(n^2)$ operations at least, which is not acceptable for very large problems. Since only an approximate solution is required, the first idea that comes to mind is the truncated Neumann series,

$$z_k = D_k^{-1} L_k^{-1} v_k = D_k^{-1}\left(I + E_k + E_k^2 + \cdots + E_k^p\right) v_k, \tag{10.40}$$

in which $E_k = I - L_k$. In fact, by analogy with ILU($p$), it is interesting to note that the powers of $E_k$ will also tend to become smaller as $p$ increases. A close look at the structure of $E_k^p v_k$ shows that there is indeed a strong relation between this approach and ILU($p$) in the symmetric case. Now we make another important observation, namely, that the vector $E_k^j v_k$ can be computed in sparse-sparse mode, i.e., in terms of operations involving products of sparse matrices by sparse vectors. Without exploiting this, the total cost would still be $O(n^2)$. When multiplying a sparse matrix $A$ by a sparse vector $v$, the operation can best be done by accumulating the linear combinations of the columns of $A$. A sketch of the resulting ILUS algorithm is as follows.

ALGORITHM: ILUS

1. Set $A_1 = D_1 = a_{11}$, $L_1 = U_1 = 1$
2. For $k = 1, \ldots, n - 1$ Do:
3. Compute $z_k$ by (10.40) in sparse-sparse mode
4. Compute $y_k$ in a similar way
5. Apply numerical dropping to $y_k$ and $z_k$
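The two kernels this loop relies on, the truncated series (10.40) evaluated in sparse-sparse mode and the numerical dropping step, can be sketched with plain dictionaries. The storage layout and all names below are illustrative assumptions, not the USS data structure itself:

```python
# Sparse vectors are dicts {index: value}; the strictly lower triangular
# E = I - L is stored by columns as {j: {i: e_ij}}, so that E @ v touches
# only the columns j for which v_j != 0 ("sparse-sparse mode").

def spmv_columns(E_cols, v):
    """Multiply E by sparse v by accumulating the scaled columns of E."""
    acc = {}
    for j, vj in v.items():
        for i, eij in E_cols.get(j, {}).items():
            acc[i] = acc.get(i, 0.0) + eij * vj
    return acc

def truncated_neumann(E_cols, diag, v, p, tol):
    """Approximate z = D^-1 L^-1 v via (10.40), z = D^-1 (I + E + ... + E^p) v,
    then apply numerical dropping of small entries (step 5 of the loop)."""
    term, acc = dict(v), dict(v)
    for _ in range(p):
        term = spmv_columns(E_cols, term)        # next term E^j v
        for i, x in term.items():
            acc[i] = acc.get(i, 0.0) + x
    z = {i: x / diag[i] for i, x in acc.items()}
    return {i: x for i, x in z.items() if abs(x) >= tol}

# L = [[1,0,0],[0.5,1,0],[0.01,0.2,1]]  =>  E = I - L, stored by columns:
E = {0: {1: -0.5, 2: -0.01}, 1: {2: -0.2}}
diag = [2.0, 4.0, 5.0]
v = {0: 1.0}                                     # sparse right-hand side
z = truncated_neumann(E, diag, v, p=2, tol=1e-3)
```

Because $E$ is strictly lower triangular it is nilpotent, so for this $3\times 3$ example $p = 2$ already makes the series exact; in practice a small $p$ gives only an approximation, which is precisely what the incomplete factorization requires.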