Independent set orderings can be viewed as permutations that put the original matrix in the form

$$\begin{pmatrix} D & F \\ E & C \end{pmatrix}$$

in which $D$ is diagonal, but $C$ can be arbitrary. This amounts to a less restrictive form of

multicoloring, in which a set of vertices in the adjacency graph is found so that no equation

in the set involves unknowns from the same set. A few algorithms for finding independent

set orderings of a general sparse graph were discussed in Chapter 3.
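The simplest of those algorithms is a greedy traversal of the adjacency graph. The sketch below is illustrative, not one of the Chapter 3 algorithms verbatim; the dict-of-sets adjacency structure and the example graph are hypothetical.

```python
def greedy_independent_set(adj):
    """Greedy maximal independent set of an undirected graph.

    adj: dict mapping each vertex to the set of its neighbours
    (the adjacency graph of the sparse matrix, diagonal excluded).
    Returns a list of vertices no two of which are adjacent, so no
    equation in the set involves unknowns from the same set.
    """
    independent = []
    excluded = set()                  # vertices adjacent to a chosen vertex
    for v in adj:                     # visit vertices in their given order
        if v not in excluded:
            independent.append(v)
            excluded.update(adj[v])   # neighbours can no longer be chosen
    return independent

# Path graph 0-1-2-3-4 (adjacency graph of a tridiagonal matrix):
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(greedy_independent_set(adj))    # → [0, 2, 4]
```

Different visiting orders generally yield different (and differently sized) independent sets, which is why Chapter 3 discusses several ordering strategies.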

The rows associated with an independent set can be used as pivots simultaneously.

When such rows are eliminated, a smaller linear system results, which is again sparse.

Then we can find an independent set for this reduced system and repeat the process of

reduction. The resulting second reduced system is called the second-level reduced system.

The process can be repeated recursively a few times. As the level of the reduction increases,

the reduced systems gradually lose their sparsity. A direct solution method would continue

the reduction until the reduced system is small enough or dense enough to switch to a dense

Gaussian elimination to solve it. This process is illustrated in Figure 12.6. There exists a

number of sparse direct solution techniques based on this approach.

Figure 12.6  Illustration of two levels of multi-elimination for sparse linear systems.

After a brief review of the direct solution method based on independent set orderings,

we will explain how to exploit this approach for deriving incomplete LU factorizations by

incorporating drop tolerance strategies.

Multi-Elimination

We start with a discussion of an exact reduction step. Let $A_l$ be the matrix obtained at the $l$-th step of the reduction, $l = 0, \ldots, n_{\mathrm{lev}}$, with $A_0 = A$. Assume that an independent set ordering is applied to $A_l$ and that the matrix is permuted accordingly as follows:

$$P_l A_l P_l^T = \begin{pmatrix} D_l & F_l \\ E_l & C_l \end{pmatrix}$$

where $D_l$ is a diagonal matrix. Now eliminate the unknowns of the independent set to get the next reduced matrix,

$$A_{l+1} = C_l - E_l D_l^{-1} F_l .$$

This results, implicitly, in a block LU factorization

$$P_l A_l P_l^T = \begin{pmatrix} D_l & F_l \\ E_l & C_l \end{pmatrix} = \begin{pmatrix} I & O \\ E_l D_l^{-1} & I \end{pmatrix} \times \begin{pmatrix} D_l & F_l \\ O & A_{l+1} \end{pmatrix}$$

with $A_{l+1}$ defined above. Thus, in order to solve a system with the matrix $A_l$, both a forward and a backward substitution need to be performed with the block matrices on the right-hand side of the above system. The backward solution involves solving a system with the matrix $A_{l+1}$.
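As a concrete illustration of the two sweeps, the sketch below uses a hypothetical $3\times 3$ permuted system (the data are not from the text): a diagonal $2\times 2$ block $D$ holding the independent set and a single remaining unknown, so the reduced system is a scalar.

```python
# Hypothetical permuted 3x3 system  P A P^T = [ D  F ]
#                                             [ E  C ]
# with D = diag(2, 4) for the independent set and one remaining unknown.
D = [2.0, 4.0]                 # diagonal entries of D
F = [[1.0], [1.0]]             # coupling: independent set -> remaining unknown
E = [[1.0, 2.0]]               # coupling: remaining unknown -> independent set
C = [[5.0]]

# Reduced matrix (Schur complement) A1 = C - E D^{-1} F, here a scalar:
A1 = C[0][0] - sum(E[0][i] * F[i][0] / D[i] for i in range(2))

b1, b2 = [6.0, 8.0], [10.0]    # right-hand side partitioned the same way

# Forward sweep with the block lower factor:  z2 = b2 - E D^{-1} b1
z2 = b2[0] - sum(E[0][i] * b1[i] / D[i] for i in range(2))

# Backward sweep with the block upper factor:
x2 = z2 / A1                                    # solve the reduced system
x1 = [(b1[i] - F[i][0] * x2) / D[i] for i in range(2)]

print(x1 + [x2])               # → [2.625, 1.8125, 0.75]
```

Multiplying the original matrix $\begin{pmatrix} 2 & 0 & 1 \\ 0 & 4 & 1 \\ 1 & 2 & 5 \end{pmatrix}$ by this solution reproduces the right-hand side $(6, 8, 10)^T$, confirming that the two sweeps solve the full system exactly.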

This block factorization approach can be used recursively until a system results that is

small enough to be solved with a standard method. The transformations used in the elimination process, i.e., the matrices $E_l D_l^{-1}$ and the matrices $F_l$, must be saved. The permutation


matrices $P_l$ can also be saved. Alternatively, the matrices involved in the factorization at

each new reordering step can be permuted explicitly.
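The recursive exact reduction can be sketched as follows. This is an illustrative implementation under simplifying assumptions, not the book's code: dense storage, no dropping, and an even/odd split standing in for a genuine independent set ordering (valid for tridiagonal matrices, and checked at each level).

```python
def direct_solve(A, b):
    """Dense Gaussian elimination without pivoting (small systems only)."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def solve_by_reduction(A, b, min_size=2):
    """Solve A x = b by recursive independent-set reduction (exact, no dropping).

    The even-indexed unknowns are assumed to form an independent set
    (true for a tridiagonal matrix, where the even/odd split is a
    red-black ordering); the assumption is verified at each level.
    """
    n = len(A)
    if n <= min_size:
        return direct_solve(A, b)            # switch to dense elimination
    ind, rest = list(range(0, n, 2)), list(range(1, n, 2))
    for i in ind:
        for j in ind:
            if i != j and A[i][j] != 0.0:
                raise ValueError("chosen unknowns are not independent")
    D = [A[i][i] for i in ind]               # diagonal block D_l
    F = [[A[i][j] for j in rest] for i in ind]
    EDinv = [[A[i][j] / A[j][j] for j in ind] for i in rest]   # E_l D_l^{-1}
    # Reduced matrix A_{l+1} = C_l - E_l D_l^{-1} F_l:
    A1 = [[A[rest[i]][rest[j]]
           - sum(EDinv[i][k] * F[k][j] for k in range(len(ind)))
           for j in range(len(rest))] for i in range(len(rest))]
    b1, b2 = [b[i] for i in ind], [b[i] for i in rest]
    # Forward sweep: z2 = b2 - E_l D_l^{-1} b1
    z2 = [b2[i] - sum(EDinv[i][k] * b1[k] for k in range(len(ind)))
          for i in range(len(rest))]
    x2 = solve_by_reduction(A1, z2, min_size)    # recurse on the reduced system
    # Backward sweep: x1 = D^{-1} (b1 - F x2)
    x1 = [(b1[k] - sum(F[k][j] * x2[j] for j in range(len(rest)))) / D[k]
          for k in range(len(ind))]
    x = [0.0] * n
    for k, i in enumerate(ind):
        x[i] = x1[k]
    for k, i in enumerate(rest):
        x[i] = x2[k]
    return x

# Demo: 7x7 tridiagonal (1D Laplacian) system whose exact solution is all ones.
n = 7
A = [[2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0) for j in range(n)]
     for i in range(n)]
b = [sum(row) for row in A]      # A times the all-ones vector
x = solve_by_reduction(A, b)
print(x)                         # → [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```

For the tridiagonal example, each reduced matrix is again tridiagonal, so the even/odd split remains a valid independent set at every level; in general, an independent set must be recomputed from the graph of each reduced matrix.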

ILUM

The successive reduction steps described above will give rise to matrices that become more

and more dense due to the fill-ins introduced by the elimination process. In iterative methods, a common cure for this is to neglect some of the fill-ins introduced by using a simple dropping strategy as the reduced systems are formed. For example, any fill-in element introduced is dropped, whenever its size is less than a given tolerance times the 2-norm of

the original row. Thus, an “approximate” version of the successive reduction steps can be

used to provide an approximate solution to $A^{-1}v$ for any given $v$. This can be used