Incomplete Gram-Schmidt and ILQ

Consider a general sparse matrix A and denote its rows by a_1, a_2, ..., a_n. The (complete) LQ factorization of A is defined by

    A = L Q,

where L is a lower triangular matrix and Q is unitary, i.e., Q Q^T = I. The L factor in the above factorization is identical with the Cholesky factor of the matrix B = A A^T. Indeed, if A = L Q, where L is a lower triangular matrix having positive diagonal elements, then

    B = A A^T = L Q Q^T L^T = L L^T.

The uniqueness of the Cholesky factorization with a factor L having positive diagonal elements shows that L is equal to the Cholesky factor of B. This relationship can be exploited to obtain preconditioners for the Normal Equations.

Thus, there are two ways to obtain the matrix L. The first is to form the matrix B explicitly and use a sparse Cholesky factorization. This requires forming the data structure of the matrix A A^T, which may be much denser than A. However, reordering techniques can be used to reduce the amount of work required to compute L. This approach is known as symmetric squaring.
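As a rough illustration of why A A^T may be much denser than A, consider an "arrow" sparsity pattern (a constructed example, not from the text): the product fills in completely.

```python
# Constructed example (not from the text): an "arrow" matrix A has nonzeros
# only on its diagonal, first row, and first column (3n - 2 entries), yet
# B = A A^T, which symmetric squaring must form explicitly, is fully dense.

n = 8

A = [[0.0] * n for _ in range(n)]
for i in range(n):
    A[i][i] = 2.0   # diagonal
    A[0][i] = 1.0   # first row
    A[i][0] = 1.0   # first column

# Form B = A A^T explicitly.
B = [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(n)]
     for i in range(n)]

def nnz(M):
    return sum(1 for row in M for x in row if x != 0.0)

nnz_A, nnz_B = nnz(A), nnz(B)
print(nnz_A, nnz_B)  # 22 versus 64: A A^T has no zero entries at all
```

Reducing this kind of fill-in is precisely the role of the reordering techniques mentioned above.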

A second approach is to use the Gram-Schmidt process. This idea may seem undesirable at first because of its poor numerical properties when orthogonalizing a large number of vectors. However, because the rows remain very sparse in the incomplete LQ factorization (to be described shortly), any given row of A will typically be orthogonal to most of the previous rows of Q. As a result, the Gram-Schmidt process is much less prone to numerical difficulties. From the data structure point of view, Gram-Schmidt is optimal because it does not require allocating more space than is necessary, as is the case with approaches based on symmetric squaring. Another advantage over symmetric squaring is the simplicity of the orthogonalization process and its strong similarity with the LU factorization. At every step, a given row is combined with previous rows and then normalized. The incomplete Gram-Schmidt procedure is modeled after the following algorithm.

ALGORITHM: Row Gram-Schmidt

1. For i = 1, 2, ..., n Do:
2.    Compute l_ij := (a_i, q_j), for j = 1, 2, ..., i - 1,
3.    Compute q_i := a_i - sum_{j=1}^{i-1} l_ij q_j, and l_ii := ||q_i||_2,
4.    If l_ii = 0 then Stop; else Compute q_i := q_i / l_ii.
5. EndDo
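The row Gram-Schmidt procedure above can be transcribed directly into plain Python on dense rows (a sketch for illustration, not the book's implementation; in practice the rows are kept sparse). It also checks the stated relationship that L is the Cholesky factor of A A^T.

```python
import math

def row_gram_schmidt(A):
    """Return (L, Q) with A = L Q, L lower triangular, Q having orthonormal rows."""
    n, m = len(A), len(A[0])
    L = [[0.0] * n for _ in range(n)]
    Q = []
    for i in range(n):
        # Step 2: l_ij := (a_i, q_j) for j = 1, ..., i-1
        for j in range(i):
            L[i][j] = sum(A[i][k] * Q[j][k] for k in range(m))
        # Step 3: q_i := a_i - sum_{j<i} l_ij q_j, and l_ii := ||q_i||_2
        q = [A[i][k] - sum(L[i][j] * Q[j][k] for j in range(i)) for k in range(m)]
        L[i][i] = math.sqrt(sum(x * x for x in q))
        # Step 4: breakdown test, then normalization
        if L[i][i] == 0.0:
            raise ValueError("zero row encountered; factorization breaks down")
        Q.append([x / L[i][i] for x in q])
    return L, Q

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
L, Q = row_gram_schmidt(A)

# Check A = L Q ...
recon = [[sum(L[i][j] * Q[j][k] for j in range(3)) for k in range(3)]
         for i in range(3)]
err = max(abs(recon[i][k] - A[i][k]) for i in range(3) for k in range(3))

# ... and L L^T = A A^T, i.e., L is the Cholesky factor of A A^T.
B = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(3)] for i in range(3)]
LLt = [[sum(L[i][k] * L[j][k] for k in range(3)) for j in range(3)] for i in range(3)]
err2 = max(abs(B[i][j] - LLt[i][j]) for i in range(3) for j in range(3))
print(err, err2)  # both near machine precision
```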

If the algorithm completes, then it will result in the factorization A = L Q, where the rows of Q and L are the rows defined in the algorithm. To define an incomplete factorization, a dropping strategy similar to those defined for Incomplete LU factorizations must be incorporated. This can be done in very general terms as follows. Let P_L and P_Q be the chosen zero patterns for the matrices L and Q, respectively. The only restriction on P_L is that

    P_L ⊂ { (i, j) | i ≠ j }.

As for P_Q, for each row there must be at least one nonzero element, i.e.,

    { j | (i, j) ∈ P_Q } ≠ { 1, 2, ..., n },   for i = 1, ..., n.

These two sets can be selected in various ways. For example, similar to ILUT, they can be

determined dynamically by using a drop strategy based on the magnitude of the elements

generated. As before, x_i denotes the i-th row of a matrix X and x_ij its (i, j)-th entry.
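A magnitude-based drop rule of the kind just described might be sketched as follows (an assumption-laden sketch, not the text's specific rule: the relative threshold tau is a made-up parameter, and the keep-the-largest-entry safeguard is one simple way to honor the restriction that each row retains at least one nonzero).

```python
import math

def drop_small(row, tau):
    """Drop entries of a sparse row (dict: column index -> value) whose
    magnitude falls below tau times the row's 2-norm, but always keep the
    largest-magnitude entry so the row never becomes entirely zero."""
    norm = math.sqrt(sum(v * v for v in row.values()))
    jmax = max(row, key=lambda j: abs(row[j]))
    return {j: v for j, v in row.items()
            if abs(v) >= tau * norm or j == jmax}

row = {0: 3.0, 2: 0.001, 5: -4.0, 7: 0.02}
print(sorted(drop_small(row, tau=0.01)))  # [0, 5]
```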

ALGORITHM: Incomplete Gram-Schmidt

1. For i = 1, 2, ..., n Do:
2.    Compute l_ij := (a_i, q_j), for j = 1, 2, ..., i - 1,