I am grateful to a number of colleagues who proofread or reviewed different versions of

the manuscript. Among them are Randy Bramley (University of Indiana at Bloomington),

Xiao-Chuan Cai (University of Colorado at Boulder), Tony Chan (University of California

at Los Angeles), Jane Cullum (IBM, Yorktown Heights), Alan Edelman (Massachusetts

Institute of Technology), Paul Fischer (Brown University), David Keyes (Old Dominion

University), Beresford Parlett (University of California at Berkeley) and Shang-Hua Teng

(University of Minnesota). Their numerous comments, corrections, and encouragement

were a highly appreciated contribution. In particular, they helped improve the presenta-

tion considerably and prompted the addition of a number of topics missing from earlier

versions.

This book evolved from several successive improvements of a set of lecture notes for

the course “Iterative Methods for Linear Systems” which I taught at the University of Min-

nesota in the last few years. I apologize to those students who used the earlier error-laden

and incomplete manuscripts. Their input and criticism contributed significantly to improving

the manuscript. I also wish to thank those students at MIT (with Alan Edelman) and

UCLA (with Tony Chan) who used this book in manuscript form and provided helpful

feedback. My colleagues at the University of Minnesota, staff and faculty members, have

helped in different ways. I wish to thank in particular Ahmed Sameh for his encouragement

and for fostering a productive environment in the department. Finally, I am grateful

to the National Science Foundation for their continued financial support of my research,

part of which is represented in this work.

Yousef Saad

Suggestions for Teaching

This book can be used as a text to teach a graduate-level course on iterative methods for

linear systems. Selecting topics to teach depends on whether the course is taught in a

mathematics department or a computer science (or engineering) department, and whether

the course is over a semester or a quarter. Here are a few comments on the relevance of the

topics in each chapter.

For a graduate course in a mathematics department, much of the material in Chapter 1

should be known already. For non-mathematics majors most of the chapter must be covered

or reviewed to acquire a good background for later chapters. The important topics for

the rest of the book are in Sections 1.8.1, 1.8.3, 1.8.4, 1.9, and 1.11. Section 1.12 is best

treated at the beginning of Chapter 5. Chapter 2 is essentially independent of the rest

and could be skipped altogether in a quarter course. One lecture on finite differences and

the resulting matrices would be enough for a non-math course. Chapter 3 should make

the student familiar with some implementation issues associated with iterative solution

procedures for general sparse matrices. In a computer science or engineering department,

this can be very relevant. For mathematicians, a mention of the graph theory aspects of

sparse matrices and a few storage schemes may be sufficient. Most students at this level

should be familiar with a few of the elementary relaxation techniques covered in Chapter

4. The convergence theory can be skipped for non-math majors. These methods are now

often used as preconditioners and this may be the only motive for covering them.

Chapter 5 introduces key concepts and presents projection techniques in general terms.

Non-mathematicians may wish to skip Section 5.2.3. Otherwise, it is recommended to

start the theory section by going back to Section 1.12 on general definitions on projectors.

Chapters 6 and 7 represent the heart of the matter. It is recommended to describe the first

algorithms carefully and put emphasis on the fact that they generalize the one-dimensional

methods covered in Chapter 5. It is also important to stress the optimality properties of

those methods in Chapter 6 and the fact that these follow immediately from the properties

of projectors seen in Section 1.12. When covering the algorithms in Chapter 7, it is crucial

to point out the main differences between them and those seen in Chapter 6. The variants

such as CGS, BICGSTAB, and TFQMR can be covered in a short time, omitting details of

the algebraic derivations or covering only one of the three. The class of methods based on

the normal equation approach, i.e., Chapter 8, can be skipped in a math-oriented course,

especially in the case of a quarter system. For a semester course, selected topics may be

Sections 8.1, 8.2, and 8.4.

Currently, preconditioning is known to be the critical ingredient in the success of it-

erative methods in solving real-life problems. Therefore, at least some parts of Chapter 9

and Chapter 10 should be covered. Section 9.2 and (very briefly) 9.3 are recommended.

From Chapter 10, discuss the basic ideas in Sections 10.1 through 10.3. The rest could be

skipped in a quarter course.

Chapter 11 may be useful to present to computer science majors, but may be skimmed

or skipped in a mathematics or an engineering course. Parts of Chapter 12 could be taught

primarily to make the students aware of the importance of “alternative” preconditioners.

Suggested selections are: 12.2, 12.4, and 12.7.2 (for engineers). Chapter 13 presents an important

research area and is primarily geared to mathematics majors. Computer scientists

or engineers may prefer to cover this material in less detail.

To make these suggestions more specific, the following two tables are offered as sample

course outlines. Numbers refer to sections in the text. A semester course represents

approximately 30 lectures of 75 minutes each whereas a quarter course is approximately

20 lectures of 75 minutes each. Different topics are selected for a mathematics course and

a non-mathematics course.

Semester course

Weeks     Mathematics                    Computer Science / Eng.
          1.9 – 1.13                     1.1 – 1.6 (Read)
1 – 3     2.1 – 2.5                      1.7 – 1.13, 2.1 – 2.2
          3.1 – 3.3, 3.7                 3.1 – 3.7
          4.1 – 4.3                      4.1 – 4.2
4 – 6     5.1 – 5.4                      5.1 – 5.2.1
          6.1 – 6.3                      6.1 – 6.3
          6.4 – 6.7 (Except 6.5.2)       6.4 – 6.5 (Except 6.5.5)
7 – 9     6.9 – 6.11                     6.7.1, 6.8 – 6.9, 6.11.3
          7.1 – 7.3                      7.1 – 7.3
          7.4.1; 7.4.2 – 7.4.3 (Read)    7.4.1; 7.4.2 – 7.4.3 (Read)
10 – 12   8.1, 8.2, 8.4; 9.1 – 9.3       8.1 – 8.3; 9.1 – 9.3
          10.1 – 10.3                    10.1 – 10.4
          10.5.1 – 10.5.6                10.5.1 – 10.5.4
13 – 15   10.6; 12.2 – 12.4              11.1 – 11.4 (Read); 11.5 – 11.6
          13.1 – 13.6                    12.1 – 12.2; 12.4 – 12.7

Quarter course

Weeks     Mathematics                    Computer Science / Eng.
1 – 2     1.9 – 1.13, 3.1 – 3.2          1.1 – 1.6 (Read); 3.1 – 3.7
          4.1 – 4.3                      4.1
3 – 4     5.1 – 5.4                      5.1 – 5.2.1
          6.1 – 6.4                      6.1 – 6.3
5 – 6     6.4 – 6.7 (Except 6.5.2)       6.4 – 6.5 (Except 6.5.5)
          6.11, 7.1 – 7.3                6.7.1, 6.11.3, 7.1 – 7.3
7 – 8     7.4.1; 7.4.2 – 7.4.3 (Read)    7.4.1; 7.4.2 – 7.4.3 (Read)
          9.1 – 9.3; 10.1 – 10.3         9.1 – 9.3; 10.1 – 10.3
9 – 10    10.6; 12.2 – 12.4              11.1 – 11.4 (Read); 11.5 – 11.6
          13.1 – 13.4                    12.1 – 12.2; 12.4 – 12.7

otherwise stated. A complex n × m matrix A is an n × m array of complex numbers

a_ij, i = 1, ..., n, j = 1, ..., m.

The set of all n × m matrices is a complex vector space denoted by C^(n×m). The main

operations with matrices are the following:

Addition: C = A + B, where the matrices A, B, and C are of size n × m and

c_ij = a_ij + b_ij, i = 1, ..., n, j = 1, ..., m.
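As an illustrative aside (not from the text), the elementwise addition just defined can be sketched in Python, with complex matrices stored as nested lists; the helper name mat_add is invented for this example.

```python
def mat_add(A, B):
    """Return C = A + B with c_ij = a_ij + b_ij.

    A and B are n x m matrices stored as nested lists of
    (possibly complex) numbers; they must have the same size.
    """
    n, m = len(A), len(A[0])
    # Addition is only defined for matrices of equal size.
    assert len(B) == n and all(len(row) == m for row in B), "size mismatch"
    return [[A[i][j] + B[i][j] for j in range(m)] for i in range(n)]

# Example: a 2 x 2 complex case.
A = [[1 + 2j, 0], [3, 4 - 1j]]
B = [[0 + 1j, 5], [1, 2j]]
C = mat_add(A, B)  # [[1 + 3j, 5], [4, 4 + 1j]]
```

The nested-list representation keeps the definition transparent; in practice a library such as NumPy performs the same elementwise operation.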