1 Let $A$ be a Symmetric Positive Definite matrix and consider $N = I - D^{-1}A$, where $D$ is a block diagonal of $A$.

a. Show that $D^{-1}A$ is self-adjoint with respect to the $D$-inner product.

b. Show that $N^k$ is self-adjoint with respect to the $D$-inner product for any integer $k$.

c. Show that the Neumann series expansion preconditioner defined by the right-hand side of (12.3) leads to a preconditioned matrix that is self-adjoint with respect to the $D$-inner product.

d. Describe an implementation of the preconditioned CG algorithm using this preconditioner.
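For the implementation question above, here is a minimal runnable sketch (Python/NumPy; the 1-D Laplacian test matrix, the point-Jacobi diagonal $D$, and the series degree $s = 3$ are illustrative assumptions, not taken from the text). It applies a Neumann-series preconditioner of the form $M^{-1} = (I + N + \cdots + N^s)D^{-1}$, with $N = I - D^{-1}A$, inside preconditioned CG using only matrix-vector products, and also checks the self-adjointness claim numerically:

```python
import numpy as np

def neumann_apply(A, d, r, s=3):
    """Apply M^{-1} r = (I + N + ... + N^s) D^{-1} r with N = I - D^{-1}A,
    where d holds the diagonal of D.  Uses only matvecs, no matrix powers."""
    y = r / d                  # y = D^{-1} r
    z = y.copy()
    for _ in range(s):
        y = y - (A @ y) / d    # y <- N y
        z = z + y
    return z

def pcg(A, b, d, s=3, tol=1e-10, maxit=200):
    """Preconditioned Conjugate Gradient with the preconditioner above."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = neumann_apply(A, d, r, s)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = neumann_apply(A, d, r, s)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# SPD test problem: 1-D Laplacian with point-Jacobi D (an assumed choice)
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
d = np.diag(A).copy()
b = np.ones(n)
x = pcg(A, b, d)

# Numerical check of the self-adjointness claim: D (M^{-1} A) is symmetric,
# i.e. the preconditioned matrix is self-adjoint in the D-inner product.
B = np.column_stack([neumann_apply(A, d, A[:, j]) for j in range(n)])
DB = d[:, None] * B
```

Note the implementation detail this exposes: $M^{-1}$ is never formed; each application costs $s$ extra matrix-vector products and vector updates.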

2 The development of the Chebyshev iteration algorithm seen in Section 12.3.2 can be exploited to derive yet another formulation of the Conjugate Gradient algorithm from the Lanczos algorithm. Observe that the recurrence relation (12.8) is not restricted to scaled Chebyshev polynomials.

a. The scaled Lanczos polynomials, i.e., the polynomials $p_k(t)/p_k(0)$, in which $p_k$ is the polynomial such that $v_{k+1} = p_k(A)v_1$ in the Lanczos algorithm, satisfy a relation of the form (12.8). What are the coefficients $\rho_k$ and $\delta$ in this case?

b. Proceed in the same manner as in Section 12.3.2 to derive a version of the Conjugate Gradient algorithm.

3 Show that $\rho_k$ as defined by (12.7) has a limit $\rho$. What is this limit? Assume that Algorithm 12.1 is to be executed with the $\rho_k$'s all replaced by this limit $\rho$. Will the method converge? What is the asymptotic rate of convergence of this modified method?
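A quick numerical sanity check is possible. The sketch below assumes (12.7) has the Chebyshev-type form $\rho_{k+1} = (2\sigma_1 - \rho_k)^{-1}$ with $\rho_0 = 1/\sigma_1$ and $\sigma_1 > 1$; this form and the value of $\sigma_1$ are assumptions for illustration, so substitute the book's actual (12.7) when working the exercise. It iterates the scalar recurrence and compares against the fixed point obtained from $\rho^2 - 2\sigma_1\rho + 1 = 0$:

```python
import math

def rho_sequence(sigma1, nsteps):
    """Iterate rho_{k+1} = 1/(2*sigma1 - rho_k), starting from rho_0 = 1/sigma1.
    (Assumed Chebyshev-type form of the recurrence, for illustration only.)"""
    rho = 1.0 / sigma1
    out = [rho]
    for _ in range(nsteps):
        rho = 1.0 / (2.0 * sigma1 - rho)
        out.append(rho)
    return out

sigma1 = 1.25                                 # illustrative value > 1
seq = rho_sequence(sigma1, 60)
# Fixed point of the map: the smaller root of rho^2 - 2*sigma1*rho + 1 = 0
limit = sigma1 - math.sqrt(sigma1**2 - 1.0)
```

Replacing every $\rho_k$ by this constant limit turns the three-term recurrence into a stationary second-order iteration, which is exactly the situation the last two questions of the exercise ask about.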

4 Derive the least-squares polynomials for $\alpha = \frac{1}{2}$, $\beta = \frac{1}{2}$ for the interval $[0,1]$ for $k = 1, 2, 3, 4$. Check that these results agree with those of the table shown at the end of Section 12.3.3.
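A numerical cross-check is easy to set up. The sketch below (Python/NumPy) assumes the weight is the Jacobi weight $w(\lambda) = \lambda^{-1/2}(1-\lambda)^{-1/2}$ on $[0,1]$, which may differ from the exact weight used in Section 12.3.3 and should be swapped for it when comparing against the table. It computes the coefficients of $s_k$ minimizing $\int_0^1 w(\lambda)(1 - \lambda s_k(\lambda))^2\,d\lambda$ from the normal equations, using the substitution $\lambda = \sin^2(t/2)$ to absorb the endpoint singularities of the weight:

```python
import numpy as np

def lsq_coeffs(k, npts=2000):
    """Coefficients c_0..c_k of s_k(l) = sum_j c_j l^j minimizing
    int_0^1 (1 - l*s_k(l))^2 / sqrt(l(1-l)) dl.
    With l = sin^2(t/2) the weighted integral becomes a plain one in t."""
    t = (np.arange(npts) + 0.5) * np.pi / npts   # midpoint rule in t on (0, pi)
    l = np.sin(t / 2.0) ** 2
    P = np.column_stack([l ** (j + 1) for j in range(k + 1)])  # basis l^{j+1}
    h = np.pi / npts
    G = h * (P.T @ P)              # Gram matrix of the basis in the weighted norm
    g = h * P.sum(axis=0)          # right-hand side: weighted moments of l^{j+1}
    return np.linalg.solve(G, g)

# Degree-0 case has a closed form: c_0 = <l>_w / <l^2>_w = (pi/2)/(3*pi/8) = 4/3
c0 = lsq_coeffs(0)

def residual_norm2(c, npts=2000):
    """Weighted squared residual int w(l) (1 - l*s_k(l))^2 dl for coefficients c."""
    t = (np.arange(npts) + 0.5) * np.pi / npts
    l = np.sin(t / 2.0) ** 2
    s = np.polyval(c[::-1], l)     # polyval wants highest-degree coefficient first
    return (np.pi / npts) * np.sum((1.0 - l * s) ** 2)

norms = [residual_norm2(lsq_coeffs(k)) for k in range(4)]
```

The residual norms should decrease strictly with the degree $k$, since $1/\lambda$ is not a polynomial.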

5 Consider the mesh shown below. Assume that the objective is to solve the Poisson equation with Dirichlet boundary conditions.

a. Consider the resulting matrix obtained (before boundary conditions are applied) from ordering the nodes from bottom up and left to right (thus, the bottom left vertex is labeled 1 and the top right vertex is labeled 13). What is the bandwidth of the linear system? How many memory locations would be needed to store the matrix in Skyline format? (Assume that the matrix is nonsymmetric, so both upper and lower triangular parts must be stored.)

[Mesh figure: a planar mesh with 13 numbered vertices; the drawing could not be recovered from the source.]

b. Is it possible to find a 2-color ordering of the mesh points? If so, show the ordering; otherwise, prove that it is not possible.

c. Find an independent set of size 5. Show the pattern of the matrix associated with this independent set ordering.

d. Find a multicolor ordering of the mesh by using the greedy multicolor algorithm. Can you find a better coloring (i.e., a coloring with fewer colors)? If so, show the coloring [use letters to represent each color].
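As an aid for the greedy multicolor question, here is a short sketch of the algorithm (Python). Since the exercise's mesh lives in the figure, a small 3x3 five-point grid is used as a stand-in; that grid, and the natural visiting order, are illustrative assumptions. Each vertex, visited in order, receives the smallest color not already taken by a colored neighbor:

```python
def greedy_multicolor(adj, order=None):
    """Greedy multicoloring: visit vertices in `order` and assign each the
    smallest color unused among its already-colored neighbors."""
    n = len(adj)
    color = [-1] * n
    for v in (range(n) if order is None else order):
        used = {color[u] for u in adj[v] if color[u] >= 0}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

def grid_adjacency(nx, ny):
    """Adjacency lists of an nx-by-ny five-point grid."""
    adj = [[] for _ in range(nx * ny)]
    for j in range(ny):
        for i in range(nx):
            v = j * nx + i
            if i + 1 < nx:
                adj[v].append(v + 1); adj[v + 1].append(v)
            if j + 1 < ny:
                adj[v].append(v + nx); adj[v + nx].append(v)
    return adj

adj = grid_adjacency(3, 3)
colors = greedy_multicolor(adj)   # a five-point grid is bipartite: 2 colors suffice
```

On a five-point grid in the natural order the greedy algorithm already recovers the red-black 2-coloring; on a general mesh it may use more colors than the minimum, which is what the "can you find a better coloring?" question probes.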

6 A linear system $Ax = b$, where $A$ is a 5-point matrix, is reordered using red-black ordering as
$$\begin{pmatrix} D_1 & E \\ F & D_2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix}.$$

a. Write the block Gauss-Seidel iteration associated with the above partitioned system (where the blocking in block Gauss-Seidel is the same as the above blocking).

b. Express the $x_2$ iterates independently of the $x_1$ iterates, i.e., find an iteration which involves only the $x_2$-iterates. What type of iteration is the resulting scheme?
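A small numerical sketch can make both questions concrete (Python/NumPy; the random diagonally dominant blocks below are an illustrative stand-in for the actual red-black blocks of a 5-point matrix, not taken from the text). It runs the block Gauss-Seidel sweep and then verifies that the iterates of the second block of unknowns solve the reduced (Schur complement) system obtained by eliminating the first block:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative red-black-style blocks: D1, D2 diagonal, E and F weak couplings
n1, n2 = 4, 3
D1 = np.diag(2.0 + rng.random(n1))
D2 = np.diag(2.0 + rng.random(n2))
E = 0.3 * rng.random((n1, n2))
F = 0.3 * rng.random((n2, n1))
b1, b2 = rng.random(n1), rng.random(n2)

def block_gauss_seidel(x2, steps):
    """Block GS sweep: solve the D1 equation first, then the D2 equation."""
    for _ in range(steps):
        x1 = np.linalg.solve(D1, b1 - E @ x2)
        x2 = np.linalg.solve(D2, b2 - F @ x1)
    return x1, x2

x1, x2 = block_gauss_seidel(np.zeros(n2), 100)

# Eliminating x1 shows the second-block iterates obey
#   x2_new = D2^{-1} (b2 - F D1^{-1} (b1 - E x2_old)),
# a fixed-point iteration for the reduced system S x2 = c with:
S = D2 - F @ np.linalg.solve(D1, E)    # Schur complement
c = b2 - F @ np.linalg.solve(D1, b1)
```

The design point worth noticing: the reduced iteration never needs the first-block iterates explicitly, which is precisely what the exercise asks you to derive and classify.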

7 Consider a tridiagonal matrix $T$. Find a grouping of the rows such that rows in each group are structurally orthogonal, i.e., orthogonal regardless of the values of the entries. Find a set of at most three groups. How can this be generalized to block tridiagonal matrices such as those arising from 2-D and 3-D centered difference matrices?
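A quick way to experiment with this (Python; the matrix size is arbitrary) is to check pattern disjointness directly. Row $i$ of a tridiagonal matrix has nonzeros only in columns $\{i-1, i, i+1\}$, so rows whose indices differ by at least 3 have disjoint patterns, and grouping rows by index mod 3 yields three structurally orthogonal groups:

```python
def tridiag_pattern(n):
    """Nonzero column indices of each row of an n x n tridiagonal matrix."""
    return [{j for j in (i - 1, i, i + 1) if 0 <= j < n} for i in range(n)]

def structurally_orthogonal(rows, pattern):
    """True if every pair of listed rows has disjoint nonzero patterns,
    i.e. the rows are orthogonal regardless of the entry values."""
    return all(not (pattern[i] & pattern[j])
               for i in rows for j in rows if i < j)

n = 10
pattern = tridiag_pattern(n)
# Rows in each group are at least 3 apart, so their {i-1, i, i+1}
# patterns cannot intersect.
groups = [[i for i in range(n) if i % 3 == g] for g in range(3)]
```

For a block tridiagonal matrix the same mod-3 grouping applies at the block-row level, which is one natural answer to the generalization question.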

8 Why are the Winget regularized matrices defined by (12.25) positive definite when the matrix $\bar{A}$ is obtained from $A$ by a diagonal scaling?

NOTES AND REFERENCES. As vector processing appeared in the middle to late 1970s, a number

of efforts were made to change algorithms, or implementations of standard methods, to exploit the

new architectures. One of the first ideas in this context was to perform matrix-by-vector products by diagonals [133]. Matrix-by-vector products using this format can yield excellent performance. Hence came the idea of using polynomial preconditioning, which relies almost entirely on such products. Polynomial preconditioning was exploited independently of supercomputing, as early as 1952 in a paper by Lanczos [141], and later

for eigenvalue problems by Stiefel, who employed least-squares polynomials [204], and Rutishauser [171], who combined the QD algorithm with Chebyshev acceleration. Dubois et al. [75] suggested using polynomial preconditioning, specifically the Neumann series expansion, for solving Symmetric Positive Definite linear systems on vector computers. Johnson et al. [129] later extended the idea by

exploiting Chebyshev polynomials and other orthogonal polynomials. It was observed in [129] that

least-squares polynomials tend to perform better than those based on the uniform norm, in that they

lead to a better overall clustering of the spectrum. Moreover, as was already observed by Rutishauser

[171], in the symmetric case there is no need for accurate eigenvalue estimates: it suffices to use the simple bounds provided by Gershgorin's theorem. In [175] it was also observed that in some

cases the least-squares polynomial approach, which requires less information than the Chebyshev approach, tends to perform better.

The use of least-squares polynomials over polygons was first advocated by Smolarski and Saylor [200] and later by Saad [176]. The application to the indefinite case was examined in detail in [174].

Still in the context of using polygons instead of ellipses, yet another attractive possibility proposed

by Fischer and Reichel [91] avoids the problem of best approximation altogether. The polygon can

be conformally transformed into a circle, and the theory of Faber polynomials yields a simple way of deriving good polynomials by exploiting specific points on the circle.

Although only approaches based on the formulations (12.5) and (12.11) have been discussed, there are other lesser-known possibilities based on minimizing $\|1/\lambda - s(\lambda)\|_{\infty}$. There has been


very little work on polynomial preconditioning or Krylov subspace methods for highly non-normal