respect to the A-inner product, write an algorithm similar to Algorithm 9.1 for solving the preconditioned linear system M^{-1}Ax = M^{-1}b, using the A-inner product. The algorithm should employ only one matrix-by-vector product per CG step.
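For orientation, a minimal sketch in the spirit of Algorithm 9.1 (standard PCG with the M^{-1}-application supplied as a callable); the Jacobi preconditioner and the small test system below are my own illustration, not from the text. Note the single matrix-by-vector product with A per iteration:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pcg(A, b, inv_m, tol=1e-12, maxit=100):
    """Preconditioned CG (Algorithm 9.1 style); inv_m applies M^{-1}."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # r0 = b - A x0 with x0 = 0
    z = inv_m(r)                  # z0 = M^{-1} r0
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxit):
        Ap = matvec(A, p)         # the only matvec with A per step
        alpha = rz / dot(Ap, p)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = inv_m(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# Hypothetical small SPD test system with Jacobi preconditioner M = diag(A).
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b, lambda r: [ri / A[i][i] for i, ri in enumerate(r)])
residual = max(abs(bi - ci) for bi, ci in zip(b, matvec(A, x)))
```

The exercise asks for the variant of this loop that uses the A-inner product instead, while keeping one matvec per step.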

2 In Section 9.2.1, the split-preconditioned Conjugate Gradient algorithm, Algorithm 9.2, was de-

rived from the Preconditioned Conjugate Gradient Algorithm 9.1. The opposite can also be done.

Derive Algorithm 9.1 starting from Algorithm 9.2, providing a different proof of the equivalence

of the two algorithms.
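As a hint at the mechanics (a sketch in my own notation, not the book's derivation): with M = LL^T, the change of variables below maps the quantities of Algorithm 9.2 onto those of Algorithm 9.1, and the scalars coincide:

```latex
% Split-preconditioned quantities (hatted) versus Algorithm 9.1 quantities:
\hat{A} = L^{-1} A L^{-T}, \qquad
\hat{x}_j = L^{T} x_j, \qquad
\hat{r}_j = L^{-1} r_j, \qquad
\hat{p}_j = L^{T} p_j .
% The step length of Algorithm 9.2 then reduces to that of Algorithm 9.1,
% using M^{-1} = L^{-T} L^{-1}:
\alpha_j
  = \frac{(\hat{r}_j, \hat{r}_j)}{(\hat{A}\hat{p}_j, \hat{p}_j)}
  = \frac{(L^{-1} r_j, L^{-1} r_j)}{(L^{-1} A p_j, L^{T} p_j)}
  = \frac{(r_j, M^{-1} r_j)}{(A p_j, p_j)} .
```

The analogous computation for β_j completes the equivalence in the reverse direction.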

p¡¶

¶ ’ ”p£¡¨8 ³¡© &’ ”8 ” ’ ¤ §

8˜ 8¥© $ $ ©

¡¨ ¨¦ ¤

© §¥ &

$

3 Six versions of the CG algorithm applied to the normal equations can be defined. Two versions

come from the NR/NE options, each of which can be preconditioned from left, right, or on

two sides. The left preconditioned variants have been given in Section 9.5. Describe the four

other versions: Right P-CGNR, Right P-CGNE, Split P-CGNR, Split P-CGNE. Suitable inner

products may be used to preserve symmetry.
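As a baseline for these variants, a minimal unpreconditioned CGNR sketch (CG applied to the NR form A^T A x = A^T b); the helper names and the small nonsymmetric test system are my own illustration, not from the text:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cgnr(A, b, tol=1e-12, maxit=200):
    """CG on the normal equations A^T A x = A^T b (NR form, no preconditioner).

    Each step costs one product with A and one with A^T.
    """
    At = [list(col) for col in zip(*A)]
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual of the original system
    z = matvec(At, r)             # z = A^T r, residual of the normal equations
    p = z[:]
    zz = dot(z, z)
    for _ in range(maxit):
        w = matvec(A, p)
        alpha = zz / dot(w, w)    # (A^T A p, p) = ||A p||^2
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * wi for ri, wi in zip(r, w)]
        z = matvec(At, r)
        zz_new = dot(z, z)
        if zz_new ** 0.5 < tol:
            break
        beta = zz_new / zz
        zz = zz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# Hypothetical small nonsymmetric test system.
A = [[3.0, 1.0], [0.0, 2.0]]
b = [5.0, 4.0]
x = cgnr(A, b)
residual = max(abs(bi - ci) for bi, ci in zip(b, matvec(A, x)))
```

The right- and split-preconditioned versions asked for in the exercise modify where M^{-1} (or its factors) enters this loop.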

4 When preconditioning the normal equations, whether the NE or NR form, two options are avail-

able in addition to the left, right and split preconditioners. These are “centered” versions:

A M^{-1} A^T u = b,   x = M^{-1} A^T u

for the NE form, and

A^T M^{-1} A x = A^T M^{-1} b

for the NR form. The coefficient matrices in the above systems are all symmetric. Write down

the adapted versions of the CG algorithm for these options.
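The symmetry claim can be checked in one line (a sketch; since M is SPD, M^{-T} = M^{-1}):

```latex
(A M^{-1} A^T)^T = A M^{-T} A^T = A M^{-1} A^T, \qquad
(A^T M^{-1} A)^T = A^T M^{-T} A = A^T M^{-1} A .
```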

5 Let a matrix A and its preconditioner M be SPD. The standard result about the rate of convergence of the CG algorithm is not valid for the Preconditioned Conjugate Gradient algorithm, Algorithm 9.1. Show how to adapt this result by exploiting the M-inner product. Show how to

derive the same result by using the equivalence between Algorithm 9.1 and Algorithm 9.2.
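For reference, the standard result in question is the CG error bound below; the adaptation one expects (my hedged formulation) replaces the condition number of A by that of the preconditioned operator:

```latex
% Standard CG bound (kappa = spectral condition number of A):
\|x_* - x_j\|_A \;\le\;
  2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^{j}
  \|x_* - x_0\|_A ,
\qquad \kappa = \kappa_2(A).
% Expected adapted form for PCG:
\kappa = \lambda_{\max}(M^{-1}A) \,/\, \lambda_{\min}(M^{-1}A).
```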

6 In Eisenstat's implementation of the PCG algorithm, the operation with the diagonal D causes some difficulties when describing the algorithm. This can be avoided.

(a) Assume that the diagonal D of the preconditioning (9.5) is equal to the identity matrix. What is the number of operations needed to perform one step of the PCG algorithm with Eisenstat's implementation? Formulate the PCG scheme for this case carefully.

(b) The rows and columns of the preconditioning matrix M can be scaled so that the matrix D of the transformed preconditioner, written in the form (9.5), is equal to the identity matrix. What scaling should be used (the resulting M should also be SPD)?

(c) Assume that the same scaling of question (b) is also applied to the original matrix A. Is the resulting iteration mathematically equivalent to using Algorithm 9.1 to solve the system (9.6) preconditioned with the diagonal D?
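A sketch of Eisenstat's matrix-by-vector product for the case D = I of part (a), in my own notation: writing the symmetric matrix as A = D_A − E − E^T (E the negated strict lower part) so that M = (I − E)(I − E^T), the product w = Â v with Â = (I − E)^{-1} A (I − E^T)^{-1} costs only two triangular solves plus a diagonal scaling. The function and test matrix below are illustrative, not from the text:

```python
def eisenstat_matvec(A, v):
    """w = (I - E)^{-1} A (I - E^T)^{-1} v for symmetric A = D_A - E - E^T,
    assuming the diagonal D of the preconditioning equals the identity.

    Uses the splitting A = (I - E) + (I - E^T) - (2I - D_A), so only two
    triangular solves and one diagonal scaling are needed (Eisenstat's trick).
    """
    n = len(v)
    # Solve (I - E^T) z = v by backward substitution; note E[i][j] = -A[i][j].
    z = [0.0] * n
    for i in reversed(range(n)):
        z[i] = v[i] - sum(A[i][j] * z[j] for j in range(i + 1, n))
    # t = v - (2I - D_A) z
    t = [v[i] - (2.0 - A[i][i]) * z[i] for i in range(n)]
    # Solve (I - E) w = t by forward substitution, then add z.
    w = [0.0] * n
    for i in range(n):
        w[i] = t[i] - sum(A[i][j] * w[j] for j in range(i))
    return [wi + zi for wi, zi in zip(w, z)]

# Direct computation of Â v for comparison, on a hypothetical symmetric
# matrix whose preconditioning diagonal is the identity.
A = [[1.0, 0.3, 0.0], [0.3, 1.0, 0.2], [0.0, 0.2, 1.0]]
v = [1.0, 2.0, 3.0]
n = len(v)
y = [0.0] * n
for i in reversed(range(n)):                      # y = (I - E^T)^{-1} v
    y[i] = v[i] - sum(A[i][j] * y[j] for j in range(i + 1, n))
u = [sum(a * yi for a, yi in zip(row, y)) for row in A]   # u = A y
w_direct = [0.0] * n
for i in range(n):                                # w = (I - E)^{-1} u
    w_direct[i] = u[i] - sum(A[i][j] * w_direct[j] for j in range(i))
w = eisenstat_matvec(A, v)
max_diff = max(abs(a - b) for a, b in zip(w, w_direct))
```

Counting the operations in `eisenstat_matvec` against a plain matvec-plus-preconditioner-solve is the substance of part (a).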

7 In order to save operations, the two matrices D^{-1}E and D^{-1}F must be stored when computing w = Âv by Algorithm 9.3. This exercise considers alternatives.

(a) Consider the matrix B ≡ D Â D. Show how to implement an algorithm similar to 9.3 for multiplying a vector v by B. The requirement is that only E D^{-1} must be stored.

(b) The matrix B in the previous question is not the proper preconditioned version of A by the preconditioning (9.5). CG is used on an equivalent system involving B but a further preconditioning by a diagonal must be applied. Which one? How does the resulting algorithm compare in terms of cost and storage with an algorithm based on 9.3?

(c) It was mentioned in Section 9.2.2 that Â needed to be further preconditioned by D^{-1}. Consider the split-preconditioning option: CG is to be applied to the preconditioned system associated with C ≡ D^{1/2} Â D^{1/2}. Defining Ê ≡ D^{-1/2} E D^{-1/2}, show that

C = (I − Ê)^{-1} + (I − Ê^T)^{-1} − (I − Ê)^{-1} D₂ (I − Ê^T)^{-1},

where D₂ is a certain matrix to be determined. Then write an analogue of Algorithm 9.3 using this formulation. How does the operation count compare with that of Algorithm 9.3?

8 Assume that the number of nonzero elements of a matrix A is parameterized by Nz(A) = αn. How small should α be before it does not pay to use Eisenstat's implementation for the PCG algorithm? What if the matrix A is initially scaled so that D is the identity matrix?


9 Let M = LL^T be a preconditioner for a matrix A. Show that the left, right, and split preconditioned matrices all have the same eigenvalues. Does this mean that the corresponding preconditioned iterations will converge in (a) exactly the same number of steps? (b) roughly the same number of steps for any matrix? (c) roughly the same number of steps, except for ill-conditioned matrices?
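A sketch of the similarity argument (my notation, assuming M = LL^T): both one-sided preconditioned matrices are similar to the split one, hence all three share eigenvalues:

```latex
M^{-1} A = L^{-T} \left( L^{-1} A L^{-T} \right) L^{T}, \qquad
A M^{-1} = L \left( L^{-1} A L^{-T} \right) L^{-1}.
```

The remaining questions probe why equal eigenvalues need not imply identical iteration counts, since the three iterations minimize different quantities in different norms.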

10 Show that the relation (9.17) holds for any polynomial s and any vector r.

11 Write the equivalent of Algorithm 9.1 for the Conjugate Residual method.
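For comparison, a minimal sketch of the unpreconditioned Conjugate Residual method (the exercise asks for its preconditioned analogue in the style of Algorithm 9.1); the SPD test system is an arbitrary example of mine:

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cr(A, b, tol=1e-12, maxit=200):
    """Conjugate Residual method for symmetric A: minimizes ||b - A x||_2
    over the Krylov subspace; one matrix-by-vector product per step."""
    n = len(b)
    x = [0.0] * n
    r = b[:]
    p = r[:]
    Ar = matvec(A, r)
    Ap = Ar[:]
    rAr = dot(r, Ar)
    for _ in range(maxit):
        alpha = rAr / dot(Ap, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        Ar = matvec(A, r)            # the only matvec per step
        rAr_new = dot(r, Ar)
        beta = rAr_new / rAr
        rAr = rAr_new
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        Ap = [ai + beta * qi for ai, qi in zip(Ar, Ap)]  # A p updated without a matvec
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = cr(A, b)
residual = max(abs(bi - ci) for bi, ci in zip(b, matvec(A, x)))
```

The preconditioned version replaces the inner products and search directions in the same way Algorithm 9.1 modifies CG.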

12 Assume that a Symmetric Positive Definite matrix M is used to precondition GMRES for solving a nonsymmetric linear system. The main features of the P-GMRES algorithm exploiting this were given in Section 9.2.1. Give a formal description of the algorithm. In particular give a Modified Gram-Schmidt implementation. [Hint: The vectors M^{-1}v_i's must be saved in addition to the v_i's.] What optimality property does the approximate solution satisfy? What happens if the original matrix A is also symmetric? What is a potential advantage of the resulting algorithm?