a. Take $\mathcal{K} = \mathcal{L} = \mathrm{span}\{e_1, \ldots, e_m\}$, the span of the first half of the columns of the identity matrix. Write down the corresponding projection step ($x$ is modified into $\tilde{x}$).

b. Similarly, write the projection step for the second half of the vectors, i.e., when $\mathcal{K} = \mathcal{L} = \mathrm{span}\{e_{m+1}, \ldots, e_n\}$.

c. Consider an iteration procedure which consists of performing the two successive half-steps described above until convergence. Show that this iteration is equivalent to a (standard) Gauss-Seidel iteration applied to the original system.

d. Now consider a similar idea in which $\mathcal{K}$ is taken to be the same as before for each half-step and $\mathcal{L} = A\mathcal{K}$. Write down the iteration procedure based on this approach. Name another technique to which it is mathematically equivalent.
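As a numerical illustration (not part of the original exercise), the two successive half-steps described above can be sketched as follows; the function names and the half-and-half blocking are my own choices. Each half-step solves the Galerkin condition on its set of unknowns, which makes one full sweep an exact block Gauss-Seidel sweep; whether this coincides with point Gauss-Seidel depends on the structure assumed for the matrix in the exercise's preamble.

```python
import numpy as np

def half_step(A, b, x, idx):
    """Projection step with K = L = span{e_i : i in idx}:
    solve (E^T A E) d = E^T (b - A x) and set x <- x + E d."""
    r = b - A @ x
    d = np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    x = x.copy()
    x[idx] += d
    return x

rng = np.random.default_rng(0)
n, m = 6, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)      # SPD test matrix
b = rng.standard_normal(n)

x = np.zeros(n)
for _ in range(100):
    x = half_step(A, b, x, np.arange(m))      # first half-step
    x = half_step(A, b, x, np.arange(m, n))   # second half-step

assert np.allclose(x, np.linalg.solve(A, b))
```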

6 Consider the linear system $Ax = b$, where $A$ is a Symmetric Positive Definite matrix. We define a projection method which uses a two-dimensional space at each step. At a given step, take $\mathcal{L} = \mathcal{K} = \mathrm{span}\{r, Ar\}$, where $r = b - Ax$ is the current residual.

a. For a basis of $\mathcal{K}$ use the vector $r$ and the vector $p$ obtained by orthogonalizing $Ar$ against $r$ with respect to the $A$-inner product. Give the formula for computing $p$ (no need to normalize the resulting vector).

b. Write the algorithm for performing the projection method described above.

c. Will the algorithm converge for any initial guess $x_0$? Justify the answer. [Hint: Exploit the convergence results for one-dimensional projection techniques.]
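A sketch of one possible implementation of this two-dimensional projection method (illustrative code, not the book's): the basis vector $p$ is $Ar$ orthogonalized against $r$ in the $A$-inner product $(u,v)_A = (Au, v)$, and because the basis is $A$-orthogonal the $2\times 2$ Galerkin system decouples into two scalar coefficients.

```python
import numpy as np

def two_dim_step(A, b, x):
    """One step of the projection method with K = L = span{r, A r} (A SPD).
    Basis: r and p = A r - ((A r, r)_A / (r, r)_A) r, A-orthogonal to r."""
    r = b - A @ x
    Ar = A @ r
    p = Ar - ((A @ Ar) @ r) / (Ar @ r) * r   # (u, v)_A = (A u, v)
    # A-orthogonality of the basis decouples the 2x2 Galerkin system:
    alpha = (r @ r) / (Ar @ r)
    beta = (r @ p) / ((A @ p) @ p)
    return x + alpha * r + beta * p

rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)          # SPD test matrix
b = rng.standard_normal(n)

x = np.zeros(n)
for _ in range(100):
    if np.linalg.norm(b - A @ x) < 1e-12:
        break
    x = two_dim_step(A, b, x)
assert np.allclose(A @ x, b)
```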

7 Consider projection methods which update at each step the current solution with linear combinations from two directions: the current residual $r$ and $Ar$.

a. Consider an orthogonal projection method, i.e., at each step $\mathcal{L} = \mathcal{K} = \mathrm{span}\{r, Ar\}$. Assuming that $A$ is Symmetric Positive Definite, establish convergence of the algorithm.

b. Consider a least-squares projection method in which at each step $\mathcal{K} = \mathrm{span}\{r, Ar\}$ and $\mathcal{L} = A\mathcal{K}$. Assuming that $A$ is positive definite (not necessarily symmetric), establish convergence
of the algorithm.

[Hint: The convergence results for any of the one-dimensional projection techniques can be

exploited.]

8 The “least-squares” Gauss-Seidel relaxation method defines a relaxation step as $x_{\mathrm{new}} = x + \delta e_i$ (same as Gauss-Seidel), but chooses $\delta$ to minimize the residual norm of $x_{\mathrm{new}}$.

a. Write down the resulting algorithm.

b. Show that this iteration is mathematically equivalent to a Gauss-Seidel iteration applied to the normal equations $A^T A x = A^T b$.
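One way to make this concrete (a sketch with illustrative names): the least-squares relaxation along $e_i$ uses $\delta = (r, Ae_i)/(Ae_i, Ae_i)$, and a full sweep can be compared numerically against a point Gauss-Seidel sweep on $A^TAx = A^Tb$.

```python
import numpy as np

def ls_gs_sweep(A, b, x):
    """One sweep of 'least-squares' Gauss-Seidel: for each i, x <- x + delta*e_i,
    with delta chosen to minimize ||b - A x||_2."""
    x = x.copy()
    for i in range(len(x)):
        r = b - A @ x
        ai = A[:, i]                      # A e_i (i-th column of A)
        x[i] += (r @ ai) / (ai @ ai)      # delta = (r, A e_i)/(A e_i, A e_i)
    return x

def gs_normal_eq_sweep(A, b, x):
    """One sweep of point Gauss-Seidel applied to A^T A x = A^T b."""
    B, c = A.T @ A, A.T @ b
    x = x.copy()
    for i in range(len(x)):
        x[i] += (c[i] - B[i] @ x) / B[i, i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
b = rng.standard_normal(5)
x0 = rng.standard_normal(5)
# The two sweeps produce the same iterate:
assert np.allclose(ls_gs_sweep(A, b, x0), gs_normal_eq_sweep(A, b, x0))
```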

9 Derive three types of one-dimensional projection algorithms in the same manner as was done in Section 5.3, by replacing every occurrence of the residual vector $r$ by a vector $e_i$, a column of the identity matrix.
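For reference, the three one-dimensional updates of Section 5.3 can be written for a generic search direction $v$ (my parameterization, with illustrative names); substituting $v = e_i$ gives the variants asked for here.

```python
import numpy as np

def galerkin_step(A, b, x, v):
    """K = L = span{v}: x <- x + alpha v, alpha = (r, v)/(A v, v)."""
    r = b - A @ x
    return x + (r @ v) / ((A @ v) @ v) * v

def min_residual_step(A, b, x, v):
    """K = span{v}, L = A K: alpha = (r, A v)/(A v, A v)."""
    r = b - A @ x
    Av = A @ v
    return x + (r @ Av) / (Av @ Av) * v

def normal_dir_step(A, b, x, v):
    """K = span{A^T v}, L = A K: a least-squares step along d = A^T v."""
    return min_residual_step(A, b, x, A.T @ v)

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
e0 = np.array([1.0, 0.0])
y = galerkin_step(A, b, np.zeros(2), e0)
# With v = e_0, the Galerkin step is one Gauss-Seidel relaxation of x_0:
assert np.isclose(y[0], b[0] / A[0, 0]) and y[1] == 0.0
```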


10 Derive three types of one-dimensional projection algorithms in the same manner as was done in Section 5.3, by replacing every occurrence of the residual vector $r$ by a vector $a_i$, a column of the matrix $A$. What would be an “optimal” choice for $i$ at each projection step? Show that the method is globally convergent in this case.

11 A minimal residual iteration as defined in Section 5.3.2 can also be defined for an arbitrary search direction $d$, not necessarily related to $r$ in any way. In this case, we still define $\alpha = (r, Ad)/(Ad, Ad)$.

a. Write down the corresponding algorithm.

b. Under which condition are all iterates defined?

c. Under which condition on $d$ does the new iterate make no progress, i.e., $\|r_{k+1}\|_2 = \|r_k\|_2$?

d. Write a general sufficient condition which must be satisfied by $d$ at each step in order to guarantee convergence.
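A sketch of the step (illustrative code), using $\alpha = (r, Ad)/(Ad, Ad)$: by construction each step minimizes the residual norm along $d$, so the residual norm can never increase, even for randomly chosen directions.

```python
import numpy as np

def mr_arbitrary_step(A, b, x, d):
    """Minimal residual step along an arbitrary direction d:
    alpha = (r, A d)/(A d, A d) minimizes ||b - A(x + alpha d)||_2.
    Defined whenever A d != 0."""
    r = b - A @ x
    Ad = A @ d
    return x + (r @ Ad) / (Ad @ Ad) * d

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)
b = rng.standard_normal(n)

x = np.zeros(n)
norms = [np.linalg.norm(b - A @ x)]
for _ in range(30):
    x = mr_arbitrary_step(A, b, x, rng.standard_normal(n))
    norms.append(np.linalg.norm(b - A @ x))
# Monotone non-increasing residual norms, whatever the directions:
assert all(n2 <= n1 + 1e-12 for n1, n2 in zip(norms, norms[1:]))
```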

12 Consider the following real-valued functions of the vector variable $x$, where $A$ and $b$ are the coefficient matrix and right-hand side of a given linear system $Ax = b$ and $x_* = A^{-1}b$:

$$f(x) = \|x_* - x\|_2^2, \qquad g(x) = \|b - Ax\|_2^2,$$
$$h(x) = \|A^T b - A^T A x\|_2^2, \qquad E(x) = (A(x_* - x),\, x_* - x).$$

a. Calculate the gradients of all four functions above.

b. How is the gradient of $g$ related to that of $f$?

c. How is the gradient of $E$ related to that of $f$ when $A$ is symmetric?

d. How does the function $E$ relate to the $A$-norm of the error $x_* - x$ when $A$ is Symmetric Positive Definite?
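One way to sanity-check answers to part a (a sketch, not part of the exercise): the closed-form gradients are $\nabla f = 2(x - x_*)$, $\nabla g = 2A^T(Ax - b)$, $\nabla h = 2A^TA(A^TAx - A^Tb)$, and $\nabla E = (A + A^T)(x - x_*)$; they can be verified against central finite differences, which are exact (up to roundoff) for quadratics.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + n * np.eye(n)   # positive definite, not symmetric
b = rng.standard_normal(n)
xs = np.linalg.solve(A, b)                        # x_* = A^{-1} b

f = lambda x: (xs - x) @ (xs - x)
g = lambda x: (b - A @ x) @ (b - A @ x)
h = lambda x: (A.T @ b - A.T @ A @ x) @ (A.T @ b - A.T @ A @ x)
E = lambda x: (A @ (xs - x)) @ (xs - x)

def grad_f(x): return 2 * (x - xs)
def grad_g(x): return 2 * A.T @ (A @ x - b)
def grad_h(x): return 2 * (A.T @ A) @ (A.T @ A @ x - A.T @ b)
def grad_E(x): return (A + A.T) @ (x - xs)

def fd_grad(F, x, eps=1e-6):
    """Central finite-difference gradient, for checking the formulas above."""
    grad = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (F(x + e) - F(x - e)) / (2 * eps)
    return grad

x = rng.standard_normal(n)
for F, G in [(f, grad_f), (g, grad_g), (h, grad_h), (E, grad_E)]:
    assert np.allclose(G(x), fd_grad(F, x), atol=1e-4)
```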

13 The block Gauss-Seidel iteration can be expressed as a method of successive projections. The subspace $\mathcal{K}$ used for each projection is of the form
$$\mathcal{K} = \mathrm{span}\{e_i, e_{i+1}, \ldots, e_{i+p}\}.$$