For large matrices, the determinant of a matrix is almost never a good indication of "near" singularity or degree of sensitivity of the linear system. The reason is that the determinant is the product of the eigenvalues, which depends very much on a scaling of the matrix, whereas the condition number of a matrix is scaling-invariant. For example, for $A = \alpha I$ the determinant is $\det(A) = \alpha^n$, which can be very small if $|\alpha| < 1$, whereas $\kappa(A) = 1$ for any of the standard norms.
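This contrast is easy to observe numerically. The following minimal sketch (the values of $n$ and $\alpha$ are arbitrary choices, not from the text) computes both quantities for $A = \alpha I$ by hand, exploiting the fact that the $\infty$-norm of a scalar multiple of the identity is just $|\alpha|$:

```python
# For A = alpha*I, det(A) = alpha^n can be made arbitrarily small while
# kappa(A) = ||A|| * ||A^-1|| stays exactly 1 in any standard norm.
n, alpha = 50, 0.5

det_A = alpha ** n                # det(alpha*I) = alpha^n: tiny for |alpha| < 1
norm_A = abs(alpha)               # ||alpha*I||_inf = |alpha|
norm_A_inv = 1.0 / abs(alpha)     # ||(alpha*I)^(-1)||_inf = 1/|alpha|
kappa = norm_A * norm_A_inv

print(det_A)                      # ~8.9e-16: looks "nearly singular"
print(kappa)                      # 1.0: perfectly conditioned nonetheless
```

The determinant suggests near-singularity, yet solving $\alpha I x = b$ is as well-conditioned as a linear system can be.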

In addition, small eigenvalues do not always give a good indication of poor condition-

ing. Indeed, a matrix can have all its eigenvalues equal to one yet be poorly conditioned.

The simplest example is provided by matrices of the form

$$ A_n = I + \alpha\, e_1 e_n^T $$

for large $\alpha$. The inverse of $A_n$ is

$$ A_n^{-1} = I - \alpha\, e_1 e_n^T $$

and for the $\infty$-norm we have

$$ \|A_n\|_\infty = \|A_n^{-1}\|_\infty = 1 + |\alpha| $$

so that

$$ \kappa_\infty(A_n) = (1 + |\alpha|)^2 . $$

For a large $\alpha$, this can give a very large condition number, whereas all the eigenvalues of $A_n$ are equal to unity.
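A quick numerical check of this example, assuming the family $A_n = I + \alpha\, e_1 e_n^T$ discussed above (the size $n = 5$ and $\alpha = 1000$ are arbitrary choices for illustration):

```python
# Check: A = I + alpha*e1*en^T has all eigenvalues 1 (it is triangular with a
# unit diagonal), yet its infinity-norm condition number is (1+|alpha|)^2.

def rank_one_bump(n, alpha):
    """Return I + alpha * e1 * en^T as a list of rows."""
    A = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    A[0][n - 1] += alpha
    return A

def inf_norm(M):
    """Matrix infinity-norm: maximum absolute row sum."""
    return max(sum(abs(x) for x in row) for row in M)

n, alpha = 5, 1000.0
A = rank_one_bump(n, alpha)
A_inv = rank_one_bump(n, -alpha)   # exact inverse, since (e1*en^T)^2 = 0

kappa = inf_norm(A) * inf_norm(A_inv)
print(kappa)                        # 1002001.0, i.e. (1 + 1000)^2
```

The inverse is available in closed form here because the perturbation $e_1 e_n^T$ is nilpotent, so $(I + \alpha E)(I - \alpha E) = I$.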

When an iterative procedure is used for solving a linear system, we typically face the

problem of choosing a good stopping procedure for the algorithm. Often a residual norm,

$$ \|r\| = \|b - A\tilde{x}\| $$

is available for some current approximation $\tilde{x}$, and an estimate of the absolute error $\|x - \tilde{x}\|$ or the relative error $\|x - \tilde{x}\|/\|x\|$ is desired. The following simple relation is helpful in this regard,

$$ \frac{\|x - \tilde{x}\|}{\|x\|} \le \kappa(A)\, \frac{\|b - A\tilde{x}\|}{\|b\|} . $$

It is necessary to have an estimate of the condition number $\kappa(A)$ in order to exploit the above relation.
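As a small illustrative sketch (the $2\times 2$ system and the iterate below are arbitrary choices, and the condition-number estimate is worked out by hand rather than estimated by an algorithm), the relation converts a computable residual norm into a guaranteed relative-error bound:

```python
# Turn a residual norm into a relative-error bound via
#   ||x - x~|| / ||x||  <=  kappa(A) * ||b - A x~|| / ||b||
# using the infinity norm throughout.

def inf_norm(v):
    return max(abs(t) for t in v)

def relative_error_bound(res_norm, b_norm, kappa_est):
    """Upper bound on the relative error, given a condition-number estimate."""
    return kappa_est * res_norm / b_norm

A = [[4.0, 1.0], [1.0, 3.0]]       # kappa_inf(A) = 5 * (5/11) = 25/11
b = [1.0, 2.0]
x = [1.0 / 11.0, 7.0 / 11.0]       # exact solution, for comparison only
x_approx = [0.1, 0.6]              # some current iterate

r = [bi - sum(aij * xj for aij, xj in zip(row, x_approx))
     for bi, row in zip(b, A)]
bound = relative_error_bound(inf_norm(r), inf_norm(b), 25.0 / 11.0)
true_err = inf_norm([xi - yi for xi, yi in zip(x, x_approx)]) / inf_norm(x)

print(true_err <= bound)           # True: the bound holds, if pessimistically
```

In practice the exact solution is of course unavailable; an iteration would simply stop once `bound` falls below the requested tolerance.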

EXERCISES

1 Verify that the Euclidean inner product defined by (1.4) does indeed satisfy the general definition of inner products on vector spaces.

2 Show that two eigenvectors associated with two distinct eigenvalues are linearly independent.

In a more general sense, show that a family of eigenvectors associated with distinct eigenvalues

forms a linearly independent family.

3 Show that if $\lambda$ is any nonzero eigenvalue of the matrix $AB$, then it is also an eigenvalue of the matrix $BA$. Start with the particular case where $A$ and $B$ are square and $B$ is nonsingular, then consider the more general case where $A$, $B$ may be singular or even rectangular (but such that $AB$ and $BA$ are square).
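As a numerical sanity check of this claim (not a proof; the rectangular matrices below are arbitrary examples), one can compare the characteristic polynomials of $AB$ and $BA$ coefficient by coefficient. With $A$ of size $2\times 3$ and $B$ of size $3\times 2$, the $3\times 3$ product $BA$ should carry the same nonzero eigenvalues plus one extra zero:

```python
# Compare det(lam*I - AB) and det(lam*I - BA): they should satisfy
# det(lam*I - BA) = lam * det(lam*I - AB), i.e. same nonzero roots.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1.0, 2.0, 0.0],
     [0.0, 1.0, 3.0]]          # 2x3
B = [[1.0, 1.0],
     [2.0, 0.0],
     [0.0, 1.0]]               # 3x2

AB = matmul(A, B)              # 2x2
BA = matmul(B, A)              # 3x3

# char. poly. of AB: lam^2 - tr(AB)*lam + det(AB)
tr_AB = AB[0][0] + AB[1][1]
det_AB = AB[0][0] * AB[1][1] - AB[0][1] * AB[1][0]

# char. poly. of BA: lam^3 - tr(BA)*lam^2 + m2*lam - det(BA),
# where m2 is the sum of the 2x2 principal minors.
tr_BA = sum(BA[i][i] for i in range(3))
m2 = sum(BA[i][i] * BA[j][j] - BA[i][j] * BA[j][i]
         for i in range(3) for j in range(i + 1, 3))
det_BA = (BA[0][0] * (BA[1][1] * BA[2][2] - BA[1][2] * BA[2][1])
          - BA[0][1] * (BA[1][0] * BA[2][2] - BA[1][2] * BA[2][0])
          + BA[0][2] * (BA[1][0] * BA[2][1] - BA[1][1] * BA[2][0]))

# Matching coefficients means the polynomials agree up to a factor of lam.
print(tr_AB == tr_BA, det_AB == m2, det_BA == 0.0)   # True True True
```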

4 Let $A$ be an $n\times n$ orthogonal matrix, i.e., such that $A^H A = D$, where $D$ is a diagonal matrix. Assuming that $D$ is nonsingular, what is the inverse of $A$? Assuming that $D > 0$, how can $A$ be transformed into a unitary matrix (by operations on its rows or columns)?

5 Show that the Frobenius norm is consistent. Can this norm be associated to two vector norms via (1.7)? What is the Frobenius norm of a diagonal matrix? What is the $p$-norm of a diagonal matrix (for any $p$)?

6 Find the Jordan canonical form of the matrix:

$$ A = \begin{pmatrix} 1 & 2 & -4 \\ 0 & 1 & 2 \\ 0 & 0 & 2 \end{pmatrix} . $$

Same question for the matrix obtained by replacing the element $a_{33}$ by 1.

8

7 Give an alternative proof of Theorem 1.3 on the Schur form by starting from the Jordan canonical form. [Hint: Write $A = XJX^{-1}$ and use the QR decomposition of $X$.]

8 Show from the definition of determinants used in Section 1.2 that the characteristic polynomial is a polynomial of degree $n$ for an $n\times n$ matrix.