
CHAPTER 10. LINEAR ALGEBRA


Problem d: Expression (10.60) holds for every vector p. Use this to show that A can
be written as:

A = \sum_{n=1}^{N} \lambda_n \hat{v}^{(n)} \hat{v}^{(n)T} .   (10.61)

Problem e: Show that with the definition (10.55) this result can also be written as:

A = V \Lambda V^T   (10.62)

where \Lambda is a matrix that has the eigenvalues on the diagonal and whose other
elements are equal to zero:

\Lambda = \begin{pmatrix}
\lambda_1 & 0 & \cdots & 0 \\
0 & \lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_N
\end{pmatrix} .   (10.63)

Hint: let (10.62) act on an arbitrary vector, use the definition (10.56) and see what
happens.
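The decomposition (10.62) can also be checked numerically. The following is a minimal sketch using NumPy (the choice of software, and the 3x3 matrix, are illustrative assumptions, not taken from the text); symmetry of A guarantees an orthonormal eigenvector basis:

```python
import numpy as np

# Hypothetical symmetric 3x3 matrix (illustrative example only).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# np.linalg.eigh returns the eigenvalues and the matrix V whose
# columns are the normalized eigenvectors v^(n) of (10.55).
eigvals, V = np.linalg.eigh(A)
Lambda = np.diag(eigvals)

# Reassemble A from its eigenvalue decomposition (10.62).
A_rebuilt = V @ Lambda @ V.T
print(np.allclose(A, A_rebuilt))  # True
```

The columns of V are orthonormal, so V^T V = I, which is what makes the reconstruction in (10.62) work.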

10.6 Computing a function of a matrix

The expansion (10.61) (or equivalently (10.62)) is very useful because it provides a way
to compute the inverse of a matrix and to compute more complex functions of a matrix,
such as the exponential of a matrix. Let us first use (10.61) to compute the inverse
A^{-1} of the matrix. In order to do this we must know the effect of A^{-1} on the
eigenvectors \hat{v}^{(n)}.

Problem a: Use the relation \hat{v}^{(n)} = I \hat{v}^{(n)} = A^{-1} A \hat{v}^{(n)} to show that \hat{v}^{(n)} is also an
eigenvector of the inverse A^{-1} with eigenvalue 1/\lambda_n:

A^{-1} \hat{v}^{(n)} = \frac{1}{\lambda_n} \hat{v}^{(n)} .   (10.64)

Problem b: Use this result and the eigenvector decomposition (10.59) to show that the
effect of A^{-1} on a vector p can be written as

A^{-1} p = \sum_{n=1}^{N} \frac{1}{\lambda_n} \hat{v}^{(n)} \left( \hat{v}^{(n)} \cdot p \right) .   (10.65)

Also show that this implies that A^{-1} can also be written as:

A^{-1} = V \Lambda^{-1} V^T   (10.66)

with

\Lambda^{-1} = \begin{pmatrix}
1/\lambda_1 & 0 & \cdots & 0 \\
0 & 1/\lambda_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1/\lambda_N
\end{pmatrix} .   (10.67)


This is an important result: it means that once we have computed the eigenvectors and
eigenvalues of a matrix, we can compute the inverse matrix very efficiently. Note that
this procedure gives problems when one of the eigenvalues vanishes, because for such an
eigenvalue 1/\lambda_n is not defined. However, this makes sense: when one (or more) of the
eigenvalues vanishes, the matrix is singular and the inverse does not exist. Also, when one
of the eigenvalues is nonzero but close to zero, the corresponding term 1/\lambda_n is very large;
in practice this gives rise to numerical instabilities. In this situation the inverse of the
matrix exists, but the result is very sensitive to computational (and other) errors. Such a
matrix is called poorly conditioned.
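Both points can be illustrated numerically. A minimal NumPy sketch (the nearly singular matrix below is a hypothetical example, not from the text): the inverse built from (10.66) is correct, but the ratio of the largest to the smallest eigenvalue, the condition number, is large, signalling the sensitivity described above.

```python
import numpy as np

# Hypothetical nearly singular symmetric matrix:
# its eigenvalues are 1 + 0.999 = 1.999 and 1 - 0.999 = 0.001.
A = np.array([[1.0, 0.999],
              [0.999, 1.0]])

eigvals, V = np.linalg.eigh(A)

# Inverse via (10.66): A^{-1} = V Lambda^{-1} V^T.
A_inv = V @ np.diag(1.0 / eigvals) @ V.T
print(np.allclose(A_inv @ A, np.eye(2)))  # True

# Condition number: large because one eigenvalue is close to zero.
print(max(abs(eigvals)) / min(abs(eigvals)))
```

If an eigenvalue were exactly zero, the division 1/\lambda_n above would fail, which mirrors the statement that a singular matrix has no inverse.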

In general, a function of a matrix, such as the exponential of a matrix, is not defined.
However, suppose we have a function f(z) that operates on a scalar z and that this function
can be written as a power series:

f(z) = \sum_{p} a_p z^p .   (10.68)

For example, when f(z) = \exp(z), then f(z) = \sum_{p=0}^{\infty} (1/p!) z^p. Replacing the scalar z by
the matrix A, the power series expansion can be used to define the effect of the function
f when it operates on the matrix A:

f(A) \equiv \sum_{p} a_p A^p .   (10.69)

Although this may seem to be a simple rule to compute f(A), it is actually not so useful,
because in many applications the summation (10.69) consists of infinitely many terms and
the computation of A^p can be computationally very demanding. Again, the eigenvalue
decomposition (10.61) or (10.62) allows us to simplify the evaluation of f(A).

Problem c: Show that \hat{v}^{(n)} is also an eigenvector of A^p with eigenvalue (\lambda_n)^p, i.e. show
that

A^p \hat{v}^{(n)} = (\lambda_n)^p \hat{v}^{(n)} .   (10.70)

Hint: first compute A^2 \hat{v}^{(n)} = A \left( A \hat{v}^{(n)} \right), then A^3 \hat{v}^{(n)}, etc.

Problem d: Use this result to show that (10.62) can be generalized to:

A^p = V \Lambda^p V^T   (10.71)

with \Lambda^p given by

\Lambda^p = \begin{pmatrix}
\lambda_1^p & 0 & \cdots & 0 \\
0 & \lambda_2^p & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \lambda_N^p
\end{pmatrix} .   (10.72)
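Relation (10.71) is easy to verify against repeated matrix multiplication. A short NumPy sketch (the symmetric matrix and the power p = 5 are illustrative choices):

```python
import numpy as np

# Hypothetical symmetric example matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, V = np.linalg.eigh(A)

p = 5
# Route 1: raise only the eigenvalues to the power p, as in (10.71).
Ap_eig = V @ np.diag(eigvals**p) @ V.T
# Route 2: multiply the matrix by itself p times.
Ap_direct = np.linalg.matrix_power(A, p)

print(np.allclose(Ap_eig, Ap_direct))  # True
```

Note that the eigenvalue route costs one diagonalization regardless of p, whereas repeated multiplication grows with p; this is the efficiency gain the text alludes to.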

Problem e: Finally, use (10.69) to show that f(A) can be written as:

f(A) = V f(\Lambda) V^T   (10.73)


with f(\Lambda) given by

f(\Lambda) = \begin{pmatrix}
f(\lambda_1) & 0 & \cdots & 0 \\
0 & f(\lambda_2) & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & f(\lambda_N)
\end{pmatrix} .   (10.74)

Problem f: In order to revert to an explicit eigenvector expansion, show that (10.73) can
be written as:

f(A) = \sum_{n=1}^{N} f(\lambda_n) \hat{v}^{(n)} \hat{v}^{(n)T} .   (10.75)
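For f(z) = \exp(z), the recipe (10.73) can be compared directly with the truncated power series (10.69). A minimal NumPy sketch (the 2x2 matrix and the truncation at p = 20 are illustrative assumptions):

```python
import math
import numpy as np

# Hypothetical symmetric example matrix with eigenvalues +1 and -1.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
eigvals, V = np.linalg.eigh(A)

# exp(A) via (10.73): exp acts only on the eigenvalues.
expA = V @ np.diag(np.exp(eigvals)) @ V.T

# exp(A) via the power series (10.69), sum_p A^p / p!, truncated at p = 20.
series = sum(np.linalg.matrix_power(A, p) / math.factorial(p)
             for p in range(21))

print(np.allclose(expA, series))  # True
```

For this particular matrix the exact answer is known in closed form, exp(A) = [[cosh 1, sinh 1], [sinh 1, cosh 1]], which both routes reproduce.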

With this expression (or the equivalent expression (10.73)) the evaluation of f(A) is simple
once the eigenvectors and eigenvalues of A are known, because in (10.75) the function f
only acts on the eigenvalues, not on the matrix. Since the function f normally acts
on a scalar (such as the eigenvalues), the eigenvector decomposition has obviated the
need for computing higher powers of the matrix A. However, from a numerical point of
view, computing functions of matrices can be a tricky issue. For example, Moler and van
