
(the $\sum_{n=1}^{3} \hat{v}^{(n)}$ terms). The effect that the force has on each normal mode is given by the inner product $(\hat{v}^{(n)} \cdot F)$. This is nothing but the component of the force $F$ along the eigenvector $\hat{v}^{(n)}$, see equation (10.2). The term $1/(\omega_n^2 - \omega^2)$ gives the sensitivity of the system to a driving force with frequency $\omega$; this term can be called a sensitivity term.


When the driving force is close to one of the eigenfrequencies, $1/(\omega_n^2 - \omega^2)$ is very large. In that case the system is close to resonance and the resulting displacement will be very large. On the other hand, when the frequency of the driving force is very far from the eigenfrequencies of the system, $1/(\omega_n^2 - \omega^2)$ will be small and the system will give a very small response. The total response can be seen as a combination of three basic operations:

eigenvector expansion, projection and multiplication with a response function. Note that

the same operations were used in the explanation of the action of a matrix A below

equation (10.60).
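These three operations can be made concrete with a small numerical sketch (NumPy assumed; the 3x3 stiffness matrix, the force and the driving frequency below are illustrative choices, not taken from the text):

```python
import numpy as np

# Hypothetical stiffness matrix of a chain of three unit masses.
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
omega2, V = np.linalg.eigh(K)   # squared eigenfrequencies and eigenvectors
F = np.array([1.0, 0.0, 0.0])   # driving force
w = 0.5                         # driving frequency, far from resonance

# Eigenvector expansion, projection, and the sensitivity 1/(omega_n^2 - w^2):
x = sum((V[:, n] @ F) / (omega2[n] - w**2) * V[:, n] for n in range(3))

# The same response follows from solving (K - w^2 I) x = F directly.
x_direct = np.linalg.solve(K - w**2 * np.eye(3), F)
print(np.allclose(x, x_direct))  # True
```

Near an eigenfrequency the denominator $\omega_n^2 - \omega^2$ becomes small and the corresponding mode dominates the response.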

10.8 Singular value decomposition

In section (10.5) the decomposition of a square matrix in terms of eigenvectors was treated.

In many practical applications, such as inverse problems, one encounters a system of

equations that is not square:

\[
\underbrace{A}_{M\times N\ \mathrm{matrix}}\;\underbrace{x}_{N\ \mathrm{rows}} \;=\; \underbrace{y}_{M\ \mathrm{rows}} \qquad (10.84)
\]

Consider the example in which the vector x has N components and there are M equations. In that case the vector y has M components and the matrix A has M rows and N columns, i.e. it is an $M \times N$ matrix. A relation such as (10.53), which states that $A\hat{v}^{(n)} = \lambda_n \hat{v}^{(n)}$, cannot possibly hold, because when the matrix A acts on an N-vector it produces an M-vector, whereas in (10.53) the vector on the right-hand side has the same number of components as the vector on the left-hand side. It will be clear that the theory of section (10.5) cannot be applied when the matrix is not square. However, it is possible to generalize the theory of section (10.5) to the case where A is not square. For simplicity it is assumed that A is a real matrix.
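The shape bookkeeping in (10.84) can be illustrated with a short sketch (NumPy assumed; the sizes M = 3 and N = 2 are hypothetical):

```python
import numpy as np

# M equations in N unknowns: an M x N matrix maps an N-vector to an M-vector,
# so A x can never equal a multiple of x when M differs from N.
M, N = 3, 2
A = np.arange(6.0).reshape(M, N)  # an M x N matrix
x = np.ones(N)                    # N-vector
y = A @ x                         # produces an M-vector
print(A.shape, x.shape, y.shape)  # (3, 2) (2,) (3,)
```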

In section (10.5) a single set of orthonormal eigenvectors $\hat{v}^{(n)}$ was used to analyze the problem. Since the vectors x and y in (10.84) have different dimensions, it is necessary to expand the vector x in a set of N orthogonal vectors $\hat{v}^{(n)}$ that each have N components and to expand y in a different set of M orthogonal vectors $\hat{u}^{(m)}$ that each have M components. Suppose we have chosen a set $\hat{v}^{(n)}$; let us define vectors $\hat{u}^{(n)}$ by the following relation:

\[
A\hat{v}^{(n)} = \lambda_n \hat{u}^{(n)}. \qquad (10.85)
\]

The constant $\lambda_n$ should not be confused with an eigenvalue; this constant follows from the requirement that $\hat{v}^{(n)}$ and $\hat{u}^{(n)}$ are both vectors of unit length. At this point, the choice of $\hat{v}^{(n)}$ is still open. The vectors $\hat{v}^{(n)}$ will now be constrained to satisfy, in addition to (10.85), the following requirement:

\[
A^T \hat{u}^{(n)} = \mu_n \hat{v}^{(n)}, \qquad (10.86)
\]

where $A^T$ is the transpose of A.

Problem a: In order to find the vectors $\hat{v}^{(n)}$ and $\hat{u}^{(n)}$ that satisfy both (10.85) and (10.86), multiply (10.85) with $A^T$ and use (10.86) to eliminate $\hat{u}^{(n)}$. Do this to show


that $\hat{v}^{(n)}$ satisfies:

\[
A^T A\, \hat{v}^{(n)} = \lambda_n \mu_n \hat{v}^{(n)}. \qquad (10.87)
\]

Use similar steps to show that $\hat{u}^{(n)}$ satisfies

\[
A A^T \hat{u}^{(n)} = \lambda_n \mu_n \hat{u}^{(n)}. \qquad (10.88)
\]
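These eigenvector relations can be checked numerically; a sketch assuming NumPy, with a hypothetical random 4x3 matrix:

```python
import numpy as np

# For a non-square A, the v-vectors are eigenvectors of A^T A and the
# u-vectors are eigenvectors of A A^T, as in (10.87)-(10.88).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))        # M = 4, N = 3

eigval_v, V = np.linalg.eigh(A.T @ A)  # v-hat: eigenvectors of A^T A
eigval_u, U = np.linalg.eigh(A @ A.T)  # u-hat: eigenvectors of A A^T

# Each v-hat is mapped by A^T A onto a multiple of itself:
v = V[:, -1]
print(np.allclose(A.T @ A @ v, eigval_v[-1] * v))  # True
```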

These equations state that the $\hat{v}^{(n)}$ are the eigenvectors of $A^T A$ and that the $\hat{u}^{(n)}$ are the eigenvectors of $A A^T$.

Problem b: Show that both $A^T A$ and $A A^T$ are real symmetric matrices and show that this implies that the basis vectors $\hat{v}^{(n)}$ ($n = 1, \ldots, N$) and $\hat{u}^{(m)}$ ($m = 1, \ldots, M$) are both orthonormal:

\[
\hat{v}^{(n)} \cdot \hat{v}^{(m)} = \hat{u}^{(n)} \cdot \hat{u}^{(m)} = \delta_{nm}. \qquad (10.89)
\]
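A quick numerical check of this orthonormality (NumPy assumed; the random 4x3 matrix is chosen for illustration). Since the matrices are real and symmetric, `numpy.linalg.eigh` returns orthonormal eigenvector sets:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))        # M = 4, N = 3
_, V = np.linalg.eigh(A.T @ A)         # N orthonormal v-vectors
_, U = np.linalg.eigh(A @ A.T)         # M orthonormal u-vectors

# Columns of V and U form orthonormal bases of R^N and R^M respectively.
print(np.allclose(V.T @ V, np.eye(3)), np.allclose(U.T @ U, np.eye(4)))  # True True
```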

Although (10.87) and (10.88) can be used to find the basis vectors $\hat{v}^{(n)}$ and $\hat{u}^{(n)}$, these expressions cannot be used to find the constants $\lambda_n$ and $\mu_n$, because these expressions state that the product $\lambda_n \mu_n$ is equal to the eigenvalues of $A^T A$ and $A A^T$. This implies that only the product of $\lambda_n$ and $\mu_n$ is defined.

Problem c: In order to find the relation between $\lambda_n$ and $\mu_n$, take the inner product of (10.85) with $\hat{u}^{(n)}$ and use the orthogonality relation (10.89) to show that:

\[
\lambda_n = \hat{u}^{(n)} \cdot A \hat{v}^{(n)}. \qquad (10.90)
\]

Problem d: Show that for arbitrary vectors p and q:

\[
(p \cdot A q) = \left( A^T p \cdot q \right). \qquad (10.91)
\]
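This identity is easy to verify numerically (NumPy assumed; the matrix and vectors below are arbitrary random choices):

```python
import numpy as np

# (p . A q) = (A^T p . q): moving A across the inner product transposes it.
rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3))
p = rng.standard_normal(4)
q = rng.standard_normal(3)
print(np.isclose(p @ (A @ q), (A.T @ p) @ q))  # True
```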

Problem e: Apply this relation to (10.90) and use (10.86) to show that

\[
\lambda_n = \mu_n. \qquad (10.92)
\]

This is all the information we need to find both $\lambda_n$ and $\mu_n$. Since these quantities are equal, and since by virtue of (10.87) their product is equal to the eigenvalues of $A^T A$, it follows that both $\lambda_n$ and $\mu_n$ are given by the square root of the eigenvalues of $A^T A$. Note that it follows from (10.88) that the product $\lambda_n \mu_n$ also equals the eigenvalues of $A A^T$. This can only be the case when $A^T A$ and $A A^T$ have the same eigenvalues.
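This square-root relation is precisely how the singular values of A arise; a sketch assuming NumPy, with a hypothetical 5x3 matrix, compares `numpy.linalg.svd` with the eigenvalues of $A^T A$:

```python
import numpy as np

# The singular values of A equal the square roots of the eigenvalues of A^T A.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))

s = np.linalg.svd(A, compute_uv=False)            # singular values, descending
eig = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # eigenvalues of A^T A, descending

print(np.allclose(s, np.sqrt(eig)))  # True
```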

Before we proceed let us show that this is indeed the case. Let the eigenvalues of $A^T A$ be denoted by $\Lambda_n$ and the eigenvalues of $A A^T$ by $\Upsilon_n$, i.e.

\[
A^T A\, \hat{v}^{(n)} = \Lambda_n \hat{v}^{(n)} \qquad (10.93)
\]

and

\[
A A^T \hat{u}^{(n)} = \Upsilon_n \hat{u}^{(n)}. \qquad (10.94)
\]


Problem f: Take the inner product of (10.93) with $\hat{v}^{(n)}$ to show that the eigenvalue of $A^T A$ satisfies $\Lambda_n = \hat{v}^{(n)} \cdot A^T A\, \hat{v}^{(n)}$. Use the properties (10.91) and $\left(A^T\right)^T = A$ and (10.85) to show that $\lambda_n^2 = \Lambda_n$. Use similar steps to show that $\mu_n^2 = \Upsilon_n$. With (10.92) this implies that $A A^T$ and $A^T A$ have the same eigenvalues.

The proof that $A A^T$ and $A^T A$ have the same eigenvalues was not only given as a check of the consistency of the theory; the fact that $A A^T$ and $A^T A$ have the same eigenvalues has important implications. Since $A A^T$ is an $M \times M$ matrix, it has M eigenvalues, and since $A^T A$ is an $N \times N$ matrix it has N eigenvalues. The only way for these matrices to have the same eigenvalues, while having a different number of eigenvalues, is that the number of nonzero eigenvalues is given by the minimum of N and M. In practice, some of the eigenvalues of $A A^T$ may be zero, hence the number of nonzero eigenvalues of $A A^T$ can be less than M. By the same token, the number of nonzero eigenvalues of $A^T A$ can be less than N. The number of nonzero eigenvalues will be denoted by P. It is not known a priori how many nonzero eigenvalues there are, but it follows from the arguments above that P is smaller than or equal to both M and N. This implies that

\[
P \leq \min(N, M), \qquad (10.95)
\]

where $\min(N, M)$ denotes the minimum of N and M. Therefore, whenever a summation over eigenvalues occurs, we need to take only P eigenvalues into account. Since the ordering of the eigenvalues is arbitrary, it is assumed in the following that the eigenvalues are ordered by decreasing size: $\Lambda_1 \geq \Lambda_2 \geq \cdots \geq \Lambda_N$. In this ordering the eigenvalues for $n > P$ are equal to zero so that the summation over eigenvalues runs from 1 to P.
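The eigenvalue count can be verified numerically (NumPy assumed; a hypothetical 5x3 matrix, so M = 5, N = 3 and P is at most 3):

```python
import numpy as np

# A A^T (M x M) and A^T A (N x N) share their nonzero eigenvalues,
# so at most P <= min(N, M) of them are nonzero.
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3))

big = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]    # M = 5 eigenvalues
small = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # N = 3 eigenvalues

print(np.allclose(big[:3], small))  # shared nonzero eigenvalues: True
print(np.allclose(big[3:], 0.0))    # the remaining M - P vanish: True
```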

Problem g: The matrices AAT and AT A have the same eigenvalues. When you need

the eigenvalues and eigenvectors, would it be from the point of view of computational
