
of $AA^T$? Consider the situations $M > N$ and $M < N$ separately.

Let us now return to the task of making an eigenvalue decomposition of the matrix $A$. The vectors $\hat{v}^{(n)}$ form a basis in $N$-dimensional space. Since the vector $x$ is $N$-dimensional, every vector $x$ can be decomposed according to equation (10.59): $x = \sum_{n=1}^{N} \hat{v}^{(n)} \left( \hat{v}^{(n)} \cdot x \right)$.

Problem h: Let the matrix $A$ act on this expression and use (10.85) to show that:

    A x = \sum_{n=1}^{P} \lambda_n \, \hat{u}^{(n)} \left( \hat{v}^{(n)} \cdot x \right) .     (10.96)

Problem i: This expression must hold for any vector $x$. Use this property to deduce that:

    A = \sum_{n=1}^{P} \lambda_n \, \hat{u}^{(n)} \hat{v}^{(n)T} .     (10.97)
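The outer-product expansion (10.97) can be checked numerically. The following is a minimal numpy sketch; the matrix is illustrative, not from the text, and the $\lambda_n$ are taken to be the singular values that numpy returns.

```python
import numpy as np

# Illustrative 3x2 matrix (M = 3, N = 2), chosen only for this sketch.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# numpy returns the singular values lambda_n and the vectors u^(n), v^(n).
U, lam, Vt = np.linalg.svd(A, full_matrices=False)

# Rebuild A as the sum of rank-one terms lambda_n * u^(n) v^(n)T, as in (10.97).
A_rebuilt = sum(lam[n] * np.outer(U[:, n], Vt[n, :]) for n in range(len(lam)))

print(np.allclose(A, A_rebuilt))  # True
```

Each term of the sum is a rank-one matrix, which is why zero eigenvalues can simply be dropped from the expansion.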

Problem j: The eigenvectors $\hat{v}^{(n)}$ can be arranged in an $N \times N$ matrix $V$ defined in (10.55). Similarly, the eigenvectors $\hat{u}^{(n)}$ can be used to form the columns of an $M \times M$ matrix $U$:

    U = \begin{pmatrix} \vdots & \vdots & & \vdots \\ \hat{u}^{(1)} & \hat{u}^{(2)} & \cdots & \hat{u}^{(M)} \\ \vdots & \vdots & & \vdots \end{pmatrix} .     (10.98)

Show that $A$ can also be written as:

    A = U \Sigma V^T     (10.99)

with the diagonal matrix $\Sigma$ defined in (10.63).

This decomposition of A in terms of eigenvectors is called the Singular Value Decomposi-

tion of the matrix. This is frequently abbreviated as SVD.
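The matrix form (10.99) can likewise be sketched in numpy. The example matrix below is hypothetical; with `full_matrices=True`, numpy returns $U$ as $M \times M$ and $V^T$ as $N \times N$, so $\Sigma$ must be assembled as an $M \times N$ matrix with the singular values on its diagonal.

```python
import numpy as np

# Illustrative 2x3 matrix (M = 2, N = 3), not taken from the text.
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 1.0]])

# U is MxM and Vt is NxN, matching (10.98) and (10.55).
U, lam, Vt = np.linalg.svd(A, full_matrices=True)

# Sigma is the MxN matrix of (10.63): singular values on the diagonal, zeros elsewhere.
Sigma = np.zeros(A.shape)
Sigma[:len(lam), :len(lam)] = np.diag(lam)

print(np.allclose(A, U @ Sigma @ Vt))  # True
```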

Problem k: You may have noticed the similarity between the expression (10.97) and the equation (10.62) for a square matrix, and between expression (10.99) and equation (10.61). Show that for the special case $M = N$ the theory of this section is identical to the eigenvalue decomposition for a square matrix presented in section (10.5). Hint: what are the vectors $\hat{u}^{(n)}$ when $M = N$?

Let us now solve the original system of linear equations (10.84) for the unknown vector $x$. In order to do this, expand the vector $y$ in the vectors $\hat{u}^{(m)}$ that span the $M$-dimensional space: $y = \sum_{m=1}^{M} \hat{u}^{(m)} \left( \hat{u}^{(m)} \cdot y \right)$, and expand the vector $x$ in the vectors $\hat{v}^{(n)}$ that span the $N$-dimensional space:

    x = \sum_{n=1}^{N} c_n \hat{v}^{(n)} .     (10.100)

Problem l: At this point the coefficients $c_n$ are unknown. Insert the expansions for $y$ and $x$ and the expansion (10.97) for the matrix $A$ in the linear system (10.84), and use the orthogonality properties of the eigenvectors to show that $c_n = \left( \hat{u}^{(n)} \cdot y \right) / \lambda_n$, so that

    x = \sum_{n=1}^{P} \frac{1}{\lambda_n} \left( \hat{u}^{(n)} \cdot y \right) \hat{v}^{(n)} .     (10.101)
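The recipe of Problem l can be sketched directly in numpy. The system below is hypothetical; the right-hand side is built from a known vector so the reconstruction can be verified.

```python
import numpy as np

# Illustrative 3x2 system (M > N), chosen so that y lies in the column space of A.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
x_true = np.array([2.0, -1.0])
y = A @ x_true

U, lam, Vt = np.linalg.svd(A, full_matrices=False)

# Coefficients c_n = (u^(n) . y) / lambda_n, then x = sum_n c_n v^(n), as in (10.101).
c = (U.T @ y) / lam
x = Vt.T @ c

print(np.allclose(x, x_true))  # True
```

Because this example has full column rank and a consistent right-hand side, the SVD solution recovers the original vector exactly.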

Note that although in the original expansion (10.100) of $x$ a summation is carried out over all $N$ basis vectors, in the solution (10.101) a summation is carried out over the first $P$ basis vectors only. The reason for this is that the remaining eigenvectors have eigenvalues that are equal to zero, so that they could be left out of the expansion (10.97) of the matrix $A$. Indeed, these eigenvalues would give rise to problems: if they were retained, they would lead to infinite contributions $1/\lambda \rightarrow \infty$ in the solution (10.101). In practice, some eigenvalues may be nonzero but close to zero, so that the term $1/\lambda$ gives rise to numerical instabilities. One therefore often leaves out such small nonzero eigenvalues as well in the summation (10.101).
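This truncation of small eigenvalues can be sketched numerically. The matrix and the tolerance below are illustrative assumptions, not values from the text; numpy's `pinv` applies the same cutoff through its `rcond` parameter.

```python
import numpy as np

# Hypothetical ill-conditioned system: the two columns of A are almost
# identical, so the second singular value is tiny (around 1e-12).
A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-12],
              [0.0, 0.0]])
y = np.array([1.0, 2.0, 3.0])

U, lam, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the P singular values above a tolerance, as in the truncated (10.101).
tol = 1e-8 * lam[0]
P = int(np.sum(lam > tol))
x = Vt[:P].T @ ((U[:, :P].T @ y) / lam[:P])

# numpy's pinv implements the same truncation through its rcond parameter.
x_pinv = np.linalg.pinv(A, rcond=1e-8) @ y
print(np.allclose(x, x_pinv))  # True
```

Without the cutoff, the $1/\lambda_2$ term would amplify rounding noise by roughly twelve orders of magnitude.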

This may appear to be an objective procedure for defining solutions for linear problems that are underdetermined or for problems that are otherwise ill-conditioned, but there is a price one pays for leaving out basis vectors in the construction of the solution. The vector $x$ is $N$-dimensional, hence one needs $N$ basis vectors to construct an arbitrary vector $x$; see equation (10.100). The solution vector given in (10.101) is built by superposing only $P$ basis vectors. This implies that the solution vector is constrained to lie within the $P$-dimensional subspace spanned by the first $P$ eigenvectors. Therefore, it is not clear that the solution vector in (10.101) is identical to the true vector $x$. However, the point of using the singular value decomposition is that the solution is only constrained by the linear system of equations (10.84) within the subspace spanned by the first $P$ basis vectors $\hat{v}^{(n)}$. The solution (10.101) ensures that only the components of $x$ within that subspace are affected by the right-hand side vector $y$. This technique is extremely important in the analysis of linear inverse problems.
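The confinement of the solution to the first $P$ directions can be seen in a small numpy sketch. The one-equation system below is a hypothetical example ($M = 1 < N = 2$, so $P = 1$): the SVD-based solution has no component along the null-space direction.

```python
import numpy as np

# Underdetermined example: one equation, two unknowns, hence P = 1.
A = np.array([[1.0, 1.0]])
y = np.array([4.0])

# SVD-based solution as in (10.101), here via the pseudo-inverse.
x = np.linalg.pinv(A) @ y
print(x)  # [2. 2.]

# x lies entirely in the subspace spanned by the first P vectors v^(n):
# its component along the null-space direction (1, -1)/sqrt(2) vanishes.
null_dir = np.array([1.0, -1.0]) / np.sqrt(2.0)
print(np.isclose(x @ null_dir, 0.0))  # True
```

Any vector of the form $(2 + t, 2 - t)$ satisfies the equation; the SVD picks the member with no null-space component, which is also the minimum-norm solution.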


Chapter 11

Fourier analysis

Fourier analysis is concerned with the decomposition of signals in sine and cosine waves. This technique is of obvious relevance for spectral analysis, where one decomposes a time signal in its different frequency components. As an example, the spectrum of a low C on a soprano saxophone is shown in figure 11.1. However, the use of Fourier analysis goes far beyond this application, because Fourier analysis can also be used for finding solutions of differential equations and in a large number of other applications. In this chapter the real Fourier transform on a finite interval is used as a starting point. From this, the complex Fourier transform and the Fourier transform on an infinite interval are derived. In several stages of the analysis, the similarity of Fourier analysis and linear algebra will be made apparent.

[Figure 11.1: The energy of the sound made by the author playing a low C on his soprano saxophone, as a function of frequency. The unit used for the horizontal axis is Hertz (number of oscillations per second); the unit on the vertical axis is decibels (a logarithmic measure of energy).]

11.1 The real Fourier series on a finite interval

Consider a function $f(x)$ that is defined on the interval $-L < x \leq L$. This interval is of length $2L$, and let us assume that $f(x)$ is periodic with period $2L$. This means that if one translates this function over a distance $2L$ the value does not change:

    f(x + 2L) = f(x) .     (11.1)

We want to expand this function in a set of basis functions. Since f(x) is periodic with

period 2L, these basis functions must be periodic with the same period.

Problem a: Show that the functions $\cos(n\pi x/L)$ and $\sin(n\pi x/L)$ with integer $n$ are periodic with period $2L$, i.e. show that these functions satisfy (11.1).
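The periodicity asked for in Problem a can be verified numerically as well as analytically. This sketch uses an arbitrary half-period $L = 3$ and a handful of values of $n$, all chosen only for illustration.

```python
import numpy as np

L = 3.0                      # an arbitrary half-period, chosen for illustration
x = np.linspace(-L, L, 201)  # sample points in the interval (-L, L]

# Check numerically that cos(n pi x / L) and sin(n pi x / L) repeat after 2L,
# i.e. that they satisfy f(x + 2L) = f(x) as in (11.1).
periodic = all(
    np.allclose(np.cos(n * np.pi * (x + 2 * L) / L), np.cos(n * np.pi * x / L))
    and np.allclose(np.sin(n * np.pi * (x + 2 * L) / L), np.sin(n * np.pi * x / L))
    for n in range(1, 6)
)
print(periodic)  # True
```

The analytic reason is that shifting $x$ by $2L$ shifts the argument $n\pi x/L$ by $2n\pi$, a whole number of periods of sine and cosine.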
