Normal and Orthonormal Matrices

A normal matrix is a square m × m matrix A for which A†A = AA†, where A† is the conjugate transpose of A. Normal matrices are special because they have m eigenvectors that form an orthonormal basis for the vector space. Note that Hermitian matrices (A† = A) are normal.

An orthonormal (unitary) matrix is a square matrix A whose conjugate transpose A† is its inverse, that is, AA† = A†A = I.


Singular Value Decomposition²

Any m × n matrix A can be given a Singular Value Decomposition (SVD)

A = U S V†    (B.4)

where U is m × n, S is n × n, V is n × n, and V† is the conjugate transpose of V. The first min(m, n) columns of U and V are orthonormal vectors of dimension m and n and are called left and right singular vectors, respectively. Matrix S is a diagonal matrix with non-negative elements s_ii = s_i, i = 1, ..., min(m, n), called singular values. All other elements of S are zero.

When m ≥ n:

• U†U = I_n, where I_n is the n × n identity matrix,

• V†V = V V† = I_n,

• S = diag(s_1, ..., s_n).
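A minimal NumPy sketch of these properties for the m ≥ n case; the random test matrix and its dimensions are illustrative assumptions:

```python
import numpy as np

m, n = 6, 4
rng = np.random.default_rng(1)
A = rng.standard_normal((m, n))

# Thin SVD: U is m x n, s holds the singular values, Vh = V†.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
S = np.diag(s)

assert np.allclose(A, U @ S @ Vh)                  # A = U S V†   (B.4)
assert np.allclose(U.T @ U, np.eye(n))             # U†U = I_n
assert np.allclose(Vh @ Vh.T, np.eye(n))           # V V† = V†V = I_n
assert np.all(s >= 0) and np.all(s[:-1] >= s[1:])  # non-negative, descending
```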

Note that

A†U = V S U†U = V S
A V = U S V†V = U S.    (B.5)

Therefore

A A†U = U S V†V S = U S²
A†A V = V S U†U S = V S².    (B.6)

That is, the columns of V are the eigenvectors of A†A, the squares of the singular values s_i are the eigenvalues of A†A, and the columns of U are the eigenvectors of A A† that correspond to these eigenvalues.
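This correspondence can be verified directly; a sketch, assuming a random test matrix with (generically) distinct singular values:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))

U, s, Vh = np.linalg.svd(A, full_matrices=False)
V = Vh.T
w, Q = np.linalg.eigh(A.T @ A)     # eigenvalues in ascending order
w, Q = w[::-1], Q[:, ::-1]         # reverse to match the SVD's descending order

assert np.allclose(s**2, w)        # eigenvalues of A†A are the s_i squared
for i in range(4):
    # eigenvectors are defined only up to sign, so compare |v_i . q_i|
    assert np.isclose(abs(V[:, i] @ Q[:, i]), 1.0)
```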

When m < n:

A similar singular value decomposition can be constructed when m < n. We first write

A† = U′ S′ V′†

where U′ is n × m, S′ is m × m, and V′ is m × m, all with properties as described above. Thus

A = V′ S′ U′†.

Now construct an m × n matrix U = (V′ | 0 ··· 0) by adding n − m columns of zeros to V′, construct an n × n matrix S by placing S′ in the upper left corner and padding the rest of the matrix with zeros, and construct an n × n matrix V = (U′ | g_1 ··· g_{n−m}), where g_1, ..., g_{n−m} are chosen so that the columns of V form an orthonormal basis for the n-dimensional vector space. Then we again have a decomposition in the form of equation (B.4) that has properties analogous to those described for the m ≥ n case.
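The padding construction can be sketched in NumPy as follows; the random matrix and the projection-plus-QR completion of the orthonormal basis are illustrative choices:

```python
import numpy as np

m, n = 3, 5                                  # the m < n case
rng = np.random.default_rng(3)
A = rng.standard_normal((m, n))

# Thin SVD of A†:  A† = U' S' V'†, with U': n x m, S': m x m, V': m x m.
Up, sp, Vph = np.linalg.svd(A.T, full_matrices=False)
Vp = Vph.T

# U = (V' | 0 ... 0) is m x n; S is n x n with S' in the upper left corner.
U = np.hstack([Vp, np.zeros((m, n - m))])
S = np.zeros((n, n))
S[:m, :m] = np.diag(sp)

# V = (U' | g_1 ... g_{n-m}): complete the columns of U' to an orthonormal
# basis by projecting random vectors onto the orthogonal complement.
G = rng.standard_normal((n, n - m))
G = G - Up @ (Up.T @ G)
G, _ = np.linalg.qr(G)
V = np.hstack([Up, G])

assert np.allclose(A, U @ S @ V.T)           # decomposition in the form (B.4)
assert np.allclose(V.T @ V, np.eye(n))       # columns of V are orthonormal
```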

The algorithms in Numerical Recipes [322] or other software libraries can be used to perform an SVD; alternatively, one can first solve one of the eigen-equations (B.6) and then calculate the other set of singular vectors from (B.5). Navarra [290] points out that the first approach is numerically more robust than the second.
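The second route can be sketched as follows, assuming all singular values are nonzero (the test matrix is again an illustrative random one):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4))

# Solve the eigenproblem for A†A, one of the eigen-equations (B.6).
w, V = np.linalg.eigh(A.T @ A)       # ascending eigenvalues
order = np.argsort(w)[::-1]
s = np.sqrt(w[order])                # singular values, descending
V = V[:, order]

# Recover the left singular vectors from (B.5): AV = US, so u_i = A v_i / s_i.
U = A @ V / s

assert np.allclose(A, U @ np.diag(s) @ V.T)
assert np.allclose(U.T @ U, np.eye(4))
```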

An interesting byproduct of this subsection is that the eigenvectors and eigenvalues of a matrix of the form A A† may be derived through an SVD of the matrix A. When estimating Empirical Orthogonal Functions (see Section 13.3), the eigenvalues of the estimated covariance matrix must be calculated. This estimated covariance matrix can be written as (1/n) X X†, where X is an m × n matrix with m the dimension of the random vector and n the number of realizations of the vector in the sample. The columns of X consist of deviations from the vector of sample means.
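A sketch of this byproduct for EOF estimation; the anomaly matrix X below is synthetic, and the dimensions are illustrative:

```python
import numpy as np

m, n = 5, 100                        # state dimension, number of realizations
rng = np.random.default_rng(5)
data = rng.standard_normal((m, n))
X = data - data.mean(axis=1, keepdims=True)   # deviations from sample means

# SVD of X gives the EOFs (columns of U) without forming (1/n) X X† explicitly.
U, s, _ = np.linalg.svd(X, full_matrices=False)

# Check against the direct eigendecomposition of the sample covariance matrix.
C = (X @ X.T) / n
w, Q = np.linalg.eigh(C)             # ascending eigenvalues
assert np.allclose(s**2 / n, w[::-1])  # eigenvalues of C are s_i^2 / n
```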

² See also Navarra's summary [290] or Golub and van Loan's [143] detailed presentation of the topic.

C Fourier Analysis and Fourier Transform


Fourier analysis and the Fourier transform are mathematically different and cannot be applied to the same objects. The two approaches should not be confused.

Fourier analysis is a geometrical concept. It offers two equivalent (i.e., isomorphic) descriptions of a

discrete or continuous periodic function.

• In the case of discrete functions (X_0, ..., X_{T−1}) with X_T = X_0 and T even, say T = 2n, the trigonometric expansion is

X_t = Σ_{k=−n}^{n−1} a_k e^{i2πkt/T}    (C.1)

for t = 0, ..., T − 1, and the coefficients are given by

a_k = (1/T) Σ_{t=0}^{T−1} X_t e^{−i2πkt/T}    (C.2)

for k = −n, ..., n − 1. A similar formula holds for odd T.

• Very similar formulae hold for continuous periodic functions, namely

X_t = Σ_{k=−∞}^{∞} a_k e^{i2πkt/T}    (C.3)

for t ∈ [0, T], with coefficients

a_k = (1/T) ∫_0^T X_t e^{−i2πkt/T} dt    (C.4)

for k = 0, ±1, ±2, ....
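The discrete expansion (C.1)-(C.2) can be checked numerically by evaluating the sums directly; a sketch for even T, with an illustrative random series:

```python
import numpy as np

T = 8
n = T // 2
rng = np.random.default_rng(6)
X = rng.standard_normal(T)           # X_0, ..., X_{T-1}

t = np.arange(T)
k = np.arange(-n, n)                 # k = -n, ..., n-1

# (C.2): coefficients a_k as the direct sum over t.
a = np.array([(X * np.exp(-2j * np.pi * kk * t / T)).sum() / T for kk in k])

# (C.1): reconstruct X_t from the coefficients; the result is real.
X_rec = np.array([(a * np.exp(2j * np.pi * k * tt / T)).sum() for tt in t])
assert np.allclose(X_rec.real, X) and np.allclose(X_rec.imag, 0)
```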

Note that Fourier analysis cannot be applied to a summable function, such as the auto-covariance function, since such a function cannot be periodic.

The Fourier transform is a mapping from the set of discrete, summable series to the set of real functions defined on the interval [−1/2, 1/2]. The auto-covariance function is summable in all ordinary cases, but stationary time series are not summable. If s is such a summable discrete series, then its Fourier transform F{s} is a function that, for any real ω ∈ [−1/2, 1/2], takes the value

F{s}(ω) = Σ_{j=−∞}^{∞} s_j e^{−i2πωj}.    (C.5)


The variable ω is usually named 'frequency'. The Fourier transform mapping is invertible,

s_j = ∫_{−1/2}^{1/2} F{s}(ω) e^{i2πωj} dω,    (C.6)

so that the infinite series s and the function F{s} are isomorphic and represent the same information.
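The pair (C.5)-(C.6) can be illustrated with a summable series such as s_j = ρ^|j|, the shape of an AR(1) auto-covariance function; the truncation length, grid size, and recovered lag below are illustrative choices:

```python
import numpy as np

rho = 0.6
J = 200                               # truncation of the infinite sum
j = np.arange(-J, J + 1)
s = rho ** np.abs(j)                  # summable series s_j = rho^|j|

omega = np.arange(2000) / 2000 - 0.5  # uniform grid on [-1/2, 1/2)

# (C.5): truncated transform; for this series a closed form is known:
# F{s}(w) = (1 - rho^2) / (1 - 2 rho cos(2 pi w) + rho^2).
F = (s[None, :] * np.exp(-2j * np.pi * np.outer(omega, j))).sum(axis=1).real
closed = (1 - rho**2) / (1 - 2 * rho * np.cos(2 * np.pi * omega) + rho**2)
assert np.allclose(F, closed)

# (C.6): recover s_3 by numerical integration over one period.
s3 = ((F * np.exp(2j * np.pi * omega * 3)).sum() / 2000).real
assert abs(s3 - rho**3) < 1e-8
```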

Note that a Fourier transform cannot be obtained for a periodic function.

The definition of the Fourier transform is arbitrary in detail. In the present definition there is no minus sign in the exponent of the 'reconstruction' equation (C.6). One could insert a minus sign in equation (C.6), but then the minus in the 'decomposition' equation (C.5) must be removed.

Some Properties of the Fourier Transform

The following computational rules are easily derived from the de¬nition of the Fourier transform.

• The Fourier transform is linear, that is, if f and g are summable series and if α is a real number, then: