radiative properties; and its interaction with the land, water, and ice surfaces of the planet. Most

models include at least a crude interactive land surface processes model. In addition, some AGCMs

are coupled to thermodynamic models of sea ice and the mixed layer of the ocean, while others have

been coupled to fully dynamic Ocean GCMs (OGCMs). AGCMs, OGCMs, and coupled GCMs are

essential tools of climate research.

• AIC: Akaike Information Criterion.

• ARMA: auto-regressive moving average.

• BIC: Bayesian Information Criterion.

• CCA: Canonical Correlation Analysis. See Chapter 14.

• DJF, MAM, JJA and SON: December-January-February, March-April-May, etc.

• EBW: equivalent bandwidth.

• EDF: equivalent degrees of freedom.

• EEOF or simply EOF: (Extended) Empirical Orthogonal Function. See Chapter 13.

Appendix A: Notation


• MCA: Maximum Covariance Analysis. See [14.1.7].

• MJO: Madden-Julian Oscillation. See footnote 10 in [1.2.3].

• MLE: Maximum Likelihood Estimator.

• MOS: model output statistics.

• MSSA: Multichannel Singular Spectrum Analysis.

• NAO: North Atlantic Oscillation.

• PIP: Principal Interaction Pattern.

• PNA: Pacific-North American pattern. See [13.5.5] and Section 17.4.

• POP: Principal Oscillation Pattern. See Chapter 15.

• QBO: Quasi-Biennial Oscillation.

• SLP: sea-level pressure.

• SO and ENSO: Southern Oscillation and El Niño/Southern Oscillation. See footnote 1.2 in [1.2.2] for a short description.

• SOI: Southern Oscillation Index, defined as the pressure difference between Darwin (Australia) and Papeete (Tahiti). An index defined by sea-surface temperature anomalies in the Central Pacific is sometimes used as an alternative SOI, and is called the 'SST index.' See Figure 1.3.

• SVD: Singular Value Decomposition. See Appendix B.

• SST: sea-surface temperature.

• UTC is time independent of time zone: 'Universal Time Co-ordinated.'

The word 'zonal' denotes the east-west direction, and the zonal wind is the eastward component of the wind. Similarly, 'meridional' indicates the north-south direction, and the meridional wind is the northward component of the wind.

B Elements of Linear Analysis

In this appendix we briefly review some basic concepts of linear algebra, particularly linear bases and eigenvalues and eigenvectors. The notation used is described in Appendix A.

Eigenvalues and Eigenvectors

Let A be an m × m matrix. A real or complex number λ is said to be an eigenvalue of A if there is a nonzero m-dimensional vector e such that

Ae = λe .  (B.1)

Vector e is said to be a (right) eigenvector of A.1 Eigenvectors are not uniquely determined, since clearly, if e is an eigenvector of A, then αe is also an eigenvector for any nonzero number α. However, when an eigenvector is simple (i.e., any other eigenvector with the same eigenvalue is a scalar multiple of it), it uniquely determines a direction in the m-dimensional vector space.
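The defining relation (B.1) and the scaling freedom of eigenvectors can be checked numerically. A minimal sketch, assuming NumPy is available (the matrix and the scalar α below are arbitrary illustrative choices, not from the text):

```python
import numpy as np

# Illustrative 2 x 2 matrix; its eigenvalues happen to be 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and the (right) eigenvectors
# as the columns of the second result.
lams, vecs = np.linalg.eig(A)

for k in range(len(lams)):
    e = vecs[:, k]
    # Check the defining relation A e = lambda e.
    assert np.allclose(A @ e, lams[k] * e)
    # Any nonzero scalar multiple alpha*e is also an eigenvector.
    alpha = -3.7
    assert np.allclose(A @ (alpha * e), lams[k] * (alpha * e))
```

Both assertions pass: each column of the eigenvector matrix, and any nonzero multiple of it, satisfies (B.1) with the same eigenvalue.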

It is possible that a real matrix A has a complex eigenvalue λ. Then the eigenvector e is also complex (otherwise Ae ∈ R^m but λe ∈ C^m). Because A = A*, the complex conjugate eigenvalue λ* is an eigenvalue of the real matrix A as well, with eigenvector e*:

Ae* = A*e* = (Ae)* = (λe)* = λ*e* .

A square matrix A is said to be Hermitian if A† = A, where A† is the conjugate transpose of A. Real

Hermitian matrices are symmetric. Hermitian matrices have real eigenvalues only.
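These two facts, that the eigenvalues of a real symmetric (Hermitian) matrix are real while a generic real matrix can have complex-conjugate eigenvalue pairs, are easy to verify numerically. A sketch assuming NumPy (the random symmetric matrix and the rotation matrix are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
S = B + B.T  # real symmetric, hence Hermitian

# eigvalsh assumes a Hermitian input and returns real eigenvalues.
lams = np.linalg.eigvalsh(S)
assert np.isrealobj(lams)

# A generic real matrix may instead have a complex-conjugate pair:
# this rotation by 90 degrees has eigenvalues +i and -i.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
assert np.allclose(np.sort(np.linalg.eigvals(R)), [-1j, 1j])
```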

One eigenvalue may have several linearly independent eigenvectors e_i. In that case the eigenvectors are said to be degenerate, since their directions are no longer uniquely determined. The simplest example of a matrix with degenerate eigenvectors is the identity matrix. It has only one eigenvalue, λ = 1, which has m linearly independent eigenvectors e_i = (0, . . . , 0, 1, 0, . . . , 0)^T with a unit in the ith position. In general, when λ is a degenerate eigenvalue with linearly independent eigenvectors e_i, i = 1, . . . , m_λ, any linear combination ∑_i α_i e_i is also an eigenvector with eigenvalue λ. Note that a given eigenvector is associated with only one eigenvalue.
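The identity-matrix example of degeneracy can be sketched directly, again assuming NumPy (the combination coefficients below are arbitrary):

```python
import numpy as np

# The m x m identity has the single (repeated) eigenvalue 1,
# with m linearly independent eigenvectors.
m = 3
I = np.eye(m)
lams, vecs = np.linalg.eig(I)
assert np.allclose(lams, 1.0)

# Any linear combination of eigenvectors sharing the eigenvalue 1
# is again an eigenvector for that eigenvalue.
v = 2.0 * vecs[:, 0] - 5.0 * vecs[:, 1]
assert np.allclose(I @ v, 1.0 * v)
```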

Bases

A collection of vectors {e_1, . . . , e_m} is said to be a linear basis for an m-dimensional vector space V if for any vector a ∈ V there exist coefficients α_i, i = 1, . . . , m, such that a = ∑_i α_i e_i. An orthogonal basis is a linear basis consisting of vectors e_i that are mutually orthogonal, that is, ⟨e_i, e_j⟩ = 0 if i ≠ j. The set of vectors is called orthonormal if, in addition, ‖e_i‖ = 1 for all i = 1, . . . , m.

1 A nonzero m-dimensional vector f is said to be a left eigenvector of A if f^T A = λ f^T for some nonzero λ. The left eigenvectors of A are right eigenvectors of A^T, and vice versa. We use the term eigenvector to denote a right eigenvector.


Orthonormal Transformations

If {e_1, . . . , e_m} is an orthonormal basis and y = ∑_i α_i e_i, then

⟨y, e_j⟩ = ∑_i α_i ⟨e_i, e_j⟩ = α_j  (B.2)

y = ∑_i ⟨y, e_i⟩ e_i .  (B.3)

Equation (B.3) describes a transformation from standard coordinates (y_1, . . . , y_m)^T to a new set of coordinates (⟨y, e_1⟩, . . . , ⟨y, e_m⟩)^T.
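The expansion and reconstruction in (B.2) and (B.3) can be sketched numerically, assuming NumPy; here an orthonormal basis is manufactured from a QR factorization of a random matrix, and the vector y is an arbitrary illustrative choice:

```python
import numpy as np

# Columns of P form an orthonormal basis of R^3 (Q factor of a QR factorization).
rng = np.random.default_rng(1)
P, _ = np.linalg.qr(rng.standard_normal((3, 3)))
assert np.allclose(P.T @ P, np.eye(3))

y = np.array([1.0, -2.0, 0.5])
alpha = P.T @ y     # (B.2): alpha_j = <y, e_j> for each basis vector e_j
y_back = P @ alpha  # (B.3): y = sum_i <y, e_i> e_i

# The coordinates in the new basis reconstruct y exactly.
assert np.allclose(y, y_back)
```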

Continue to assume that {e_1, . . . , e_m} is an orthonormal basis. The expectation E(Y) of a random vector Y in standard coordinates transforms in the same way as the coordinates:

E⟨Y, e_j⟩ = ⟨E(Y), e_j⟩ .

The covariance matrix of Y with respect to the standard coordinates,

Σ = E[(Y − μ_Y)(Y − μ_Y)†] ,

is related to the covariance matrix Σ′ of the transformed vector (⟨Y, e_1⟩, . . . , ⟨Y, e_m⟩)^T through

Σ′ = P†ΣP ,

where P† is the conjugate transpose of P and the columns of P are the m vectors e_1, . . . , e_m. Note that, since the basis is orthonormal, P†P = PP† = I. The trace of the covariance matrix (i.e., the sum of the variances of all components) is invariant under the transformation (B.2):

∑_j σ²_{Y_j} = tr(Σ) = tr(PP†Σ) = tr(P†ΣP) = tr(Σ′) = ∑_j σ²_{α_j} ,

where α_j = ⟨Y, e_j⟩.
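The invariance of total variance under an orthonormal change of basis can be sketched as follows, assuming NumPy (the covariance matrix here is a random symmetric positive semi-definite matrix, chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
Sigma = B @ B.T  # symmetric positive semi-definite "covariance" matrix

# Orthonormal basis as columns of P (real case, so P dagger = P^T).
P, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Sigma_prime = P.T @ Sigma @ P

# The trace (total variance) is unchanged by the transformation.
assert np.allclose(np.trace(Sigma), np.trace(Sigma_prime))
```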

Square Root of a Positive Definite Symmetric Matrix

The square root of a positive definite symmetric matrix Σ is given by Σ^{1/2} = Λ^{1/2} P^T, where P is an orthonormal matrix of eigenvectors of Σ, Λ = diag(λ_1, . . . , λ_m) is the corresponding diagonal matrix of eigenvalues, and Λ^{1/2} = diag(λ_1^{1/2}, . . . , λ_m^{1/2}). Then Σ = (Σ^{1/2})^T Σ^{1/2}. The inverse square root of Σ is given by Σ^{-1/2} = P Λ^{-1/2}, where Λ^{-1/2} = diag(λ_1^{-1/2}, . . . , λ_m^{-1/2}). Note that
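This construction of the square root and inverse square root from the eigendecomposition can be sketched numerically, assuming NumPy (the positive definite matrix below is an illustrative choice):

```python
import numpy as np

# An illustrative positive definite symmetric matrix.
Sigma = np.array([[4.0, 1.0],
                  [1.0, 3.0]])

# Eigendecomposition Sigma = P Lambda P^T; eigh returns orthonormal
# eigenvectors (columns of P) and real eigenvalues for a symmetric input.
lams, P = np.linalg.eigh(Sigma)

sqrt_Sigma = np.diag(np.sqrt(lams)) @ P.T        # Sigma^{1/2} = Lambda^{1/2} P^T
inv_sqrt_Sigma = P @ np.diag(1.0 / np.sqrt(lams))  # Sigma^{-1/2} = P Lambda^{-1/2}

# Sigma = (Sigma^{1/2})^T Sigma^{1/2}, as in the text.
assert np.allclose(sqrt_Sigma.T @ sqrt_Sigma, Sigma)
# The inverse square root undoes the square root: their product is I.
assert np.allclose(sqrt_Sigma @ inv_sqrt_Sigma, np.eye(2))
```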