where

1. µX is the mean of the process,

2. α1, . . . , αp and β1, . . . , βq are constants such that αp ≠ 0 and βq ≠ 0, and

3. {Zt : t ∈ Z} is a white noise process.

stochastic processes that satisfy equations of the form

    φ(B)Xt = Zt          (AR)     (10.35)

    Xt = θ(B)Zt          (MA)     (10.36)

    φ(B)Xt = θ(B)Zt      (ARMA),  (10.37)

where {Zt : t ∈ Z} is a white noise process.

This formality is introduced to provide the tools needed to briefly explore the connections between AR and MA models.

There is substantial overlap between the classes of moving average, auto-regressive, and ARMA models. In particular, it can be shown that processes in each of these classes can be closely approximated by high order processes of the other types.
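The backshift forms above are shorthand for difference equations and are therefore easy to simulate directly. The following Python sketch is illustrative code, not from the text; the sign conventions noted in the comments (and all function names and parameter values) are assumptions made for the example.

```python
import random

# Sketch of (10.35)-(10.37) as difference equations, assuming the conventions
# phi(B) = 1 - sum_i alpha_i B^i and theta(B) = 1 + sum_j beta_j B^j, so that
# the ARMA form reads X_t = sum_i alpha_i X_{t-i} + Z_t + sum_j beta_j Z_{t-j}.
def simulate_arma(alphas, betas, n, seed=1):
    """Simulate n steps of an ARMA(p, q) process driven by Gaussian white noise."""
    rng = random.Random(seed)
    p, q = len(alphas), len(betas)
    x, z = [], []
    for t in range(n):
        z.append(rng.gauss(0.0, 1.0))
        ar_part = sum(alphas[i] * x[t - 1 - i] for i in range(min(p, t)))
        ma_part = sum(betas[j] * z[t - 1 - j] for j in range(min(q, t)))
        x.append(ar_part + z[t] + ma_part)
    return x

ar1 = simulate_arma([0.7], [], 200)        # AR(1):     phi(B)Xt = Zt
ma1 = simulate_arma([], [0.4], 200)        # MA(1):     Xt = theta(B)Zt
arma11 = simulate_arma([0.7], [0.4], 200)  # ARMA(1,1): phi(B)Xt = theta(B)Zt
```

One routine covers all three classes because AR and MA are simply the q = 0 and p = 0 special cases of the ARMA recursion.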

10.5: Moving Average Processes

Consider an MA process represented with the polynomial backshift operator θ(B) as in (10.36). Suppose now that there exists a power series

    θ^{−1}(B) = 1 − Σ_{i=1}^{∞} βi B^i

such that the power series θ(B)θ^{−1}(B) converges to 1 for B in some region in the complex plane that contains the unit circle. That is, all roots of the MA backshift operator θ(B) must lie outside the unit circle. Then the MA process can be 'inverted' to produce an infinite auto-regressive process

    θ^{−1}(B)Xt = Zt                       (10.38)

or, equivalently,

    Xt − Σ_{i=1}^{∞} βi Xt−i = Zt.         (10.39)

Given that the invertibility condition is satisfied, the process defined by (10.39) is stochastically indistinguishable from the process that satisfies (10.36). Such a process is called an invertible MA process.

Note that the invertibility condition for MA processes is analogous to the stationarity condition for AR processes; both conditions can be expressed in terms of the roots of the corresponding backshift operator. As we have just argued, when the MA backshift operator is invertible, the process can be represented as an infinite AR process. On the other hand, when the AR operator has all its roots outside the unit circle, the process is stationary and the AR operator can be inverted so that the process can be represented as an infinite moving average.

A stationary AR process can therefore be approximated with arbitrary precision by truncating its infinite MA representation at some suitable point. Similarly, an invertible MA process can be well approximated by a high order AR process. Also, it is obvious that stationary and invertible ARMA processes can be closely approximated by either a high order AR or a high order MA process simply by inverting and truncating the appropriate backshift operator.

10.5.7 Regime-dependent Auto-regressive Processes. Regime-dependent auto-regressive processes, or 'RAMs,' are nonlinear AR processes introduced into climate research by Zwiers and von Storch [453].

The idea is that the dynamics of a stochastic process Xt are controlled by an external process Y. The RAM has the form

    Xt = α_{0,k} + Σ_{i=1}^{p} α_{i,k} Xt−i + Z_{t,k},     (10.40)

where k = 1, . . . , K identifies one of K regimes. Within each regime the process behaves as an AR process of some order no greater than p. The dynamics in each regime are forced by their own white noise process. The choice of regime k at any given time t depends on the external state variable Y(t). The regime k is set to l when Y(t) ∈ [Tl−1, Tl]. The 'thresholds' are chosen as part of the model fitting process. In principle, other nonlinear dependencies of k on Y(t) could be specified, but the above formulation is piecewise linear, which makes the estimation easier.

A RAM was used to model the SST index of the Southern Oscillation [453]. Two external factors were analysed, namely the intensity of the Indian monsoon, with K = 2, and the strength of the Southwest Pacific circulation, with K = 3. It was found that the probability of a warm or cold event of the Southern Oscillation did indeed seem to depend on the state of the external variable Y(t).
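The inversion argument can be checked numerically. The sketch below is hypothetical illustrative code, not from the text: it simulates an invertible MA(1) process Xt = Zt + βZt−1 with |β| < 1, for which the weights in (10.39) work out to βi = −(−β)^i, and shows that truncating the infinite AR representation at a modest order recovers the driving white noise almost exactly.

```python
import random

# Sketch: invert an MA(1), X_t = Z_t + beta*Z_{t-1} with |beta| < 1 (the root
# of theta(B) = 1 + beta*B then lies outside the unit circle). The AR(infinity)
# weights in (10.39) are beta_i = -(-beta)**i, so truncating the expansion
# recovers Z_t from the observed X_t alone, up to an O(|beta|**order) error.
def simulate_ma1(beta, n, seed=0):
    """Return an MA(1) series and the innovations that generated it."""
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n + 1)]
    x = [z[t + 1] + beta * z[t] for t in range(n)]
    return x, z[1:]

def recover_innovations(x, beta, order):
    """Estimate Z_t via the truncated inverted representation (10.39)."""
    w = [-((-beta) ** i) for i in range(1, order + 1)]  # beta_1, ..., beta_order
    z_hat = []
    for t in range(len(x)):
        s = sum(w[i] * x[t - 1 - i] for i in range(min(order, t)))
        z_hat.append(x[t] - s)
    return z_hat

beta = 0.5
x, z = simulate_ma1(beta, 500)
z_hat = recover_innovations(x, beta, order=30)
# after a short burn-in the truncation error is tiny (order 0.5**30 here)
err = max(abs(a - b) for a, b in zip(z[100:], z_hat[100:]))
```

This is exactly the "inverting and truncating" idea of the text: a high order AR recursion standing in for an invertible MA process.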

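A RAM of the form (10.40) is straightforward to simulate once the thresholds are fixed. The following Python sketch uses K = 2 AR(1) regimes selected by thresholding an illustrative external process Y(t); all parameter values, and the choice of a deterministic cycle for Y, are hypothetical and chosen only for illustration.

```python
import math
import random

# Sketch of the RAM (10.40) with K = 2 AR(1) regimes chosen by thresholding
# an external state variable Y(t). All parameter values are hypothetical.
def simulate_ram(alpha0, alpha1, sigma, threshold, y, seed=2):
    """Regime k = 0 when y[t] < threshold, else k = 1; each regime has its
    own AR(1) coefficients and its own white noise process."""
    rng = random.Random(seed)
    x, regimes = [0.0], []
    for t in range(1, len(y)):
        k = 0 if y[t] < threshold else 1
        regimes.append(k)
        z = rng.gauss(0.0, sigma[k])  # regime-specific white noise Z_{t,k}
        x.append(alpha0[k] + alpha1[k] * x[t - 1] + z)
    return x, regimes

# illustrative external process: a slow cycle standing in for, say, a
# monsoon-intensity index
y = [math.sin(2 * math.pi * t / 100) for t in range(400)]
x, regimes = simulate_ram(alpha0=[0.0, 0.5], alpha1=[0.3, 0.9],
                          sigma=[1.0, 0.5], threshold=0.0, y=y)
```

Because each regime is linear in the AR coefficients, fitting reduces to ordinary least squares within each regime once the thresholds are chosen, which is the estimation convenience the text refers to.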

11 Parameters of Univariate and Bivariate Time Series

Time series analysis deals with the estimation of the characteristic properties and times of stochastic processes. This can be achieved either in the time domain by studying the auto-covariance function, or in the frequency domain by studying the spectrum. This chapter introduces both approaches.¹

11.1 The Auto-covariance Function

11.1.0 Complex and Real Time Series. Note that, even though the auto-covariance and auto-correlation functions of both real and complex-valued time series are defined below, in this chapter we generally limit ourselves to real time series.

11.1.1 Definition. Let Xt be a real or complex-valued stationary process with mean µ. Then

    γ(τ) = E((Xt − µ)(Xt+τ − µ)*)
         = Cov(Xt, Xt+τ)

is called the auto-covariance function of Xt, and the normalized function,

    ρ(τ) = γ(τ)/γ(0),

is called the auto-correlation function of Xt. The argument τ is called the lag. Note that the auto-correlation and auto-covariance functions have the same shape but that they differ in their units; the covariance γ(τ) is expressed in the units of Xt² while the correlation ρ(τ) is dimensionless. When required for clarity, we will identify the auto-covariance and auto-correlation functions of process Xt as γxx and ρxx, respectively.

11.1.2 Auto-correlation and Persistence Forecast. The auto-correlation function can be interpreted as an indication of the skilfulness of the persistence forecast of Xt+τ that is constructed when an observation xt is 'persisted' τ time steps into the future. In this context ρ(τ) is the correlation between the forecast made at time t and the verifying realization that is obtained lag τ time steps later. The proportion of variance 'explained' by the persistence forecast is ρ²(τ).

As we saw in Chapter 10, a slowly varying time series, that is, one with relatively long memory, tends to retain anomalies of the same sign for several time steps. Persistence forecasts made for such a process are likely to be more successful than those made for a process with short memory. Thus we anticipate, and are soon able to show, that the auto-correlation function of a long memory process decays to zero more slowly than that of a short memory process.

11.1.3 Examples. The auto-correlation function of the Southern Oscillation Index, which is shown in Figure 1.3 in [1.2.2], is positive for lags shorter than 12 months and oscillates irregularly around zero at longer lags. We will see later that these irregular variations at large lags are typical of auto-correlation function estimates. They are probably the result of sampling variability and the true auto-correlation function is likely to be zero at large lags. Only the first part of the curve, in which the correlation function estimates lie beyond those levels that can be induced solely by sampling variation, is of interest. Figure 1.3 shows us that once a positive (or negative) SOI anomaly has developed it will, on average, persist for up to 12 months.

The interpretation is similar if Xt is a complex-valued process. For convenience, assume that Xt has mean zero. Note that we may express the auto-covariance function in polar coordinates as

    γ(τ) = |γ(τ)| e^{iφ(τ)}

¹ We recommend [60, 49, 68], and [195] for further reading.
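The quantities defined above are easy to estimate from a sample. The sketch below is illustrative code with invented helper names, not from the text: it computes the sample auto-correlation function of a simulated AR(1) process, for which ρ(τ) = α^τ holds in theory, and the implied persistence-forecast explained variance ρ²(τ).

```python
import random

# Sketch: sample auto-correlation function of a simulated AR(1) process
# X_t = alpha*X_{t-1} + Z_t, for which rho(tau) = alpha**tau in theory.
# The variance "explained" by a tau-step persistence forecast is rho(tau)**2.
def acf(x, max_lag):
    """Sample auto-correlation function rho(0..max_lag) of a real series."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n  # sample variance, gamma(0)
    rho = []
    for tau in range(max_lag + 1):
        c = sum((x[t] - mean) * (x[t + tau] - mean) for t in range(n - tau)) / n
        rho.append(c / c0)
    return rho

rng = random.Random(0)
alpha = 0.8  # long-ish memory: rho(tau) decays slowly
x = [0.0]
for _ in range(5000):
    x.append(alpha * x[-1] + rng.gauss(0.0, 1.0))

rho = acf(x, max_lag=5)
skill = [r * r for r in rho]  # persistence-forecast explained variance
```

Rerunning with a smaller α shows the faster decay of ρ(τ), and hence the poorer persistence-forecast skill, expected of a short memory process.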