than 1, we obtain

   σ²_st(ω) = (σ²/2D) (bD² − 1) [E − (r_v)² + 4 r_v cos(2πω) − 2 r_u r_v],

   σ²_pro,cs(ω) = (σ²/2D) [E + (r_v)² − 4 r_v sin(2πω)],

where E = 1 + (r_u)² − 2 r_u cos(2πω). The phase φ_cs(ω) (for b = 1) is

   φ_cs(ω) = π/(2k).

This time the sine coefficient tends to be greater than the cosine coefficient, and the standing wave's crest or trough is correctly placed at π/2.

If the parameters of the process are such that the cosine and sine spectra are equal at some frequency ω₀, that is, ω₀ is a solution of bD² = 1, then the standing wave spectral density becomes zero at ω₀ and σ²_fw,cs(ω₀) = σ²_pro,cs(ω₀) = bσ². One final point for the example is that in all three scenarios just discussed, the propagating variance densities are positive. With general parameter values the phase is

   φ_cs(ω) = (1/2k) tan⁻¹[(r_v cos(2πω) − r_u r_v) / (E − (r_v)²)].

In this case, the positiveness of the propagating variance densities (11.99) is no longer guaranteed. For example, setting b = 0.1, r_u = 0.5, r_v = −0.8 results in a small negative variance density for the travelling waves at frequency ω = −0.17.

In summary, Hayashi's formalism does generally yield reasonable results, but, as the previous example illustrates, caution is advised.


12 Estimating Covariance Functions and Spectra

12.0.0 Overview. The purpose of this chapter is to describe some of the methods used to estimate the second moments, the auto- and cross-covariance functions, and the power and cross-spectra, of the weakly stationary ergodic processes that were described in the previous two chapters. It is not our intention to be exhaustive, but rather to introduce some of the concepts associated with the estimation problem. We leave it to the reader to explore these concepts further in the sources that we cite.

12.0.1 Parametric and Non-parametric Approaches. We will take one of two approaches when inferring the properties of stochastic processes from limited observational evidence.

Parametric estimators assume that the observed process is generated by a member of a specific class of processes, such as the class of auto-regressive processes (AR processes; see Section 10.3). Some parametric estimation techniques further restrict the type of process considered by adding distributional assumptions. For example, it is often assumed that a process is normal, meaning that all joint distributions of arbitrary numbers of elements Xt1, . . . , Xtn are multivariate normal. The parameters of such a process are estimated by finding the member of the class of models that best fits the observational evidence. The fitting methods, such as the method of moments, least squares, or maximum likelihood estimation, are the same as those used in other branches of statistics. Once a model has been fitted, estimates of the auto-covariance function and power spectrum are obtained simply by deriving them from the fitted process.

The fitting of auto-regressive models to observed time series is discussed in Section 12.2. Auto-regressive and maximum entropy spectral estimation are briefly discussed in [12.3.19].

Non-parametric estimators make fewer assumptions about the generating process. In fact, the methods generally used in time series analysis assume only ergodicity and weak stationarity. Methods described in this chapter, aside from methods that specifically assume a time-domain model, are non-parametric.

Note that 'non-parametric' tends to have an interpretation in time series analysis that is different from that in other areas of statistics. In other areas of statistics, non-parametric inference methods often use exact distributional results that are obtained through heavy reliance on sampling assumptions, such as the assumption that the observations are realizations of a collection of independent and identically distributed random variables. Time series statisticians must replace the independence assumption with something considerably weaker (e.g., weak stationarity and ergodicity) and therefore can generally only appeal to asymptotic theory when making inferences about the characteristics of a stochastic process.

12.0.2 Outline. The second moments of an ergodic weakly stationary process have equivalent representations in the time (the auto-correlation function) and frequency (the spectral density functions) domains. We describe non-parametric and parametric approaches to the estimation of the auto-correlation function of a univariate process in Sections 12.1 and 12.2, respectively. Estimation of the corresponding spectral density function is described in Section 12.3. In this case, most of our effort is devoted to the non-parametric approach (see [12.3.1–20]; we discuss the parametric approach briefly in [12.3.21]) because the non-parametric estimators can be coupled with an effective asymptotic theory to make reliable inferences about the spectrum. Similar tools are not available with the parametric approach to spectral estimation. The ideas discussed in Sections 12.1 and 12.3 are extended to multivariate processes in Section 12.4, where we briefly describe a non-parametric estimator of the cross-correlation function, and Section 12.5, where we


describe non-parametric estimators of the cross-spectral density functions.

12.1 Non-parametric Estimation of the Auto-correlation Function

12.1.0 Outline. We begin by describing the usual non-parametric product-moment estimator of the auto-correlation function in [12.1.1]. The bias and variance of this estimator are examined in [12.1.2], some examples are considered in [12.1.3,4], and a simple test of the null hypothesis that the observed process is white is described in [12.1.5]. The partial auto-correlation function, which is useful when fitting parametric models to time series, is briefly described in [12.1.6,7].

Throughout this chapter we use the notation x1, . . . , xT to represent a sample obtained by observing a single realization of an ergodic, weakly

   B_r(τ) ≈ −(1/T) [((1 + α₁)/(1 − α₁)) (1 − α₁^|τ|) + 3|τ| α₁^|τ|],  |τ| ≥ 1.   (12.3)

Equation (12.1) is sometimes inflated by the factor T/(T − |τ|) to adjust for bias, but this is not generally considered helpful because it also inflates the variability of the estimator (recall [5.3.7] and Figure 5.3).

Bartlett [31], working under the assumption that Xt is a stationary normal process, derived a general asymptotic result about the variability of r(τ) that is useful for interpreting the sample auto-covariance function. He showed that

   Var(r(τ)) ≈ (1/T) Σ_{ℓ=−∞}^{∞} [ρ²(ℓ) + ρ(ℓ + τ)ρ(ℓ − τ)
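As a concrete illustration, the quantities discussed here can be sketched in a few lines of Python. The function names are ours; the AR(1) bias expression follows the printed form of (12.3), and Bartlett's sum includes only the two leading terms quoted in the text, so both should be treated as illustrative rather than definitive.

```python
import numpy as np

def sample_acf(x, max_lag):
    """Product-moment estimator r(tau) of the auto-correlation function.

    Uses the normalization by T (not T - |tau|); the T/(T - |tau|)
    inflation is avoided because it increases the estimator's variability.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    xc = x - x.mean()
    c0 = np.sum(xc * xc) / T                 # sample variance c(0)
    r = np.empty(max_lag + 1)
    for tau in range(max_lag + 1):
        r[tau] = np.sum(xc[: T - tau] * xc[tau:]) / T / c0
    return r

def ar1_acf_bias(alpha1, tau, T):
    """Approximate bias of r(tau) for an AR(1) process, cf. (12.3)."""
    a = abs(tau)
    return -((1 + alpha1) / (1 - alpha1) * (1 - alpha1 ** a)
             + 3 * a * alpha1 ** a) / T

def bartlett_var(rho, tau, T, n_terms=200):
    """Bartlett's asymptotic variance of r(tau) for a normal process.

    `rho` returns the true auto-correlation at an integer lag. Only the
    two leading terms of Bartlett's sum quoted in the text are kept.
    """
    total = 0.0
    for l in range(-n_terms, n_terms + 1):
        total += rho(l) ** 2 + rho(l + tau) * rho(l - tau)
    return total / T

# Example: a simulated AR(1) process with alpha_1 = 0.5,
# for which rho(tau) = 0.5**|tau|.
rng = np.random.default_rng(0)
alpha1, T = 0.5, 400
x = np.empty(T)
x[0] = rng.standard_normal()
for t in range(1, T):
    x[t] = alpha1 * x[t - 1] + rng.standard_normal()

r = sample_acf(x, 5)
print(r[1], ar1_acf_bias(alpha1, 1, T))
```

For the simulated series, r(1) scatters around the true value 0.5 with a small negative bias of the size predicted by (12.3), and Bartlett's formula gives an approximate standard error for judging whether an estimated lag correlation differs meaningfully from zero.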