stationary, stochastic process at T consecutive times t₀, t₀ + 1, . . . , t₀ + T − 1, beginning at some arbitrary time t₀. The corresponding random variables will be denoted by X₁, . . . , X_T. We will also use the notation x′_t = x_t − x̄, t = 1, . . . , T, to represent the time series of deviations from the sample mean x̄ = (1/T) Σ_{t=1}^{T} x_t, and we will write X′_t, t = 1, . . . , T, and X̄ to represent the corresponding random variables.

12.1.1 Non-parametric Estimator. A non-parametric estimator of the auto-correlation function ρ(τ) is given by

    r(τ) = c(τ)/c(0)                                (12.1)

where c(τ) is the sample auto-covariance function

    c(τ) = (1/T) Σ_{t=|τ|+1}^{T} X′_{t−|τ|} X′_t.   (12.2)

The sample auto-covariance function is set to zero for |τ| ≥ T.

12.1.2 Properties of the Non-parametric Estimator. Kendall (see Section 7.7 of [220]) shows that estimator (12.1) can have substantial bias. In particular, if X_t is a white noise process, the bias is

    B(r(τ)) ≈ −1/T,

and when X_t is an AR(1) process with lag-1 correlation coefficient α₁,

    B(r(1)) ≈ −(1/T)(1 + 4α₁).

Thus, if there exists a p such that ρ(τ) is zero for τ greater than p, then

    Var(r(τ)) ≈ (1/T) (1 + 2 Σ_{l=1}^{p} ρ²(l))     (12.4)

for τ greater than p. This result can be used to conduct a rough and ready test of the null hypothesis that ρ(τ) = 0 at each lag τ as follows.

1 Assume that ρ(l) is zero for l ≥ τ.
2 Substitute r(l), 1 ≤ l < τ, into approximation (12.4) to obtain σ²_{r(τ)}, an estimate of the variance of r(τ).
3 Compare Z = r(τ)/σ_{r(τ)} with the critical values of the standard normal distribution (Appendix D).

Summation (12.4) is usually truncated at a ‘reasonable’ number of lags, say 20–25.

We emphasize that this test is based on asymptotic theory and thus is not exact. Also, the user needs to be aware of the effects of ‘multiplicity.’ When the test is conducted at the 5% significance level, rejection of the null hypothesis should be expected at 5% of the lags tested, even when it is true at all lags. None the less, this test does offer some guidance in the interpretation of the auto-correlation function. Statistical packages sometimes compute σ_{r(τ)} for every τ > 0 and display the approximate critical values ±2σ_{r(τ)} on a graph of r(τ).
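Estimator (12.1)–(12.2) is straightforward to implement. The following is a minimal sketch, not from the original text: the function name `sample_acf` and the use of NumPy are our own choices. Note that the divisor in (12.2) is T, not T − |τ|, which is what produces the small negative bias discussed above.

```python
import numpy as np

def sample_acf(x, max_lag):
    """Non-parametric ACF estimate r(tau) = c(tau)/c(0), where
    c(tau) = (1/T) * sum_{t=|tau|+1}^{T} x'_{t-|tau|} x'_t
    and x'_t are deviations from the sample mean, as in (12.1)-(12.2)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    xp = x - x.mean()                  # deviations from the sample mean
    c = np.zeros(max_lag + 1)
    for tau in range(max_lag + 1):
        if tau < T:                    # c(tau) is set to zero for |tau| >= T
            c[tau] = np.dot(xp[:T - tau], xp[tau:]) / T
    return c / c[0]                    # r(tau) = c(tau)/c(0)
```

Because c(0) bounds |c(τ)| (by the Cauchy–Schwarz inequality), the returned estimates always satisfy |r(τ)| ≤ 1 with this divisor-T convention.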

12.1: Non-parametric Estimation of the Auto-correlation Function 253
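The rough and ready test described above can be sketched in a few lines. This is an illustrative implementation under the same assumptions, with our own function names; 1.96 is the 5% two-sided critical value of the standard normal distribution.

```python
import numpy as np

def sample_acf(x, max_lag):
    """Minimal sample ACF: r(tau) = c(tau)/c(0), divisor T, as in (12.1)-(12.2)."""
    x = np.asarray(x, dtype=float)
    T, xp = len(x), x - x.mean()
    c = np.array([np.dot(xp[:T - k], xp[k:]) for k in range(max_lag + 1)]) / T
    return c / c[0]

def rough_acf_test(x, tau, z_crit=1.96):
    """Steps 1-3: assume rho(l) = 0 for l >= tau, estimate Var(r(tau)) by
    substituting r(l), 1 <= l < tau, into approximation (12.4), then compare
    Z = r(tau)/sigma_r(tau) with the standard normal critical values."""
    T = len(x)
    r = sample_acf(x, tau)
    sigma2 = (1.0 + 2.0 * np.sum(r[1:tau] ** 2)) / T   # approximation (12.4)
    z = r[tau] / np.sqrt(sigma2)
    return z, abs(z) > z_crit                          # (Z, reject H0 at 5%?)
```

A persistent series should reject H0 decisively at small lags; remember the multiplicity caveat above when scanning many lags.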

Bloomfield [49] points out a disadvantage of analysing the correlation structure of a time series in the time domain (as opposed to in the spectral domain, see Section 12.3): the estimated auto-correlation function has complex correlation structure of its own. Bartlett [31] derives the asymptotic covariance between auto-correlation function estimates at different lags, and Box and Jenkins [60] use this result to show that

    Cov(r(τ), r(τ + δ)) ≈ (1/T) Σ_{l=−∞}^{∞} ρ(l) ρ(l + δ).

If we have a process that is, for example, AR(1) with parameter α₁ > 0, this approximation gives

    Cor(r(τ), r(τ + δ)) ≈ α₁^δ

at large lags τ. That is, the correlations between the auto-correlation function estimates are roughly similar to those of the process itself. Consequently, when the process is persistent, the estimated auto-correlation function will vary slowly around zero even when the real auto-correlation function has decayed away to zero, and we need to be careful to avoid over-interpreting the estimated auto-correlation function.

Figure 12.1: Estimated auto-correlation functions computed from time series of length 240. The horizontal dashed lines indicate approximate critical values for testing the null hypothesis that ρ(τ) = 0 for all τ at the 5% significance level. Top: Estimated auto-correlation function for a time series generated from an AR(1) process with α₁ = 0.9. Bottom: Estimated auto-correlation function for a time series generated from an MA(10) process with β₁ = · · · = β₁₀ = 1.
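The Box and Jenkins covariance approximation can be evaluated numerically. The sketch below is our own construction: it truncates the infinite sum for an AR(1) process with the illustrative value α₁ = 0.9 and shows that neighbouring auto-correlation estimates remain strongly correlated, consistent with the slow variation described above.

```python
import numpy as np

def bartlett_cov_large_lag(rho, delta, T, l_max=2000):
    """Box-Jenkins large-lag approximation
    Cov(r(tau), r(tau+delta)) ~ (1/T) * sum_{l=-inf}^{inf} rho(l) rho(l+delta),
    with the sum truncated at |l| <= l_max."""
    l = np.arange(-l_max, l_max + 1)
    return np.sum(rho(l) * rho(l + delta)) / T

# AR(1) auto-correlation function: rho(l) = alpha1 ** |l|
alpha1 = 0.9
rho = lambda l: alpha1 ** np.abs(l)
T = 240

cov0 = bartlett_cov_large_lag(rho, 0, T)   # approximate Var(r(tau)) at large tau
cov5 = bartlett_cov_large_lag(rho, 5, T)   # covariance of estimates 5 lags apart
```

The implied covariance decays only slowly with the lag separation δ, which is why the sample ACF of a persistent process meanders around zero rather than scattering independently.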

12.1.3 Example: Auto-correlation Function Estimates. Figure 12.1 shows some examples of auto-correlation function estimates computed from simulated time series of length 240. The function displayed in the upper panel was computed from a time series generated from an AR(1) process with parameter α₁ = 0.9 (see [10.3.3]); that in the lower panel was generated from an MA(10) process with parameters β₁ = · · · = β₁₀ = 1 (see [10.5.1]). The two standard deviation critical values estimated with (12.4) (assuming ρ(τ) is zero for all nonzero τ) are also displayed.

As we would expect from an AR(1) process, the estimated auto-correlation function in the upper

The sample auto-correlation function displayed in the lower panel behaves somewhat differently. It decays to zero in about 10 lags and then varies about zero on a shorter time scale than the auto-correlation function shown in the upper panel (compare with the MA(10) time series shown in the lower panel of Figure 10.15).

While the auto-correlation function estimates are informative, it would be difficult to identify precisely the order or type of the generating process from only this display. We address the problem of process identification more fully in Section 12.2.
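A simulation in the spirit of Figure 12.1 can be sketched as follows. This is our own reconstruction, not the book's code: the random seed, burn-in length, and generators are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 240

def sample_acf(x, max_lag):
    """Sample ACF r(tau) = c(tau)/c(0), divisor T, as in (12.1)-(12.2)."""
    x = np.asarray(x, dtype=float)
    n, xp = len(x), x - x.mean()
    c = np.array([np.dot(xp[:n - k], xp[k:]) for k in range(max_lag + 1)]) / n
    return c / c[0]

# AR(1) with alpha1 = 0.9 (cf. the upper panel); 200-step burn-in discards transients
z = rng.standard_normal(T + 200)
ar1 = np.zeros(T + 200)
for t in range(1, T + 200):
    ar1[t] = 0.9 * ar1[t - 1] + z[t]
ar1 = ar1[200:]

# MA(10) with beta1 = ... = beta10 = 1 (cf. the lower panel):
# each value is the sum of 11 consecutive innovations
e = rng.standard_normal(T + 10)
ma10 = np.array([e[t:t + 11].sum() for t in range(T)])

r_ar = sample_acf(ar1, 40)
r_ma = sample_acf(ma10, 40)

# +/- 2 sigma critical values under H0: rho(tau) = 0 for all nonzero tau,
# i.e. (12.4) with p = 0
crit = 2.0 / np.sqrt(T)
```

Plotting `r_ar` and `r_ma` against lag with horizontal lines at ±`crit` reproduces the qualitative behaviour of Figure 12.1: the AR(1) estimate decays slowly, while the MA(10) estimate drops to near zero after about 10 lags.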