lag-15 and then varies randomly about zero at time scales that are typical of an AR(1) process with α1 = 0.9. As anticipated, the large lag behaviour of the estimated auto-correlation function is quite similar to that of the process itself (compare the upper panel in Figure 12.1 with the time series shown in the lower panel of Figure 10.7). Note that the estimated auto-correlation function can take large excursions from zero even when the real auto-correlation function (not shown) is effectively zero. Some of these excursions extend well beyond the approximate critical values.

12.1.4 Example: Bias. An impression of the bias of the auto-correlation function estimator (12.1) can be obtained from a small Monte Carlo experiment (see Section 6.3). One thousand samples of each length T = 15, 30, 60, and 120 were generated from AR(1) processes with parameters α1 = 0.3, 0.6 and 0.9. Each time series was used to estimate the auto-correlation function at lag-1 and lag-10. The results are given in the following tables.
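A Monte Carlo experiment of this kind is easy to reproduce. The sketch below is an illustration only (NumPy assumed; not code from the book): it simulates AR(1) series with α1 = 0.9 and averages the lag-1 estimate over many replicates, and the average falls noticeably short of the true value 0.9 for short series.

```python
import numpy as np

def ar1_series(alpha, T, rng, burn=200):
    """Simulate T values of X_t = alpha * X_{t-1} + Z_t, Z_t ~ N(0, 1)."""
    x, out = 0.0, []
    for t in range(T + burn):
        x = alpha * x + rng.standard_normal()
        if t >= burn:           # discard spin-up so the series is near-stationary
            out.append(x)
    return np.array(out)

def r_lag(x, tau):
    """Auto-correlation estimator in the spirit of (12.1): remove the sample
    mean; the denominator sums over all T products."""
    x = x - x.mean()
    return float(np.dot(x[:-tau], x[tau:]) / np.dot(x, x))

rng = np.random.default_rng(1)
for T in (15, 30, 60, 120):
    est = [r_lag(ar1_series(0.9, T, rng), 1) for _ in range(1000)]
    # the mean estimate sits well below the true lag-1 correlation of 0.9,
    # and the shortfall shrinks slowly as T grows
    print(T, round(float(np.mean(est)), 2))
```

Averaging over replicates isolates the bias from the sampling variability of any single estimate.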

12: Estimating Covariance Functions and Spectra

The lag-1 correlation

                   Sample length
α1    ρ(1)      15     30     60     120
0.3   0.3      0.16   0.23   0.27   0.28
0.6   0.6      0.36   0.47   0.54   0.57
0.9   0.9      0.54   0.72   0.81   0.86

The lag-10 correlation

                   Sample length
α1    ρ(10)     15     30     60     120
0.3   0.0     −0.06  −0.04  −0.03  −0.01
0.6   0.01    −0.07  −0.08  −0.05  −0.02
0.9   0.35    −0.15  −0.14   0.02   0.18

We see that the auto-correlation estimates are negatively biased. The bias is small when the true correlation is small but it is large when the true correlation is large, especially when the time series is short. The bias decreases slowly with increasing sample size. Comparison with Kendall's approximation for the bias (12.3) shows that the latter breaks down when τ is large relative to T, and also that α1 affects the goodness of the approximation.

The denominator in (12.1) is summed over more products than the numerator, but this accounts only for some of the bias. Inflating the estimated correlations by multiplying with T/(T − |τ|) to adjust for the difference in the number of products summed does not eliminate the bias. Most of the bias arises because it is necessary to remove the sample mean when estimating the auto-covariance function.

12.1.5 A Test for Serial Correlation. We introduced the Durbin–Watson statistic (8.24) in [8.3.16] as a regression diagnostic that is used to check for serial correlation in regression residuals. We mention it again here to remind readers that it can be used in contexts other than the fitting of regression models. The statistic

    d = Σ_{t=1}^{T−1} (X_{t+1} − X_t)² / Σ_{t=1}^{T} X_t²

is essentially the sample variance of the first differences of the time series divided by the sample variance of the undifferenced time series. Subsection [8.3.16] gives references for the derivation of the distribution of d under the null hypothesis that the time series was obtained from a white noise process. Samples taken from white noise processes will have values of d near 2. Since first differencing filters out low-frequency variability and enhances high-frequency variability (cf. [11.4.4] and Figure 11.9), time series from processes more persistent than white noise will tend to have values of d less than 2. Samples from processes that have relatively more high-frequency variability than white noise will tend to have values of d greater than 2.¹ Bloomfield [49] interprets d as an index of the 'smoothness' of the time series.

¹ An AR(1) process with negative parameter α1 is an example of a weakly stationary process with more high-frequency variability than is expected in white noise. A time series that has been differenced to remove trend will also show excessive high-frequency variability.

12.1.6 Estimating the Partial Auto-correlation Function. The partial auto-correlation function α_{τ,τ} (see equation (11.13) in [11.1.10]) is sometimes a useful aid for identifying the order of AR model that reasonably approximates the behaviour of a time series. In particular, if X_t is an AR(p) process, then α_{τ,τ} is zero for all τ greater than p.

The partial auto-correlation function can be estimated recursively by substituting the estimated auto-correlation function r(τ) (12.1) into equations (11.12, 11.13). Box and Jenkins [60] note that the recursion is sensitive to rounding errors, particularly if the parameter estimates are near the boundaries of the admissible region for weakly stationary processes. Quenouille [326] showed that if X_t is an AR(p) process, then

    Var(α̂_{τ,τ}) ≈ 1/T   for τ > p.    (12.5)

12.1.7 Example: Partial Auto-correlation Function Estimates. Partial auto-correlation function estimates for the examples discussed in [12.1.3] are displayed in Figure 12.2. The horizontal lines depict the two standard deviation critical values (12.5).

The estimated partial auto-correlation function displayed in the upper panel is essentially zero beyond lag-1, a characteristic that (correctly) suggests that these time series came from an AR(1) process.

In contrast, the estimated partial auto-correlation shown in the lower panel is significantly different from zero at lags 1, 2, and 11. The estimate agrees quite well with the theoretical partial auto-correlation function for the MA(10) process that generated the data, which has a sequence of damped peaks at lags τ = 1, 11, 21, . . . .
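The recursive estimation described in [12.1.6] can be sketched with the Durbin–Levinson recursion, which is what substituting r(τ) into Yule–Walker-type relations amounts to. The code below is an illustrative implementation under my own indexing conventions (NumPy assumed), since equations (11.12, 11.13) are not reproduced in this section; it is not the book's code.

```python
import numpy as np

def acf(x, max_lag):
    """r(1), ..., r(max_lag) with the sample mean removed, as in (12.1)."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

def pacf(x, max_lag):
    """Estimate alpha_{tau,tau} recursively (Durbin-Levinson)."""
    rho = np.concatenate(([1.0], acf(x, max_lag)))   # rho[k] = r(k), rho[0] = 1
    phi = np.zeros((max_lag + 1, max_lag + 1))
    phi[1, 1] = rho[1]
    for k in range(2, max_lag + 1):
        num = rho[k] - sum(phi[k - 1, j] * rho[k - j] for j in range(1, k))
        den = 1.0 - sum(phi[k - 1, j] * rho[j] for j in range(1, k))
        phi[k, k] = num / den                 # the lag-k partial auto-correlation
        for j in range(1, k):                 # update the order-k AR coefficients
            phi[k, j] = phi[k - 1, j] - phi[k, k] * phi[k - 1, k - j]
    return np.array([phi[k, k] for k in range(1, max_lag + 1)])

# AR(1) demo: the estimate is large at lag 1 and stays within the
# +/- 2/sqrt(T) band implied by (12.5) at higher lags
rng = np.random.default_rng(2)
x = np.empty(2000)
x[0] = rng.standard_normal()
for t in range(1, 2000):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
print(np.round(pacf(x, 5), 2))
```

As Box and Jenkins caution, the recursion can lose accuracy near the boundary of the stationarity region, so long series fitted to nearly non-stationary processes deserve extra care.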

12.2: Identifying and Fitting Auto-regressive Models

The other approach we will discuss uses one of two objective order determining criteria (AIC, developed by Akaike [6], and BIC, developed by Schwarz [360]; see [12.2.10,11]) to select the model. These criteria use penalized measures of the goodness-of-fit where the size of the penalty depends upon the number of estimated parameters in the model. The user's connection with this modelling process is not as close, and thus it is possible that an inappropriate model is fitted to a time series with some sort of pathological behaviour. On the other hand, since these methods are objective, they can be applied systematically when careful hand fitting of AR models is impractical.²
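A penalized goodness-of-fit criterion of this kind can be sketched as follows. This hypothetical illustration (NumPy assumed; not the book's procedure) fits AR(p) models by Yule–Walker for p = 0, ..., p_max and minimizes one common variant of AIC, T·log σ̂²(p) + 2p, where σ̂²(p) is the fitted noise variance.

```python
import numpy as np

def fit_ar_yule_walker(x, p):
    """Fit AR(p) via the Yule-Walker equations; return (coefficients, noise variance)."""
    x = x - x.mean()
    T = len(x)
    r = np.array([np.dot(x[:T - k], x[k:]) / T for k in range(p + 1)])
    if p == 0:
        return np.array([]), r[0]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])           # AR coefficients
    return a, r[0] - np.dot(a, r[1:])       # fitted noise variance

def select_order_aic(x, p_max):
    """Choose p minimizing AIC(p) = T * log(sigma2_hat(p)) + 2p."""
    T = len(x)
    aic = [T * np.log(fit_ar_yule_walker(x, p)[1]) + 2 * p
           for p in range(p_max + 1)]
    return int(np.argmin(aic))

# demo: data from an AR(2) process; AIC should land at or close to p = 2
rng = np.random.default_rng(3)
x = np.zeros(1000)
for t in range(2, 1000):
    x[t] = 0.6 * x[t - 1] + 0.2 * x[t - 2] + rng.standard_normal()
print(select_order_aic(x, 6))
```

The 2p penalty is what keeps the criterion from always preferring the largest model, since σ̂²(p) can only decrease as parameters are added; BIC replaces 2p with a penalty that grows with log T and so selects more parsimonious models for long series.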

Figure 12.2: Estimated partial auto-correlation functions computed from simulated time series of

Both approaches require model fitting tools, a