function ρ_p such that

  \rho_p(\tau) = \rho(\tau) \quad \text{for all } |\tau| \le p.   (11.11)

The parameters α_p = (α_{p,1}, . . . , α_{p,p}) of the approximating process of order p are recursively related to those of the approximating process of order p − 1 by

  \alpha_{p,k} = \alpha_{p-1,k} - \alpha_{p,p}\,\alpha_{p-1,p-k}, \quad k = 1, \ldots, p-1,   (11.12)

where

  \alpha_{p,p} = \frac{\rho(p) - \sum_{k=1}^{p-1} \alpha_{p-1,k}\,\rho(p-k)}{1 - \sum_{k=1}^{p-1} \alpha_{p-1,p-k}\,\rho(p-k)}.   (11.13)

The recursion is started by setting α_{1,1} = ρ(1).² A proof can be found in Appendix M.

²In general, when the process is not normal, the value of α_{p,p} does depend upon the specific realization.

  \Sigma_{1,2,3} = \sigma_X^2 \begin{pmatrix} 1 & \alpha_1 & \alpha_1^2 \\ \alpha_1 & 1 & \alpha_1 \\ \alpha_1^2 & \alpha_1 & 1 \end{pmatrix},

which has inverse

  \Sigma_{1,2,3}^{-1} = \frac{1}{\sigma_X^2 (1-\alpha_1^2)} \begin{pmatrix} 1 & -\alpha_1 & 0 \\ -\alpha_1 & 1+\alpha_1^2 & -\alpha_1 \\ 0 & -\alpha_1 & 1 \end{pmatrix}.

Substituting into (2.34), we obtain the joint density function for these three random variables:

  f_{1,2,3}(y_1, y_2, y_3) = \frac{1}{(2\pi\sigma_X^2)^{3/2}\,(1-\alpha_1^2)} \exp\!\left( -\frac{y_1^2 + (1+\alpha_1^2)\,y_2^2 + y_3^2 - 2\alpha_1 y_1 y_2 - 2\alpha_1 y_2 y_3}{2\,(1-\alpha_1^2)\,\sigma_X^2} \right).
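As a quick numerical sanity check (not part of the text; the value α₁ = 0.6, σ_X² = 1, and all variable names below are illustrative assumptions), direct multiplication confirms that the stated tridiagonal matrix is the inverse of Σ_{1,2,3}, and that |Σ_{1,2,3}| = σ_X⁶(1 − α₁²)², which is the determinant behind the density's normalizing constant:

```python
# Sanity check of the matrix algebra above, for illustrative values
# alpha_1 = 0.6 and sigma_X^2 = 1 (any |alpha_1| < 1 would do).
a, s2 = 0.6, 1.0

# Sigma_{1,2,3}: the lag-|i-j| auto-covariances of an AR(1) process.
sigma = [[s2 * a ** abs(i - j) for j in range(3)] for i in range(3)]

# The inverse as stated: a tridiagonal matrix scaled by 1/(s2*(1 - a^2)).
c = 1.0 / (s2 * (1.0 - a ** 2))
inv = [[c, -c * a, 0.0],
       [-c * a, c * (1.0 + a ** 2), -c * a],
       [0.0, -c * a, c]]

# Sigma * Sigma^{-1} should be the identity matrix.
prod = [[sum(sigma[i][k] * inv[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
is_identity = all(abs(prod[i][j] - (i == j)) < 1e-12
                  for i in range(3) for j in range(3))

# |Sigma| should equal s2^3 * (1 - a^2)^2, which supplies the
# (1 - alpha_1^2) factor in the density's normalizing constant.
m = sigma
det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
       - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
       + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(is_identity)                                        # True
print(abs(det - s2 ** 3 * (1.0 - a ** 2) ** 2) < 1e-12)   # True
```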

11: Parameters of Univariate and Bivariate Time Series


To understand the α_{2,2} partial correlation coefficient, we now derive the joint density function of Y_1 and Y_3 conditional upon Y_2. Recall from [2.8.6] that

  f_{1,3|2}(y_1, y_3 | y_2) = \frac{f_{1,2,3}(y_1, y_2, y_3)}{f_2(y_2)}.

But

  f_2(y_2) = \frac{1}{(2\pi\sigma_X^2)^{1/2}}\, e^{-y_2^2/(2\sigma_X^2)},

therefore

  f_{1,3|2}(y_1, y_3 | y_2) = \frac{1}{2\pi\sigma_X^2\,(1-\alpha_1^2)} \exp\!\left( -\frac{y_1^2 + (1+\alpha_1^2)\,y_2^2 + y_3^2 - 2\alpha_1 y_1 y_2 - 2\alpha_1 y_2 y_3 - (1-\alpha_1^2)\,y_2^2}{2\,(1-\alpha_1^2)\,\sigma_X^2} \right)

  = \frac{1}{2\pi\sigma_X^2\,(1-\alpha_1^2)} \exp\!\left( -\frac{(y_1 - \alpha_1 y_2)^2 + (y_3 - \alpha_1 y_2)^2}{2\,\sigma_X^2(1-\alpha_1^2)} \right)

  = \frac{1}{\left(2\pi\sigma_X^2(1-\alpha_1^2)\right)^{1/2}} \exp\!\left( -\frac{(y_1 - \alpha_1 y_2)^2}{2\,\sigma_X^2(1-\alpha_1^2)} \right) \times \frac{1}{\left(2\pi\sigma_X^2(1-\alpha_1^2)\right)^{1/2}} \exp\!\left( -\frac{(y_3 - \alpha_1 y_2)^2}{2\,\sigma_X^2(1-\alpha_1^2)} \right)   (11.14)

  = f_{1|2}(y_1 | y_2)\, f_{3|2}(y_3 | y_2).

Thus Y_1 and Y_3 are conditionally independent [2.8.5], since the joint conditional density function can be factored as the product of marginal conditional density functions. Hence the conditional correlation between Y_1 and Y_3 is also zero, which is exactly what we obtain for α_{2,2} if we solve (11.13) and (11.12) recursively.
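Running the recursion (11.12), (11.13) makes this concrete. The sketch below (the function name and the choice α₁ = 0.6 are my own illustrative assumptions, not from the text) feeds in the AR(1) auto-correlation function ρ(τ) = α₁^τ and recovers α_{1,1} = ρ(1) and α_{p,p} ≈ 0 for all p > 1:

```python
def partial_autocorr(rho, pmax):
    """Solve (11.13) and (11.12) recursively, given the auto-correlation
    function rho(0), ..., rho(pmax) as a list; returns alpha[p][k]."""
    alpha = {1: {1: rho[1]}}     # the recursion starts at alpha_{1,1} = rho(1)
    for p in range(2, pmax + 1):
        prev = alpha[p - 1]
        num = rho[p] - sum(prev[k] * rho[p - k] for k in range(1, p))
        den = 1.0 - sum(prev[p - k] * rho[p - k] for k in range(1, p))
        a_pp = num / den         # alpha_{p,p}, equation (11.13)
        alpha[p] = {k: prev[k] - a_pp * prev[p - k]
                    for k in range(1, p)}            # equation (11.12)
        alpha[p][p] = a_pp
    return alpha

# AR(1) process with alpha_1 = 0.6, so rho(tau) = 0.6 ** tau for tau >= 0.
a1 = 0.6
rho = [a1 ** tau for tau in range(6)]
alpha = partial_autocorr(rho, 5)
print(alpha[1][1])   # alpha_{1,1} = rho(1) = 0.6
print(alpha[2][2])   # ~0, matching the conditional independence of Y1 and Y3
```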

Equation (11.14) is the key to understanding the true meaning here. Since X_t is an AR(1) process, α_1 Y_2 = α_1 X_t is the best one-step ahead forecast of X_{t+1}. Similarly, α_1 X_t is the best one-step back 'forecast' of X_{t−1}.³ Equation (11.14) shows that f_{1,3|2} is the joint distribution of the one-step ahead and one-step back forecast errors. If the process were actually AR of order p > 1, the error of the one-step ahead forecast made only with X_t would still depend upon X_{t−1}, and that of the one-step back forecast upon X_{t+1}. In general, α_{τ,τ} is the conditional correlation between the error of a one-step ahead forecast of X_t made with X_{t−1}, . . . , X_{t−τ+1} and the error of a one-step back forecast of X_{t−τ} made with the same random variables. When X_t is AR(p) and normal, these errors become independent for lags τ > p.
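This forecast-error interpretation is easy to check by simulation. The sketch below (the coefficient α₁ = 0.6, the sample size, and the seed are arbitrary choices for illustration) simulates an AR(1) process and confirms that the lag-2 ahead and back forecast errors are empirically uncorrelated, consistent with α_{2,2} = 0:

```python
import random

# Simulate an AR(1) process X_t = a1 * X_{t-1} + Z_t and estimate the
# correlation between the one-step ahead forecast error X_{t+1} - a1 * X_t
# and the one-step back forecast error X_{t-1} - a1 * X_t.  The coefficient,
# sample size, and seed are arbitrary choices for this illustration.
random.seed(1)
a1, n = 0.6, 100_000

x = [0.0]
for _ in range(n + 1):
    x.append(a1 * x[-1] + random.gauss(0.0, 1.0))

ahead = [x[t + 1] - a1 * x[t] for t in range(1, n)]  # errors forecasting X_{t+1}
back = [x[t - 1] - a1 * x[t] for t in range(1, n)]   # errors 'forecasting' X_{t-1}

def corr(u, v):
    """Sample correlation coefficient of two equal-length sequences."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((p - mu) * (q - mv) for p, q in zip(u, v))
    var_u = sum((p - mu) ** 2 for p in u)
    var_v = sum((q - mv) ** 2 for q in v)
    return cov / (var_u * var_v) ** 0.5

print(abs(corr(ahead, back)) < 0.05)   # True: the two errors are uncorrelated
```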

11.1.13 Auto-covariance Functions of Filtered Series. An operator that replaces a process X_t with the process

  Y_t = \sum_{k=-\infty}^{\infty} a_k X_{t+k},

where \sum_{k=-\infty}^{\infty} |a_k| < \infty, is called a linear filter. Filters are used to remove, or isolate, variation on certain time scales from a process (see Section 17.5). The auto-covariance function of the filtered process is

  \gamma_{yy}(\tau) = \sum_{k,l=-\infty}^{\infty} a_k a_l^{*}\, \gamma_{xx}(\tau + k - l).   (11.15)
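For a filter with finitely many real weights, (11.15) reduces to a finite double sum. The sketch below (the difference filter, AR(1) coefficient, and variance are my own illustrative choices, not values from the text) applies it to Y_t = X_{t+1} − X_t and cross-checks the result against the hand expansion Var(X_{t+1} − X_t) = 2γ_xx(0) − 2γ_xx(1):

```python
# Sketch of (11.15) for a filter with finitely many real weights a_k
# (so the conjugate a_l^* equals a_l).
def gamma_yy(tau, weights, gamma_xx):
    """Auto-covariance of the filtered series Y_t = sum_k a_k X_{t+k}."""
    return sum(ak * al * gamma_xx(tau + k - l)
               for k, ak in weights.items()
               for l, al in weights.items())

a1, s2 = 0.6, 2.0

def gamma_xx(tau):
    # Auto-covariance of an AR(1) process: gamma(tau) = s2 * a1 ** |tau|.
    return s2 * a1 ** abs(tau)

# Difference filter Y_t = X_{t+1} - X_t, i.e. a_0 = -1 and a_1 = +1.
diff = {0: -1.0, 1: 1.0}

# Expanding Var(X_{t+1} - X_t) by hand gives 2*gamma(0) - 2*gamma(1),
# which (11.15) reproduces (up to floating-point rounding).
print(gamma_yy(0, diff, gamma_xx))
print(2.0 * gamma_xx(0) - 2.0 * gamma_xx(1))
```

The same function evaluates γ_yy at any lag; as expected for an auto-covariance function, γ_yy(τ) = γ_yy(−τ).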

11.2 The Spectrum

11.2.0 General. The variance of a time series {X_1, X_2, . . . , X_T} of finite length may be attributed to different time scales by expanding it into a finite series of trigonometric functions⁴ (cf. Equation (C.1)):

  X_t = A_0 + \sum_{k=1}^{(T-1)/2} \left( a_k \cos\frac{2\pi k t}{T} + b_k \sin\frac{2\pi k t}{T} \right).   (11.16)

Equation (11.16) distributes the variance in the time series,

  \frac{1}{T} \sum_{t=1}^{T} (X_t - \bar{X})^2 = \sum_{k=1}^{(T-1)/2} \frac{1}{2}\left( a_k^2 + b_k^2 \right),   (11.17)

to the periodic components in the expansion shown in (11.16). The elements (a_k^2 + b_k^2) are collectively referred to as the periodogram of the finite time series {X_1, . . . , X_T} when they are multiplied by T/4 (cf. [12.3.1]).

Unfortunately, it is not readily apparent that the expansion in (11.16) is related to the spectrum of an infinite time series or a stationary process,