
Thus, the general result for MA(1) has the form

y(t)(1 - b1 L + b1^2 L^2 - b1^3 L^3 + ...) = e(t) (5.1.17)

Equation (5.1.17) can be viewed as an AR(∞) process, which illustrates that the MA model does depend on the past.
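The infinite autoregressive expansion in (5.1.17) can be checked numerically. Below is a minimal sketch, assuming an illustrative coefficient b1 = 0.5 and a truncation order of 30 (these values, the seed, and all variable names are choices made here, not from the text): the shocks e(t) are recovered from the observed y(t) by truncating the expansion.

```python
import numpy as np

rng = np.random.default_rng(0)
b1 = 0.5                      # |b1| < 1, so the expansion converges
e = rng.normal(size=5000)     # white noise
y = e.copy()
y[1:] += b1 * e[:-1]          # MA(1): y(t) = e(t) + b1*e(t-1)

# Truncate the expansion (5.1.17): e(t) ~ sum_i (-b1)^i * y(t-i)
k = 30                        # truncation order; b1^30 is negligible
e_hat = np.zeros_like(y)
for t in range(len(y)):
    lags = min(t + 1, k)
    coeffs = (-b1) ** np.arange(lags)
    e_hat[t] = coeffs @ y[t::-1][:lags]

# After a burn-in of k observations the recovered shocks match the true ones
err = np.max(np.abs(e_hat[k:] - e[k:]))
print(err)
```

The recovery works precisely because |b1| < 1; with |b1| > 1 the weights (-b1)^i would grow without bound, which is the invertibility condition discussed next.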

The MA(q) model is invertible if it can be transformed into an AR(∞) model. It can be shown that MA(q) is invertible if all solutions to the equation

1 + b1 z + b2 z^2 + ... + bq z^q = 0 (5.1.18)

are outside the unit circle. In particular, MA(1) is invertible if |b1| < 1. If the process y(t) has a non-zero mean value m, then the AR(1) model can be presented in the following form


Time Series Analysis

y(t) - m = a1[y(t - 1) - m] + e(t), that is, y(t) = c + a1 y(t - 1) + e(t) (5.1.19)

In (5.1.19), intercept c equals:

c = m(1 - a1) (5.1.20)

The general AR(p) model with a non-zero mean has the following

form

Ap(L) y(t) = c + e(t), c = m(1 - a1 - ... - ap) (5.1.21)

Similarly, the intercept can be included in the general moving average model MA(q)

y(t) = c + Bq(L) e(t), c = m (5.1.22)

Note that the mean of the MA model coincides with its intercept because the mean of the white noise is zero.
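A quick simulated check of this statement; the intercept, coefficient, sample size, and seed below are arbitrary illustrative values, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
c, b1, n = 2.0, 0.5, 200_000        # illustrative values
e = rng.normal(size=n + 1)          # white noise with zero mean
y = c + e[1:] + b1 * e[:-1]         # MA(1) with intercept, cf. (5.1.22)
print(y.mean())                     # sample mean is close to c = 2.0
```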

5.1.3 AUTOCORRELATION AND FORECASTING

Now, let us introduce the autocorrelation function (ACF) for process y(t)

r(k) = g(k)/g(0) (5.1.23)

where g(k) is the autocovariance of order k

g(k) = E[(y(t) - m)(y(t - k) - m)] (5.1.24)

The autocorrelation functions may have some typical patterns, which

can be used for identification of empirical time series [2]. The obvious

properties of ACF are

r(0) = 1, -1 < r(k) < 1 for k ≠ 0 (5.1.25)

ACF is closely related to the ARMA parameters. In particular, for

AR(1)

r(1) = a1 (5.1.26)

The first-order ACF for MA(1) equals

r(1) = b1/(1 + b1^2) (5.1.27)

The right-hand side of expression (5.1.27) has the same value under the inverse transform b1 → 1/b1. For example, the two processes


x(t) = e(t) + 2e(t - 1)

y(t) = e(t) + 0.5e(t - 1)

have the same r(1). Note, however, that y(t) is an invertible process

while x(t) is not.
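This pair can be verified numerically; a brief sketch (the seed and sample length are arbitrary assumptions made here):

```python
import numpy as np

# Theoretical rho(1) = b1/(1 + b1^2) from (5.1.27): identical for b1 and 1/b1
print(2.0 / (1 + 2.0**2), 0.5 / (1 + 0.5**2))   # 0.4 and 0.4

# Sample lag-1 autocorrelations of the two simulated processes
rng = np.random.default_rng(2)
e = rng.normal(size=100_001)
r1 = []
for b1 in (2.0, 0.5):
    x = e[1:] + b1 * e[:-1]
    x -= x.mean()
    r1.append((x[1:] @ x[:-1]) / (x @ x))   # lag-1 sample autocorrelation
print(r1)                                   # both near 0.4
```

Since the ACF cannot distinguish the two, identification procedures conventionally pick the invertible representation.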

ARMA modeling is widely used for forecasting. Consider a forecast of a variable y(t + 1) based on a set of n variables x(t) known at moment t. This set can be just the past values of y, that is, y(t), y(t - 1), ..., y(t - n + 1). Let us denote the forecast with ŷ(t + 1|t). The quality of the forecast is usually defined with some loss function. The mean squared error (MSE) is the conventional loss function in many applications

MSE(ŷ(t + 1|t)) = E[(y(t + 1) - ŷ(t + 1|t))^2] (5.1.28)

The forecast that yields the minimum of MSE turns out to be the expectation of y(t + 1) conditioned on x(t)

ŷ(t + 1|t) = E[y(t + 1)|x(t)] (5.1.29)

In the case of linear regression

y(t + 1) = b′x(t) + e(t) (5.1.30)

MSE is reduced to the ordinary least squares (OLS) estimate for b. For a sample with T observations,

b = [Σ_{t=1}^{T} x(t)x′(t)]^(-1) Σ_{t=1}^{T} x(t)y(t + 1) (5.1.31)
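For the scalar case x(t) = y(t), the OLS formula reduces to a ratio of two sums. A small sketch on simulated AR(1) data follows; the coefficient 0.6, the sample size, and the seed are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
a1, n = 0.6, 50_000
y = np.zeros(n)
for t in range(1, n):                 # simulate AR(1): y(t) = a1*y(t-1) + e(t)
    y[t] = a1 * y[t - 1] + rng.normal()

# OLS estimate (5.1.31) with the scalar regressor x(t) = y(t)
x, target = y[:-1], y[1:]
b = (x @ target) / (x @ x)
print(b)                              # close to a1 = 0.6
```

The fitted b then gives the one-step forecast ŷ(t + 1|t) = b·y(t).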

Another important concept in time series analysis is the maximum likelihood estimate (MLE) [2]. Consider the general ARMA model (5.1.12) with the white noise (4.2.6). The problem is how to estimate the ARMA parameters on the basis of given observations of y(t). The idea of MLE is to find a vector r′ = (a1, ..., ap, b1, ..., bq, s^2) that maximizes the likelihood function for the given observations (y1, y2, ..., yT)

f_{1, 2, ..., T}(y1, y2, ..., yT; r′) (5.1.32)

The likelihood function (5.1.32) has the sense of the probability of observing the data sample (y1, y2, ..., yT). In this approach, the ARMA model and the probability distribution for the white noise should be specified first. Often the normal distribution leads to reasonable estimates even if the real distribution is different. Furthermore, the likelihood function must be calculated for the chosen ARMA model. Finally, the components of the vector r′ must be estimated. The latter step may require a sophisticated numerical optimization technique. Details of the implementation of MLE are discussed in [2].
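As a toy illustration of the MLE idea for an AR(1) model with Gaussian noise: the conditional likelihood, the crude grid search, and all parameter values below are simplifying assumptions made for this sketch; real software replaces the grid with a proper numerical optimizer.

```python
import numpy as np

rng = np.random.default_rng(4)
a1_true, s_true, n = 0.5, 1.5, 5000
y = np.zeros(n)
for t in range(1, n):                 # simulate AR(1) data
    y[t] = a1_true * y[t - 1] + s_true * rng.normal()

def neg_loglik(a1, s, y):
    """Negative Gaussian conditional log-likelihood of AR(1)."""
    r = y[1:] - a1 * y[:-1]
    return 0.5 * len(r) * np.log(2 * np.pi * s**2) + (r @ r) / (2 * s**2)

# Crude grid search for the maximizing pair (a1, s)
a_grid = np.arange(-0.95, 0.96, 0.05)
s_grid = np.arange(0.5, 3.01, 0.05)
best = min((neg_loglik(a, s, y), a, s) for a in a_grid for s in s_grid)
print(best[1], best[2])               # near the true values (0.5, 1.5)
```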

5.2 TRENDS AND SEASONALITY

Finding trends is an important part of time series analysis. The presence of a trend implies that the time series has no mean reversion. Moreover, the mean and variance of a trending process depend on the sample. A time series with a trend is called non-stationary. If a

process y(t) is stationary, its mean, variance, and autocovariance are

finite and do not depend on time. This implies that autocovariance

(5.1.24) depends only on the lag parameter k. Such a definition of

stationarity is also called covariance-stationarity or weak stationarity

because it does not impose any restrictions on the higher moments of

the process. Strict stationarity implies that higher moments also do

not depend on time. Note that any MA process is covariance-stationary. However, the AR(p) process is covariance-stationary only if the

roots of its polynomial are outside the unit circle.
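This root condition is easy to check numerically. A sketch for an AR(2) polynomial follows; the two coefficient pairs are illustrative assumptions chosen here.

```python
import numpy as np

# Stationarity check for AR(2): y(t) = a1*y(t-1) + a2*y(t-2) + e(t).
# All roots of 1 - a1*z - a2*z^2 = 0 must lie outside the unit circle.
results = []
for a1, a2 in [(0.5, 0.3), (0.9, 0.2)]:
    roots = np.roots([-a2, -a1, 1.0])   # coefficients, highest degree first
    results.append(bool(np.all(np.abs(roots) > 1.0)))
print(results)   # [True, False]: the second pair gives a root inside the circle
```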

It is important to discern a deterministic trend from a stochastic trend. They have a different nature, yet their graphs may sometimes look very similar [1]. Consider first the AR(1) model with a deterministic trend

y(t) - m - ct = a1[y(t - 1) - m - c(t - 1)] + e(t) (5.2.1)

Let us introduce z(t) = y(t) - m - ct. Then equation (5.2.1) has the solution

z(t) = a1^t z(0) + Σ_{i=1}^{t} a1^(t-i) e(i) (5.2.2)

where z(0) is a pre-sample starting value of z. Obviously, the random shocks are transitory if |a1| < 1. The trend incorporated in the definition of z(t) is deterministic when |a1| < 1. However, if a1 = 1, then equation (5.2.1) has the form


y(t) = c + y(t - 1) + e(t) (5.2.3)

The process (5.2.3) is named the random walk with drift. In this case, equation (5.2.2) is reduced to

z(t) = z(0) + Σ_{i=1}^{t} e(i) (5.2.4)
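The contrast between transitory and permanent shocks can be seen by simulating both cases; the parameter values, seed, and the detrending step below are illustrative choices made here.

```python
import numpy as np

rng = np.random.default_rng(5)
n, c, a1 = 10_000, 0.1, 0.5
e = rng.normal(size=n)

# Trend-stationary case, |a1| < 1 in (5.2.2): shocks die out
z = np.zeros(n)
for t in range(1, n):
    z[t] = a1 * z[t - 1] + e[t]

# Random walk with drift (5.2.3): shocks accumulate permanently
w = np.cumsum(c + e)
u = w - c * np.arange(1, n + 1)   # remove the deterministic drift part

print(np.var(z))   # stays near 1/(1 - a1^2), about 1.33
print(np.var(u))   # far larger: the stochastic trend's variance grows with t
```

Detrending u removes the deterministic drift but not the stochastic trend, which is why the two cases require different treatments (detrending versus differencing).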
