

Hint: Calculate d(W^(n+1)) using Ito's lemma.

42 Stochastic Processes

3. Solve the Ornstein-Uhlenbeck equation that describes the mean-

reverting process in which the solution fluctuates around its

mean

dX = -mX dt + s dW, m > 0

Hint: introduce the variable Y = X exp(mt).

*4. Derive the integral (4.4.13) directly from the definition of the

Ito integral (4.4.9).

Chapter 5

Time Series Analysis

Time series analysis has become an indispensable theoretical tool in

financial and economic research. Section 5.1 is devoted to the com-

monly used univariate autoregressive and moving average models.

The means for modeling trends and seasonality effects are described

in Section 5.2. The processes with non-stationary variance (condi-

tional heteroskedasticity) are discussed in Section 5.3. Finally,

the specifics of the multivariate time series are introduced in

Section 5.4.

5.1 AUTOREGRESSIVE AND MOVING AVERAGE

MODELS

5.1.1 AUTOREGRESSIVE MODEL

First, we shall consider a univariate time series y(t) for a process

that is observed at moments t = 0, 1, ..., n (see, e.g., [1, 2]). The time

series in which the observation at moment t depends linearly on

several lagged observations at moments t - 1, t - 2, ..., t - p

y(t) = a1 y(t - 1) + a2 y(t - 2) + ... + ap y(t - p) + e(t), t > p (5.1.1)

is called the autoregressive process of order p, or AR(p). The term e(t) in

(5.1.1) is the white noise that satisfies the conditions (4.2.6). The lag


operator, L^p y(t) = y(t - p), is often used for describing time series. Note that

L^0 y(t) = y(t). Equation (5.1.1) in terms of the lag operator has the form

Ap(L) y(t) = e(t) (5.1.2)

where

Ap(L) = 1 - a1 L - a2 L^2 - ... - ap L^p (5.1.3)

The operator Ap(L) is called the AR polynomial in the lag operator of

order p. Let us consider AR(1) that starts with a random shock. Its

definition implies that

y(0) = e(0), y(1) = a1 y(0) + e(1),

y(2) = a1 y(1) + e(2) = a1^2 e(0) + a1 e(1) + e(2), ...

Hence, by induction,

y(t) = Σ_{i=0}^{t} a1^i e(t - i) (5.1.4)

Mean and variance of AR(1) equal, respectively

E[y(t)] = 0, Var[y(t)] = s^2/(1 - a1^2) (5.1.5)

Obviously, the contributions of the "old" noise converge with time to

zero when |a1| < 1. As a result, this process does not drift too far from

its mean. This feature is named mean reversion.
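Mean reversion and the variance formula (5.1.5) are easy to check numerically. The sketch below is an illustration rather than part of the derivation; it assumes Gaussian white noise and numpy, and the parameter values a1 = 0.5, s = 1 are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
a1, s, n = 0.5, 1.0, 100_000   # arbitrary illustrative parameters, |a1| < 1

# AR(1) started with a random shock: y(0) = e(0), y(t) = a1*y(t-1) + e(t)
e = rng.normal(0.0, s, n)
y = np.empty(n)
y[0] = e[0]
for t in range(1, n):
    y[t] = a1 * y[t - 1] + e[t]

# For |a1| < 1 the "old" noise decays, so the sample mean stays near zero
# and the sample variance approaches s**2 / (1 - a1**2) from (5.1.5).
print(abs(y.mean()) < 0.05)                      # near-zero mean
print(abs(y.var() - s**2 / (1 - a1**2)) < 0.05)  # variance close to 4/3
```

Setting a1 = 1 in the same loop produces a random walk, whose sample variance keeps growing with t instead of settling near a fixed value.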

The process with a1 = 1 is called the random walk

y(t) = y(t - 1) + e(t) (5.1.6)

In this case, equation (5.1.4) reduces to

y(t) = Σ_{i=0}^{t} e(t - i)

The noise contributions to the random walk do not weaken with time.

Therefore, the random walk does not exhibit mean reversion. Now,

consider the process that represents the first difference

x(t) = y(t) - y(t - 1) = e(t) (5.1.7)
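A quick numerical check of (5.1.6) and (5.1.7), as a sketch assuming Gaussian shocks and numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
e = rng.normal(0.0, 1.0, n)

# Random walk (a1 = 1): y(t) is the cumulative sum of all past shocks,
# so noise contributions never decay.
y = np.cumsum(e)

# The first difference x(t) = y(t) - y(t-1) recovers the white noise e(t),
# turning the non-mean-reverting walk into a mean-reverting series.
x = np.diff(y)
print(np.allclose(x, e[1:]))  # True
```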

Obviously, past noise has only transitory character for the process

x(t). Therefore, x(t) is mean-reverting. Some processes must be


differenced several times in order to exclude non-transitory noise

shocks. The processes differenced d times are named integrated of

order d and denoted as I(d). The differencing operator is used for

describing an I(d) process

Di^d = (1 - L^i)^d, i, d = ..., -2, -1, 0, 1, 2, ... (5.1.8)

If an I(d) process can be reduced to an AR(p) process by applying the

differencing operator, it is named an ARI(p, d) process and has the form:

D1^d y(t) - a1 D1^d y(t - 1) - ... - ap D1^d y(t - p) = e(t), t ≥ p + d

(5:1:9)

Note that differencing a time series d times reduces the number of

independent variables by d, so that the total number of independent

variables in ARI(p, d) within the sample with n observations equals

n - p - d.
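As an illustrative sketch (numpy-based, with an I(2) series built from white noise for demonstration), differencing d times both removes the permanent shocks and shortens the sample by d observations, consistent with the variable count noted above:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1_000, 2
e = rng.normal(0.0, 1.0, n)

# Build an I(2) series by integrating white noise twice.
y = np.cumsum(np.cumsum(e))

# Applying the first-difference operator d times recovers the noise and
# removes d observations from the sample.
x = np.diff(y, n=d)
print(len(y) - len(x))        # 2, i.e. d observations lost
print(np.allclose(x, e[d:]))  # True: only transitory shocks remain
```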

The unit root is another notion widely used for discerning perman-

ent and transitory effects of random shocks. It is based on the roots of

the characteristic polynomial for the AR(p) model. For example,

AR(1) has the characteristic polynomial

1 - a1 z = 0 (5.1.10)

If a1 = 1, then z = 1 and the characteristic polynomial has the

unit root. In general, the characteristic polynomial roots can have

complex values. The solution to equation (5.1.10) is outside the unit

circle (i.e., |z| > 1) when |a1| < 1. It can be shown that AR(p) is

stationary when all roots of its characteristic equation

1 - a1 z - a2 z^2 - ... - ap z^p = 0 (5.1.11)

lie outside the unit circle.
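The root condition can be checked with numpy's polynomial root finder; in this sketch the coefficients a1 = 0.5, a2 = 0.3 are arbitrary illustrative values.

```python
import numpy as np

# Characteristic equation of AR(2): 1 - a1*z - a2*z**2 = 0.
# np.roots expects coefficients ordered from the highest power down.
a1, a2 = 0.5, 0.3
roots = np.roots([-a2, -a1, 1.0])
print(np.all(np.abs(roots) > 1.0))  # True: all roots outside the unit circle

# AR(1) with a1 = 1 (the random walk): 1 - z = 0 has the unit root z = 1.
unit = np.roots([-1.0, 1.0])
print(unit)  # [1.]
```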

5.1.2 MOVING AVERAGE MODELS

A model more general than AR(p) contains both lagged observa-

tions and lagged noise

y(t) = a1 y(t - 1) + a2 y(t - 2) + ... + ap y(t - p) + e(t)

+ b1 e(t - 1) + b2 e(t - 2) + ... + bq e(t - q) (5.1.12)

This model is called the autoregressive moving average model of order

(p,q), or simply ARMA(p,q). Sometimes modeling of empirical data


requires AR(p) with a rather high number p. Then, ARMA(p, q) may

be more efficient in that the total number of its terms (p + q) needed

for given accuracy is lower than the number p in AR(p). ARMA(p, q)

can be expanded into the integrated model, ARIMA(p, d, q), similar

to the expansion of AR(p) into ARI(p, d). Neglecting the autoregres-

sive terms in ARMA(p, q) yields a "pure" moving average model

MA(q)

y(t) = e(t) + b1 e(t - 1) + b2 e(t - 2) + ... + bq e(t - q) (5.1.13)

MA(q) can be presented in the form

y(t) = Bq(L) e(t) (5.1.14)

where Bq(L) is the MA polynomial in the lag operator

Bq(L) = 1 + b1 L + b2 L^2 + ... + bq L^q (5.1.15)

The moving average model does not depend explicitly on the lagged

values of y(t). Yet, it is easy to show that this model implicitly

incorporates the past. Consider, for example, the MA(1) model

y(t) = e(t) + b1 e(t - 1) (5.1.16)

with e(0) = 0. For this model,

y(1) = e(1), y(2) = e(2) + b1 e(1) = e(2) + b1 y(1),
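These substitutions can be verified numerically; the sketch below (numpy, with an arbitrary illustrative coefficient b1 = 0.4) carries the recursion one step further:

```python
import numpy as np

rng = np.random.default_rng(3)
b1 = 0.4                       # arbitrary illustrative coefficient
e = rng.normal(0.0, 1.0, 4)
e[0] = 0.0                     # e(0) = 0, as assumed in the text

# MA(1): y(t) = e(t) + b1*e(t-1), giving y(1), y(2), y(3).
y = e[1:] + b1 * e[:-1]

# Eliminating the lagged noise expresses y(t) through lagged observations:
# y(2) = e(2) + b1*y(1),  y(3) = e(3) + b1*y(2) - b1**2*y(1), ...
print(np.isclose(y[1], e[2] + b1 * y[0]))                  # True
print(np.isclose(y[2], e[3] + b1 * y[1] - b1**2 * y[0]))   # True
```

Continuing the substitution shows that MA(1) implicitly depends on the entire past of y(t), with weights proportional to powers of b1 that decay when |b1| < 1.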
