
(5.2.4) is named stochastic trend. Consider, for example, the deterministic trend model with m = 0 and e(t) ~ N(0, 1)

y(t) = 0.1t + e(t)   (5.2.5)

and the stochastic trend model

y(t) = 0.1 + y(t - 1) + e(t),  y(0) = 0   (5.2.6)

As can be seen from Figure 5.1, both graphs look similar. In general, however, the stochastic trend model can deviate from the deterministic trend for a long time.
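The two models can be compared numerically. The following sketch (sample size and seed are arbitrary choices) simulates equations (5.2.5) and (5.2.6) with the same noise sequence:

```python
import numpy as np

rng = np.random.default_rng(0)     # fixed seed for reproducibility
T = 41                             # t = 0, 1, ..., 40 as in Figure 5.1
t = np.arange(T)
e = rng.standard_normal(T)         # e(t) ~ N(0, 1)

# Deterministic trend (5.2.5): y(t) = 0.1*t + e(t)
y_det = 0.1 * t + e

# Stochastic trend (5.2.6): y(t) = 0.1 + y(t - 1) + e(t), y(0) = 0
y_sto = np.zeros(T)
for i in range(1, T):
    y_sto[i] = 0.1 + y_sto[i - 1] + e[i]
```

Averaged over many noise realizations, both series grow as 0.1t; however, the variance of the stochastic trend grows linearly with t, which is why a single realization can wander away from the deterministic line for long stretches.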

A stochastic trend implies that the process is I(1). Then the lag polynomial (5.1.3) can be represented in the form


Figure 5.1 Deterministic and stochastic trends: A - equation (5.2.5), B - equation (5.2.6).


A_p(L) = (1 - L) A_{p-1}(L)   (5.2.7)

Similarly, the process I(2) has the lag polynomial

A_p(L) = (1 - L)^2 A_{p-2}(L)   (5.2.8)

and so on. The standard procedure for testing for the presence of a unit root in a time series is the Dickey-Fuller method [1, 2]. This method is implemented in major econometric software packages (see Section 5.5).
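The idea behind the Dickey-Fuller test can be illustrated with a minimal sketch (not the full test: a practical implementation adds a constant and lagged differences, and the statistic must be compared with Dickey-Fuller critical values rather than Student's t table). Here the difference Δy(t) is regressed on y(t - 1), and the t-statistic of the slope is computed; the function name df_statistic is ours:

```python
import numpy as np

def df_statistic(y):
    """t-statistic of rho in the regression dy(t) = rho * y(t-1) + e(t).
    Under the unit-root hypothesis, rho = 0."""
    dy = np.diff(y)
    ylag = y[:-1]
    rho = ylag @ dy / (ylag @ ylag)        # OLS slope estimate
    resid = dy - rho * ylag
    s2 = resid @ resid / (len(dy) - 1)     # residual variance
    se = np.sqrt(s2 / (ylag @ ylag))       # standard error of rho
    return rho / se

rng = np.random.default_rng(1)
eps = rng.standard_normal(500)
walk = np.cumsum(eps)                      # random walk: has a unit root
ar1 = np.zeros(500)                        # stationary AR(1) with a1 = 0.5
for i in range(1, 500):
    ar1[i] = 0.5 * ar1[i - 1] + eps[i]

stat_walk = df_statistic(walk)             # near zero: cannot reject unit root
stat_ar1 = df_statistic(ar1)               # strongly negative: reject unit root
```

For the stationary AR(1) series the statistic is far below any plausible critical value, while for the random walk it stays small in magnitude.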

Seasonal effects may play an important role in the properties of time series. Sometimes there is a need to eliminate these effects in order to focus on the stochastic specifics of the process. Various differencing filters can be used for achieving this goal [2]. In other cases, the seasonal effect itself may be the object of interest. The general approach for handling seasonal effects is to introduce dummy parameters D(s, t), where s = 1, 2, ..., S and S is the number of seasons. For example, S = 12 is used for modeling monthly effects. The parameter D(s, t) equals 1 in a specific season s and equals zero in all other seasons. The seasonal extension of an ARMA(p, q) model has the following form

y(t) = a_1 y(t - 1) + a_2 y(t - 2) + ... + a_p y(t - p) + e(t)
     + b_1 e(t - 1) + b_2 e(t - 2) + ... + b_q e(t - q) + Σ_{s=1}^{S} d_s D(s, t)   (5.2.9)

Note that forecasting with the model (5.2.9) requires estimating (p + q + S) parameters.
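A sketch of the dummy construction: the matrix D below encodes D(s, t) for S = 12 monthly seasons, and the coefficients d_s are recovered by least squares. To keep the example short, the ARMA part of (5.2.9) is omitted, so y(t) is a pure seasonal mean plus noise; all parameter values are illustrative:

```python
import numpy as np

S, T = 12, 120                        # 12 seasons, 10 years of monthly data
season = np.arange(T) % S             # season index of each observation

# Dummy matrix: D[t, s] = 1 if observation t falls in season s, else 0
D = np.zeros((T, S))
D[np.arange(T), season] = 1.0

rng = np.random.default_rng(2)
d_true = rng.uniform(-1.0, 1.0, S)    # "true" seasonal coefficients d_s
y = D @ d_true + 0.1 * rng.standard_normal(T)

# OLS estimate of the S seasonal coefficients
d_hat, *_ = np.linalg.lstsq(D, y, rcond=None)
```

Since each observation belongs to exactly one season, the OLS estimate of d_s is simply the sample mean of y within season s.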

5.3 CONDITIONAL HETEROSKEDASTICITY

So far, we have considered random processes with the white noise (4.2.6), which is characterized by a constant unconditional variance. Conditional variance has not been discussed yet. In general, processes with unspecified conditional variance are named homoskedastic.

Many random time series are not well described by the IID process. In particular, there may be strong positive autocorrelation in squared asset returns. This means that large returns (either positive or negative) follow large returns. In this case, it is said that the return volatility is clustered. The effect of volatility clustering is also called autoregressive conditional heteroskedasticity (ARCH). It should be noted that small autocorrelation in squared returns does not necessarily mean that there is no volatility clustering. Strong outliers that lead to high values of skewness and kurtosis may lower autocorrelation. If these outliers are removed from the sample, volatility clustering may become apparent [3].

Several models in which past shocks contribute to the current volatility have been developed. Generally, they are rooted in the ARCH(m) model, where the conditional variance is a weighted sum of m squared lagged returns

s^2(t) = v + a_1 e^2(t - 1) + a_2 e^2(t - 2) + ... + a_m e^2(t - m)   (5.3.1)

In (5.3.1), e(t) ~ N(0, s^2(t)), v > 0, and a_1, ..., a_m ≥ 0. Unfortunately,

application of the ARCH(m) process to modeling financial time series often requires polynomials of high order m. A more efficient model is the generalized ARCH (GARCH) process. The GARCH(m, n) process combines the ARCH(m) process with the AR(n) process for lagged variance

s^2(t) = v + a_1 e^2(t - 1) + a_2 e^2(t - 2) + ... + a_m e^2(t - m)
       + b_1 s^2(t - 1) + b_2 s^2(t - 2) + ... + b_n s^2(t - n)   (5.3.2)

The simple GARCH(1, 1) model is widely used in applications

s^2(t) = v + a e^2(t - 1) + b s^2(t - 1)   (5.3.3)
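A simulation of (5.3.3) makes the clustering effect visible; the parameter values below are chosen arbitrarily, subject to a + b < 1. The sample variance of e(t) settles near v/(1 - a - b), and squared returns are positively autocorrelated:

```python
import numpy as np

rng = np.random.default_rng(3)
v, a, b = 0.2, 0.1, 0.8              # illustrative parameters, a + b < 1
N = 200_000
e = np.zeros(N)
s2 = v / (1 - a - b)                 # start at the unconditional variance
for t in range(1, N):
    s2 = v + a * e[t - 1] ** 2 + b * s2          # recursion (5.3.3)
    e[t] = np.sqrt(s2) * rng.standard_normal()   # e(t) ~ N(0, s2(t))
```

The positive autocorrelation of e^2(t) is exactly the volatility clustering discussed above: a large shock raises s^2 for many subsequent periods.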

Equation (5.3.3) can be transformed into

s^2(t) = v + (a + b) s^2(t - 1) + a[e^2(t - 1) - s^2(t - 1)]   (5.3.4)

The last term in equation (5.3.4) is conditioned on the information available at time (t - 1) and has zero mean. This term can be treated as a shock to volatility. Therefore, the unconditional expectation of volatility for the GARCH(1, 1) model equals

E[s^2(t)] = v/(1 - a - b)   (5.3.5)

This implies that the GARCH(1, 1) process is weakly stationary when a + b < 1. The advantage of the stationary GARCH(1, 1) model is that it can be easily used for forecasting. Namely, the conditional expectation of volatility at time (t + k) equals [4]


E[s^2(t + k)] = (a + b)^k [s^2(t) - v/(1 - a - b)] + v/(1 - a - b)   (5.3.6)
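Equation (5.3.6) follows from iterating the one-step relation E[s^2(t + k)] = v + (a + b) E[s^2(t + k - 1)], and the two forms can be checked against each other numerically (the parameter values below are arbitrary):

```python
v, a, b = 0.1, 0.08, 0.9        # illustrative parameters, a + b < 1
s2_t = 2.0                      # assumed current conditional variance
sbar = v / (1 - a - b)          # unconditional variance (5.3.5)

ek = s2_t
for k in range(1, 11):
    ek = v + (a + b) * ek                         # one-step recursion
    closed = (a + b) ** k * (s2_t - sbar) + sbar  # closed form (5.3.6)
    assert abs(ek - closed) < 1e-12
```

As k grows, the forecast decays geometrically toward the unconditional level v/(1 - a - b).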

The GARCH(1, 1) model (5.3.4) can be rewritten as

s^2(t) = v/(1 - b) + a(e^2(t - 1) + b e^2(t - 2) + b^2 e^2(t - 3) + ...)   (5.3.7)

Equation (5.3.7) shows that the GARCH(1, 1) model is equivalent to

the infinite ARCH model with exponentially weighted coefficients.

This explains why the GARCH models are more efficient than the

ARCH models.
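The equivalence can be verified on a sample path: unrolling the GARCH(1, 1) recursion started from the no-shock level v/(1 - b) reproduces the truncated sum (5.3.7) exactly (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
v, a, b = 0.1, 0.1, 0.8
e2 = rng.standard_normal(500) ** 2       # a path of squared shocks e^2

# GARCH(1,1) recursion for s^2, started from v/(1 - b)
s2 = v / (1 - b)
for x in e2:
    s2 = v + a * x + b * s2

# Truncated ARCH form (5.3.7): v/(1 - b) + a * sum_i b^(i-1) e^2(t - i)
weights = b ** np.arange(len(e2))        # 1, b, b^2, ...
s2_arch = v / (1 - b) + a * (weights * e2[::-1]).sum()
```

The two values agree to floating-point precision, since the b^k discount makes the contribution of distant shocks negligible.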

Several GARCH models have been described in the econometric literature [1-3]. One popular GARCH(1, 1) model with a + b = 1 is called integrated GARCH (IGARCH). It has an autoregressive unit root. Therefore, the volatility of this process follows a random walk and can be easily forecast

E[s^2(t + k)] = s^2(t) + kv   (5.3.8)

IGARCH can be presented in the form

s^2(t) = v + (1 - l)e^2(t - 1) + l s^2(t - 1)   (5.3.9)

where 0 < l < 1. If v = 0, IGARCH coincides with the exponentially weighted moving average (EWMA)

s^2(t) = (1 - l) Σ_{i=1}^{n} l^{i-1} e^2(t - i)   (5.3.10)

Indeed, the n-period EWMA for a time series y(t) is defined as

z(t) = [y(t - 1) + l y(t - 2) + l^2 y(t - 3) + ... + l^{n-1} y(t - n)]/(1 + l + ... + l^{n-1})   (5.3.11)
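A direct implementation of (5.3.11) (the function name ewma is ours): the most recent observation y(t - 1) gets weight 1 and older observations are discounted by powers of l. With l = 1 the EWMA reduces to a plain n-period average:

```python
import numpy as np

def ewma(y, lam, n):
    """n-period EWMA (5.3.11) of the last n observations of y."""
    w = lam ** np.arange(n)               # weights 1, lam, ..., lam^(n-1)
    return (w * y[-1:-n - 1:-1]).sum() / w.sum()

y = np.array([1.0, 2.0, 3.0, 4.0])
z_flat = ewma(y, 1.0, 4)       # equal weights: plain average of 1..4
z_half = ewma(y, 0.5, 2)       # (4 + 0.5*3)/(1 + 0.5)
```

Dividing by the weight sum (1 + l + ... + l^{n-1}) normalizes the weights, which is the finite-sample counterpart of the (1 - l) factor in (5.3.10).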
