[Figure: panel titled "AR(2) α1 = 0.9, α2 = −0.8"; vertical axis from −0.1 to 0.1, horizontal (lag) axis from 0 to 40.]

finite time series {x_1, ..., x_n} in its trigonometric expansion (see [12.3.1] and (C.1))

x_t = \sum_k \left[ a_k \cos\frac{2\pi kt}{n} + b_k \sin\frac{2\pi kt}{n} \right]   (16.19)

and then estimating X_t^H with

x_t^H = \sum_k \left[ a_k \cos\left(\frac{2\pi kt}{n} + \frac{\pi}{2}\right) + b_k \sin\left(\frac{2\pi kt}{n} + \frac{\pi}{2}\right) \right]   (16.20)
      = \sum_k \left[ b_k \cos\frac{2\pi kt}{n} - a_k \sin\frac{2\pi kt}{n} \right].

This estimate matches equations (16.4) and (16.5).

Figure 16.3: Cross-correlation functions between the input series and corresponding Hilbert transform shown in Figures 16.2 and 16.4. The cross-covariance functions have been estimated from finite time series.
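As a numerical sketch (mine, not the book's), equations (16.19) and (16.20) translate directly into code: estimate the coefficients a_k and b_k with a discrete Fourier transform, then rebuild the series with every harmonic phase-shifted by π/2. The NumPy `rfft` coefficient conventions are assumed.

```python
import numpy as np

def hilbert_trig(x):
    """Frequency-domain Hilbert transform estimate per (16.19)-(16.20).

    Expands x in harmonics a_k cos(2*pi*k*t/n) + b_k sin(2*pi*k*t/n),
    then returns sum_k [b_k cos(...) - a_k sin(...)].
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    t = np.arange(n)
    c = np.fft.rfft(x)            # c_k = sum_t x_t exp(-2*pi*i*k*t/n)
    xH = np.zeros(n)
    for k in range(1, c.size):
        # for real x: a_k = 2 Re(c_k)/n, b_k = -2 Im(c_k)/n
        # (the Nyquist term, if n is even, is not doubled)
        scale = 1.0 if (n % 2 == 0 and k == n // 2) else 2.0
        a_k = scale * c[k].real / n
        b_k = -scale * c[k].imag / n
        ang = 2 * np.pi * k * t / n
        xH += b_k * np.cos(ang) - a_k * np.sin(ang)   # shifted harmonic, (16.20)
    return xH

# a pure harmonic: under the shift in (16.20), cos maps to -sin
n = 200
t = np.arange(n)
xH = hilbert_trig(np.cos(2 * np.pi * 3 * t / n))
```

For a single harmonic cos(2πkt/n) the output is −sin(2πkt/n), and sin(2πkt/n) maps to cos(2πkt/n), which is exactly the π/2 phase shift used in (16.20).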

The frequency domain approach has two advantages over the time domain approach. First, it is not necessary to choose the filter length T. Second, it appears that data near the endpoints need not be treated specially. Thus the frequency domain approach seems to be more robust than the time domain approach. However, this is not really the case. The trigonometric expansion (16.19) implicitly assumes that the discrete finite

time series represents one chunk of a periodic process with period n + 1. This is generally not the case. The numbers {x_1, x_2, ...} are not a smooth continuation of {..., x_{n−1}, x_n}, and the shift (16.20) of the entire non-periodic time series transports the 'discontinuity' into the middle of the transformed time series. The problem will be more severe for shorter time series and longer time scales. As in spectral analysis, the problem can be reduced by using a data taper (cf. [12.3.8]).

Figure 16.4: A realization of an AR(2) process (solid) with α = (0.9, −0.8) and its Hilbert transform X_t^H (dashed) computed with (16.15) and T = 20.

The estimated cross-correlation function (Figure 16.3) has a maximum at lags 1 and 2, a zero at lag 3, a negative minimum at lag 5, and so forth. The results are virtually unchanged if a longer filter window with T > 20 is used.
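A minimal sketch of the tapering remedy. The split-cosine (Hann) taper used here is one common choice, assumed rather than taken from [12.3.8]: it forces both ends of the record toward zero, so the implicit periodic continuation in (16.19) no longer introduces a jump at the record boundaries.

```python
import numpy as np

n = 256
t = np.arange(n)
# a record that is not periodic over its length: trend plus oscillation
x = 0.01 * t + np.sin(2 * np.pi * 5 * t / n)

# Hann data taper: zero at both endpoints, one in the middle
w = 0.5 * (1.0 - np.cos(2 * np.pi * t / (n - 1)))
x_tapered = (x - x.mean()) * w     # remove the mean before tapering
```

After tapering, the first and last values are exactly zero, at the cost of down-weighting data near the endpoints.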

Again, we advise making plots of the input time series together with the estimated Hilbert transform to ensure that there are no unpleasant surprises.

16.2.4 Estimating the Hilbert Transform from a Finite Time Series. Two different approaches may be used to estimate the Hilbert transform of a finite time series (cf. Barnett [19]).
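The time-domain approach described next can be sketched as a truncated convolution filter. Equation (16.15) is not reproduced in this excerpt, so the coefficients below — 2/(πτ) at odd lags τ and zero at even lags, a standard discrete Hilbert filter — are an assumption; the sign convention is chosen to match the cos → −sin shift of (16.20).

```python
import numpy as np

def hilbert_filter(x, T):
    """Time-domain Hilbert transform estimate with a filter of half-length T.

    Assumed filter (the book's (16.15) may differ in detail):
    h_tau = 2/(pi*tau) for odd tau, 0 for even tau, applied as
    xH_t = sum_{tau=-T..T} h_tau * x_{t+tau}.
    Only T <= t < n - T is computed; the edges are left as NaN.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    taus = np.arange(-T, T + 1)
    safe = np.where(taus == 0, 1, taus)                 # avoid division by zero
    h = np.where(taus % 2 != 0, 2.0 / (np.pi * safe), 0.0)
    xH = np.full(n, np.nan)
    for t in range(T, n - T):
        xH[t] = np.dot(h, x[t - T : t + T + 1])
    return xH

n, T = 400, 99
t = np.arange(n)
xH = hilbert_filter(np.cos(np.pi * t / 2), T)           # harmonic at frequency pi/2
```

In the interior the output is close to −sin(πt/2); the truncation error shrinks only like 1/T, which is why the first and last T values need special treatment.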

In the time domain we can use the approximate filter (16.15) with some finite T. Obviously the first and last T values of the Hilbert transform are not as well estimated, since the filter length must either be reduced or filter (16.15) must be used in an asymmetric manner. The Hilbert transforms displayed in Figures 16.2 and 16.4 were derived in this way.

16.2.5 Properties of the Hilbert Transformed Process. The cross-covariance function between a process and its Hilbert transform is anti-symmetric since their co-spectrum vanishes (cf. (16.13), (11.68)):

\gamma_{x^H x}(\delta) = -\gamma_{x x^H}(\delta),   (16.21)


and in particular

\gamma_{x^H x}(0) = 0.   (16.22)

Thus, the process and its Hilbert transform are uncorrelated at lag zero.

When the Hilbert transform is applied twice, then the original time series appears with reversed sign:

(X_t^H)^H = -X_t.   (16.23)

Also, the Hilbert transform is a linear operation. Thus

(X + \beta Y)_t^H = X_t^H + \beta Y_t^H.   (16.24)

The relationship between a process and its Hilbert transformed process, as represented by the covariance matrix or the spectrum, is described in [16.2.7]. This relationship will be used in Section 16.3. We briefly introduce the spectral matrix next.

16.2.6 The Spectral Matrix of a Random Vector. In Section 11.4 we defined the cross-spectrum of two processes X_{1t} and X_{2t} as the Fourier transform of their cross-covariance function. We now generalize these definitions to vector random variables.

The lag covariance matrix of an m-dimensional random vector X_t = (X_{1t}, ..., X_{mt})^T is the m × m matrix

\Sigma_{xx}(\tau) = E\left[ \big(X_t - E(X_t)\big)\big(X_{t+\tau} - E(X_{t+\tau})\big)^\dagger \right].

The spectrum of the vector process is defined as the Fourier transform of the lag covariance matrix

\Gamma_{xx}(\omega) = \sum_{\tau=-\infty}^{\infty} \Sigma_{xx}(\tau)\, e^{-2\pi i \omega \tau}   (16.25)

or, in short, \Gamma_{xx} = \mathcal{F}\{\Sigma_{xx}\}.

And in particular, the covariance matrix is given by

\Sigma_{xx}(0) = \Sigma_{xx} = \int_{-1/2}^{1/2} \Gamma_{xx}(\omega)\, d\omega = 2 \int_{0}^{1/2} \Lambda_{xx}(\omega)\, d\omega,   (16.28)

where the co-spectrum matrix \Lambda_{xx}(\omega) is the real part of the spectral matrix. Similarly, the quadrature spectrum matrix \Psi_{xx} is the imaginary part of the spectral matrix (see [11.4.1]).

It follows from (16.28) that the conventional EOFs are the eigenvectors of the co-spectrum matrix of the process X.

When two different random vectors X and Y with dimensions m_x and m_y are considered, then the rectangular m_x × m_y cross-covariance matrix \Sigma_{xy}, with elements (\Sigma_{xy})_{jk} = Cov(X_j, Y_k), describes the covariability of the two vectors. The m_x × m_y matrix of Fourier transforms of the entries in the cross-covariance matrix is known as the cross-spectral matrix and denoted by \Gamma_{xy}.

16.2.7 Hilbert Transform and the Spectral Matrix. The covariance matrix of the Hilbert transform is equal to the covariance matrix of the original process. This follows directly from (16.11) and (16.28).

We saw in [16.2.2] that the Hilbert transform may be viewed as a linear filter h. It therefore follows from (11.74) that the cross-spectral matrix between X_t^H and X_t is given by

\Gamma_{x^H x}(\omega) = \mathcal{F}\{h\}(\omega)\, \Gamma_{xx}(\omega)   (16.29)
                       = H(\omega)\left[\Lambda_{xx} + i\Psi_{xx}\right](\omega)
                       = \begin{cases} (\Psi_{xx} - i\Lambda_{xx})(\omega) & \text{if } \omega > 0 \\ (\Psi_{xx} + i\Lambda_{xx})(\omega) & \text{if } \omega < 0. \end{cases}

Therefore
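The properties stated in [16.2.5], and the claim in [16.2.7] that the Hilbert transform preserves the covariance, can be checked numerically. This sketch (mine, not the book's) implements the frequency-domain transform compactly by multiplying each positive-frequency Fourier coefficient by i — equivalent to the coefficient swap (a_k, b_k) → (b_k, −a_k) in (16.20) — and then verifies (16.22) and (16.23) and the equality of sample variances.

```python
import numpy as np

def hilbert_fft(x):
    """Hilbert transform via the trigonometric expansion (16.19)-(16.20).

    Multiplying the k-th Fourier coefficient by i turns (a_k, b_k) into
    (b_k, -a_k), i.e. shifts every harmonic by pi/2.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    c = np.fft.rfft(x) * 1j       # phase-shift each harmonic by pi/2
    c[0] = 0.0                    # the mean (k = 0) has no quadrature partner
    if n % 2 == 0:
        c[-1] = 0.0               # Nyquist harmonic maps to -a sin(pi t) = 0 on the grid
    return np.fft.irfft(c, n)

rng = np.random.default_rng(1)
n = 257                           # odd n: no Nyquist bin, so the identities hold exactly
x = rng.standard_normal(n)
x = x - x.mean()                  # zero-mean series, as assumed in [16.2.5]
xH = hilbert_fft(x)
```

Here np.dot(x, xH) vanishes (lag-zero uncorrelatedness, (16.22)); applying hilbert_fft twice returns −x ((16.23)); and x and xH have the same sample variance, the finite-sample analogue of the [16.2.7] statement.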