mask interesting events. In order to prevent aliasing, one should either apply a low-pass filter to the signal before sampling, or use oversampling with subsequent removal of the high-frequency components.

The assumed periodicity of the series f_n, n \in [0, N), may also cause artefacts when the f_n are samples over a period T of a truncated, but in principle infinite, series of data.3 In order to avoid unrealistic correlations between the selected time series and periodic images thereof, it is advisable to extend the data with a number of zeroes. This is called zero-padding and should ideally double the length of the array. The doubled length of the data series refines the resolution of the frequency scale by a factor of two; of course, the factual resolution of the frequency distribution is not changed just by adding zeroes, but the distributions look nicer. Distributions can be made smoother, and wiggles in spectral lines caused by sudden truncation of the data can be avoided, by multiplying the time data by a window function that goes smoothly to zero near the end of the data series. An example is given below.

3 If a series f_n is interpolated with the sinc function, f(t) = \sum_n f_n \sin z / z with z = \pi(t - t_i)/\Delta t, then the Fourier transform of the resulting function vanishes for \nu > 1/(2\Delta t).
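As a minimal sketch of the windowing and zero-padding procedure, using NumPy (a modern stand-in for the FFT modules mentioned in this chapter; the signal, its frequency, and the noise level are invented for illustration):

```python
import numpy as np

# Synthetic example data: a sine wave plus noise (all parameters are made up).
rng = np.random.default_rng(0)
N = 512
t = np.arange(N)
f = np.sin(2 * np.pi * 0.12 * t) + 0.3 * rng.standard_normal(N)

# Window function that goes smoothly to zero near the ends of the data,
# followed by zero-padding that doubles the array length.
w = np.hanning(N)
fp = np.concatenate([f * w, np.zeros(N)])

# Power spectrum of the padded series; frequency of bin m is m/(2N).
power = np.abs(np.fft.rfft(fp)) ** 2
```

The window suppresses the side lobes ("wiggles") that a sharp truncation would produce, and the padding doubles the number of frequency points without adding real information.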

Since very fast algorithms (FFT, fast Fourier transform, invented by Cooley and Tukey, 1965)4 exist for this representation of Fourier transforms, the periodic-discrete form is most often used in numerical problems. A well-implemented FFT scales as N log N with the array size N. The most efficient implementations are for values of N that decompose into products of small primes, such as powers of 2.
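Since zero-padding is advisable anyway, one may as well pad to the next power of two and land on the most efficient transform sizes. A small sketch (the helper next_pow2 is our own construction, not a library function):

```python
import numpy as np

def next_pow2(n):
    """Smallest power of two that is >= n."""
    return 1 << (n - 1).bit_length()

# Example data of awkward length 1000 (arbitrary choice).
data = np.random.default_rng(1).standard_normal(1000)

M = next_pow2(2 * len(data))                 # room for zero-padding, power-of-2 size
padded = np.concatenate([data, np.zeros(M - len(data))])
F = np.fft.fft(padded)                       # transform length factors as 2^11
```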

12.8 Autocorrelation and spectral density from FFT

Consider an infinite series of real data f_n, n \in \mathbb{Z}. Let the series be stationary in the sense that statistical properties (average, variance, correlation function) evaluated over a specific interval [i, \ldots, i + N) are, within statistical accuracy, independent of the origin i. The autocorrelation function C_k, which is discrete in this case, is defined as

    C_k = \lim_{N \to \infty} \frac{1}{N} \sum_{n=i}^{i+N-1} f_n f_{n+k}.          (12.67)

The value of C_k does not depend on the origin i; C_0 equals the mean square (also called the mean power per sample) of f. The function is symmetric: C_{-k} = C_k. Generally the autocorrelation is understood to apply to a function with zero mean: when f_n has zero mean, C_k tends to 0 for large k and C_0 is the variance of f. If the mean of f_n is not zero, the square of the mean is added to each C_k. The discrete autocorrelation function, as defined above, can be viewed as a discretization of the continuous autocorrelation function C(\tau) of a continuous stationary function of time f(t):5

4 See for a description Press et al. (1992) or Pang (1997). Python numarray includes an FFT module. A versatile and fast, highly recommended public-domain C subroutine FFT library is available from http://www.fftw.org/ ("Fastest Fourier Transform in the West").

    C(\tau) \overset{\mathrm{def}}{=} \lim_{T \to \infty} \frac{1}{T} \int_{t_0}^{t_0+T} f(t)\, f(t+\tau)\, dt,          (12.68)

where t_0 is an arbitrary origin of the time axis. Note that C(0) is the mean power per unit time of f(t) and that C(-\tau) = C(\tau).

The autocorrelation function can be estimated from a truncated series of data f_n, n \in [0, N), as

    C_k \approx C_k^{\mathrm{trunc}}, \qquad C_k^{\mathrm{trunc}} = C_{-k}^{\mathrm{trunc}} = \frac{1}{N-k} \sum_{n=0}^{N-k-1} f_n f_{n+k} \quad (k \ge 0),          (12.69)

or from a periodic series of data f_n, n \in \mathbb{Z}, f_{n+N} = f_n, as

    C_k \approx C_k^{\mathrm{per}} = \frac{1}{N-k} \sum_{n=0}^{N-1} f_n f_{n+k}.          (12.70)

The factor 1/(N-k) instead of 1/N in (12.70) corrects for the fact that k terms in the periodic sum have the value zero, provided the data series has been properly zero-padded. Without zero-padding it is better to use the factor 1/N; C_k^{\mathrm{per}} then does not approximate C_k, but a mixture of C_k and C_{N-k}:

    \frac{N-k}{N}\, C_k + \frac{k}{N}\, C_{N-k} \approx C_k^{\mathrm{per}} = \frac{1}{N} \sum_{n=0}^{N-1} f_n f_{n+k}.          (12.71)

This makes the correlation function symmetric about N/2. The estimate is in all cases exact in the limit N \to \infty, provided that the correlation dies out to negligible values above a given n.
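The zero-padded estimate (12.70) can be computed directly with an FFT: the inverse transform of |F|^2 of the padded series yields the circular sums \sum_n f_n f_{n+k}, which are then normalized by 1/(N-k). A sketch with NumPy (random test data, not from the text):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
f = rng.standard_normal(N)
f -= f.mean()                            # autocorrelation of a zero-mean series

g = np.concatenate([f, np.zeros(N)])     # zero-padding to double length

F = np.fft.fft(g)
raw = np.fft.ifft(np.abs(F) ** 2).real   # circular sums: sum_n f_n f_{n+k}

k = np.arange(N)
C = raw[:N] / (N - k)                    # the 1/(N-k) normalization of (12.70)
```

With the padding in place, these values coincide with the direct truncated sums of (12.69), at a cost of O(N log N) instead of O(N^2).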

We now consider the Fourier transform of the continuous autocorrelation

function. Because the autocorrelation function is real and even, its Fourier

transform can be expressed as a cosine transform over positive time:

    S(\nu) = 4 \int_0^{\infty} C(\tau) \cos(2\pi\nu\tau)\, d\tau.          (12.72)

5 Of course, one may read any other variable, such as a spatial coordinate, for t.

S(\nu) is a real, even function of \nu. The factor 4 is chosen such that the inverse transform6 has a simple form:

    C(\tau) = \int_0^{\infty} S(\nu) \cos(2\pi\nu\tau)\, d\nu.          (12.73)

We see that the power per unit time, C(0), equals the integral of S(\nu) over all (positive) frequencies. Therefore we may call S(\nu) the spectral density, as S(\nu)\,d\nu represents the power in the frequency interval d\nu.
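The transform pair (12.72)-(12.73) can be checked numerically for an assumed exponential correlation function C(\tau) = e^{-\tau/\tau_c}, whose cosine transform per (12.72) is S(\nu) = 4\tau_c/(1 + (2\pi\nu\tau_c)^2); integrating this S(\nu) over positive frequencies should recover C(0) = 1. The correlation time and the frequency grid below are arbitrary choices:

```python
import numpy as np

tau_c = 0.5                                      # assumed correlation time
nu, dnu = np.linspace(0.0, 200.0, 2_000_001, retstep=True)

# Cosine transform (12.72) of C(tau) = exp(-tau/tau_c), evaluated analytically.
S = 4 * tau_c / (1 + (2 * np.pi * nu * tau_c) ** 2)

# Trapezoidal rule for the integral of S over [0, 200]; the neglected tail
# beyond the cutoff contributes only about 1e-3 here.
C0 = dnu * (S.sum() - 0.5 * (S[0] + S[-1]))      # should be close to C(0) = 1
```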

The spectral density can also be determined from the direct Fourier transform of the time function. Consider a time slice f_T(t), t \in [0, T), extended with zero for all other t, with its Fourier transform

    F_T(\nu) = \int_0^T f(t)\, e^{2\pi i\nu t}\, dt,          (12.74)

    f(t) = \int_{-\infty}^{\infty} F(\nu)\, e^{-2\pi i\nu t}\, d\nu.          (12.75)

We can now see that the spectral density is also given by

    S(\nu) = 2 \lim_{T \to \infty} \frac{1}{T}\, F_T^{*}(\nu)\, F_T(\nu).          (12.76)

The right-hand side is the mean power (square of absolute value, irrespective of the phase) per unit of time, expressed as a density in the frequency domain. The factor 2 arises from the fact that S(\nu) is defined for positive \nu only, while the right-hand side applies to the frequency domain (-\infty, \infty). The equality of (12.72) and (12.76) is the Wiener–Khinchin theorem.
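The discrete counterpart of the theorem is exact and easy to verify numerically: the DFT of the circular autocorrelation of a series equals |F_m|^2 term by term. A sketch with random data (the series length is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
M = 128
g = rng.standard_normal(M)

F = np.fft.fft(g)

# Circular autocorrelation: R_k = sum_n g_n g_{(n+k) mod M}.
R = np.array([g @ np.roll(g, -k) for k in range(M)])

S_from_C = np.fft.fft(R).real    # transform of the autocorrelation ...
S_from_F = np.abs(F) ** 2        # ... equals the power spectrum |F|^2
```

R is symmetric (R_{M-k} = R_k), so its transform is real up to roundoff; taking the real part merely discards that roundoff.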

Proof First we assume that the limit exists. It is intuitively clear that this is true for stationary time series that contain no constant or periodic terms.7 We skip further discussion on this point and refer the interested reader to the classic paper by Rice (1954) for details. Substituting C(\tau) from (12.68) into the definition of S(\nu) given in (12.72), we find

∞