Let x_f be the f-th population quantile, so that S_α(σ, β, µ)(x_f) = f. Let x̂_f be the corresponding sample quantile, i.e. x̂_f satisfies F_n(x̂_f) = f.

McCulloch (1986) analyzed stable law quantiles and provided consistent estimators of all four stable parameters, with the restriction α ≥ 0.6.

Define

    v_α = (x_{0.95} − x_{0.05}) / (x_{0.75} − x_{0.25}),    (1.7)

which is independent of both σ and µ. Let v̂_α be the corresponding sample value. It is a consistent estimator of v_α. Now, define

    v_β = (x_{0.95} + x_{0.05} − 2x_{0.50}) / (x_{0.95} − x_{0.05}),    (1.8)

and let v̂_β be the corresponding sample value. v_β is also independent of both σ and µ. As a function of α and β it is strictly increasing in β for each α. The statistic v̂_β is a consistent estimator of v_β.

Statistics v_α and v_β are functions of α and β. This relationship may be inverted and the parameters α and β may be viewed as functions of v_α and v_β:

    α = ψ_1(v_α, v_β),    β = ψ_2(v_α, v_β).    (1.9)

Substituting v_α and v_β by their sample values and applying linear interpolation between values found in tables provided by McCulloch (1986) yields estimators α̂ and β̂.
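As an illustrative sketch, the sample statistics v̂_α and v̂_β of (1.7)-(1.8) can be computed from empirical quantiles; the function name is ours and NumPy's empirical quantiles are assumed (this is not the stabcull quantlet, and the ψ-table interpolation from McCulloch (1986) is not reproduced):

```python
import numpy as np

def quantile_statistics(x):
    """Sample counterparts of v_alpha (1.7) and v_beta (1.8),
    built from the empirical 5%, 25%, 50%, 75% and 95% quantiles."""
    q05, q25, q50, q75, q95 = np.quantile(x, [0.05, 0.25, 0.50, 0.75, 0.95])
    v_alpha = (q95 - q05) / (q75 - q25)             # (1.7): free of sigma and mu
    v_beta = (q95 + q05 - 2.0 * q50) / (q95 - q05)  # (1.8): skewness measure
    return v_alpha, v_beta
```

For a symmetric sample v̂_β is close to zero, while for a Gaussian sample (α = 2) the population value of v_α is about 2.44.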

Scale and location parameters, σ and µ, can be estimated in a similar way. However, due to the discontinuity of the characteristic function for α = 1 and β ≠ 0 in representation (1.1), this procedure is much more complicated.


We refer the interested reader to the original work of McCulloch (1986). The quantlet stabcull returns estimates of stable distribution parameters from sample x using McCulloch's method.

1.3.3 Sample Characteristic Function Methods

Given an i.i.d. random sample x_1, ..., x_n of size n, define the sample characteristic function by

    φ̂(t) = (1/n) Σ_{j=1}^{n} e^{itx_j}.    (1.10)

Since |φ̂(t)| is bounded by unity, all moments of φ̂(t) are finite and, for any fixed t, it is the sample average of i.i.d. random variables exp(itx_j). Hence, by the law of large numbers, φ̂(t) is a consistent estimator of the characteristic function φ(t).
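Definition (1.10) translates directly into code; a minimal sketch, with our own function name and vectorization over a grid of t values:

```python
import numpy as np

def sample_char_fn(x, t):
    """Sample characteristic function (1.10): phi_hat(t) = (1/n) sum_j exp(i t x_j)."""
    x = np.asarray(x, dtype=float)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    # one row per value of t, averaged over the sample
    return np.exp(1j * np.outer(t, x)).mean(axis=1)
```

Note that |φ̂(t)| ≤ 1 always holds, since φ̂(t) averages points on the unit circle.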

Press (1972) proposed a simple estimation method, called the method of moments, based on transformations of the characteristic function. From (1.1) we have for all α

    |φ(t)| = exp(−σ^α |t|^α).    (1.11)

Hence, −log|φ(t)| = σ^α |t|^α. Now, assuming α ≠ 1, choose two nonzero values of t, say t_1 ≠ t_2. Then for k = 1, 2 we have

    −log|φ(t_k)| = σ^α |t_k|^α.    (1.12)

Solving these two equations for α and σ, and substituting φ̂(t) for φ(t), yields

    α̂ = log( log|φ̂(t_1)| / log|φ̂(t_2)| ) / log|t_1/t_2|,    (1.13)


and

    log σ̂ = [ log|t_1| log(−log|φ̂(t_2)|) − log|t_2| log(−log|φ̂(t_1)|) ] / [ α̂ log|t_1/t_2| ].    (1.14)
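Estimators (1.13)-(1.14) are simple to implement. A sketch with our own function name, NumPy assumed, using t_1 = 0.2, t_2 = 0.8 as defaults (the Koutrouvelis (1980) choice cited below):

```python
import numpy as np

def press_alpha_sigma(x, t1=0.2, t2=0.8):
    """Method-of-moments estimates of alpha and sigma via (1.13)-(1.14)."""
    x = np.asarray(x, dtype=float)
    phi1 = np.abs(np.exp(1j * t1 * x).mean())  # |phi_hat(t1)|
    phi2 = np.abs(np.exp(1j * t2 * x).mean())  # |phi_hat(t2)|
    denom = np.log(abs(t1 / t2))
    alpha = np.log(np.log(phi1) / np.log(phi2)) / denom               # (1.13)
    log_sigma = (np.log(abs(t1)) * np.log(-np.log(phi2))
                 - np.log(abs(t2)) * np.log(-np.log(phi1))) / (alpha * denom)  # (1.14)
    return alpha, np.exp(log_sigma)
```

As a sanity check, a standard normal sample corresponds to α = 2 and σ = 1/√2 in parameterization (1.1), since exp(−σ²t²) must equal exp(−t²/2).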

In order to estimate β and µ we have to apply a similar trick to ℑ{log φ(t)}. The estimators are consistent since they are based upon estimators of φ(t), ℑ{φ(t)} and ℜ{φ(t)}, which are known to be consistent. However, convergence to the population values depends on the choice of t_1, ..., t_4. The optimal selection of these values is problematic and remains an open question.

The quantlet stabmom returns estimates of stable distribution parameters from

sample x using the method of moments. It uses a selection of points suggested

by Koutrouvelis (1980): t1 = 0.2, t2 = 0.8, t3 = 0.1, and t4 = 0.4.

Parameter estimates can also be obtained by minimizing some function of the difference between the theoretical and sample characteristic functions. Koutrouvelis (1980) presented a regression-type method which starts with an initial estimate of the parameters and proceeds iteratively until some prespecified convergence criterion is satisfied. Each iteration consists of two weighted regression runs. The number of points to be used in these regressions depends on the sample size and starting values of α. Typically no more than two or three iterations are needed. The speed of the convergence, however, depends on the initial estimates and the convergence criterion.

The regression method is based on the following observations concerning the characteristic function φ(t). First, from (1.1) we can easily derive

    log(−log|φ(t)|²) = log(2σ^α) + α log|t|.    (1.15)
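Observation (1.15) already determines α and σ from a straight line in log|t|. A minimal sketch, with our own function name and grid of t-points, using simple unweighted OLS rather than the weighted regressions of the full method:

```python
import numpy as np

def regress_alpha_sigma(x, t=None):
    """Estimate alpha and sigma by fitting the line (1.15):
    log(-log|phi(t)|^2) = log(2 sigma^alpha) + alpha log|t|."""
    if t is None:
        t = np.arange(1, 10) * 0.1  # illustrative grid; the full method picks points adaptively
    x = np.asarray(x, dtype=float)
    phi = np.exp(1j * np.outer(t, x)).mean(axis=1)  # sample char. fn at each t
    y = np.log(-np.log(np.abs(phi) ** 2))
    w = np.log(np.abs(t))
    alpha, intercept = np.polyfit(w, y, 1)          # slope = alpha
    sigma = (np.exp(intercept) / 2.0) ** (1.0 / alpha)
    return alpha, sigma
```

For a Gaussian sample, −log|φ(t)|² = t² exactly, so the fitted slope should be close to 2 and σ close to 1/√2.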

The real and imaginary parts of φ(t) are for α ≠ 1 given by

    ℜ{φ(t)} = exp(−|σt|^α) cos[ µt + |σt|^α β sign(t) tan(πα/2) ],

and

    ℑ{φ(t)} = exp(−|σt|^α) sin[ µt + |σt|^α β sign(t) tan(πα/2) ].


The last two equations lead, apart from considerations of principal values, to

    arctan( ℑ{φ(t)} / ℜ{φ(t)} ) = µt + βσ^α tan(πα/2) sign(t) |t|^α.    (1.16)

Equation (1.15) depends only on α and σ and suggests that we estimate these parameters by regressing y = log(−log|φ̂(t)|²) on w = log|t| in the model