problems in early attempts (prior to the mid-1970s) resulted from trying to find stationary values of the likelihood function using Newton's or related methods without proper safeguards for avoiding the region of attraction of the infinite maximum. He proposed avoiding the solution of $g(\lambda) = 0$ altogether and instead numerically maximizing a reparameterized conditional log-likelihood function with univariate global optimization methods. This involves a parameter transformation $\lambda(u) := x_{1:n} - \exp(-u)$, $u \in (-\infty, \infty)$, discussed by Griffiths (1980), that renders the reparameterized log-likelihood approximately quadratic and symmetric in the neighborhood of its finite (local) maximum.

The Fisher information on $\theta = (\mu, \sigma^2, \lambda)^\top$ in one observation is
$$
I(\theta) =
\begin{bmatrix}
\dfrac{1}{\sigma^2} & 0 & \dfrac{e^{-\mu+\sigma^2/2}}{\sigma^2} \\[2mm]
0 & \dfrac{1}{2\sigma^4} & -\dfrac{e^{-\mu+\sigma^2/2}}{\sigma^2} \\[2mm]
\dfrac{e^{-\mu+\sigma^2/2}}{\sigma^2} & -\dfrac{e^{-\mu+\sigma^2/2}}{\sigma^2} & \dfrac{e^{-2\mu+2\sigma^2}(1+\sigma^2)}{\sigma^2}
\end{bmatrix}, \qquad (4.37)
$$
from which approximate variances of the MLEs can be obtained by inversion.
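For instance, the asymptotic variances can be read off the diagonal of the inverse of (4.37) divided by $n$. The sketch below uses illustrative parameter values and a hand-rolled cofactor inverse to stay dependency-free; the function names are ours.

```python
import math

def fisher_info(mu, s2):
    """Per-observation Fisher information (4.37), theta = (mu, sigma^2, lambda)."""
    a = math.exp(-mu + s2 / 2.0) / s2
    c = math.exp(-2.0 * mu + 2.0 * s2) * (1.0 + s2) / s2
    return [[1.0 / s2, 0.0, a],
            [0.0, 1.0 / (2.0 * s2 ** 2), -a],
            [a, -a, c]]

def invert3(m):
    """Invert a 3x3 matrix via cofactors (adequate for this illustration)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]

# Approximate variance of the MLEs from n observations: invert3(I)[j][j] / n
```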

It is noteworthy that all ML estimators converge to their limiting values at the usual $\sqrt{n}$ rate. This is even the case for $\lambda$, despite its being a threshold parameter. The reason is the high contact (exponential decrease) of the density as $x \to \lambda$.

A further estimation technique that is suitable in this context is maximum product-of-spacings (MPS) estimation, proposed by Cheng and Amin (1983). Here the idea is to choose $\theta = (\mu, \sigma, \lambda)^\top$ to maximize the geometric mean of the spacings
$$
\mathrm{GM} = \left\{ \prod_{i=1}^{n+1} D_i \right\}^{1/(n+1)},
$$
where $D_i = F(x_i) - F(x_{i-1})$, or equivalently its logarithm. For the three-parameter

lognormal distribution it can be shown that

$$
\sqrt{n}\,(\tilde{\theta} - \theta) \stackrel{d}{\longrightarrow} N[0, I(\theta)^{-1}].
$$
Thus, the MPS estimators are asymptotically efficient and also converge at the usual $\sqrt{n}$ rate.
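The MPS objective is easy to write down directly. The sketch below (helper names are ours) evaluates the log geometric mean of the spacings for the three-parameter lognormal, using $F(x) = \Phi\bigl((\log(x - \lambda) - \mu)/\sigma\bigr)$:

```python
from math import erf, log, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def log_gm_spacings(mu, sigma, lam, x):
    """log GM = (1/(n+1)) * sum_i log D_i with D_i = F(x_i) - F(x_{i-1}),
    where F(x_0) := 0 and F(x_{n+1}) := 1.

    Requires lam < min(x) and distinct observations.
    """
    xs = sorted(x)
    u = [0.0] + [norm_cdf((log(xi - lam) - mu) / sigma) for xi in xs] + [1.0]
    n = len(xs)
    return sum(log(u[i + 1] - u[i]) for i in range(n + 1)) / (n + 1)
```

Maximizing this over $(\mu, \sigma, \lambda)$ gives the MPS estimator; since each spacing is bounded by 1, the objective does not blow up as $\lambda \uparrow x_{1:n}$ the way the likelihood does.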

A modified method-of-moments estimator (MMME) has been suggested by Cohen and Whitten (1980); see also Cohen and Whitten (1988). Here the idea is to replace the third sample moment by a function of the first order statistic (which contains more information about the shift parameter $\lambda$ than any other observation). As with the local MLE, this leads to a nonlinear equation in one variable. Cohen and Whitten reported that it can be satisfactorily solved by the Newton-Raphson technique.
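One way to see the "nonlinear equation in one variable": for fixed $\lambda$, matching $E(X)$ and $\mathrm{var}(X)$ gives closed-form $\mu(\lambda)$ and $\sigma^2(\lambda)$; substituting these into a first-order-statistic condition of the form $F(x_{1:n}) = 1/(n+1)$ leaves a single equation in $\lambda$. The sketch below is our illustration of that reduction, with bisection standing in for the Newton-Raphson step reported by Cohen and Whitten; function names are hypothetical.

```python
from math import log, sqrt
from statistics import NormalDist

def mmme_equation(lam, x):
    """Residual of the first-order-statistic condition at threshold lam.

    mu(lam) and sigma^2(lam) come from matching the first two moments of
    the three-parameter lognormal: E(X) = lam + exp(mu + sigma^2/2),
    var(X) = (E(X) - lam)^2 * (exp(sigma^2) - 1).
    """
    n = len(x)
    xbar = sum(x) / n
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    sig2 = log(1.0 + s2 / (xbar - lam) ** 2)
    mu = log(xbar - lam) - sig2 / 2.0
    z = NormalDist().inv_cdf(1.0 / (n + 1))
    return log(min(x) - lam) - mu - sqrt(sig2) * z

def solve_mmme(x, lo, hi, tol=1e-10):
    """Bisection on mmme_equation; lo and hi must bracket a root, hi < min(x)."""
    flo = mmme_equation(lo, x)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if flo * mmme_equation(mid, x) <= 0.0:
            hi = mid
        else:
            lo = mid
            flo = mmme_equation(lo, x)
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0
```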

We note that a so-called four-parameter lognormal distribution has been defined by
$$
Z = \mu^* + \sigma^* \log\left\{ \frac{X - \lambda}{\delta} \right\}, \qquad (4.38)
$$
where $Z$ denotes a standard normal random variable. Since this can be rewritten as
$$
Z = \mu^{**} + \sigma^* \log(X - \lambda),
$$
with $\mu^{**} = \mu^* - \sigma^* \log \delta$, it is really just the three-parameter lognormal distribution that is defined by (4.30).
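The reduction to three parameters is a one-line identity; a quick numerical check with arbitrary illustrative values:

```python
from math import log

# arbitrary illustrative parameter values (not from the source)
mu_star, sigma_star, lam, delta = 0.3, 1.2, 2.0, 5.0
x = 7.5  # any x > lam

z_four = mu_star + sigma_star * log((x - lam) / delta)   # (4.38)
mu_2star = mu_star - sigma_star * log(delta)             # mu** = mu* - sigma* log(delta)
z_three = mu_2star + sigma_star * log(x - lam)           # three-parameter form

assert abs(z_four - z_three) < 1e-12
```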

4.8 MULTIVARIATE LOGNORMAL DISTRIBUTION

The most natural definition of a multivariate lognormal distribution is perhaps in terms of a multivariate normal distribution as the joint distribution of $\log X_i$, $i = 1, \ldots, k$. This approach leads to the p.d.f.
$$
f(x_1, \ldots, x_k) = \frac{1}{(2\pi)^{k/2} \sqrt{|\Sigma|}\, x_1 \cdots x_k} \exp\left\{ -\frac{1}{2} (\log x - \mu)^\top \Sigma^{-1} (\log x - \mu) \right\},
$$
$$
x_i > 0, \quad i = 1, \ldots, k, \qquad (4.39)
$$
where $x = (x_1, \ldots, x_k)^\top$, $\log x = (\log x_1, \ldots, \log x_k)^\top$, $\mu = (\mu_1, \ldots, \mu_k)^\top$, and $\Sigma = (\sigma_{ij})$. If $X = (X_1, \ldots, X_k)^\top$ is a random vector following this distribution,

this is denoted as $X \sim LN_k(\mu, \Sigma)$. From the form of the moment-generating function of the multivariate normal distribution, we get
$$
E(X_1^{r_1} \cdots X_k^{r_k}) = \exp\left( r^\top \mu + \frac{1}{2} r^\top \Sigma r \right),
$$


where $r = (r_1, \ldots, r_k)^\top$. It follows that for any $i = 1, \ldots, k$
$$
E(X_i^s) = \exp\left( s\mu_i + \frac{1}{2} s^2 \sigma_{ii} \right),
$$

and for any $i, j = 1, \ldots, k$
$$
\mathrm{cov}(X_i, X_j) = \exp\left\{ \mu_i + \mu_j + \frac{1}{2}(\sigma_{ii} + \sigma_{jj}) \right\} \{\exp(\sigma_{ij}) - 1\}. \qquad (4.40)
$$
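Formula (4.40) follows from the moment formula with $r = (1,1)$, $(1,0)$, and $(0,1)$; a quick numerical cross-check for $k = 2$, using illustrative parameter values of our choosing:

```python
from math import exp

mu = [0.5, -0.2]                # illustrative log-scale means
S = [[1.0, 0.4], [0.4, 0.8]]    # illustrative log-scale covariance matrix

def moment(r):
    """E(X1^r1 * X2^r2) = exp(r'mu + r'Sr/2) for X ~ LN_2(mu, S)."""
    lin = sum(ri * mi for ri, mi in zip(r, mu))
    quad = sum(r[i] * S[i][j] * r[j] for i in range(2) for j in range(2))
    return exp(lin + quad / 2.0)

# cov(X1, X2) two ways: from raw moments, and from (4.40) directly
cov_via_moments = moment([1, 1]) - moment([1, 0]) * moment([0, 1])
cov_via_440 = exp(mu[0] + mu[1] + (S[0][0] + S[1][1]) / 2.0) * (exp(S[0][1]) - 1.0)
```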

The conditional distributions of, for example, $X_1$ given $X_2, \ldots, X_k$ may be shown to be also lognormal. However, despite the close relationship with the familiar multivariate normal distribution, some differences arise. For example, although Pearson's measure of (pairwise) correlation $\rho_{Y_i, Y_j}$ can assume any value between $-1$ and $1$ for the multivariate normal distribution (for which the marginals differ only in location and scale), the range of this coefficient is much narrower in the multivariate lognormal case and depends on the shape parameters $\sigma_{ij}$. Specifically, if $Y$ is bivariate normal with unit variances, a calculation based on (4.40) shows that for the corresponding bivariate lognormal distribution the range of $\rho_{X_i, X_j}$ is
$$
\left[ \frac{e^{-1} - 1}{e - 1}, \; 1 \right] = [-0.3679, 1],
$$
so only a limited amount of negative correlation is possible. For further information on the dependence structure of the multivariate lognormal distribution, see Nalbach-Leniewska (1979).
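The bound can be reproduced directly: with unit log-scale variances, (4.40) gives $\rho_{X_i, X_j} = (e^{\sigma_{ij}} - 1)/(e - 1)$, which is minimized at $\sigma_{ij} = -1$. A quick check (function name is ours):

```python
from math import exp

def lognormal_corr(s12, s11=1.0, s22=1.0):
    """Pearson correlation of (X_i, X_j) implied by log-scale covariances."""
    return (exp(s12) - 1.0) / ((exp(s11) - 1.0) * (exp(s22) - 1.0)) ** 0.5

lo = lognormal_corr(-1.0)   # most negative attainable value, about -0.3679
hi = lognormal_corr(1.0)    # upper end of the range, 1
```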

In the context of income distributions, Singh and Singh (1992) considered a likelihood ratio test for comparing the coefficients of variation in lognormal distributions. Since from (4.8) the coefficients of variation are given by $\sqrt{\exp(\sigma_i^2) - 1}$, $i = 1, \ldots, k$, the problem is equivalent to comparing the variances of $Y_i = \log X_i$, $i = 1, \ldots, k$. Thus, the null hypothesis may be stated as $H_0: \sigma_1^2 =$