Due to its close relationship with the normal distribution, the estimation of lognormal parameters presents few difficulties. The maximum likelihood estimators are

$$\hat{\mu} = \overline{\log x} = \frac{1}{n}\sum_{j=1}^{n} \log x_j \qquad (4.25)$$

and

$$\hat{\sigma}^2 = \frac{1}{n}\sum_{j=1}^{n} \left(\log x_j - \overline{\log x}\right)^2. \qquad (4.26)$$


4.6 ESTIMATION

The Fisher information on $(\mu, \sigma^2)^\top$ in one observation is

$$I(\mu, \sigma^2) = \begin{pmatrix} \dfrac{1}{\sigma^2} & 0 \\[6pt] 0 & \dfrac{1}{2\sigma^4} \end{pmatrix}. \qquad (4.27)$$

Thus, the lognormal parameterization (4.1) enjoys the attractive property that the parameters are orthogonal.
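Because (4.27) is diagonal, its inverse is as well, so for a sample of size $n$ the asymptotic variances are simply $\mathrm{Var}(\hat{\mu}) \approx \sigma^2/n$ and $\mathrm{Var}(\hat{\sigma}^2) \approx 2\sigma^4/n$. A small sketch, plugging the estimate $\hat{\sigma}^2$ in for $\sigma^2$ (the function name is ours):

```python
import math

def asymptotic_se(sigma2_hat, n):
    """Asymptotic standard errors of (mu_hat, sigma2_hat), obtained by
    inverting the diagonal Fisher information (4.27) for a sample of size n:
    Var(mu_hat) ~ sigma^2/n and Var(sigma2_hat) ~ 2*sigma^4/n."""
    se_mu = math.sqrt(sigma2_hat / n)
    se_sigma2 = math.sqrt(2.0 * sigma2_hat ** 2 / n)
    return se_mu, se_sigma2
```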

From these expressions, parametric estimators of, for example, the mean, median, or mode are easily available via the invariance of the ML estimators; asymptotic standard errors follow via the delta method. For example, Iyengar (1960) and Latorre (1987) derived the asymptotic distributions of the ML estimators of the Gini coefficient; the latter paper also presents the asymptotic distribution of Zenga's inequality measure $\xi$. For the Gini coefficient,

" #

2 s 2 =2

^

s se

^

G ¼ 2F p¬¬¬ À 1 % N G, : (4:28)

2pn

2
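The estimator in (4.28) and its delta-method standard error are easy to evaluate using $\Phi(x) = \{1 + \mathrm{erf}(x/\sqrt{2})\}/2$; note that $2\Phi(\sigma/\sqrt{2}) - 1 = \mathrm{erf}(\sigma/2)$, the lognormal Gini coefficient of Section 4.5. A sketch (the function name is ours):

```python
import math

def gini_mle(sigma_hat, n):
    """ML estimate of the lognormal Gini coefficient,
    G_hat = 2*Phi(sigma_hat/sqrt(2)) - 1, together with its
    delta-method standard error from (4.28)."""
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    g_hat = 2.0 * Phi(sigma_hat / math.sqrt(2.0)) - 1.0
    s2 = sigma_hat ** 2
    se = math.sqrt(s2 * math.exp(-s2 / 2.0) / (2.0 * math.pi * n))
    return g_hat, se
```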

It is also worth noting that the UMVUE of $\sigma^2$, namely, $\tilde{s}^2 = \sum_{j=1}^{n} (\log x_j - \overline{\log x})^2/(n-1)$, may be interpreted, in our context, as the UMVUE of the variance of logarithms, $V_L(X)$.

The unbiased estimation of some classical inequality measures from lognormal populations was studied by Moothathu (1989). He observes that for functionals

$$\tau(b, \lambda) = \mathrm{erf}(b\lambda), \qquad (4.29)$$

where $\mathrm{erf}(x) = (2/\sqrt{\pi})\int_0^x e^{-t^2}\,dt$ is the error function and $b$ is some known constant, UMVU estimators are given by

$$U_2(b) = h\!\left(b, \frac{n-1}{2}, -V_2^2\right),$$

when both parameters of the distribution are unknown, and

$$U_1(b) = h\!\left(b, \frac{n}{2}, -V_1^2\right),$$

when only $\sigma$ is unknown. Here

$$V_1^2 = \frac{1}{2}\sum_{j=1}^{n} (\log X_j - \mu)^2$$


and

$$V_2^2 = \frac{1}{2}\sum_{j=1}^{n} \left(\log X_j - \overline{\log X}\right)^2$$

and

$$h(b, m, t) = \frac{2b\sqrt{-t}\;\Gamma(m)}{\sqrt{\pi}\,\Gamma(m + 1/2)}\; {}_1F_2\!\left(\frac{1}{2};\, \frac{3}{2},\, m + \frac{1}{2};\, b^2 t\right),$$

where ${}_1F_2$ is a generalized hypergeometric function (see p. 288 for a definition). The

Gini and Pietra measures are clearly of the form (4.29) (see Section 4.5 above), namely, $G = \mathrm{erf}(\sigma/2)$ and $P = \mathrm{erf}(\sigma/2^{3/2})$. Moothathu also provided unbiased estimators of their variances, as well as strongly consistent and asymptotically normally distributed estimators of $G$ and $P$.

The optimal grouping of samples from lognormal populations was studied by Schader and Schmid (1986). Suppose that a sample $X = (X_1, \ldots, X_n)^\top$ of size $n$ is available and the parameter of interest is $\sigma$. (This is the relevant parameter in connection with inequality measurement, as the formulas in Section 4.5 indicate.) By independence, the Fisher information of the sample for $\sigma$ is

$$I(\sigma) = nI_1(\sigma) = \frac{2n}{\sigma^2},$$

where $I_1$ denotes the information in one observation. For a given number of groups $k$ with group boundaries $0 = a_0 < a_1 < \cdots < a_{k-1} < a_k = \infty$, define the class frequencies $N_j$ as the number of $X_i$ in $[a_{j-1}, a_j)$, $j = 1, \ldots, k$. Thus, the joint distribution of $N = (N_1, \ldots, N_k)^\top$ is multinomial with parameters $n$ and $p_j = p_j(\sigma)$, where

$$p_j(\sigma) = \int_{a_{j-1}}^{a_j} f(x \mid \sigma)\,dx, \qquad j = 1, \ldots, k.$$

Now the Fisher information in $N$ is

$$I_N(\sigma) = n \sum_{j=1}^{k} \frac{[\partial p_j(\sigma)/\partial\sigma]^2}{p_j(\sigma)} = \frac{n}{\sigma^2} \sum_{j=1}^{k} \frac{[\phi(z_j)z_j - \phi(z_{j-1})z_{j-1}]^2}{\Phi(z_j) - \Phi(z_{j-1})},$$

where $z_0 = -\infty$, $z_k = \infty$, $z_j = (\log a_j - \mu)/\sigma$, $j = 1, \ldots, k-1$, and $\phi$ and $\Phi$ again denote the p.d.f. and c.d.f. of the standard normal distribution. This expression is, for fixed $\sigma$, a function of $k$ and the $k-1$ class boundaries $a_1, \ldots, a_{k-1}$.
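Dividing $I_N(\sigma)$ by the complete-data information $I(\sigma) = 2n/\sigma^2$ gives the fraction of information retained by the grouping; the $\sigma^2$ and $n$ factors cancel, so the ratio depends only on the standardized boundaries $z_1 < \cdots < z_{k-1}$. A sketch evaluating this ratio (the function name is ours):

```python
import math

def grouped_info_ratio(z):
    """Ratio I_N(sigma) / I(sigma) of grouped to complete-data Fisher
    information for sigma, given standardized interior boundaries
    z_1 < ... < z_{k-1}. The terms for z_0 = -inf and z_k = +inf vanish
    since phi(z)*z -> 0 in the tails."""
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    g = lambda x: 0.0 if math.isinf(x) else phi(x) * x
    zs = [-math.inf] + list(z) + [math.inf]
    info = sum((g(zs[j]) - g(zs[j - 1])) ** 2 / (Phi(zs[j]) - Phi(zs[j - 1]))
               for j in range(1, len(zs)))
    # per-observation complete-data information for sigma is 2/sigma^2,
    # so the sigma^2 factors cancel and the ratio is info / 2
    return info / 2.0
```

A boundary at $z = 0$ alone yields zero information about $\sigma$, and refining a partition can only increase the ratio, consistent with the information loss discussed next.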

As was described in greater detail in Section 3.6 in the Pareto case, passing from

the complete data X to the class frequencies N implies a loss of information, which


Table 4.2 Optimal Class Boundaries for Lognormal Data

  k*      z*_1, ..., z*_{k*-1}
  2       0.8355
  ...
  9       -2.5408   -1.9003   -1.3715   ...