[Series expansion for the Gini coefficient $G$, continued from the preceding page.]

Like the Gini coefficient, the Pietra coefficient can only be evaluated numerically. However, for a modified version in which the mean deviation from the mean is replaced by the mean deviation from the median, Scheid (2001) provided the expression

\[
P_{\mathrm{med}} = \frac{E(|X - x_{\mathrm{med}}|)}{2\,E(X)}
= \frac{\displaystyle\sum_{i=0}^{\infty} \bigl[\, s_r^{\,2i+1}/(2i+1)!\; r^{(2i+1)/r}\, \Gamma((2i+2)/r) \bigr]}
       {\displaystyle 2\sum_{i=0}^{\infty} \bigl[\, s_r^{\,2i}/(2i)!\; r^{2i/r}\, \Gamma((2i+1)/r) \bigr]}.
\]

In the log-Laplace case this is simply
\[
P_{\mathrm{med}} = \frac{s_1}{2},
\]
implying that, as in the lognormal case, inequality increases as the shape parameter $s_1$ increases.
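As a numerical sanity check, Scheid's series for $P_{\mathrm{med}}$ is easy to evaluate directly. The sketch below (my own helper, not from the source) computes each term via log-gamma arithmetic to avoid overflowing the factorials, and reproduces the closed form $s_1/2$ in the log-Laplace case $r = 1$ (where the series converges for $s_1 < 1$):

```python
import math

def pietra_med(r, s, terms=200):
    """Evaluate Scheid's series for the median-based Pietra coefficient.

    Numerator terms:   s^(2i+1)/(2i+1)! * r^((2i+1)/r) * Gamma((2i+2)/r)
    Denominator terms: s^(2i)  /(2i)!   * r^(2i/r)     * Gamma((2i+1)/r)
    Each term is computed via lgamma for numerical stability.
    """
    def term(p):
        # s^p / p! * r^(p/r) * Gamma((p+1)/r), with p! = Gamma(p+1)
        return math.exp(p * math.log(s) - math.lgamma(p + 1)
                        + (p / r) * math.log(r) + math.lgamma((p + 1) / r))
    num = sum(term(2 * i + 1) for i in range(terms))
    den = 2.0 * sum(term(2 * i) for i in range(terms))
    return num / den

# Log-Laplace case (r = 1, s_1 = 0.5): the series collapses to s_1 / 2.
print(pietra_med(1.0, 0.5))  # ≈ 0.25
```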


4.10 GENERALIZED LOGNORMAL DISTRIBUTION

Finally, the variance of logarithms equals
\[
VL(X) = \frac{r^{2/r}\, s_r^2\, \Gamma(3/r)}{\Gamma(1/r)}. \tag{4.59}
\]

It would seem that all the measures cited decrease with $r$ and increase with $s_r$, but a rigorous proof of this fact is still lacking.
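The conjectured monotonicity can at least be probed numerically. The short sketch below (illustrative code, not from the source) evaluates (4.59) on a small grid; there $VL(X)$ indeed decreases in $r$ for fixed $s_r$, and for $r = 2$ it reduces to the lognormal value $s_r^2$:

```python
import math

def var_logs(r, s):
    # Variance of logarithms, equation (4.59):
    # VL(X) = r^(2/r) * s^2 * Gamma(3/r) / Gamma(1/r)
    return r ** (2.0 / r) * s ** 2 * math.gamma(3.0 / r) / math.gamma(1.0 / r)

for s in (0.5, 1.0):
    row = [var_logs(r, s) for r in (1.0, 1.5, 2.0, 3.0)]
    print(f"s_r = {s}: {row}")
    assert all(a > b for a, b in zip(row, row[1:]))  # decreasing in r

# For r = 2 the distribution is lognormal and VL(X) reduces to s^2.
assert abs(var_logs(2.0, 0.7) - 0.49) < 1e-12
```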

Brunazzo and Pollastri (1986) suggested estimating the parameters via a method-of-moments estimation of the generalized normal parameters, that is, a method-of-moments estimation for the logarithms of the data. It is easy to see that $\mu$ and $s_r$ may be estimated by
\[
\hat{\mu} = \frac{1}{n}\sum_{i=1}^{n} \log x_i
\qquad\text{and}\qquad
\hat{s}_r = \left[\frac{1}{n}\sum_{i=1}^{n} \bigl|\log x_i - \overline{\log x}\,\bigr|^{\hat{r}}\right]^{1/\hat{r}},
\]

once an estimate of $r$ is available. To obtain such an estimate, one requires a certain ratio of absolute central moments of $\log X$. For the generalized normal distribution (Lunetta, 1963),

\[
m_p' := E(|Y - \mu|^p) = \frac{r^{p/r}\, s_r^p\, \Gamma[(p+1)/r]}{\Gamma(1/r)}.
\]

Thus,
\[
b_p := \frac{E(|Y - \mu|^{2p})}{\bigl[E(|Y - \mu|^{p})\bigr]^2} = \frac{\Gamma(1/r)\, \Gamma[(2p+1)/r]}{\Gamma^2[(p+1)/r]}.
\]

If we set $p = r$, this simplifies to
\[
b_r = r + 1.
\]

Thus, $r$ may be estimated using the empirical counterpart of $b_r$. Brunazzo and Pollastri (1986) suggested solving the equation
\[
\hat{m}_{2r}' - (r+1)\,(\hat{m}_r')^2 = 0
\]
by means of the regula falsi. [Using the parameterization of the exponential power distribution (4.42), for which the additional shape parameter $b$ varies on a bounded set, Rahman and Gokhale (1996) suggested using the bisection method.]
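A compact implementation of this moment-ratio recipe might look as follows (a sketch under my own naming; plain bisection stands in for the regula falsi, and the bracket $[0.2, 10]$ is an arbitrary choice that should be validated on real data):

```python
import math
import random

def fit_genlognormal_mm(x, lo=0.2, hi=10.0, iters=60):
    """Moment-type estimation for the generalized lognormal (a sketch).

    Solves the empirical equation m'_{2r} - (r+1) (m'_r)^2 = 0 by bisection,
    then computes mu-hat and s_r-hat as in the text.
    """
    logs = [math.log(v) for v in x]
    n = len(logs)
    mu_hat = sum(logs) / n                        # sample mean of the logs
    dev = [abs(l - mu_hat) for l in logs]         # absolute deviations

    def h(r):
        mp = sum(d ** r for d in dev) / n         # empirical m'_r
        m2p = sum(d ** (2 * r) for d in dev) / n  # empirical m'_{2r}
        return m2p - (r + 1) * mp ** 2

    # Bisection; in practice one should verify h(lo) * h(hi) < 0 first.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    r_hat = 0.5 * (lo + hi)
    s_hat = (sum(d ** r_hat for d in dev) / n) ** (1.0 / r_hat)
    return mu_hat, s_hat, r_hat

# Simulated log-Laplace data (r = 1, mu = 0, s_1 = 0.5):
random.seed(42)
sample = [math.exp(random.choice((-1, 1)) * random.expovariate(1 / 0.5))
          for _ in range(10000)]
print(fit_genlognormal_mm(sample))  # roughly (0.0, 0.5, 1.0)
```

Because the ratio involves $2r$-th absolute moments, the estimate of $r$ is quite noisy in small samples; the bracketed remark about bounded reparameterizations is one reason the exponential power form can be numerically more convenient.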


Scheid (2001) considered the maximum likelihood estimation of the parameters.

The gradient of the log-likelihood is given by

\[
\frac{\partial \log L}{\partial \mu} = \frac{1}{s_r}\sum_{i=1}^{n} \left|\frac{\log x_i - \mu}{s_r}\right|^{r-1} \operatorname{sign}(\log x_i - \mu),
\]
\[
\frac{\partial \log L}{\partial s_r} = -\frac{n}{s_r} + \frac{1}{s_r^{\,r+1}}\sum_{i=1}^{n} |\log x_i - \mu|^{r},
\]
\[
\frac{\partial \log L}{\partial r} = \frac{n \log r}{r^2} - \frac{n}{r^2} + \frac{n}{r^2}\,\psi\!\left(1 + \frac{1}{r}\right)
+ \frac{1}{r^2}\sum_{i=1}^{n} \left|\frac{\log x_i - \mu}{s_r}\right|^{r}
- \frac{1}{r}\sum_{i=1}^{n} \left|\frac{\log x_i - \mu}{s_r}\right|^{r} \log\left|\frac{\log x_i - \mu}{s_r}\right|,
\]

where $\psi$ denotes the digamma function. [Somewhat earlier, Bologna (1985) obtained the ML estimators in the log-Laplace case as well as their sampling distributions, and also the distributions of the sample median and the sample geometric mean.]
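These gradient formulas can be verified mechanically against finite differences of the log-likelihood. The sketch below is my own code; it assumes the density parameterization $f(x) = \exp(-|\log x - \mu|^r/(r s_r^r)) / (2\, r^{1/r} s_r\, \Gamma(1 + 1/r)\, x)$, under which the three partial derivatives above follow, and uses a stdlib-only digamma approximation:

```python
import math

def digamma(x):
    # Stdlib-only digamma: recurrence up to x >= 6, then asymptotic expansion.
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def loglik(x, mu, s, r):
    # Log-likelihood of the generalized lognormal (assumed parameterization).
    logs = [math.log(v) for v in x]
    n = len(logs)
    return (-sum(logs) - n * math.log(2) - (n / r) * math.log(r)
            - n * math.log(s) - n * math.lgamma(1 + 1 / r)
            - sum(abs(l - mu) ** r for l in logs) / (r * s ** r))

def grad(x, mu, s, r):
    # Analytic gradient, matching the three displayed partial derivatives.
    logs = [math.log(v) for v in x]
    n = len(logs)
    z = [abs(l - mu) / s for l in logs]
    g_mu = sum(t ** (r - 1) * math.copysign(1.0, l - mu)
               for t, l in zip(z, logs)) / s
    g_s = -n / s + sum(abs(l - mu) ** r for l in logs) / s ** (r + 1)
    g_r = (n * math.log(r) / r ** 2 - n / r ** 2
           + n / r ** 2 * digamma(1 + 1 / r)
           + sum(t ** r for t in z) / r ** 2
           - sum(t ** r * math.log(t) for t in z) / r)
    return g_mu, g_s, g_r

# Compare with central finite differences at an arbitrary point.
data, (mu0, s0, r0) = [0.5, 1.2, 2.0, 3.3, 0.9], (0.1, 0.7, 1.5)
h = 1e-6
num = ((loglik(data, mu0 + h, s0, r0) - loglik(data, mu0 - h, s0, r0)) / (2 * h),
       (loglik(data, mu0, s0 + h, r0) - loglik(data, mu0, s0 - h, r0)) / (2 * h),
       (loglik(data, mu0, s0, r0 + h) - loglik(data, mu0, s0, r0 - h)) / (2 * h))
ana = grad(data, mu0, s0, r0)
print(ana, num)
```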

The Fisher information matrix for the parameter $\theta := (\mu, s_r, r)^{\top}$ can be shown to be (Scheid, 2001)

\[
I(\theta) = \begin{pmatrix}
\dfrac{(r-1)\,\Gamma(1 - 1/r)}{s_r^2\, r^{2/r}\, \Gamma(s)} & 0 & 0 \\[2ex]
0 & \dfrac{r}{s_r^2} & -\dfrac{B}{r s_r} \\[2ex]
0 & -\dfrac{B}{r s_r} & \dfrac{s\,\psi'(s) + B^2 - 1}{r^3}
\end{pmatrix}, \tag{4.60}
\]
where $s := 1 + 1/r$ and $B := \log r + \psi(s)$.
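As a quick consistency check on (4.60), the matrix can be evaluated numerically; at $r = 2$ the $(\mu, \mu)$ and $(s_r, s_r)$ entries should reduce to the familiar lognormal values $1/\sigma^2$ and $2/\sigma^2$ with $\sigma = s_r$. The sketch below (my own stdlib-only helpers for the digamma and trigamma functions) does exactly that:

```python
import math

def digamma(x):
    # Recurrence up to x >= 6, then asymptotic expansion.
    acc = 0.0
    while x < 6.0:
        acc -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def trigamma(x):
    # Recurrence up to x >= 6, then asymptotic expansion.
    acc = 0.0
    while x < 6.0:
        acc += 1.0 / (x * x)
        x += 1.0
    f = 1.0 / (x * x)
    return acc + 1.0 / x + 0.5 * f + f / x * (1/6 - f * (1/30 - f / 42))

def fisher_info(s_r, r):
    # Equation (4.60); note the (mu, mu) entry needs r > 1 for Gamma(1 - 1/r).
    s = 1.0 + 1.0 / r
    B = math.log(r) + digamma(s)
    i_mm = (r - 1) * math.gamma(1 - 1 / r) / (s_r ** 2 * r ** (2 / r) * math.gamma(s))
    i_ss = r / s_r ** 2
    i_sr = -B / (r * s_r)
    i_rr = (s * trigamma(s) + B ** 2 - 1) / r ** 3
    return [[i_mm, 0.0, 0.0],
            [0.0, i_ss, i_sr],
            [0.0, i_sr, i_rr]]

I = fisher_info(0.7, 2.0)
print(I[0][0], 1 / 0.7 ** 2)   # lognormal check: I_mu,mu = 1 / sigma^2
print(I[1][1], 2 / 0.7 ** 2)   # lognormal check: I_s,s = 2 / sigma^2
```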

This matrix coincides with the Fisher information of the generalized normal