$$P(X) = F_{SM}(m;\, a, b, q) - F_{GB2}\!\left(m;\, a,\, b,\, 1 + \frac{1}{a},\, q - \frac{1}{a}\right),$$

where m is the first moment. Finally, the variance of logarithms is (Schmittlein, 1983)

$$VL(X) = \operatorname{var}(\log X) = \frac{\psi'(q) + \psi'(1)}{a^2}.$$

Unlike in the lognormal case, these measures are not very attractive analytically.
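Nevertheless, the variance-of-logarithms formula is easy to verify numerically against simulated Singh–Maddala variates, drawn by inverting the cdf F(x) = 1 − [1 + (x/b)^a]^{−q}. A minimal sketch (Python/scipy; the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import polygamma

# Singh-Maddala cdf: F(x) = 1 - [1 + (x/b)^a]^(-q); inversion gives
# X = b * [(1 - U)^(-1/q) - 1]^(1/a) for U ~ Uniform(0, 1).
a, b, q = 2.0, 1.0, 3.0  # arbitrary illustrative parameters

rng = np.random.default_rng(42)
u = rng.uniform(size=500_000)
x = b * ((1.0 - u) ** (-1.0 / q) - 1.0) ** (1.0 / a)

# VL(X) = [psi'(q) + psi'(1)] / a^2, with psi' the trigamma function
vl_formula = (polygamma(1, q) + polygamma(1, 1.0)) / a**2
vl_sample = np.log(x).var()
```

With half a million draws the sample log-variance agrees with the closed form to two or three decimals.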

Klonner (2000) presented necessary as well as sufficient conditions for first-order stochastic dominance (FSD) within the Singh–Maddala family. The conditions a1 ≥ a2, a1q1 ≤ a2q2, and b1 ≥ b2 are sufficient for X1 ≥_FSD X2, whereas the conditions a1 ≥ a2 and a1q1 ≤ a2q2 are necessary. It is instructive to compare these conditions to those for the Lorenz ordering (6.68): Although a1q1 ≤ a2q2 is also a condition for X1 ≥_L X2, the second condition a1 ≥ a2 appears in reversed form in (6.68). The reason is that FSD describes "size," whereas the Lorenz ordering describes variability. Namely, a1 ≤ a2 and a1q1 ≤ a2q2 mean that X1 is associated with both a heavier left and a heavier right tail and is thus more variable than X2. On the other hand, a1 ≥ a2 and a1q1 ≤ a2q2 mean that X1 is associated with a lighter left and a heavier right tail and is thus stochastically larger than X2. See Chapter 2 for details of the argument in connection with the Lorenz ordering.
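The sufficient conditions are easy to probe numerically: under them, the cdf of X1 must lie below that of X2 everywhere. A minimal sketch (Python; the parameter values are arbitrary choices satisfying a1 ≥ a2, a1q1 ≤ a2q2, b1 ≥ b2):

```python
import numpy as np

def sm_cdf(x, a, b, q):
    """Singh-Maddala cdf F(x) = 1 - [1 + (x/b)^a]^(-q)."""
    return 1.0 - (1.0 + (x / b) ** a) ** (-q)

# Arbitrary parameters with a1 >= a2, a1*q1 <= a2*q2, b1 >= b2
a1, b1, q1 = 3.0, 1.2, 1.0
a2, b2, q2 = 2.0, 1.0, 2.0
assert a1 >= a2 and a1 * q1 <= a2 * q2 and b1 >= b2

# F1(x) <= F2(x) on a wide logarithmic grid means X1 >=_FSD X2 (numerically)
grid = np.logspace(-4, 4, 2_000)
fsd_holds = np.all(sm_cdf(grid, a1, b1, q1) <= sm_cdf(grid, a2, b2, q2))
```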

Zenga ordering within the Singh–Maddala family was studied by Polisicchio (1990). It emerges that a1 ≤ a2 implies X1 ≥_Z X2, for a fixed q, and similarly that q1 ≤ q2 implies X1 ≥_Z X2, for a fixed a. Under these conditions we know from (6.68) that X1 ≥_L X2; however, a complete characterization of the Zenga order among Singh–Maddala distributions appears to be unavailable at present.
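The implied Lorenz ordering can also be inspected numerically. Assuming the Lorenz curve of a Singh–Maddala distribution in the incomplete-beta form L(u) = I_z(1 + 1/a, q − 1/a) with z = 1 − (1 − u)^{1/q} — this follows from the GB2 first-moment distribution appearing above and requires aq > 1 — a sketch with arbitrary parameter values is:

```python
import numpy as np
from scipy.special import betainc

def sm_lorenz(u, a, q):
    """Lorenz curve of SM(a, b, q): regularized incomplete beta ratio
    I_z(1 + 1/a, q - 1/a) at z = 1 - (1 - u)^(1/q); requires a*q > 1.
    (Scale b cancels from the Lorenz curve.)"""
    z = 1.0 - (1.0 - u) ** (1.0 / q)
    return betainc(1.0 + 1.0 / a, q - 1.0 / a, z)

u = np.linspace(0.001, 0.999, 999)
a1, a2, q = 2.0, 3.0, 2.0        # a1 <= a2, common q: X1 >=_L X2 by (6.68)
curve1 = sm_lorenz(u, a1, q)     # lower curve = more unequal
curve2 = sm_lorenz(u, a2, q)
more_unequal = np.all(curve1 <= curve2 + 1e-12)
```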

6.2.5 Estimation

Singh and Maddala (1976) estimated parameters by using a regression method

minimizing

$$\sum_{i=1}^{n} \left\{ \log[1 - F(x_i)] + q \log\!\left[ 1 + \left(\frac{x_i}{b}\right)^{a} \right] \right\}^{2}, \qquad (6.70)$$

that is, a nonlinear least-squares regression in the Pareto diagram. Stoppa (1995)

discussed a further regression method utilizing the elasticity d log F(x)/d log x of the

distribution. The resulting estimators could be used, for example, as starting values

in ML estimation.
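A minimal sketch of such a regression estimator, using plotting positions i/(n + 1) as a stand-in for the empirical cdf (which avoids log 0 at the sample maximum) and scipy's nonlinear least squares; the variable names, starting values, and bounds are all illustrative choices, not part of the original method's specification:

```python
import numpy as np
from scipy.optimize import least_squares

# Simulated Singh-Maddala data (arbitrary "true" parameters)
rng = np.random.default_rng(0)
a_true, b_true, q_true = 2.5, 1.0, 1.5
u = rng.uniform(size=2_000)
x = np.sort(b_true * ((1.0 - u) ** (-1.0 / q_true) - 1.0) ** (1.0 / a_true))

# Plotting positions i/(n+1) in place of the empirical cdf
n = x.size
log_surv = np.log(1.0 - np.arange(1, n + 1) / (n + 1.0))

def residuals(theta):
    a, b, q = theta
    # residual of (6.70): log[1 - F(x_i)] + q * log[1 + (x_i/b)^a]
    return log_surv + q * np.log1p((x / b) ** a)

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0],
                    bounds=([0.1, 0.1, 0.1], [10.0, 10.0, 10.0]))
a_hat, b_hat, q_hat = fit.x
```

As the text notes, such estimates are mainly useful as starting values for ML estimation.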

The log-likelihood for a complete random sample of size n equals

$$\log L = n \log q + n \log a + (a - 1) \sum_{i=1}^{n} \log x_i - n a \log b - (1 + q) \sum_{i=1}^{n} \log\!\left[ 1 + \left(\frac{x_i}{b}\right)^{a} \right],$$

yielding the likelihood equations

$$\frac{n}{a} + \sum_{i=1}^{n} \log\frac{x_i}{b} = (1 + q) \sum_{i=1}^{n} \log\frac{x_i}{b} \left[ \left(\frac{b}{x_i}\right)^{a} + 1 \right]^{-1}, \qquad (6.71)$$

$$n = (1 + q) \sum_{i=1}^{n} \left[ 1 + \left(\frac{b}{x_i}\right)^{a} \right]^{-1}, \qquad (6.72)$$

$$\frac{n}{q} = \sum_{i=1}^{n} \log\left[ 1 + \left(\frac{x_i}{b}\right)^{a} \right]. \qquad (6.73)$$

The algorithmic aspects of this optimization problem are discussed in Mielke and

Johnson (1974), Wingo (1983), and Watkins (1999).
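In practice one typically maximizes the log-likelihood above numerically rather than solving (6.71)–(6.73) directly. A sketch (Python/scipy with a generic simplex optimizer, reparameterized in logs so the optimizer respects positivity; starting values are illustrative, and this is not the specific algorithm of the works just cited):

```python
import numpy as np
from scipy.optimize import minimize

# Simulated Singh-Maddala sample (arbitrary "true" parameters)
rng = np.random.default_rng(1)
a_true, b_true, q_true = 2.0, 1.0, 1.5
u = rng.uniform(size=5_000)
x = b_true * ((1.0 - u) ** (-1.0 / q_true) - 1.0) ** (1.0 / a_true)

def negloglik(log_theta):
    # Optimize over log-parameters to keep a, b, q > 0
    a, b, q = np.exp(log_theta)
    n = x.size
    return -(n * np.log(q) + n * np.log(a)
             + (a - 1.0) * np.log(x).sum()
             - n * a * np.log(b)
             - (1.0 + q) * np.log1p((x / b) ** a).sum())

res = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-8, "maxiter": 5000})
a_hat, b_hat, q_hat = np.exp(res.x)
```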

Specializing from the information matrix for the GB2 distribution (6.30), we obtain for θ = (a, b, q)⊤

$$I(\theta) = -E\!\left[\frac{\partial^2 \log L}{\partial \theta_i\, \partial \theta_j}\right]_{i,j} =: \begin{pmatrix} I_{11} & I_{12} & I_{13} \\ I_{21} & I_{22} & I_{23} \\ I_{31} & I_{32} & I_{33} \end{pmatrix}, \qquad (6.74)$$


where

$$I_{11} = \frac{1}{a^2 (2+q)} \left\{ q\left[ (\psi(q) - \psi(1) - 1)^2 + \psi'(q) + \psi'(1) \right] + 2\left[ \psi(q) - \psi(1) \right] \right\}, \qquad (6.75)$$

$$I_{21} = I_{12} = \frac{1 - q + q[\psi(q) - \psi(1)]}{b(2+q)}, \qquad (6.76)$$

$$I_{22} = \frac{a^2 q}{b^2 (2+q)}, \qquad (6.77)$$

$$I_{23} = I_{32} = -\frac{a}{b(1+q)}, \qquad (6.78)$$

$$I_{31} = I_{13} = -\frac{\psi(q) - \psi(1) - 1}{a(1+q)}, \qquad (6.79)$$

$$I_{33} = \frac{1}{q^2}. \qquad (6.80)$$

[For I33 we used the identity ψ'(x) − ψ'(x + 1) = x^{−2}.]
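The entries (6.75)–(6.80) translate directly into code (ψ and ψ' via scipy's digamma and polygamma), which allows a quick check that the information matrix is symmetric and positive definite at given parameter values. A sketch with arbitrary illustrative parameters:

```python
import numpy as np
from scipy.special import digamma, polygamma

def sm_fisher_info(a, b, q):
    """Per-observation Fisher information (6.74)-(6.80) for theta = (a, b, q)."""
    psi_q, psi_1 = digamma(q), digamma(1.0)
    tri_q, tri_1 = polygamma(1, q), polygamma(1, 1.0)  # trigamma values
    I11 = (q * ((psi_q - psi_1 - 1.0) ** 2 + tri_q + tri_1)
           + 2.0 * (psi_q - psi_1)) / (a**2 * (2.0 + q))
    I12 = (1.0 - q + q * (psi_q - psi_1)) / (b * (2.0 + q))
    I22 = a**2 * q / (b**2 * (2.0 + q))
    I23 = -a / (b * (1.0 + q))
    I13 = -(psi_q - psi_1 - 1.0) / (a * (1.0 + q))
    I33 = 1.0 / q**2
    return np.array([[I11, I12, I13],
                     [I12, I22, I23],
                     [I13, I23, I33]])

info = sm_fisher_info(2.0, 1.0, 3.0)   # arbitrary parameter values
eigvals = np.linalg.eigvalsh(info)     # all positive for a valid info matrix
```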

Schmittlein (1983) provided an explicit expression for the inverse of the Fisher

information (using a different parameterization) as well as the information matrix

when the data are grouped and/or type I censored. Comparing these formulae

with the expressions above permits an evaluation of the information loss due to

grouping and/or censoring. Asymptotic variances for functionals of

the distribution can be obtained by the delta method. Since the resulting

expressions for the asymptotic variances of, for example, the Gini, Pietra, and

variance of logarithms measures of inequality are not very attractive analytically,