
Two characteristics are used to describe the probable values of a random
variable X: the mean (or expectation) and the median. The mean of X is the
average of all possible values of X, weighted with the probability density
P(x):

m \equiv E[X] = \int x P(x)\,dx \qquad (3.1.5)

The median of X is the value M for which

\Pr(X > M) = \Pr(X < M) = 0.5 \qquad (3.1.6)

The median is the preferable characteristic of the most probable value for
strongly skewed data samples. Consider a sample of 1000 lottery tickets
that has one "lucky" ticket winning one million dollars and 999
"losers." The mean win in this sample is $1000, which does not
realistically describe the lottery outcome. The median value of zero is a
much more relevant characteristic in this case.
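The lottery example can be checked numerically; a minimal sketch using Python's standard library, with the 1000-ticket sample described above:

```python
from statistics import mean, median

# One "lucky" ticket worth $1,000,000 and 999 "losers" worth $0
tickets = [1_000_000] + [0] * 999

print(mean(tickets))    # 1000 -- dominated by the single outlier
print(median(tickets))  # 0 -- the typical outcome
```

The mean is pulled far from the typical outcome by a single extreme value, while the median is insensitive to it.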

The expectation of a random variable calculated using some available
information I_t (which may change with time t) is called the conditional
expectation. The conditional probability density is denoted by P(x|I_t).
The conditional expectation equals

E[X_t | I_t] = \int x P(x|I_t)\,dx \qquad (3.1.7)
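As a toy illustration of conditioning (a simple event rather than a full time-dependent information set I_t, which would require a model), one can estimate an expectation conditioned on X > 0; a sketch with NumPy, assuming a standard normal X, for which E[X | X > 0] = sqrt(2/pi) ≈ 0.798:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)

uncond = x.mean()       # unconditional mean, close to 0
cond = x[x > 0].mean()  # conditional mean, close to sqrt(2/pi) ~ 0.798

print(uncond, cond)
```

Conditioning on information (here, the sign of X) shifts the expectation away from its unconditional value.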

Variance, Var, and the standard deviation, \sigma, are the conventional
estimates of the deviations from the mean value of X

\mathrm{Var}[X] \equiv \sigma^2 = \int (x - m)^2 P(x)\,dx \qquad (3.1.8)


In financial literature, the standard deviation of price is used to

characterize the price volatility.

The higher-order moments of the probability distributions are
defined as

m_n \equiv E[X^n] = \int x^n P(x)\,dx \qquad (3.1.9)

According to this definition, the mean is the first moment (m \equiv m_1), and
the variance can be expressed via the first two moments: \sigma^2 = m_2 - m^2.
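The relation \sigma^2 = m_2 - m^2 can be verified on a sample by estimating the moments as sample averages; a sketch with NumPy (the choice of an exponential sample is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=100_000)  # any distribution will do

m1 = np.mean(x)        # first moment (the mean)
m2 = np.mean(x ** 2)   # second moment
var_from_moments = m2 - m1 ** 2

# Agrees with the direct population-variance estimate
print(var_from_moments, np.var(x))
```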

Two other important parameters, skewness S and kurtosis K, are
related to the third and fourth moments, respectively:

S = E[(x - m)^3]/\sigma^3, \qquad K = E[(x - m)^4]/\sigma^4 \qquad (3.1.10)

Both parameters, S and K, are dimensionless. Zero skewness implies
that the distribution is symmetrical around its mean value. Positive
and negative values of skewness indicate long positive tails and
long negative tails, respectively. Kurtosis characterizes the distribution
peakedness. The kurtosis of the normal distribution equals three.
The excess kurtosis, K_e = K - 3, is often used as a measure of deviation
from the normal distribution. In particular, positive excess
kurtosis (or leptokurtosis) indicates more frequent medium and large
deviations from the mean value than is typical for the normal distribution.
Leptokurtosis leads to a sharper central peak and to so-called
fat tails in the distribution. Negative excess kurtosis indicates that large
deviations from the mean value are rarer than for the normal distribution.
In this case, the distribution flattens around its mean value while the
distribution tails decay faster than the tails of the normal distribution.
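Skewness and excess kurtosis per (3.1.10) can be estimated from a sample with plain NumPy; a sketch checking that a normal sample gives S ≈ 0 and K_e ≈ 0, while an exponential sample (for which theory gives S = 2 and K_e = 6) is strongly right-skewed and leptokurtic:

```python
import numpy as np

def skewness(x):
    m, s = x.mean(), x.std()
    return np.mean((x - m) ** 3) / s ** 3

def excess_kurtosis(x):
    m, s = x.mean(), x.std()
    return np.mean((x - m) ** 4) / s ** 4 - 3.0

rng = np.random.default_rng(1)
normal = rng.standard_normal(200_000)
expo = rng.exponential(size=200_000)  # theory: S = 2, K_e = 6

print(skewness(normal), excess_kurtosis(normal))  # both near 0
print(skewness(expo), excess_kurtosis(expo))      # near 2 and 6
```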

The joint distribution of two random variables X and Y is the
generalization of the cumulative distribution (see 3.1.3)

\Pr(X \le b,\, Y \le c) = \int_{-\infty}^{b} \int_{-\infty}^{c} h(x, y)\,dx\,dy \qquad (3.1.11)

In (3.1.11), h(x, y) is the joint density that satisfies the normalization
condition

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(x, y)\,dx\,dy = 1 \qquad (3.1.12)


Two random variables are independent if their joint density function
is simply the product of the univariate density functions: h(x, y) = f(x)g(y).
Covariance between two variates provides a measure of their
simultaneous change. Consider two variates, X and Y, that have the
means m_X and m_Y, respectively. Their covariance equals

\mathrm{Cov}(x, y) \equiv \sigma_{XY} = E[(x - m_X)(y - m_Y)] = E[xy] - m_X m_Y \qquad (3.1.13)

Obviously, covariance reduces to variance if X = Y: \sigma_{XX} = \sigma_X^2.

Positive covariance between two variates implies that these variates
tend to change simultaneously in the same direction rather than in
opposite directions. Conversely, negative covariance between two
variates implies that when one variate grows, the second one tends
to fall, and vice versa. Another popular measure of simultaneous
change is the correlation coefficient

\mathrm{Corr}(x, y) = \mathrm{Cov}(x, y)/(\sigma_X \sigma_Y) \qquad (3.1.14)

The values of the correlation coefficient are within the range [-1, 1].
In the general case with N variates X_1, \ldots, X_N (where N > 2),
correlations among variates are described with the covariance matrix.
Its elements equal

\mathrm{Cov}(x_i, x_j) \equiv \sigma_{ij} = E[(x_i - m_i)(x_j - m_j)] \qquad (3.1.15)
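The relations (3.1.13)-(3.1.15) map directly onto NumPy's `cov` and `corrcoef`; a sketch with two positively coupled variates (the coupling 0.8x + 0.6ε is chosen so that Cov(x, y) ≈ Corr(x, y) ≈ 0.8):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(100_000)
y = 0.8 * x + 0.6 * rng.standard_normal(100_000)

# E[xy] - m_X m_Y, per (3.1.13)
cov_xy = np.mean(x * y) - x.mean() * y.mean()

# Full covariance matrix, per (3.1.15); the diagonal holds the variances
cov_matrix = np.cov(x, y, ddof=0)

# Correlation coefficient, per (3.1.14); always within [-1, 1]
corr = cov_xy / (x.std() * y.std())

print(cov_xy, corr)
```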

3.2 IMPORTANT DISTRIBUTIONS

There are several important probability distributions used in quantitative
finance. The uniform distribution has a constant value within
the given interval [a, b] and equals zero outside this interval

P_U(x) = \begin{cases} 0, & x < a \ \text{or} \ x > b \\ 1/(b - a), & a \le x \le b \end{cases} \qquad (3.2.1)

The uniform distribution has the following mean and higher-order
moments

m_U = (a + b)/2, \quad \sigma_U^2 = (b - a)^2/12, \quad S_U = 0, \quad K_{eU} = -6/5 \qquad (3.2.2)

The case with a = 0 and b = 1 is called the standard uniform distribution.
Many computer languages and software packages have a library
function for sampling from the standard uniform distribution.
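The moments in (3.2.2) can be checked against a standard uniform sample (a = 0, b = 1, so the mean is 1/2 and the variance is 1/12); a sketch using NumPy's standard uniform generator:

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.random(1_000_000)  # standard uniform on [0, 1)

print(u.mean())  # ~ (a + b)/2 = 0.5
print(u.var())   # ~ (b - a)^2 / 12 = 0.0833...
```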


The binomial distribution is a discrete distribution of obtaining n
successes out of N trials, where the result of each trial is true with
probability p and false with probability q = 1 - p (so-called Bernoulli
trials)

P_B(n; N, p) = C_N^n p^n q^{N-n} = C_N^n p^n (1 - p)^{N-n}, \qquad C_N^n = \frac{N!}{n!(N - n)!} \qquad (3.2.3)

The factor C_N^n is called the binomial coefficient. The mean and
higher-order moments of the binomial distribution are, respectively,

m_B = Np, \quad \sigma_B^2 = Np(1 - p), \quad S_B = (q - p)/\sigma_B, \quad K_{eB} = (1 - 6pq)/\sigma_B^2 \qquad (3.2.4)
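The moment formulas (3.2.4) can be verified exactly by summing over the pmf (3.2.3) with the standard library; a sketch for the arbitrary choice N = 20, p = 0.3:

```python
from math import comb, sqrt

N, p = 20, 0.3
q = 1.0 - p

# Binomial pmf (3.2.3) for n = 0, ..., N
pmf = [comb(N, n) * p**n * q**(N - n) for n in range(N + 1)]

mean = sum(n * pmf[n] for n in range(N + 1))
var = sum((n - mean)**2 * pmf[n] for n in range(N + 1))
sigma = sqrt(var)
skew = sum((n - mean)**3 * pmf[n] for n in range(N + 1)) / sigma**3
ex_kurt = sum((n - mean)**4 * pmf[n] for n in range(N + 1)) / sigma**4 - 3.0

# Compare with the closed forms in (3.2.4)
print(mean, N * p)                     # 6.0
print(var, N * p * q)                  # 4.2
print(skew, (q - p) / sigma)
print(ex_kurt, (1 - 6 * p * q) / var)
```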

In the case of large N and large (N - n), the binomial distribution
approaches the form

P_B(n) = \frac{1}{\sqrt{2\pi}\,\sigma_B} \exp[-(n - m_B)^2 / 2\sigma_B^2], \qquad N \to \infty,\ (N - n) \to \infty \qquad (3.2.5)

which coincides with the normal (or Gaussian) distribution (see 3.2.9). In
the case with p \ll 1, the binomial distribution approaches the Poisson
distribution.

The Poisson distribution describes the probability of n successes in
N trials, assuming that the mean number of successes \bar{n} is proportional
to the number of trials: \bar{n} = pN

P_P(n, N) = \frac{N!}{n!(N - n)!} \left(\frac{\bar{n}}{N}\right)^n \left(1 - \frac{\bar{n}}{N}\right)^{N-n} \qquad (3.2.6)

As the number of trials N becomes very large (N \to \infty), equation
(3.2.6) approaches the limit

P_P(n) = \bar{n}^n e^{-\bar{n}} / n! \qquad (3.2.7)
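The Poisson limit (3.2.7) can be checked numerically by comparing the binomial pmf for large N and small p with the Poisson pmf at the same mean; a sketch with the standard library, using the arbitrary values N = 1000, p = 0.005 (so the mean number of successes is 5):

```python
from math import comb, exp, factorial

N, p = 1000, 0.005
nbar = p * N  # mean number of successes, here 5.0

def binom_pmf(n):
    # Exact binomial pmf (3.2.3)
    return comb(N, n) * p**n * (1 - p)**(N - n)

def poisson_pmf(n):
    # Poisson limit (3.2.7) at the same mean
    return nbar**n * exp(-nbar) / factorial(n)

# The two pmfs agree closely for every n near the mean,
# and the agreement improves as N grows at fixed nbar
max_diff = max(abs(binom_pmf(n) - poisson_pmf(n)) for n in range(21))
print(max_diff)
```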

Mean, variance, skewness, and excess kurtosis of the Poisson distri-

bution are equal, respectively,
