
where X, Y have cumulative distribution functions FX and FY respectively. Then

var[h1(X) − h2(Y)] = var[h1(X)] + var[h2(Y)] − 2 cov(h1(X), h2(Y))

and this is small if cov(h1(X), h2(Y)) is large.

When h1(X), h2(Y) are both increasing functions, we can arrange a large covariance if we use the SAME uniform to generate X and Y, i.e.

X = FX^{-1}(U), Y = FY^{-1}(U).

Opposite of antithetic.

Theorem: Common and

Antithetic Random Numbers

Theorem: Suppose h1(x), h2(y) are both increasing or both decreasing functions. Subject to the constraints that X, Y have given cumulative distribution functions FX and FY respectively, the covariance

cov(h1(X), h2(Y))

is maximized when we use common random numbers, i.e.

X = FX^{-1}(U), Y = FY^{-1}(U)

and minimized when we use antithetic random numbers:

X = FX^{-1}(U), Y = FY^{-1}(1 − U)
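The theorem is easy to see numerically. A minimal sketch (in Python rather than the course's MATLAB), taking h1, h2 to be the identity and two exponential marginals as an assumed example; the distributions and sample size are illustrative, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.random(100_000)

# Inverse-CDF (quantile) transforms for two exponential distributions
FX_inv = lambda u: -np.log(1 - u)        # Exp(rate 1)
FY_inv = lambda u: -2 * np.log(1 - u)    # Exp(rate 1/2)

X = FX_inv(U)
Y_common = FY_inv(U)        # common random numbers: same U for both
Y_anti = FY_inv(1 - U)      # antithetic random numbers: 1 - U

c_common = np.cov(X, Y_common)[0, 1]    # near +2 (here Y_common = 2X)
c_anti = np.cov(X, Y_anti)[0, 1]        # negative: antithetic minimizes it
print(c_common, c_anti)
```

Common random numbers give the largest covariance achievable with these two marginals; antithetic numbers give the smallest.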

Comparing estimators for Call

Option Pricing Example

• script9

Combining Monte Carlo

Estimators

• If I have many MC estimators, with/without various variance reduction techniques, which should I choose?

Combining Estimators

• Suppose I have m unbiased estimators, all of the same parameter θ.

• Put these estimators in a vector Y = (Y1, Y2, ..., Ym)' so that E(Y) = 1θ, where 1 represents the vector (1,1,...,1) of length m.

Any linear combination of these estimators with coefficients that add to one is also an unbiased estimator of the parameter θ.

Which such linear combination is best?

Best linear combination of

estimators.

Q: What linear combination of these estimators does the best job of estimating θ?

A: The best linear combination is ∑ biYi where ∑ bi = 1 and bi is proportional to ∑_{j=1}^m Cij, where C = V^{-1} is the inverse of the covariance matrix V of Y, with Vij = cov(Yi, Yj).

Estimating Covariance

When we have many independent values of these estimators (from n simulations, e.g. Yik, k = 1, 2, ..., n are replicated values of Yi), we may estimate the variance-covariance matrix using the sample covariance. Estimate Vij by

(1/(n − 1)) ∑_{k=1}^n (Yik − Ȳi)(Yjk − Ȳj)
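The sample-covariance estimate above can be sketched as follows (Python rather than MATLAB; the three toy correlated estimators and the mixing matrix are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)

# n replications of m = 3 correlated unbiased estimators of theta (toy data)
n, theta = 5000, 1.0
Z = rng.normal(size=(n, 3))
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])
Y = theta + Z @ A   # column i holds the n replicated values Y_ik

# Sample covariance with the 1/(n-1) factor, exactly as on the slide
Ybar = Y.mean(axis=0)
V_hat = (Y - Ybar).T @ (Y - Ybar) / (n - 1)
# np.cov(Y, rowvar=False) computes the same matrix.
```

Each column of Y plays the role of one estimator; V_hat estimates Vij = cov(Yi, Yj).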

Theorem on Optimal Linear

Combination of estimators

Theorem: (Best Linear Combination of Estimators)

The best linear combination of the estimators Yi, i = 1, 2, ..., m is of the form

∑_{i=1}^m bi Yi, where the vector b is given by b' = (1'V^{-1}1)^{-1} 1'V^{-1}. Here 1 is the column vector of m ones.

The variance of the resulting estimator is

b'Vb = 1 / (1'V^{-1}1).
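A minimal sketch of the theorem's formulas (Python; the 3×3 covariance matrix V is an assumed example, not from the slides):

```python
import numpy as np

# Assumed covariance matrix of m = 3 unbiased estimators
V = np.array([[4.0, 1.0, 0.5],
              [1.0, 2.0, 0.3],
              [0.5, 0.3, 1.0]])
ones = np.ones(3)

# b' = (1'V^{-1}1)^{-1} 1'V^{-1}; solve() avoids forming V^{-1} explicitly
Vinv_1 = np.linalg.solve(V, ones)
b = Vinv_1 / (ones @ Vinv_1)     # weights sum to 1

var_opt = 1 / (ones @ Vinv_1)    # = b'Vb, the minimal variance
print(b, var_opt)
```

Note var_opt can never exceed the variance of any single estimator, since each unit vector is itself a feasible weight vector.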

Example: combining the

estimators of the call option

price

Consider the following estimators:

Y1 = (0.53/2) [f(.47 + .53U) + f(1 − .53U)]   (antithetic)

Y2 = (0.37/2) [f(.47 + .37U) + f(.84 − .37U)] + (0.16/2) [f(.84 + .16U) + f(1 − .16U)]

This is stratified into [.47, .84] and [.84, 1] and uses antithetic within strata, common U between strata.

Y3 = .37 f(.47 + .37U) + .16 f(1 − .16U)

stratified, antithetic between strata
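To see that Y1, Y2, Y3 all estimate the same integral, here is a sketch in Python with a hypothetical stand-in integrand f (chosen only to vanish below .47, matching the strata above; it is NOT the course's call-option function fn):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for f: zero below 0.47, smooth above it
def f(u):
    return np.maximum(u - 0.47, 0.0) * np.exp(u)

U = rng.random(200_000)
Y1 = 0.53 / 2 * (f(0.47 + 0.53 * U) + f(1 - 0.53 * U))    # antithetic on [.47, 1]
Y2 = (0.37 / 2 * (f(0.47 + 0.37 * U) + f(0.84 - 0.37 * U))
      + 0.16 / 2 * (f(0.84 + 0.16 * U) + f(1 - 0.16 * U)))  # stratified + antithetic within strata
Y3 = 0.37 * f(0.47 + 0.37 * U) + 0.16 * f(1 - 0.16 * U)     # stratified, antithetic between strata

for Y in (Y1, Y2, Y3):
    print(Y.mean(), Y.var())
```

All three sample means converge to the same integral of f over [0, 1]; the variances differ, which is exactly why combining them is worthwhile.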

Example: (cont)

Y4 = ∫ g(u)du + [f(U) − g(U)]   (control variate)

where g(u) = 6[(u − .47)+]^2 + (u − .47)+

Y5 = f(Z)/g(Z) where Z = .47 + .53U   (importance sampling)

Generate simulated values of all five estimators Y1, ..., Y5 using the same uniform. Do this repeatedly for n values of U. Obtain the individual estimators and the covariance matrix V. The best linear combination is ∑_{i=1}^5 biYi, where the vector b is given by

b = (1/(1'V^{-1}1)) V^{-1}1

(b is proportional to the sum of the rows of V^{-1}, rescaled so ∑ bi = 1).

MATLAB function OPTIMAL

function [o,v,b,t1]=optimal(U)
% generates optimal linear combination of five estimators and outputs
% average estimator and variance.
t1=cputime;
Y1=(.53/2)*(fn(.47+.53*U)+fn(1-.53*U)); t1=[t1 cputime];
Y2=.37*.5*(fn(.47+.37*U)+fn(.84-.37*U))+.16*.5*(fn(.84+.16*U)+fn(1-.16*U));
t1=[t1 cputime];
Y3=.37*fn(.47+.37*U)+.16*fn(1-.16*U); t1=[t1 cputime];
intg=2*(.53)^3+.53^2/2; Y4=intg+fn(U)-GG(U); t1=[t1 cputime];
Y5=importance('fn','importancedens','Ginverse',U); t1=[t1 cputime];
X=[Y1' Y2' Y3' Y4' Y5'];
mean(X)
V=cov(X); Z=ones(5,1); C=inv(V); b=C*Z/(Z'*C*Z);
o=mean(X*b); % this is the mean of the optimal linear combination
t1=[t1 cputime];
v=1/(Z'*C*Z); % variance of the combination; C=inv(V)
t1=diff(t1); % these are the cputimes of the various estimators.

Results for option pricing

• [o,v,b]=optimal(rand(1,100000))

• Estimators = 0.4619 0.4617 0.4618 0.4613 0.4619

• o = 0.46151 % best linear combination
