
• [o,v,b]=optimal(rand(1,100000))
• Estimators = 0.4619 0.4617 0.4618 0.4613 0.4619
• o = 0.46151 % best linear combination (true value = 0.46150)
• v = 1.1183e-05 % variance per uniform input
• b' = -0.5503 1.4487 0.1000 0.0491 -0.0475
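The call above is to the course's MATLAB routine; the underlying computation is a minimum-variance weighting of unbiased estimators. A Python sketch of that idea (the function name and interface are mine, reused for illustration, not the course code):

```python
import numpy as np

def optimal(Y):
    """Optimal linear combination of unbiased estimators.

    Y: (n, k) array; row i holds k estimators built from uniform i.
    Returns (o, v, b): combined estimate, its variance per input, and
    weights b = S^-1 1 / (1' S^-1 1), which minimize Var(b'Y) subject
    to sum(b) = 1 (so the combination stays unbiased).
    """
    S = np.cov(Y, rowvar=False)          # k x k sample covariance
    ones = np.ones(Y.shape[1])
    w = np.linalg.solve(S, ones)
    b = w / (ones @ w)                   # normalize so weights sum to 1
    combined = Y @ b
    return combined.mean(), combined.var(ddof=1), b
```

For example, with the antithetic pair Y1 = exp(U), Y2 = exp(1−U), both unbiased for e − 1, the weights come out near (1/2, 1/2) and the combined variance drops well below either column's.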

Efficiency of Optimal Linear Combination

• Efficiency gain based on the number of uniform random numbers: 0.4467/0.00001118, or about 40,000.
• However, one uniform generates 5 estimators, requiring 10 function evaluations.
• Efficiency based on function evaluations: approximately 4,000.

• A simulation using 500,000 uniform random numbers takes 13 seconds on a Pentium IV (2.4 GHz), equivalent to twenty billion simulations by crude Monte Carlo.

Interpreting the coefficients b. Dropping estimators.

• Variance of the mean of 100,000 is 1.18 × 10⁻¹⁰, so the standard error is around 0.00001.
• Some weights are negative (e.g. on Y1), some are more than 1 (on Y2), and some are approximately 0 (could they be dropped?). For example, if we drop Y3 then the variance increases to about 1.6 × 10⁻¹⁰.
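The effect of dropping an estimator can be quantified directly: the minimum variance achievable with covariance matrix S is 1/(1'S⁻¹1), and deleting a row/column can only raise it. A sketch with a made-up covariance matrix (the course's actual 5×5 matrix is not reproduced here):

```python
import numpy as np

def combo_variance(S):
    """Minimum variance of a sum-to-one linear combination: 1/(1' S^-1 1)."""
    ones = np.ones(S.shape[0])
    return 1.0 / (ones @ np.linalg.solve(S, ones))

# Hypothetical covariance matrix of three unbiased estimators
S = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])

v_all = combo_variance(S)            # use all three estimators
v_drop = combo_variance(S[:2, :2])   # drop the third estimator
# Restricting the combination can only increase the variance.
```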

Tips…

• If you are simulating to generate results for a more complicated model (e.g. Asian option, non-normal distribution, etc.), use a simple model (European option, normal distribution, etc.) as a control variate. Use simulation to estimate the difference (assuming you know the result for the control).
• Allow the uniform variates as input parameters. This facilitates variance reduction without changing the program.
• Try a variety of variance reduction techniques (5-10), including some with second-difference-like expressions. Your best estimator is usually an optimal linear combination.
• Only combine antithetic random numbers additively.
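The first tip in miniature: to estimate E[exp(U)] for U ~ Uniform[0,1], treat g(U) = U (known mean 1/2) as the control and simulate only the difference exp(U) − U. A generic toy sketch, not the option-pricing setup:

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.random(100000)

crude = np.exp(u)        # crude Monte Carlo for E[exp(U)] = e - 1
diff = np.exp(u) - u     # simulate only the difference from the control
cv = diff + 0.5          # add back the known control mean E[U] = 1/2

# cv is unbiased for e - 1 with much smaller variance than crude,
# because exp(U) and U are highly correlated.
```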

Black-Scholes price in R

blsprice <- function(So, strike, r, T, sigma, div) {
  d1 <- (log(So/strike) + (r - div + (sigma^2)/2)*T)/(sigma*sqrt(T))
  d2 <- d1 - sigma*sqrt(T)
  call <- So*exp(-div*T)*pnorm(d1) - exp(-r*T)*strike*pnorm(d2)
  # put-call parity; note the exp(-div*T) discount on So when div > 0
  put <- call - So*exp(-div*T) + strike*exp(-r*T)
  c(call, put)
}
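A stdlib-only Python translation (normal CDF via the error function) can serve as a cross-check of the R function; note the exp(-div*T) discount on So in the parity line when a continuous dividend yield is used:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def blsprice(So, strike, r, T, sigma, div):
    """Black-Scholes call and put with continuous dividend yield div."""
    d1 = (log(So / strike) + (r - div + sigma**2 / 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    call = So * exp(-div * T) * norm_cdf(d1) - exp(-r * T) * strike * norm_cdf(d2)
    put = call - So * exp(-div * T) + strike * exp(-r * T)  # put-call parity
    return call, put
```

For So = strike = 100, r = 0.05, T = 1, sigma = 0.2, div = 0 this gives the standard textbook values call ≈ 10.45, put ≈ 5.57.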

Useful URLs

• http://www.std.com/nr/index.html (Numerical Recipes)
• http://www.cboe.com
• http://rweb.stat.umn.edu/R/ (R library)

Simulating Survivorship bias and the maxima of Brownian Motion

Examples in biostatistics: sequential tests for a mean, e.g. for testing the hypotheses

H0 : µ ≥ 0
H1 : µ < 0

Reject H0 as soon as B(t) < −c0 − c1·t, i.e. when min_t {B(t) + c0 + c1·t} < 0.
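The stopping rule can be checked on a discretized path. This sketch (function name, boundary constants, and grid are illustrative choices, not the course code) returns the first time B(t) crosses the lower boundary −c0 − c1·t, or None if it never does on [0, T]:

```python
import numpy as np

def first_rejection_time(mu, sigma, c0, c1, T=10.0, n=10000, rng=None):
    """First grid time t with B(t) < -c0 - c1*t, or None if no crossing."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n
    t = np.arange(1, n + 1) * dt
    # Euler path of B(t) = mu*t + sigma*W(t) on the grid
    B = np.cumsum(mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
    hit = np.nonzero(B < -c0 - c1 * t)[0]
    return t[hit[0]] if hit.size else None
```

Under a strongly negative drift the boundary is hit quickly (H0 rejected); under a positive drift it rarely is.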

Modeling highs

Brownian Motion

• Brownian motion

dX(t) = µ dt + σ dW(t)

ΔX(t) ~ N(µΔt, σ²Δt)

We typically observe

O_i = X_i(0)
C_i = X_i(T)
L_i = min{X_i(t); 0 < t < T}
H_i = max{X_i(t); 0 < t < T}

where the X_i(t) are correlated Brownian motion processes.

Acceptance-rejection generating from f(x)

Suppose we generate the Close C by acceptance-rejection using its pdf f(x).

We can use the same picture to simulate (H, C) jointly for BM.

Similarly if C is discrete.
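A generic acceptance-rejection sketch (a stand-in for the course's picture, with an example density of my choosing): draw from the Beta(2,2) density f(x) = 6x(1 − x) on [0,1] using a Uniform[0,1] proposal and envelope constant M = 1.5, since f ≤ 1.5 on [0,1]:

```python
import numpy as np

def ar_sample(n, rng=None):
    """Acceptance-rejection draws from f(x) = 6x(1-x) on [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    M = 1.5                              # max of f on [0, 1], at x = 0.5
    out = []
    while len(out) < n:
        x = rng.random()                 # proposal: Uniform[0, 1]
        u = rng.random()
        if u * M <= 6 * x * (1 - x):     # accept with probability f(x)/M
            out.append(x)
    return np.array(out)
```

The expected acceptance rate is 1/M = 2/3, so generating n draws costs about 1.5n proposals.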

Exponential Statistics

Theorem: For Brownian motion,

Z_H = (H − O)(H − C) ~ exp(σ²T/2)

Z_L = (L − O)(L − C) ~ exp(σ²T/2)

independent of (O, C).

Proof: random walk and reflection.

By the reflection principle (with f the density of C),

P(H ≥ m | C = u) = 1                  for m ≤ max(0, u)
P(H ≥ m | C = u) = f(2m − u)/f(u)     for m > max(0, u)

Substitute observed values in survivor function in Normal case

U = f(2H − C)/f(C) is the survivor function for H given C evaluated at its observed values. Therefore it is Uniform[0,1].

But

U = f(2H − C)/f(C) = exp{(C² − (2H − C)²)/(2Tσ²)} = exp{−2H(H − C)/(Tσ²)}

Therefore −ln(U) = 2H(H − C)/(Tσ²) is exp(1), and

Z_H = H(H − C) is exp(Tσ²/2).
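The theorem suggests a fast way to simulate (H, C) jointly without discretizing the path (taking O = 0): draw C, draw Z independently from the exponential law with mean σ²T/2, and solve H(H − C) = Z for the root H ≥ max(0, C). A sketch:

```python
import numpy as np

def simulate_high_close(n, mu, sigma, T, rng=None):
    """Draw (H, C) pairs for Brownian motion on [0, T] started at O = 0.

    Uses Z_H = H(H - C) ~ Exponential(mean sigma^2 T / 2), independent
    of C; the quadratic H^2 - C*H - Z = 0 has one root >= max(0, C).
    """
    rng = np.random.default_rng() if rng is None else rng
    C = mu * T + sigma * np.sqrt(T) * rng.standard_normal(n)
    Z = rng.exponential(scale=sigma**2 * T / 2, size=n)
    H = (C + np.sqrt(C**2 + 4 * Z)) / 2   # positive root of H(H - C) = Z
    return H, C
```

For a driftless path with σ = T = 1, the simulated H has the |N(0,1)| law of the running maximum, with mean √(2/π) ≈ 0.798.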

Estimating volatility based on High and Low

Both Z_H and Z_L (and their average) provide an estimator of the volatility or variance.
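Since E[Z_H] = E[Z_L] = σ²T/2, the quantity (Z_H + Z_L)/T averaged over days is (approximately) unbiased for σ². A simulation sketch on a fine grid (all parameters are illustrative; the discrete maximum slightly understates the continuous one, so expect a small downward bias):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, T, steps, days = 0.3, 1.0, 2000, 400

daily = []
for _ in range(days):
    dX = sigma * np.sqrt(T / steps) * rng.standard_normal(steps)
    X = np.concatenate([[0.0], np.cumsum(dX)])  # driftless path, O = 0
    O, C, H, L = X[0], X[-1], X.max(), X.min()
    ZH = (H - O) * (H - C)                      # ~ exp(sigma^2 T / 2)
    ZL = (L - O) * (L - C)                      # ~ exp(sigma^2 T / 2)
    daily.append((ZH + ZL) / T)                 # mean sigma^2

sigma2_hat = float(np.mean(daily))              # estimates sigma^2 = 0.09
```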
