Module 07 Lecture Slides

This document discusses random variate generation using the inverse transform method. It begins with an overview of the module and what will be covered, including discrete and continuous distributions as well as stochastic processes. It then goes into more detail on the inverse transform method, providing examples of how to generate random variates from continuous distributions like Weibull, exponential, triangular, and normal. It also discusses how to apply the inverse transform method to discrete distributions using tables or other techniques.


Computer Simulation

Module 7: Random Variate


Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Introduction

M7-001
Module Overview
Last Module: We studied ways to
generate Uniform(0,1) pseudo-
random numbers. Who cares…?

This Module: We’ll take those


PRN’s and use them to generate
everything else! You name it, we’ll
generate it!

Idea: Find the proper trick or


algorithm and off we go! This is
how we drive simulations.
M7-002
Module Overview
1. Introduction  This lesson
2. Inverse Transform Method
3. Continuous Examples
4. Discrete Examples
5. Empirical Distribution Example
6. Convolution Method
7. Acceptance-Rejection Method
8. Proof
9. Continuous Examples
10. Poisson Example
M7-003
Module Overview, II
11. Composition Method
12. Box-Muller Normals
13. Order Statistics and Other Stuff
14. Multivariate Normal Distribution
15. Baby Stochastic Processes
16. Nonhomogeneous Poisson
17. Time Series
18. Queueing Processes
19. Brownian Motion

M7-004
Introduction
Goal: Use Unif(0,1) numbers to generate observations ("variates") from
other distributions, and even stochastic processes.

Try to be fast and reproducible.

■ Discrete distributions, like Bernoulli, Binomial, Poisson, and
empirical

■ Continuous distributions, like exponential, normal (many ways), and
empirical

M7-005
Intro (cont’d)
■ Multivariate normal
■ Nonhomogeneous Poisson processes
■ Autoregressive moving average time series
■ Waiting times
■ Brownian motion

Let's start with an old friend...

Inverse Transform Theorem: Let X be a continuous random variable
with c.d.f. F(x). Then F(X) ∼ Unif(0,1).
M7-006
Summary
This Time: Discussed what’s coming
up in this module on random variate
(and random process) generation.

Next Time: We’ll look at the Inverse


Transform method one last time!

M7-007
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Inverse Transform Method

M7-008
Lesson Overview
Last Lesson: We introduced the
topic of RV generation.

This Lesson: Now it’s time to get


going and start putting together our
bag of tricks.

Idea: Let’s go into some additional


detail with Inverse Transform…

M7-009
One Last Time!
Inverse Transform Theorem: Let X be a continuous random variable
with c.d.f. F(x). Then F(X) ∼ Unif(0,1).

Proof: Let Y = F(X), and suppose that Y has c.d.f. G(y). Then

G(y) = P(Y ≤ y) = P(F(X) ≤ y)
     = P(X ≤ F⁻¹(y)) = F(F⁻¹(y)) = y,

which is the Unif(0,1) c.d.f. □
M7-010
How Do We Use This Result?
Let U ∼ Unif(0,1). The theorem means that the random variable
F⁻¹(U) has the same distribution as X.

So here is the inverse transform method for generating a RV X having
c.d.f. F(x):
1. Sample U from Unif(0,1).
2. Return X = F⁻¹(U).
M7-011
[Figure: graph of a c.d.f. F(x). A uniform U on the vertical axis is
mapped back through the curve to X = F⁻¹(U) on the horizontal axis.]
M7-012
Example: The Unif(a, b) distribution, with F(x) = (x − a)/(b − a),
a < x < b.
Solving (X − a)/(b − a) = U for X, we get X = a + (b − a)U. □

Example: The Exp(λ) distribution, with F(x) = 1 − e^{−λx}, x > 0.

Solving F(X) = U for X,

X = −(1/λ) ln(1 − U), or equivalently X = −(1/λ) ln(U),

since U and 1 − U are both Unif(0,1). □
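The two one-line inversions above translate directly into code; here's a minimal Python sketch (the function names are mine, not from the slides):

```python
import math

def unif_variate(a, b, u):
    """Unif(a,b) by inverse transform: solve (x - a)/(b - a) = u."""
    return a + (b - a) * u

def exp_variate(lam, u):
    """Exp(lam) by inverse transform: solve 1 - e^{-lam x} = u."""
    return -math.log(1.0 - u) / lam  # -ln(U)/lam works just as well
```

Feeding in U's from last module's Unif(0,1) generator is what drives the whole simulation.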

M7-013
Summary
This Time: Stated and finally proved
the Inverse Transform Theorem. Then
showed how to use it on a couple of
easy examples.

Next Time: We’ll do some more-


interesting applications of Inverse
Transform on trickier continuous
examples.

M7-014
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Inverse Transform Method:


Continuous Examples
M7-015
Lesson Overview
Last Lesson: Stated and finally
proved the Inverse Transform
Theorem. Then showed how to
use it on some trivial examples.

This Lesson: We’ll apply the


method on trickier continuous
examples.

The method almost always


works (well, sort of).
M7-016
More Continuous Examples
Example: The Weibull distribution, with F(x) = 1 − e^{−(λx)^β}, x > 0.

Solving F(X) = U for X,

X = (1/λ)[−ln(1 − U)]^{1/β}, or equivalently X = (1/λ)[−ln(U)]^{1/β}. □

M7-017
Example: The triangular(0,1,2) distribution has p.d.f.

f(x) = x      if 0 ≤ x < 1
       2 − x  if 1 ≤ x ≤ 2.

The c.d.f. is

F(x) = x²/2            if 0 ≤ x < 1
       1 − (2 − x)²/2  if 1 ≤ x ≤ 2.

Need to look at the two
cases separately!

M7-018
If U < 1/2, we solve X²/2 = U to get X = √(2U).

If U ≥ 1/2, the only root of 1 − (2 − X)²/2 = U in [1, 2] is

X = 2 − √(2(1 − U)).

Thus, for example, if U = 0.6, we take X = 2 − √0.8. □

Remark: Do not replace U by 1 − U here! The two cases split on the
value of U itself.

Demo Time!!!
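A minimal Python sketch of the two-case inversion (the function name is mine):

```python
import math

def triangular_variate(u):
    """Triangular(0,1,2) by inverse transform, one case per half of [0,1]."""
    if u < 0.5:
        return math.sqrt(2.0 * u)              # solve x^2/2 = u
    return 2.0 - math.sqrt(2.0 * (1.0 - u))    # solve 1 - (2-x)^2/2 = u
```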
M7-019
Example: The standard normal distribution. Unfortunately, the
inverse c.d.f. Φ⁻¹(·) does not have an analytical form. This is often a
problem with the inverse transform method.

Easy solution: Do a table lookup. E.g., if U = 0.975, then
Z = Φ⁻¹(U) = 1.96. □
(In Excel, NORMSINV(0.975) = 1.96.)

Crude portable approximation (BCNN): The following approximation
gives at least one decimal place of accuracy for
0.00134 ≤ U ≤ 0.98865:

Z = Φ⁻¹(U) ≈ [U^0.135 − (1 − U)^0.135] / 0.1975.
M7-020
Here's a better portable solution to generate Nor(0,1)'s: The following
approximation has absolute error ≤ 0.45 × 10⁻³:

Z = sign(U − 1/2) (t − (c₀ + c₁t + c₂t²) / (1 + d₁t + d₂t² + d₃t³)),

where sign(x) = 1, 0, −1 if x is positive, zero, or negative,
respectively,

t = {−ln[min(U, 1 − U)]²}^{1/2},

and

c₀ = 2.515517, c₁ = 0.802853, c₂ = 0.010328,
d₁ = 1.432788, d₂ = 0.189269, d₃ = 0.001308.
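The rational approximation above is easy to sketch in Python (the function name is mine; the constants are the ones on this slide):

```python
import math

def inv_normal_cdf(u):
    """Rational approximation to Phi^{-1}(u), using the constants above."""
    if u == 0.5:
        return 0.0  # sign(U - 1/2) = 0 gives Z = 0
    c0, c1, c2 = 2.515517, 0.802853, 0.010328
    d1, d2, d3 = 1.432788, 0.189269, 0.001308
    t = math.sqrt(-math.log(min(u, 1.0 - u) ** 2))
    z = t - (c0 + c1*t + c2*t**2) / (1.0 + d1*t + d2*t**2 + d3*t**3)
    return z if u > 0.5 else -z
```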
M7-021
In any case, if Z ∼ Nor(0,1) and you want X ∼ Nor(μ, σ²), just take
X ← μ + σZ.

Easy Example (Inverse Transform): Suppose you want to generate
X ∼ Nor(3, 16), and you start with U = 0.59. Then

X = μ + σZ = 3 + 4Φ⁻¹(0.59) = 3 + 4(0.2275) = 3.91. □

Demo Time!!!

M7-022
Summary
This Time: Discussed how to
generate several interesting
continuous RVs via the Inverse
Transform method.

Next Time: We’ll use Inverse


Transform in a more-discrete way.

M7-023
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Inverse Transform Method:


Discrete Examples
M7-024
Lesson Overview
Last Lesson: Discussed how to
generate several interesting
continuous RVs via the Inverse
Transform method.

This Lesson: We’ll use Inverse


Transform for discrete examples.

“What happens in simulation


class stays in simulation class.”

M7-025
Discrete Examples
For discrete distributions, it's often best to construct a table.

Baby Discrete Example: The Bernoulli(p) distribution.

x   P(X = x)   F(x)    Unif(0,1)'s
0   1 − p      1 − p   [0, 1 − p]
1   p          1       (1 − p, 1]

If U ≤ 1 − p, then take X = 0; otherwise, X = 1. For instance, if
p = 0.7 and we generate U = 0.1, we take X = 0. □
M7-026
Alternatively, we can construct the following "backwards" table (which
isn't strictly inverse transform, but it's the one that I usually use).

x   P(X = x)   Unif(0,1)'s
1   p          [0, p]
0   1 − p      (p, 1]

If U ≤ p, take X = 1; otherwise, X = 0. □

M7-027
Example: Suppose we have a slightly less-trivial discrete p.m.f.

x     P(X = x)   F(x)   Unif(0,1)'s
−1    0.6        0.6    [0.0, 0.6]
2.5   0.3        0.9    (0.6, 0.9]
4     0.1        1.0    (0.9, 1.0]

Thus, if U = 0.63, we take X = 2.5. □

M7-028
Sometimes there's an easy way to avoid constructing a table.

Example: The discrete uniform distribution on {1, 2, ..., n},

P(X = k) = 1/n,  k = 1, 2, ..., n.

Clearly, X = ⌈nU⌉, where ⌈·⌉ is the ceiling function.

So if n = 10 and U = 0.376, then X = ⌈3.76⌉ = 4. □

M7-029
Example: The Geometric(p) distribution, with p.m.f. and c.d.f.

f(k) = q^{k−1} p  and  F(k) = 1 − q^k,  k = 1, 2, ...,

where q = 1 − p. Thus, after some algebra,

X = min{k : 1 − q^k ≥ U} = ⌈ln(1 − U) / ln(1 − p)⌉.

For instance, if p = 0.3 and U = 0.72, we obtain
X = ⌈ln(0.28)/ln(0.7)⌉ = ⌈3.57⌉ = 4.
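The closed-form inversion is a one-liner in Python (the function name is mine):

```python
import math

def geometric_variate(p, u):
    """Geom(p) by inverse transform: smallest k with 1 - (1-p)^k >= u."""
    return math.ceil(math.log(1.0 - u) / math.log(1.0 - p))
```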

M7-030
Remark: Can also generate a Geom(p) by counting Bern(p) trials until
you get a success.

Easy Example: Generate X ∼ Geom(1/6). This is the same thing as
counting the number of dice tosses until a 3 (or any particular
number) comes up, where the Bern(1/6) trials are the i.i.d. dice
tosses. For instance, if you toss 6, 2, 1, 4, 3, then you stop on the 5th
Bernoulli trial, and that's your answer.

But life isn't always dice tosses. A general way to generate a Geom(p)
is to count the number of trials until Uᵢ < p. For example, if p = 0.3,
then U₁ = 0.71, U₂ = 0.96, and U₃ = 0.12 implies that X = 3. □

M7-031
Remark: If you have a discrete distribution like Pois(λ) with an
infinite number of values, you could write out table entries until the
c.d.f. is nearly one, generate exactly one U, and then search until you
find X = F⁻¹(U), i.e., the xᵢ such that U ∈ (F(xᵢ₋₁), F(xᵢ)].

x    P(X = x)   F(x)    Unif(0,1)'s
x₁   f(x₁)      F(x₁)   [0, F(x₁)]
x₂   f(x₂)      F(x₂)   (F(x₁), F(x₂)]
x₃   f(x₃)      F(x₃)   (F(x₂), F(x₃)]
⋮
M7-032
Example: Suppose X ∼ Pois(2), so that f(x) = e⁻² 2ˣ / x!,
x = 0, 1, 2, ....

x   f(x)     F(x)     Unif(0,1)'s
0   0.1353   0.1353   [0, 0.1353]
1   0.2706   0.4059   (0.1353, 0.4059]
2   0.2706   0.6765   (0.4059, 0.6765]
⋮

For instance, if U = 0.313, then X = 1. □
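The table search can be done on the fly, building each c.d.f. entry from the previous one; here's a Python sketch (the function name is mine):

```python
import math

def poisson_variate_table(lam, u):
    """Inverse transform: march through the Pois(lam) c.d.f. until F(x) >= u."""
    x = 0
    pmf = math.exp(-lam)       # f(0)
    cdf = pmf
    while u > cdf:
        x += 1
        pmf *= lam / x         # f(x) = f(x-1) * lam / x
        cdf += pmf
    return x
```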

M7-033
Summary
This Time: Generated several
discrete RVs by sort of using Inverse
Transform.

Next Time: We’ll generate RVs from


continuous empirical (sample)
distributions – very useful when we
don’t know beforehand the exact
distribution of a RV.

M7-034
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Inverse Transform Method:


Empirical Distributions
M7-035
Lesson Overview
Last Time: We generated certain
discrete RVs.

This Time: What happens when


we have some data from an
unknown continuous distribution?

Almost have to blend continuous


and discrete ideas.

M7-036
Continuous Empirical Distributions
If you can't find a good theoretical distribution to model a certain RV,
you may want to use the empirical c.d.f. of the data X₁, ..., Xₙ,

F̂ₙ(x) ≡ (number of Xᵢ's ≤ x) / n.

Note that F̂ₙ(x) is a step function with jumps of height 1/n every
time an observation occurs.

Good news: Even though X is continuous, the Glivenko-Cantelli
Lemma says that F̂ₙ(x) → F(x) for all x as n → ∞. So F̂ₙ(x) is a
good approximation to the true c.d.f. F(x).
M7-037
The ARENA functions DISC and CONT can be used to generate RVs
from the empirical c.d.f.'s of discrete and continuous distributions,
respectively.

To do so ourselves, we first define the ordered points
X₍₁₎ ≤ X₍₂₎ ≤ ··· ≤ X₍ₙ₎. For example, if X₁ = 4, X₂ = 1, and
X₃ = 6, then X₍₁₎ = 1, X₍₂₎ = 4, and X₍₃₎ = 6.

[Figure: the empirical c.d.f. F̂ₙ(x), stepping up by 1/3 at x = 1, 4, 6,
plotted against the true, but unknown, F(x).]

M7-038
Given that you only have a finite number of data points, we can turn
the empirical c.d.f. into a continuous RV by using linear interpolation
between the X₍ᵢ₎'s:

F(x) = 0   if x < X₍₁₎
       (i − 1)/(n − 1) + (x − X₍ᵢ₎)/((n − 1)(X₍ᵢ₊₁₎ − X₍ᵢ₎))   if X₍ᵢ₎ ≤ x < X₍ᵢ₊₁₎, ∀i
       1   if x ≥ X₍ₙ₎.

To generate X from this c.d.f.: Sample U ∼ Unif(0,1). Let
P = (n − 1)U and I = ⌈P⌉. Then

X = X₍I₎ + (P − I + 1)(X₍I₊₁₎ − X₍I₎).
M7-039
Example: Suppose X₍₁₎ = 1, X₍₂₎ = 4, and X₍₃₎ = 6. If U = 0.73,
then P = (n − 1)U = 1.46 and I = ⌈P⌉ = 2. So

X = X₍I₎ + (P − I + 1)(X₍I₊₁₎ − X₍I₎)

  = X₍₂₎ + (1.46 − 2 + 1)(X₍₃₎ − X₍₂₎)

  = 4 + 0.46(6 − 4)

  = 4.92. □
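The P-and-I recipe above can be sketched in Python (the function name is mine; the data come in already sorted):

```python
import math

def empirical_variate(sorted_x, u):
    """Interpolated empirical c.d.f.: P = (n-1)U, I = ceil(P)."""
    n = len(sorted_x)
    p = (n - 1) * u
    i = max(math.ceil(p), 1)   # guard the u = 0 edge case
    # X_(I) + (P - I + 1)(X_(I+1) - X_(I)); 1-based I -> 0-based index i-1
    return sorted_x[i - 1] + (p - i + 1) * (sorted_x[i] - sorted_x[i - 1])
```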

M7-040
Empirical vs. Interpolated
c.d.f.’s

[Figure: the step-function empirical c.d.f. overlaid with its
piecewise-linear interpolation between the points X₍ᵢ₎ = 1, 4, 6.]

M7-041
Check (slightly different way):

F(x) = (x − 1)/(2(4 − 1))        if 1 ≤ x ≤ 4  (i = 1 case)
       1/2 + (x − 4)/(2(6 − 4))  if 4 ≤ x ≤ 6  (i = 2 case).

Setting F(X) = U and solving for the two cases, we have

X = 1 + 6U   if U < 1/2
    2 + 4U   if U ≥ 1/2.

Then U = 0.73 implies X = 2 + 4(0.73) = 4.92. □

M7-042
Summary
This Time: Showed how to generate
RVs from a continuous empirical
distribution.

Next Time: Things are about to get


convoluted – literally!

M7-043
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Convolution Method

M7-044
Lesson Overview
Last Time: Showed how to
generate RVs from empirical
(sample) distributions.

This Time: We’ll discuss the


convolution method.

Sum-thing’s in the air!


And you know that it’s right!
www.youtube.com/watch?v=RTZoJ01FpD8

M7-045
Convolution
Con olution refer to adding thing up.

Example: Binomial , p . If 1 ... n rv i.i.d. Bern then


== I:~ 1 i rv Bin n p .

know how to get Bernoulli RV ia In er e Tran form:


uppo e , . . . n are i.i.d. U O l . If i < p et · == 1·
otherwi e et i == 0. Repeat for · == 1, . . . n . Add up to get .

For in tance if rv Bin( , 0. and 1 == . 2 == 0.17and


3 == 0. 1 then == 0 + 1 + 0 == 1. □
M7-046
Example: Triangular(0,1,2).

It can be shown that if U₁ and U₂ are i.i.d. Unif(0,1), then U₁ + U₂ is
Tria(0,1,2). (This is easier, but maybe not faster, than our inverse
transform method.) □

[Figure: two Unif(0,1) densities convolving into the Tria(0,1,2) density.]

M7-047
Example: Erlangₙ(λ). If X₁, ..., Xₙ are i.i.d. Exp(λ), then
Y = Σᵢ₌₁ⁿ Xᵢ ∼ Erlangₙ(λ). By inverse transform,

Y = Σᵢ₌₁ⁿ [−(1/λ) ln Uᵢ] = −(1/λ) ln(∏ᵢ₌₁ⁿ Uᵢ).

This only takes one natural log evaluation, so it's pretty efficient. □
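The one-log trick looks like this in Python (the function name is mine; the uniforms are passed in explicitly so the result is reproducible):

```python
import math

def erlang_variate(lam, uniforms):
    """Erlang_n(lam) via a single log of the product of n uniforms."""
    prod = 1.0
    for u in uniforms:
        prod *= u
    return -math.log(prod) / lam
```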

M7-048
Example: A crude "desert island" Nor(0,1) approximate generator
(which I wouldn't use).

Suppose that U₁, ..., Uₙ are i.i.d. Unif(0,1), and let Y = Σᵢ₌₁ⁿ Uᵢ.

For large n, the CLT implies that Y ≈ Nor(n/2, n/12).

In particular, let's choose n = 12 and assume that it's "large". Then

Y − 6 = Σᵢ₌₁¹² Uᵢ − 6 ≈ Nor(0, 1). □

M7-049
Other convolution-related tidbits:

Did you know...?

If X₁, ..., Xₙ are i.i.d. Geom(p), then Σᵢ₌₁ⁿ Xᵢ ∼ NegBin(n, p).

If Z₁, ..., Zₙ are i.i.d. Nor(0,1), then Σᵢ₌₁ⁿ Zᵢ² ∼ χ²(n).

If X₁, ..., Xₙ are i.i.d. Cauchy, then X̄ ∼ Cauchy (this is kind of like
getting nowhere fast!).

Demo Time!
M7-050
Summary
This Time: Used convolutions to
generate various RVs. It’s a nice
trick that we can occasionally apply,
and that about “sums” it up!

Next Time: Acceptance-Rejection. It’s


the most-useful RV generation
technique, but it’s a bit tricky at first.

M7-051
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Acceptance-Rejection
Method
M7-052
Lesson Overview
Last Visit: Convolution! Revolution!
www.youtube.com/watch?v=HBU2m8GdxuY

This Visit: Acceptance-Rejection

This is a tough topic, but also very


useful, so we’ll divide it up into
palatable chunks.

Note: The last stage of grief is


acceptance.
M7-053
Acceptance-Rejection Method
Motivation: The majority of c.d.f.'s cannot be inverted efficiently. A-R
samples from a distribution that is "almost" the one we want, and then
adjusts by accepting only a certain proportion of those samples.

Baby Example: Generate a Unif(2/3, 1) RV. You would usually do this
via inverse transform, but what the heck! Here's the A-R algorithm:

1. Generate U ∼ Unif(0,1).

2. If U ≥ 2/3, ACCEPT X ← U. Otherwise, REJECT and go to Step 1.

M7-054
Notation: Suppose we want to simulate a continuous RV X with p.d.f.
f(x), but that it's difficult to generate X directly. Also suppose that
we can easily generate a RV having p.d.f. h(x) ≡ t(x)/c, where t(x)
majorizes f(x), i.e.,

t(x) ≥ f(x) for all x,

and

c ≡ ∫ t(x) dx ≥ ∫ f(x) dx = 1,

where we assume that c < ∞.
M7-055
Theorem (von Neumann 1951): Define g(x) ≡ f(x)/t(x), and note
that 0 ≤ g(x) ≤ 1 for all x. Let U ∼ Unif(0,1), and let Y be a RV
(independent of U) with p.d.f. h(y) = t(y)/c. If U ≤ g(Y), then Y
has conditional p.d.f. f(y).  → Y has the right p.d.f.!

This suggests the following acceptance-rejection algorithm...

M7-056
Algorithm A-R
Repeat
    Generate U from Unif(0,1)
    Generate Y from h(y), independent of U
until U ≤ g(Y) = f(Y)/t(Y) = f(Y)/(c h(Y))
Return X ← Y

It really works! Awful proof next lesson! Meanwhile…

M7-057
A-R in pictures:

Generate a point Y uniformly under t(x)
(equivalently, sample Y from p.d.f. h(x)).

Accept the point with probability f(Y) / t(Y) =
f(Y) / [c h(Y)].

If you accept, then set X = Y and stop.
M7-058
Summary
This Time: We started playing around
with the Acceptance-Rejection
method. We did a baby example,
gave some motivation, and presented
the underlying theorem.

Next Time: Proof of the theorem.


Let’s be careful out there… it’s nasty.

www.youtube.com/watch?v=Jmg86CRBBtw

M7-059
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Proof of the Acceptance-


Rejection Method
M7-060
Lesson Overview
Last Summit: Introduction to
Acceptance-Rejection RV
generation.

This Summit: Proof that it works.

Things may get a little painful, but


you won’t really be expected to
reproduce the proof yourself. ☺

Wise saying: “No pain, no pain.”

M7-061
Proof of A-R Method
Theorem (von Neumann 1951): Define g(x) ≡ f(x)/t(x), and note
that 0 ≤ g(x) ≤ 1 for all x. Let U ∼ Unif(0,1), and let Y be a RV
independent of U with p.d.f. h(y) = t(y)/c. If U ≤ g(Y), then Y
has conditional p.d.f. f(y).

Proof that X has p.d.f. f(x):

Let A denote the Acceptance event. The c.d.f. of X is

P(X ≤ x) = P(Y ≤ x | A).   (1)
M7-062
Then

P(A | Y = y) = P(U ≤ g(Y) | Y = y)
             = P(U ≤ g(y) | Y = y)
             = P(U ≤ g(y))   (U and Y are independent)
             = g(y)          (U is uniform).   (2)

Let’s keep (1) and (2) in the back of our minds for a wee bit…

M7-063
By the law of total probability,

P(A, Y ≤ x) = ∫₋∞ˣ P(A | Y = y) h(y) dy

            = (1/c) ∫₋∞ˣ P(A | Y = y) t(y) dy

            = (1/c) ∫₋∞ˣ g(y) t(y) dy   (by (2))

            = (1/c) ∫₋∞ˣ f(y) dy.   (3)

M7-064
Letting x → ∞ in (3), we have

P(A) = (1/c) ∫ f(y) dy = 1/c.   (4)

Then (1), (3), and (4) imply

P(X ≤ x) = P(Y ≤ x | A) = P(A, Y ≤ x) / P(A) = ∫₋∞ˣ f(y) dy,

so that the p.d.f. of X is f(x). □


www.youtube.com/watch?v=ZSnxYtFarNw

M7-065
There are two main issues:
■ The ability to quickly sample from h(y).
■ c must be small (t(x) must be close to f(x)), since

P(U ≤ g(Y)) = P(A) = 1/c,

and the number of trials until "success" [U ≤ g(Y)] is
Geom(1/c), so that the mean number of trials is c.

M7-066
Summary
This Time: Hearty huzzahs to all! We
got through the hardest proof of the
course!

Next Time: We’ll do some examples


to see how all of the magic works!

M7-067
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

A-R Method: Continuous


Examples
M7-068
Lesson Overview
Last Meeting of the Minds:
Proved the major theorem behind
Acceptance-Rejection.

This Meeting of the Minds: Some


examples involving continuous
distributions.

A-R is a general method that


works when others may be
difficult to apply.
M7-069
Theorem (von Neumann 1951): Define g(x) ≡ f(x)/t(x), and note
that 0 ≤ g(x) ≤ 1 for all x. Let U ∼ Unif(0,1), and let Y be a RV
independent of U with p.d.f. h(y) = t(y)/c. If U ≤ g(Y), then Y
has (conditional) p.d.f. f(y).

Algorithm A-R
Repeat
    Generate U from Unif(0,1)
    Generate Y from h(y), independent of U
until U ≤ g(Y) = f(Y)/t(Y) = f(Y)/(c h(Y))
Return X ← Y
M7-070
Example (Law 2015): Generate a RV with p.d.f.
f(x) = 60x³(1 − x)², 0 ≤ x ≤ 1. Can't invert this analytically.

The maximum occurs at x = 0.6, with f(0.6) = 2.0736.

(Inefficient) majorizer: t(x) = 2.0736, 0 ≤ x ≤ 1.

Get c = ∫₀¹ t(x) dx = 2.0736, so that
h(x) = t(x)/c = 1, i.e., a Unif(0,1) p.d.f., and

g(x) = f(x)/t(x) = 60x³(1 − x)² / 2.0736.

So: generate U and Y ∼ i.i.d. Unif(0,1); if U ≤ g(Y), accept
X ← Y; otherwise, try again.
Demo Time!
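Here's a minimal Python sketch of this A-R scheme (the function name is mine):

```python
import random

def ar_law_variate():
    """A-R for f(x) = 60 x^3 (1-x)^2 on [0,1] with flat majorizer t(x) = 2.0736."""
    c = 2.0736                                   # = f(0.6), the max of f
    while True:
        u = random.random()                      # acceptance-test uniform
        y = random.random()                      # candidate from h = Unif(0,1)
        if u <= 60.0 * y**3 * (1.0 - y)**2 / c:  # accept w.p. g(y)
            return y
```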
M7-071
Example (Ross): Generate a standard half-normal RV, with p.d.f.

f(x) = (2/√(2π)) e^{−x²/2},  x ≥ 0.

Use the majorizing function

t(x) = √(2e/π) e^{−x},  x ≥ 0,

with c = ∫₀^∞ t(x) dx = √(2e/π) ≈ 1.3155. Then

h(x) = t(x)/c = e^{−x}   (easy Exp(1) p.d.f.),

and

g(x) = f(x)/t(x) = e^{−(x−1)²/2}. □
M7-072
We can use the half-normal result to generate a Nor(0,1) variate.

Generate U from Unif(0,1).
Generate X from the half-normal distribution.
Return

Z = −X  if U ≤ 1/2
     X  if U > 1/2.

Reminder: We can then generate a Nor(μ, σ²) RV by using the
obvious transformation μ + σZ.

M7-073
Summary
This Time: We used Acceptance-
Rejection to generate two non-
trivial continuous RVs.

Next Time: We’ll apply what looks


like A-R on a discrete example.

M7-074
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

A-R Method: Poisson


Distribution
M7-075
Lesson Overview
Last Conclave: Used A-R on a
couple of continuous examples: a
crazy polynomial, and a half-
normal.

This Conclave: Use a method


similar to A-R to generate a
discrete RV.

Something fishy is in the air!

M7-076
Example: The Pois(λ) distribution, with probability mass function

P(X = n) = e^{−λ} λⁿ / n!,  n = 0, 1, ....

We'll use a variation of A-R to generate a realization of X. The
algorithm will go through a set of equivalent statements to arrive at a
rule that gives X.

Recall that, by definition, X = n if we observe exactly n arrivals
from a Pois(λ) process in one time unit.

Define Aᵢ as the iᵗʰ interarrival time from a Pois(λ) process.

M7-077
X = n ⟺ see exactly n Pois(λ) arrivals by time t = 1

      ⟺ Σᵢ₌₁ⁿ Aᵢ ≤ 1 < Σᵢ₌₁ⁿ⁺¹ Aᵢ
        [nth arrival occurs by time 1; and
        (n+1)st arrival occurs after time 1]

      ⟺ Σᵢ₌₁ⁿ [−(1/λ) ln Uᵢ] ≤ 1 < Σᵢ₌₁ⁿ⁺¹ [−(1/λ) ln Uᵢ]

      ⟺ ∏ᵢ₌₁ⁿ Uᵢ ≥ e^{−λ} > ∏ᵢ₌₁ⁿ⁺¹ Uᵢ.   (5)
M7-078
The following A-R algorithm samples Unif(0,1)'s until (5) becomes
true, i.e., until the first time n such that e^{−λ} > ∏ᵢ₌₁ⁿ⁺¹ Uᵢ.

Algorithm
a ← e^{−λ}; p ← 1; X ← −1
Until p < a
    Generate U from Unif(0,1)
    p ← pU; X ← X + 1
Return X
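A direct Python transcription of this algorithm (the function name is mine; a uniform source can be passed in to make runs reproducible):

```python
import math
import random

def poisson_variate(lam, next_u=random.random):
    """Multiply uniforms until the running product drops below a = e^{-lam}."""
    a = math.exp(-lam)
    p, x = 1.0, -1
    while p >= a:
        p *= next_u()
        x += 1
    return x
```

Feeding in the four uniforms from the Pois(2) example on the next slide reproduces X = 3.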

M7-079
Example (BCNN): Obtain a Pois(2) RV.

Sample until e^{−λ} = 0.1353 ≥ ∏ᵢ₌₁ⁿ⁺¹ Uᵢ.

n   Uₙ₊₁     ∏ᵢ₌₁ⁿ⁺¹ Uᵢ   Stop?
0   0.3911   0.3911      No
1   0.9451   0.3696      No
2   0.5033   0.1860      No
3   0.7003   0.1303      Yes

Thus, we take X = 3. □

M7-080
Remark: An easy argument says that the expected number of Uᵢ's that
are required to generate one realization of X is E[X + 1] = λ + 1.

Remark: If λ ≥ 20, we can use the normal approximation

(X − λ)/√λ ≈ Nor(0, 1).

Algorithm (for λ ≥ 20)

Generate Z from Nor(0, 1).
Return X = max(0, ⌊λ + √λ Z + 0.5⌋)   (continuity correction).

E.g., if λ = 30 and Z = 1.46, then X = ⌊30.5 + √30 (1.46)⌋ = 38.
M7-081
Remark: Of course, another way to generate a Pois(λ) is simply to
table the c.d.f. values, like we did in an earlier discrete inverse
transform example. This may be more efficient and accurate than the
above method, which is not to say that the A-R method isn't clever
and pretty!

A Final Note: A-R is used for many other random variables, and even
stochastic processes. We just don’t have time to do any additional
fellas right now, so try not to be too sad. 

www.youtube.com/watch?v=LTBqYEwl8mo

M7-082
Summary
This Time: We used Acceptance-
Rejection to generate Poisson RVs.

Next Time: Stay composed! We’ll


be generating RVs via the
Composition method.

M7-083
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Composition

M7-084
Lesson Overview
Last Rendez-Vous: Used A-R to
generate Poisson RVs.

This Rendez-Vous: Learn about


Composition, which is useful
when you have “mixtures” of RVs.

A very nice technique for RVs that


exhibit certain structures.

M7-085
Composition
Idea: Suppose a RV actually
comes from two RV’s (sort of on
top of each other). E.g., your plane
can leave the airport gate late for
two reasons — air traffic delays
and maintenance delays, which
compose the overall delay time.

What if there are many reasons?

In any case, how to generate?


M7-086
The goal is to generate a RV with c.d.f.

F(x) = Σⱼ₌₁^∞ pⱼ Fⱼ(x),   (don't panic; the number of terms may be small)

where pⱼ > 0 for all j, Σⱼ pⱼ = 1, and the Fⱼ(x)'s are c.d.f.'s that
are easy to generate from.

■ Generate a positive integer J such that P(J = j) = pⱼ for all j.

■ Return X from c.d.f. F_J(x).

M7-087
Proof that X has c.d.f. F(x): By the law of total probability,

P(X ≤ x) = Σⱼ₌₁^∞ P(X ≤ x | J = j) P(J = j)

         = Σⱼ₌₁^∞ Fⱼ(x) pⱼ = F(x). □

M7-088
Example: Laplace distribution (an exponential distribution reflected
off of the y-axis):

f(x) = (1/2)eˣ    if x < 0        F(x) = (1/2)eˣ       if x < 0
       (1/2)e⁻ˣ   if x > 0,  and         1 − (1/2)e⁻ˣ  if x > 0.

Meanwhile, let's decompose X into "negative exponential" and
regular exponential distributions:

F₁(x) = eˣ  if x < 0       F₂(x) = 0        if x < 0
        1   if x > 0,  and          1 − e⁻ˣ  if x > 0.
M7-089
Then

F(x) = (1/2)F₁(x) + (1/2)F₂(x),

so that we generate from F₁ half the time, and from F₂ the other half.

We'll use inverse transform to solve F₁(X) = e^X = U for X half the
time, and F₂(X) = 1 − e^{−X} = U the other half. Then

X = ln(U)    w.p. 1/2
    −ln(U)   w.p. 1/2.
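The two-branch composition is a few lines of Python (the function name is mine):

```python
import math
import random

def laplace_variate():
    """Composition: ln(U) w.p. 1/2 (negative branch), -ln(U) w.p. 1/2."""
    u = random.random()
    if random.random() < 0.5:
        return math.log(u)    # F1: negative exponential
    return -math.log(u)       # F2: regular exponential
```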

M7-090
Summary
This Time: Learned about
Composition – how to generate
RVs that can themselves be
decomposed into several easy-to-
generate RVs.

And speaking of decomposing, how


is Beethoven doing these days?

Next Time: We’ll learn about a


cool way to generate normal RVs.

M7-091
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Box-Muller Normal RVs

M7-092
Lesson Overview
Last Caucus: Talked about the
Composition RV generation
method.

This Caucus: Learn about the


Box-Muller method to generate
normals.

We’ll turn you into a true believer!

M7-093
Box-Muller Method: Here's a nice, easy way to generate standard
normals.

Theorem: If U₁, U₂ are i.i.d. Unif(0,1), then

Z₁ = √(−2 ln U₁) cos(2πU₂)
Z₂ = √(−2 ln U₁) sin(2πU₂)

are i.i.d. Nor(0,1).

Note that the trig calculations must be done in radians.

Proof: Someday soon. □
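The theorem translates into a two-line generator in Python (the function name is mine; note `math.cos`/`math.sin` work in radians, as required):

```python
import math
import random

def box_muller(u1, u2):
    """Two i.i.d. Nor(0,1) variates from two i.i.d. Unif(0,1) variates."""
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

z1, z2 = box_muller(random.random(), random.random())
```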

M7-094
Some interesting corollaries follow directly from Box-Muller.

Note that Z₁² + Z₂² ∼ χ²(2). But

Z₁² + Z₂² = −2 ln U₁ (cos²(2πU₂) + sin²(2πU₂)) = −2 ln U₁ ∼ Exp(1/2).

Thus, we've just proven that

χ²(2) ∼ Exp(1/2). □
M7-095
Note that Z₂/Z₁ ∼ Nor(0,1)/Nor(0,1) ∼ Cauchy ∼ t(1).

Moreover,

Z₂/Z₁ = [√(−2 ln U₁) sin(2πU₂)] / [√(−2 ln U₁) cos(2πU₂)] = tan(2πU₂).

Thus, we've just proven that tan(2πU₂) ∼ Cauchy,

and similarly, cot(2πU₂) = Z₁/Z₂ ∼ Cauchy.

Similarly, Z₂²/Z₁² = tan²(2πU₂) ∼ t²(1) ∼ F(1, 1).

M7-096
Polar Method (a little faster than Box-Muller):

1. Generate U₁, U₂ i.i.d. Unif(0,1).

   Let Vᵢ = 2Uᵢ − 1, i = 1, 2, and W = V₁² + V₂².

2. If W > 1, reject and go back to Step 1.

   Otherwise, let Y = √(−2(ln W)/W), and accept Zᵢ ← Vᵢ Y, i = 1, 2.

Then Z₁, Z₂ are i.i.d. Nor(0,1).

M7-097
A Curious Misconception
It’s “Box-Muller”, not “Box-Müller”.
Surprising, considering that…

Umlauts are everywhere!

Many German and Turkish words:


Düsseldorf, Fahrvergnügen, köpek,
Bärkenpantzensniffersnatcher,…

Euro-Trash fake brand names:


Häagen-Dazs, Freshëns,…

M7-098
More Umlauts!
Heavy-Metal Rock Groups:

[Slide: a list of umlaut-laden heavy-metal band names.]
M7-099
Summary
This Time: Looked at the Box-
Muller method for generating
normal, along with a couple of
bonus corollaries.

Next Time: Some special-case


tricks involving order statistics
and some other distributions.

M7-100
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Order Statistics and Other


Stuff
M7-101
Apology to Mr. Ḃieber
Some folks have unfortunately misinterpreted as disparaging my
completely innocent remarks from the last class about Mr. Ḃieber.

Georgia Tech denies any responsibility or liability for my remarks.
Nevertheless…

Actually compared to literary giant Alexandre Dumas.

I used the term “talent-free” to signify that Mr. Ḃieber has graciously
performed numerous free concerts so as to showcase his wonderful
talent for the poor children of the world to see.

In fact, Mr. Ḃieber has a wonderful song with an umlaut.

https://www.youtube.com/watch?v=nntGTK2Fhb0

Mr. Ḃieber is one of this generation’s great artists.

Selena really missed the boat with Justin.

I am certainly a True Ḃelieḃer
M7-102
M7-103
Lesson Overview
Last Get-Together: Talked about
the Box-Muller normal RV
generation method.

This Get-Together: How to


generate order statistics
efficiently.

www.youtube.com/watch?v=53XyCbIJGKY

M7-104
Order Statistics
Suppose that X₁, X₂, ..., Xₙ are i.i.d. from some distribution with
c.d.f. F(x), and let Y ≡ min{X₁, ..., Xₙ}, with c.d.f. G(y). Y is
called the first order statistic. Can we generate Y using just one
Unif(0,1)?

Yes! Since the Xᵢ's are i.i.d., we have

G(y) = 1 − P(Y > y) = 1 − P(min Xᵢ > y)

     = 1 − P(all Xᵢ's > y) = 1 − [P(X₁ > y)]ⁿ

     = 1 − [1 − F(y)]ⁿ.

M7-105
Now do Inverse Transform: Set G(Y) = U and solve for Y. After a
little algebra, we get (don't be afraid...)

Y = F⁻¹(1 − (1 − U)^{1/n}).

Example: Suppose X₁, ..., Xₙ ∼ i.i.d. Exp(λ). Then

G(y) = 1 − (e^{−λy})ⁿ = 1 − e^{−nλy}.

Thus, Y = minᵢ Xᵢ ∼ Exp(nλ). So take Y = −(1/(nλ)) ln(U). □

We can do the same kind of thing for Z ≡ maxᵢ Xᵢ.
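For the exponential case, the whole minimum comes from a single uniform; a Python one-liner (the function name is mine):

```python
import math

def min_exponential(n, lam, u):
    """Y = min of n i.i.d. Exp(lam) ~ Exp(n*lam), from a single uniform."""
    return -math.log(u) / (n * lam)
```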


M7-106
Other Quickies

χ²(n) distribution: If Z₁, Z₂, ..., Zₙ are i.i.d. Nor(0,1), then

Σᵢ₌₁ⁿ Zᵢ² ∼ χ²(n).

t(n) distribution: If Z ∼ Nor(0,1) and Y ∼ χ²(n), and Z and Y are
independent, then

Z / √(Y/n) ∼ t(n).

Note that t(1) is the Cauchy distribution.

M7-107
F(n, m) distribution: If X ∼ χ²(n) and Y ∼ χ²(m), and X and Y
are independent, then (X/n)/(Y/m) ∼ F(n, m).

Generating RVs from continuous empirical distributions: no time
here. Can use the CONT function in Arena.

M7-108
Summary
This Time: Showed how to
efficiently generate certain order
statistics.

Next Time: We’ll finally be getting


into multivariate generation!
Example: heights and weights.

M7-109
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Multivariate Normal
Distribution
M7-110
Lesson Overview
Last Tête-à-Tête: Generated
order statistics + some
miscellaneous distributions.

This Tête-à-Tête: Multivariate


normal!

We’re about to enter a different


dimension of sounds, sight, and
mind.

www.youtube.com/watch?v=NzlG28B-R8Y
M7-111
Bivariate Normal Distribution
The random vector (X, Y) has the bivariate normal distribution with
means μ_X = E[X] and μ_Y = E[Y], variances σ²_X = Var(X) and
σ²_Y = Var(Y), and correlation ρ = Corr(X, Y) if it has joint p.d.f.

f(x, y) = 1/(2π σ_X σ_Y √(1 − ρ²)) exp{−[z_X² + z_Y² − 2ρ z_X z_Y] / (2(1 − ρ²))},

where z_X ≡ (x − μ_X)/σ_X and z_Y ≡ (y − μ_Y)/σ_Y.

For example, heights and weights of people can be modeled as
bivariate normal.
M7-112
[Figure: MATLAB scatter plot of bivariate normal data with
means = 0, variances = 1, covariance = 0.9.]
M7-113
Multivariate Normal Distribution
The random vector X = (X₁, ..., X_k)ᵀ has the multivariate normal
distribution with mean vector μ = (μ₁, ..., μ_k)ᵀ and k × k
covariance matrix Σ = (σᵢⱼ) if it has p.d.f.

f(x) = (2π)^{−k/2} |Σ|^{−1/2} exp{−(1/2)(x − μ)ᵀ Σ⁻¹ (x − μ)}.

Notation: X ∼ Nor_k(μ, Σ).
M7-114
In order to generate X, let's start with a vector Z = (Z₁, ..., Z_k) of
i.i.d. Nor(0,1) RVs. That is, suppose Z ∼ Nor_k(0, I), where I is the
k × k identity matrix, and 0 is simply a vector of 0's.

Suppose we can find the (lower triangular) Cholesky matrix C such
that Σ = CCᵀ.

Then it can be shown that X = μ + CZ is multivariate normal with
mean μ and covariance matrix Σ.

M7-115
For k = 2, we can easily derive

C = ( √σ₁₁                    0
      σ₁₂/√σ₁₁   √(σ₂₂ − σ₁₂²/σ₁₁) ).

Since X = μ + CZ, we have

X₁ = μ₁ + √σ₁₁ Z₁

X₂ = μ₂ + (σ₁₂/√σ₁₁) Z₁ + √(σ₂₂ − σ₁₂²/σ₁₁) Z₂.
M7-116
The following algorithm computes C for general dimension k...

Algorithm

For i = 1, ..., k:

    For j = 1, ..., i − 1:

        cᵢⱼ ← (σᵢⱼ − Σₗ₌₁^{j−1} cᵢₗ cⱼₗ) / cⱼⱼ

        cⱼᵢ ← 0

    cᵢᵢ ← (σᵢᵢ − Σₗ₌₁^{i−1} cᵢₗ²)^{1/2}
M7-117
Once C has been computed, the multivariate normal RV
X = μ + CZ can easily be generated:

1. Generate Z₁, Z₂, ..., Z_k ∼ i.i.d. Nor(0,1).

2. Let Xᵢ ← μᵢ + Σⱼ₌₁ⁱ cᵢⱼ Zⱼ, i = 1, 2, ..., k.

3. Return X = (X₁, ..., X_k)ᵀ.
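The Cholesky factorization and the generation step together fit in a short Python sketch (function names are mine; matrices are plain nested lists):

```python
import math

def cholesky_lower(sigma):
    """Lower-triangular C with C C^T = sigma (the algorithm above)."""
    k = len(sigma)
    c = [[0.0] * k for _ in range(k)]
    for i in range(k):
        for j in range(i):
            c[i][j] = (sigma[i][j] - sum(c[i][l] * c[j][l] for l in range(j))) / c[j][j]
        c[i][i] = math.sqrt(sigma[i][i] - sum(c[i][l] ** 2 for l in range(i)))
    return c

def multivariate_normal_variate(mu, sigma, z):
    """X = mu + C z, given i.i.d. Nor(0,1) components z."""
    c = cholesky_lower(sigma)
    return [mu[i] + sum(c[i][j] * z[j] for j in range(i + 1)) for i in range(len(mu))]
```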

M7-118
Summary
This Time: Learned how to
generate multivariate normal
observations (correlated stuff like
height vs. weight).

Next Time: We’ll start playing


around with a variety of useful
stochastic processes.

M7-119
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Baby Stochastic Processes

M7-120
Lesson Overview
Last Spiel: Multivariate normal
distribution.

This Spiel: We’ll start looking at


the generation of some easy
stochastic processes – Markov
chains and Poisson arrivals.

www.youtube.com/watch?v=gGAiW5dOnKo

M7-121
Markov Chains
Consider a time series having a
certain number of states (e.g.,
sun / rain) that can transition from
day to day.

Example: On Monday it’s sunny,


on Tues and Weds, it’s rainy, etc.

Informally speaking, if tomorrow’s


weather only depends on today,
then you have a Markov chain.
M7-122
Markov Chains
Just do a simple example. Let X_i = 0 if it rains on day i; otherwise,
X_i = 1. Denote the day-to-day transition probabilities by

P_jk = P(state k on day i + 1 | state j on day i), j, k = 0, 1.

Suppose that the probability state transition matrix is

P = [ 0.7  0.3 ]
    [ 0.4  0.6 ]      e.g., P_01 = 0.3
M7-123
Suppose it rains on Monday. Simulate the rest of the work week.

Day   P(rain | yesterday)   U_i    U_i < P(rain)?   Weather
Tue   P_00 = 0.7            0.62   Yes              Rain
Wed   P_00 = 0.7            0.03   Yes              Rain
Thu   P_00 = 0.7            0.77   No               Sun
Fri   P_10 = 0.4            0.91   No               Sun

www.youtube.com/watch?v=tIdIqbv7SPo
www.youtube.com/watch?v=FZmgGcZeayA
M7-124
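The work-week simulation above can be reproduced in Python. The inverse transform step is exactly the slide's rule (go to state 0 iff U < P[current][0]); the function name is ours:

```python
# Two-state weather Markov chain: state 0 = Rain, state 1 = Sun.
# P[j][k] = P(tomorrow is state k | today is state j).
P = [[0.7, 0.3],
     [0.4, 0.6]]

def simulate_days(start_state, uniforms):
    """One inverse-transform step per PRN: go to state 0 iff U < P[cur][0]."""
    states = [start_state]
    for u in uniforms:
        cur = states[-1]
        states.append(0 if u < P[cur][0] else 1)
    return states

# Reproduce the slide's work week: rain on Monday, PRNs 0.62, 0.03, 0.77, 0.91.
week = simulate_days(0, [0.62, 0.03, 0.77, 0.91])
# week -> [0, 0, 0, 1, 1]: Rain Mon-Wed, Sun Thu-Fri.
```

In practice you would feed in freshly generated Unif(0,1) PRNs instead of the fixed list.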
Poisson Arrivals
When the arrival rate is a constant λ, the interarrivals of a Poisson(λ)
process are i.i.d. Exp(λ), and the arrival times are:

T_0 ← 0, and T_i ← T_{i−1} − (1/λ) ln(U_i) for i ≥ 1.

Soooo easy!

M7-125
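The recursion T_i = T_{i−1} − ln(U_i)/λ is a one-liner in Python (a sketch; the function name is ours):

```python
import math, random

def poisson_arrivals(lam, n, rng):
    """First n arrival times of a rate-lam Poisson process:
    T_i = T_{i-1} - ln(U_i)/lam, i.e. i.i.d. Exp(lam) interarrivals."""
    t, times = 0.0, []
    for _ in range(n):
        t -= math.log(rng.random()) / lam
        times.append(t)
    return times

rng = random.Random(7)
arrivals = poisson_arrivals(2.0, 5, rng)   # 5 arrivals at rate lambda = 2
```

With many arrivals, the average interarrival time should be close to 1/λ.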
Now suppose that we want to generate a fixed number n of PP(λ)
arrivals in a fixed time interval [a, b]. To do so, we can use a theorem
stating that the joint distribution of the n arrivals is the same as the
joint distribution of the order statistics of n i.i.d. Unif(a, b) RVs.

1. Generate i.i.d. U_1, ..., U_n from Unif(0, 1).

2. Sort the U_i's: U_(1) < U_(2) < ··· < U_(n).

3. Set the arrival times to T_i ← a + (b − a) U_(i).

Still soooo easy!


M7-126
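The three steps above, sketched in Python (names are ours):

```python
import random

def pp_arrivals_in_interval(n, a, b, rng):
    """n Poisson-process arrival times in [a, b], via the order-statistics
    property: sorted i.i.d. Unif(a, b) points."""
    us = sorted(rng.random() for _ in range(n))   # sorted Unif(0,1)'s
    return [a + (b - a) * u for u in us]          # rescale to [a, b]

rng = random.Random(3)
times = pp_arrivals_in_interval(4, 10.0, 12.0, rng)
```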
Summary
This Time: Discussed some very
simple Markov chain and Poisson
arrival generation.

Next Time: What if the arrival rates


change over time? We’ll need to
discuss nonhomogeneous Poisson
processes!

M7-127
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Nonhomogeneous Poisson
Processes
M7-128
Lesson Overview
Last Oration: MCs and PPs.

This Oration: Nonhomogeneous


Poisson processes. What
happens when the rate changes
over time?

Careful! The “easy” algorithm


may not work very well for
NHPPs.

M7-129
NHPPs – Nonstationary Arrivals
Same assumptions as a regular Poisson process, except the arrival rate λ
isn't a constant, so stationary increments doesn't apply.

Let
λ(t) = rate (intensity) function at time t,
N(t) = number of arrivals during [0, t].

Then

N(s + t) − N(s) ∼ Poisson( ∫_s^{s+t} λ(u) du ).
M7-130
Example: Suppose that the arrival pattern to the Waffle House over a
certain time period is a NHPP with λ(t) = t². Find the probability
that there will be exactly 4 arrivals between times t = 1 and t = 2.

First of all, the number of arrivals in that time interval is

N(2) − N(1) ∼ Poisson( ∫_1^2 t² dt ) = Poisson(7/3).

Thus,

P(N(2) − N(1) = 4) = e^{−7/3}(7/3)^4 / 4! ≈ 0.120. □
M7-131
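A quick sanity check of the Waffle House answer in Python (the helper name is ours):

```python
import math

def poisson_pmf(k, mean):
    """P(N = k) for N ~ Poisson(mean)."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

# lambda(t) = t^2, so the mean count on [1, 2] is the integral of t^2
# from 1 to 2, namely 7/3.
mean = 7.0 / 3.0
p = poisson_pmf(4, mean)   # ~ 0.120, matching the slide
```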
Incorrect NHPP Algorithm [it can skip intervals with large λ(t)]:

T_0 ← 0; i ← 0

Repeat:
    Generate U ∼ Unif(0, 1)
    T_{i+1} ← T_i − (1/λ(T_i)) ln(U)
    i ← i + 1

Don't use this algorithm!
M7-132
Whatever shall we do?

The Thinning Algorithm:

i. Assumes that λ* ≡ max_t λ(t) is finite,
ii. Generates potential arrivals at the max rate λ*, and
iii. Accepts a potential arrival at time t w.p. λ(t)/λ*.

[Figure: the rate function λ(t) lying under the constant majorizing
rate λ*.]
M7-133
Thinning Algorithm
T_0 ← 0; i ← 0

Repeat:
    t ← T_i
    Repeat:
        Generate U, V from Unif(0, 1)
        t ← t − (1/λ*) ln(U)      [each t update is a potential arrival (at rate λ*)]
    until V ≤ λ(t)/λ*             [we only keep a potential arrival w.p. λ(t)/λ*]
    i ← i + 1; T_i ← t            [these T_i's are the arrivals that we end up keeping]

Demo Time!
M7-134
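The thinning algorithm can be sketched in Python. The function name, and the example rate λ(t) = t² with majorizer λ* = 4 on [0, 2], are our choices:

```python
import math, random

def thinning(lam, lam_star, t_end, rng):
    """NHPP arrivals on [0, t_end] by thinning: propose points at the
    majorizing rate lam_star, accept each at time t w.p. lam(t)/lam_star."""
    t, arrivals = 0.0, []
    while True:
        t -= math.log(rng.random()) / lam_star   # next potential arrival
        if t > t_end:
            return arrivals
        if rng.random() <= lam(t) / lam_star:    # thin: keep w.p. lam(t)/lam*
            arrivals.append(t)

rng = random.Random(11)
arr = thinning(lambda t: t * t, 4.0, 2.0, rng)   # lam(t) = t^2, lam* = 4
```

The expected number of arrivals on [0, 2] is ∫₀² t² dt = 8/3, which a few thousand replications should confirm.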
Summary
This Time: Talked about generating
nonhomogeneous Poisson arrivals
(where the arrival rate varies
throughout the day).

Next Time: We’ll look at various


simple (and one not-so-simple) time
series processes.

M7-135
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Time Series

M7-136
Lesson Overview
Last Soliloquy: NHPPs

This Soliloquy: Various time


series processes.

We’ll be doing standard normal-


noise “ARMA” processes + a
more-obscure “ARP” process with
Pareto noise.

M7-137
First-Order Moving Average Process
An MA(1) is a time series process defined by

Y_i = ε_i + θε_{i−1}, for i = 1, 2, ...,

where θ is a constant and the ε_i's are i.i.d. Nor(0, 1) RVs.

The MA(1) is a popular tool for modeling and detecting trends.
M7-138
The MA(1) has covariance function

Var(Y_i) = 1 + θ²,
Cov(Y_i, Y_{i+1}) = θ, and
Cov(Y_i, Y_{i+k}) = 0 for k ≥ 2.

So the covariances die off pretty quickly.

How to generate: Start with ε_0 ∼ Nor(0, 1). Then generate
ε_1 ∼ Nor(0, 1) to get Y_1 = ε_1 + θε_0, then ε_2 ∼ Nor(0, 1) to get
Y_2 = ε_2 + θε_1, etc.
M7-139
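The MA(1) generation recipe, sketched in Python (function name ours):

```python
import random

def ma1(theta, n, rng):
    """Generate Y_1, ..., Y_n from an MA(1): Y_i = eps_i + theta * eps_{i-1},
    with i.i.d. Nor(0, 1) noise."""
    eps_prev = rng.gauss(0.0, 1.0)      # eps_0
    ys = []
    for _ in range(n):
        eps = rng.gauss(0.0, 1.0)
        ys.append(eps + theta * eps_prev)
        eps_prev = eps
    return ys

ys = ma1(0.8, 1000, random.Random(2))
```

A long run should show sample variance near 1 + θ² and lag-1 covariance near θ.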
First-Order Autoregressive Process
An AR(1) process is defined by

Y_i = φY_{i−1} + ε_i, for i = 1, 2, ...,

where −1 < φ < 1, Y_0 ∼ Nor(0, 1), and the ε_i's are i.i.d.
Nor(0, 1 − φ²) RVs that are independent of Y_0.

This is used to model lots of real-world stuff.

M7-140
The AR(1) has covariance function Cov(Y_i, Y_{i+k}) = φ^{|k|} for all
k = 0, ±1, ±2, ....

If φ is close to one, you get highly positively correlated Y_i's. If φ is
close to zero, the Y_i's are nearly independent.

How to generate: Start with Y_0 ∼ Nor(0, 1) and
ε_1 ∼ √(1 − φ²) Nor(0, 1) to get Y_1 = φY_0 + ε_1.

Then generate ε_2 ∼ √(1 − φ²) Nor(0, 1) to get Y_2 = φY_1 + ε_2, etc.
M7-141
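The AR(1) recipe, sketched in Python (function name ours):

```python
import math, random

def ar1(phi, n, rng):
    """Generate Y_1, ..., Y_n from a stationary AR(1):
    Y_i = phi * Y_{i-1} + eps_i, eps_i ~ Nor(0, 1 - phi^2), Y_0 ~ Nor(0, 1)."""
    sd = math.sqrt(1.0 - phi * phi)
    y = rng.gauss(0.0, 1.0)             # Y_0
    ys = []
    for _ in range(n):
        y = phi * y + rng.gauss(0.0, sd)
        ys.append(y)
    return ys

ys = ar1(0.9, 1000, random.Random(4))
```

A long run should show marginal variance near 1 and lag-1 covariance near φ.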
AR(1) pix

[Three sample paths of length 1000 from AR(1) processes with
φ = −0.95, φ = 0, and φ = 0.95.]

M7-142
ARMA(p,q) Process
An obvious generalization of the MA(1) and AR(1) processes is the
ARMA(p, q), which consists of a pth-order AR and a qth-order MA,
and which we will simply define without stating properties:

Y_i = Σ_{j=1}^{p} φ_j Y_{i−j} + ε_i + Σ_{j=1}^{q} θ_j ε_{i−j}, i = 1, 2, ...,

where the φ_j's and θ_j's are chosen so as to assure that the process
doesn't explode. Such processes are used in a variety of modeling and
forecasting applications.
M7-143
Exponential AR Process
An EAR(1) process (Lewis 1980) is defined by

Y_i = { φY_{i−1},        w.p. φ,
      { φY_{i−1} + ε_i,  w.p. 1 − φ,

for i = 1, 2, ..., where 0 ≤ φ < 1, Y_0 ∼ Exp(1), and the ε_i's are i.i.d.
Exp(1) RVs that are independent of Y_0.

The EAR(1) has the same covariance structure as the AR(1), except
that 0 ≤ φ < 1; that is, Cov(Y_i, Y_{i+k}) = φ^{|k|}.
M7-144
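The EAR(1) definition translates directly into Python (function name ours):

```python
import random

def ear1(phi, n, rng):
    """Generate Y_1, ..., Y_n from an EAR(1) (Lewis 1980):
    Y_i = phi*Y_{i-1} w.p. phi, else phi*Y_{i-1} + Exp(1) noise."""
    y = rng.expovariate(1.0)            # Y_0 ~ Exp(1)
    ys = []
    for _ in range(n):
        y = phi * y
        if rng.random() >= phi:         # w.p. 1 - phi, add Exp(1) noise
            y += rng.expovariate(1.0)
        ys.append(y)
    return ys

ys = ear1(0.5, 1000, random.Random(8))
```

Despite the autoregressive structure, each Y_i is marginally Exp(1), so a long run should have sample mean near 1 and all values positive.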
EAR(1) pix

[Two EAR(1) sample paths of length 1000, with φ = 0.0 and φ = 0.95.]

M7-145
Autoregressive Pareto – ARP
Now let's see how to generate a series of correlated Pareto RVs. First
of all, a RV X has the Pareto distribution with parameters λ > 0 and
β > 0 if it has c.d.f.

F_X(x) = 1 − (λ/x)^β, for x ≥ λ.

The Pareto is a heavy-tailed distribution that has a variety of uses in
statistical modeling.

M7-146
In order to obtain the ARP process, let's start off with a regular AR(1)
with normal noise,

Y_i = ρY_{i−1} + ε_i, for i = 1, 2, ...,

where −1 < ρ < 1, Y_0 ∼ Nor(0, 1), and the ε_i's are i.i.d.
Nor(0, 1 − ρ²) and independent of Y_0. Note that Y_0, Y_1, Y_2, ... are
marginally Nor(0, 1) but correlated.
M7-147
Feed this process into the Nor(0, 1) c.d.f. Φ(·) to obtain correlated
Unif(0, 1) RVs, U_i = Φ(Y_i), i = 1, 2, ....

Now feed the correlated U_i's into the inverse of the Pareto c.d.f. to
obtain correlated Pareto RVs,

X_i = F_X^{−1}(U_i) = λ[1 − Φ(Y_i)]^{−1/β}, i = 1, 2, ....
M7-148
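The full ARP pipeline (AR(1) → Φ → inverse Pareto c.d.f.) can be sketched in Python; the normal c.d.f. is built from `math.erf`, and the function names are ours:

```python
import math, random

def phi_cdf(x):
    """Nor(0, 1) c.d.f. via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def arp(rho, lam, beta, n, rng):
    """Correlated Pareto(lam, beta) series: run a Nor(0,1) AR(1) with
    parameter rho, push it through Phi to get correlated uniforms, then
    invert the Pareto c.d.f.: X = lam * (1 - U)^(-1/beta)."""
    sd = math.sqrt(1.0 - rho * rho)
    y = rng.gauss(0.0, 1.0)             # Y_0
    xs = []
    for _ in range(n):
        y = rho * y + rng.gauss(0.0, sd)
        u = phi_cdf(y)
        xs.append(lam * (1.0 - u) ** (-1.0 / beta))
    return xs

xs = arp(0.8, 1.0, 3.0, 1000, random.Random(6))
```

Every X_i should be at least λ, and the sample median should sit near the Pareto median λ·2^{1/β}.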
Summary
This Time: Talked about a variety of
time series models. AR(1), MA(1),
ARMA, EAR, and the nasty ARP.

Next Time: We’ll step back and take


a quick look at a baby queueing
process.

 Baby dreaming of
queueing theory

M7-149
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Queueing

M7-150
Lesson Overview
Last Jam Session: Time series.

This Jam Session: An easy way


to generate some queueing RVs.

This is actually a trivial lesson,


since I want to take a breather
before we end the module in the
next lesson… You’ll see…

M7-151
M/M/1 Queue
Consider a single-server queue with customers arriving according to a
Poisson(λ) process, standing in line with a FIFO discipline, and then
getting served in an Exp(µ) amount of time.

Let I_{i+1} denote the interarrival time between the ith and (i + 1)st
customers; let S_i be the ith customer's service time; and let W_i^Q
denote the ith customer's wait before service.

M7-152
Lindley gives a very nice way to generate a series of waiting times for
this simple example (where you don't even need to worry about the
exponential assumptions):

W_{i+1}^Q = max{ W_i^Q + S_i − I_{i+1}, 0 }.

And similarly, the ith customer's total time in system is
W_i = W_i^Q + S_i.
M7-153
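Lindley's recursion is a few lines of Python. Here is a minimal sketch (function name and the λ = 1, µ = 1.25 example are our choices):

```python
import random

def lindley_waits(interarrivals, services):
    """Waiting times before service via Lindley's recursion:
    W_{i+1} = max(W_i + S_i - I_{i+1}, 0), starting from W_1 = 0."""
    w, waits = 0.0, [0.0]
    for s, i_next in zip(services, interarrivals[1:]):
        w = max(w + s - i_next, 0.0)
        waits.append(w)
    return waits

# M/M/1 example: lambda = 1 arrival/min, mu = 1.25 services/min.
rng = random.Random(9)
n = 10000
inter = [rng.expovariate(1.0) for _ in range(n)]
serv = [rng.expovariate(1.25) for _ in range(n)]
waits = lindley_waits(inter, serv)
```

As the slide notes, nothing in the recursion requires exponential inputs; any interarrival and service sequences work.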
Summary
This Time: Some simple queueing
results to enable easy generation of
cycle and waiting times. Easy as
pie!

Next Time: Brownian motion – one


of my favorite topics!! And
extremely important!

M7-154
Computer Simulation
Module 7: Random Variate
Generation

Dave Goldsman, Ph.D.


Professor
Stewart School of Industrial and Systems Engineering

Brownian Motion

M7-155
Lesson Overview
Last Board Meeting: Queueing
results.

This Board Meeting: Generating


Brownian motion.

Probably the most-important


stochastic process out there. If
you liked the Central Limit
Theorem, you’ll love this stuff!

M7-156
Brownian Motion
Discovered by Brown; analyzed rigorously by Einstein; mathematical
rigor established by Wiener (BM is also called a Wiener process).

Widely used in everything from financial analysis to queueing theory
to statistics to other OR/IE application areas.

The stochastic process {W(t), t ≥ 0} is standard Brownian motion if:

1. W(0) = 0.
2. W(t) ∼ Nor(0, t).
M7-157
3. {W(t), t ≥ 0} has stationary and independent increments.

Increments: Anything like W(b) − W(a).

Stationary increments: The distribution of W(t + h) − W(t) only
depends on h.

Independent increments: If a < b ≤ c < d, then W(d) − W(c) is
independent of W(b) − W(a).

[Figure: a simulated standard BM sample path.]
M7-158
How do you get BM? Suppose Y_1, Y_2, ... is any sequence of i.i.d. RVs
with mean zero and variance 1. (To some extent, they don't even have
to be indep!) Donsker's Central Limit Theorem says that

(1/√n) Σ_{i=1}^{⌊nt⌋} Y_i  →_d  W(t) as n → ∞,

where →_d denotes convergence in distribution as n gets big, and ⌊·⌋
is the floor function, e.g., ⌊3.7⌋ = 3.

The regular CLT is a very special case of this Big Boy!
M7-159
Here's a way to construct BM:

One choice that works well is to take Y_i = ±1, each with probability
1/2. Take at least n = 100 points t = 1/n, 2/n, ..., n/n, and calculate
W(1/n), W(2/n), ..., W(n/n).

Another choice is simply to take Y_i ∼ Nor(0, 1).

Exercise: Let's construct some BM! First pick some large value of n
and start with W(0) = 0. Then

W(i/n) = W((i − 1)/n) + Y_i / √n.

Demo Time!
M7-160
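The exercise above, sketched in Python (function name ours); either ±1 coin flips or Nor(0, 1) draws can supply the Y_i's:

```python
import math, random

def brownian_path(n, rng, coin_flips=True):
    """Approximate standard BM on [0, 1] via Donsker:
    W(i/n) = W((i-1)/n) + Y_i / sqrt(n),
    with Y_i = +/-1 coin flips (or Nor(0, 1) draws)."""
    w, path = 0.0, [0.0]
    for _ in range(n):
        y = rng.choice((-1.0, 1.0)) if coin_flips else rng.gauss(0.0, 1.0)
        w += y / math.sqrt(n)
        path.append(w)
    return path            # path[i] approximates W(i/n)

path = brownian_path(1000, random.Random(10))
```

Over many replications, the endpoint W(1) should have mean 0 and variance 1.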
Here are some miscellaneous properties of Brownian Motion:

■ BM is continuous everywhere, but has no derivatives! Deep!
www.nbc.com/saturday-night-live/video/deep-thoughts-kryptonite/n10201 (might be PG-13)

■ Cov(W(s), W(t)) = min(s, t).

■ The area under W(t) is normal: ∫_0^1 W(t) dt ∼ Nor(0, 1/3).

■ A Brownian bridge B(t) is conditioned BM such that
W(0) = W(1) = 0.

■ Cov(B(s), B(t)) = min(s, t) − st.
M7-161
Geometric Brownian Motion
The process S(t) = S(0) exp{(µ − σ²/2)t + σW(t)}, t ≥ 0, is often
used to model stock prices, where µ is related to the drift of the
stock price, σ is its volatility, and S(0) is the initial price.

In addition, we can use GBM to estimate option prices. E.g., a
European call option permits its owner, who pays an up-front fee for
the privilege, to purchase the stock at a pre-agreed strike price k at a
pre-determined expiry date T. Its value is

V = e^{−rT} E[(S(T) − k)^+],

where x^+ = max{0, x} and µ is taken to be the risk-free interest rate r.
M7-162
To estimate this expected value, we can run multiple simulation
replications of W(T), S(T), and (S(T) − k)^+, and then take the sample
average of the e^{−rT}(S(T) − k)^+ values.

Exercise: Let's estimate the value of a stock option. Pick your favorite
values of S(0), r, σ, k, and T, and off you go!

Lots of ways to actually do this. I would recommend that you directly
simulate the BM many times, as described on the last page.

But there are other ways: You can just simulate the distribution of
S(T) directly (it's lognormal), or you can actually look up the exact
Black–Scholes answer (see below).
M7-163
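A Monte Carlo sketch of the exercise in Python, using the lognormal shortcut for S(T) (simulating only W(T) ∼ Nor(0, T) rather than the whole path); function name and the parameter values are our choices:

```python
import math, random

def mc_call_price(s0, k, r, sigma, T, reps, rng):
    """Monte Carlo European call price under GBM with drift r:
    simulate S(T) = s0 * exp((r - sigma^2/2)T + sigma*W(T)),
    W(T) ~ Nor(0, T), and average the discounted payoffs
    e^{-rT} * max(S(T) - k, 0)."""
    disc = math.exp(-r * T)
    total = 0.0
    for _ in range(reps):
        w_t = rng.gauss(0.0, math.sqrt(T))
        s_t = s0 * math.exp((r - 0.5 * sigma * sigma) * T + sigma * w_t)
        total += disc * max(s_t - k, 0.0)
    return total / reps

price = mc_call_price(100.0, 100.0, 0.05, 0.2, 1.0, 100000, random.Random(12))
# price should land near the Black-Scholes value (~10.45 for these inputs).
```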
How to Win a Nobel Prize

Let φ(·) and Φ(·) denote the usual Nor(0, 1) p.d.f. and c.d.f. Moreover,
define the horrible-looking

b ≡ [rT − (σ²T/2) − ln(k/S(0))] / (σ√T).

Now get your ticket to Norway or Sweden or wherever they give out
the Nobel Prize...
M7-164
The Black–Scholes European call option value is

V = e^{−rT} E[S(T) − k]^+
  = e^{−rT} E[S(0) exp{(r − σ²/2)T + σW(T)} − k]^+
  = e^{−rT} ∫_ℝ [S(0) exp{(r − σ²/2)T + σ√T z} − k]^+ φ(z) dz
  = S(0) Φ(b + σ√T) − k e^{−rT} Φ(b)   (after lots of algebra). □
M7-165
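The closed-form answer, in the slide's notation, is easy to code up in Python (Φ is built from `math.erf`; function names are ours):

```python
import math

def norm_cdf(x):
    """Nor(0, 1) c.d.f. via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s0, k, r, sigma, T):
    """Black-Scholes call value in the slide's notation:
    b = [rT - sigma^2 T/2 - ln(k/s0)] / (sigma sqrt(T)), and
    V = s0 * Phi(b + sigma sqrt(T)) - k e^{-rT} Phi(b)."""
    b = (r * T - 0.5 * sigma * sigma * T - math.log(k / s0)) / (sigma * math.sqrt(T))
    return s0 * norm_cdf(b + sigma * math.sqrt(T)) - k * math.exp(-r * T) * norm_cdf(b)

v = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)   # ~ 10.45
```

This exact value is a handy benchmark for the Monte Carlo estimator from the previous exercise.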
Summary
This Time: Brownian motion!
Study hard and make $ / € / ₤ on
Wall Street!

This completes Module 7, which


was ginormous. It gets a little
easier now (I hope).

Next on the agenda is Input


Modeling – how do you decide
what RVs to use to drive the
simulation?
M7-166
