Stochastic Processes
M. Haugh
G. Iyengar
Recall that Var(X) = E[X²] − E[X]². For X ∼ Bin(n, p),

E[X] = np,   Var(X) = np(1 − p).
A Financial Application
Suppose a fund manager outperforms the market in a given year with
probability p and that she underperforms the market with probability 1 − p.
She has a track record of 10 years and has outperformed the market in 8 of
the 10 years.
Moreover, performance in any one year is independent of performance in
other years.
Question: How likely is a track record as good as this if the fund manager had no
skill so that p = 1/2?
Answer: Let X be the number of outperforming years. Since the fund manager
has no skill, X ∼ Bin(n = 10, p = 1/2) and

P(X ≥ 8) = \sum_{r=8}^{n} \binom{n}{r} p^r (1 − p)^{n−r}.
Question: Suppose there are M fund managers. How well should the best one do
over the 10-year period if none of them has any skill?
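Both questions can be checked numerically. The sketch below computes P(X ≥ 8) exactly; the second part assumes (as an illustration not spelled out on the slide) that the M managers' records are independent, so the best record is at least 8-for-10 with probability 1 − (1 − P(X ≥ 8))^M:

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Bin(n, p)."""
    return sum(comb(n, r) * p**r * (1 - p)**(n - r) for r in range(k, n + 1))

p8 = binom_tail(10, 0.5, 8)
print(f"P(X >= 8) = {p8:.4f}")   # 56/1024 ~= 0.0547

# With M independent no-skill managers, the chance that at least one of them
# has a record this good is 1 - (1 - p8)^M.
for M in (1, 10, 100):
    print(M, 1 - (1 - p8)**M)
```

With 100 managers, a track record this good becomes almost a certainty, which is the point of the question.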
The Poisson(λ) distribution has PMF

P(X = r) = \frac{λ^r e^{−λ}}{r!},   r = 0, 1, 2, ...

Its mean may be computed directly:

E[X] = \sum_{r=0}^{∞} r P(X = r)
     = \sum_{r=0}^{∞} r \frac{λ^r e^{−λ}}{r!}
     = \sum_{r=1}^{∞} \frac{λ^r e^{−λ}}{(r−1)!}
     = λ \sum_{r=0}^{∞} \frac{λ^r e^{−λ}}{r!}
     = λ.
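As a quick numerical sanity check of E[X] = λ (the value λ = 3.5 below is an arbitrary choice), the series can be truncated and summed directly:

```python
from math import exp, factorial

def poisson_pmf(r, lam):
    """P(X = r) for X ~ Poisson(lam)."""
    return lam**r * exp(-lam) / factorial(r)

lam = 3.5
# E[X] = sum_r r * P(X = r); truncating at r = 100 is plenty for lam = 3.5.
mean = sum(r * poisson_pmf(r, lam) for r in range(100))
print(mean)   # ~= 3.5
```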
Bayes' Theorem
Let A and B be two events for which P(B) ≠ 0. Then

P(A | B) = \frac{P(A ∩ B)}{P(B)}
         = \frac{P(B | A) P(A)}{P(B)}
         = \frac{P(B | A) P(A)}{\sum_j P(B | A_j) P(A_j)}

where the A_j's form a partition of the sample space.
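A worked instance of the formula, using the partition {A, Aᶜ} in the denominator. The numbers (a 99%-sensitive, 95%-specific test with a 1% base rate) are made up purely for illustration:

```python
# Hypothetical numbers: P(A) = 0.01 prior, P(B|A) = 0.99, P(B|A^c) = 0.05.
p_A = 0.01
p_B_given_A = 0.99
p_B_given_notA = 0.05

# Denominator: total probability of B over the partition {A, A^c}.
p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
p_A_given_B = p_B_given_A * p_A / p_B
print(round(p_A_given_B, 4))   # 0.1667
```

Despite the accurate test, the posterior P(A | B) is only about 1/6 because the prior is so small.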
Table: X = Y1 + Y2

Y2 \ Y1 |  1   2   3   4   5   6
   6    |  7   8   9  10  11  12
   5    |  6   7   8   9  10  11
   4    |  5   6   7   8   9  10
   3    |  4   5   6   7   8   9
   2    |  3   4   5   6   7   8
   1    |  2   3   4   5   6   7
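The table's 36 equally likely outcomes can be enumerated to obtain the exact PMF of the sum X = Y1 + Y2:

```python
from collections import Counter
from fractions import Fraction

# Enumerate the 36 equally likely (Y1, Y2) outcomes and tabulate X = Y1 + Y2.
counts = Counter(y1 + y2 for y1 in range(1, 7) for y2 in range(1, 7))
pmf = {x: Fraction(c, 36) for x, c in sorted(counts.items())}
for x, p in pmf.items():
    print(x, p)   # e.g. 7 -> 1/6
```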
The CDF of a continuous random variable X with PDF f satisfies

F(x) = \int_{−∞}^{x} f(y) dy.

If X ∼ LN(μ, σ²) is lognormally distributed then

E[X] = exp(μ + σ²/2)
Var(X) = exp(2μ + σ²) (exp(σ²) − 1).
Compound sums: let W = \sum_{i=1}^{N} X_i where the X_i are IID with mean μ_x and variance σ_x², independent of N. Then

E[W] = E[E[W | N]] = E[N μ_x] = μ_x E[N]

and, by the conditional variance formula,

Var(W) = Var(E[W | N]) + E[Var(W | N)]
       = Var(μ_x N) + E[N σ_x²]
       = μ_x² Var(N) + σ_x² E[N].
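The compound-sum moment formulas can be verified exactly on a small example by enumerating the distribution of W. The PMFs below are made up for illustration; any finite PMFs would do:

```python
from collections import defaultdict

# Hypothetical PMFs for illustration.
pmf_N = {0: 0.2, 1: 0.5, 2: 0.3}   # distribution of N
pmf_X = {1: 0.4, 2: 0.6}           # distribution of each X_i

def mean(pmf):
    return sum(v * p for v, p in pmf.items())

def var(pmf):
    m = mean(pmf)
    return sum((v - m)**2 * p for v, p in pmf.items())

# Exact distribution of W = X_1 + ... + X_N, conditioning on N.
pmf_W = defaultdict(float)
for n, pn in pmf_N.items():
    conv = {0: 1.0}                  # n-fold convolution of pmf_X
    for _ in range(n):
        nxt = defaultdict(float)
        for s, ps in conv.items():
            for x, px in pmf_X.items():
                nxt[s + x] += ps * px
        conv = nxt
    for w, pw in conv.items():
        pmf_W[w] += pn * pw

mu_x, var_x = mean(pmf_X), var(pmf_X)
print(mean(pmf_W), mu_x * mean(pmf_N))                          # E[W]
print(var(pmf_W), mu_x**2 * var(pmf_N) + var_x * mean(pmf_N))   # Var(W)
```

Both the enumerated mean and variance agree with the formulas to floating-point precision.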
Multivariate Distributions I
Let X = (X_1, ..., X_n)^⊤ be an n-dimensional vector of random variables.
Definition. For all x = (x_1, ..., x_n) ∈ ℝ^n, the joint cumulative distribution
function (CDF) of X satisfies

F_X(x) = F_X(x_1, ..., x_n) = P(X_1 ≤ x_1, ..., X_n ≤ x_n).

Definition. For a fixed i, the marginal CDF of X_i satisfies

F_{X_i}(x_i) = F_X(∞, ..., ∞, x_i, ∞, ..., ∞).

It is straightforward to generalize the previous definition to joint marginal
distributions. For example, the joint marginal distribution of X_i and X_j satisfies

F_{ij}(x_i, x_j) = F_X(∞, ..., ∞, x_i, ∞, ..., ∞, x_j, ∞, ..., ∞).

We also say that X has joint PDF f_X(·, ..., ·) if

F_X(x_1, ..., x_n) = \int_{−∞}^{x_1} ⋯ \int_{−∞}^{x_n} f_X(u_1, ..., u_n) du_1 ⋯ du_n.
Multivariate Distributions II
Definition. If X_1 = (X_1, ..., X_k)^⊤ and X_2 = (X_{k+1}, ..., X_n)^⊤ is a partition of
X then the conditional CDF of X_2 given X_1 satisfies

F_{X_2|X_1}(x_2 | x_1) = P(X_2 ≤ x_2 | X_1 = x_1).

If X has a PDF, f_X(·), then the conditional PDF of X_2 given X_1 satisfies

f_{X_2|X_1}(x_2 | x_1) = \frac{f_X(x_1, x_2)}{f_{X_1}(x_1)}   (1)

where f_{X_1}(·) is the marginal PDF of X_1.
Independence
Definition. We say the collection X is independent if the joint CDF can be
factored into the product of the marginal CDFs so that

F_X(x_1, ..., x_n) = F_{X_1}(x_1) ⋯ F_{X_n}(x_n).

If X has a PDF, f_X(·), then independence implies that the PDF also factorizes
into the product of marginal PDFs so that

f_X(x) = f_{X_1}(x_1) ⋯ f_{X_n}(x_n).
Can also see from (1) that if X_1 and X_2 are independent then

f_{X_2|X_1}(x_2 | x_1) = f_{X_2}(x_2).
Implications of Independence
Let X and Y be independent random variables. Then for any events A and B,

P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B).   (2)

More generally, for any functions f(·) and g(·), independence of X and Y implies

E[f(X) g(Y)] = E[f(X)] E[g(Y)].   (3)

In fact (2) follows from (3) by taking indicator functions:

P(X ∈ A, Y ∈ B) = E[1_{X∈A} 1_{Y∈B}]
                = E[1_{X∈A}] E[1_{Y∈B}]   by (3)
                = P(X ∈ A) P(Y ∈ B).
Implications of Independence
More generally, if X_1, ..., X_n are independent random variables then

E[f_1(X_1) f_2(X_2) ⋯ f_n(X_n)] = E[f_1(X_1)] E[f_2(X_2)] ⋯ E[f_n(X_n)].
Random variables can also be conditionally independent. For example, we say X
and Y are conditionally independent given Z if

E[f(X) g(Y) | Z] = E[f(X) | Z] E[g(Y) | Z].

This is used in the (in)famous Gaussian copula model for pricing CDOs!
In particular, let D_i be the event that the i-th bond in a portfolio defaults.
It is not reasonable to assume that the D_i's are independent. Why?
But maybe they are conditionally independent given Z, so that

P(D_1, ..., D_n | Z) = P(D_1 | Z) ⋯ P(D_n | Z)

- it is often easy to compute this.
Linear transformations: for a random vector X, a constant matrix A, and a constant vector a,

E[AX + a] = A E[X] + a   (4)
Cov(AX + a) = A Cov(X) A^⊤.   (5)
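The affine-transformation identities E[AX + a] = A E[X] + a and Cov(AX + a) = A Cov(X) A^⊤ can be checked exactly on a small discrete distribution. The atoms, weights, A, and a below are made up for illustration:

```python
# A two-dimensional discrete distribution: (value, probability) pairs.
atoms = [((0.0, 1.0), 0.25), ((2.0, -1.0), 0.25), ((1.0, 3.0), 0.5)]
A = [[2.0, 1.0], [0.0, 3.0]]
a = [1.0, -1.0]

def mean(atoms):
    return [sum(p * x[i] for x, p in atoms) for i in range(2)]

def cov(atoms):
    m = mean(atoms)
    return [[sum(p * (x[i] - m[i]) * (x[j] - m[j]) for x, p in atoms)
             for j in range(2)] for i in range(2)]

def affine(x):   # y = A x + a
    return tuple(sum(A[i][j] * x[j] for j in range(2)) + a[i] for i in range(2))

y_atoms = [(affine(x), p) for x, p in atoms]

# E[AX + a] versus A E[X] + a
mx = mean(atoms)
lhs_mean = mean(y_atoms)
rhs_mean = [sum(A[i][j] * mx[j] for j in range(2)) + a[i] for i in range(2)]
print(lhs_mean, rhs_mean)

# Cov(AX + a) versus A Cov(X) A^T
C = cov(atoms)
AC = [[sum(A[i][k] * C[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
ACAt = [[sum(AC[i][k] * A[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
print(cov(y_atoms), ACAt)
```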
The multivariate normal distribution, X ∼ N_n(μ, Σ), has PDF

f_X(x) = \frac{1}{(2π)^{n/2} |Σ|^{1/2}} e^{−\frac{1}{2}(x − μ)^⊤ Σ^{−1} (x − μ)}

where μ is the mean vector and Σ is the covariance matrix, with diagonal entries σ_1², ..., σ_n².
Linear Combinations
A linear combination, AX + a, of a multivariate normal random vector, X, is
normally distributed with mean vector A E[X] + a and covariance matrix
A Cov(X) A^⊤.
Martingales
Definition. A random process, {X_n : 0 ≤ n < ∞}, is a martingale with respect
to the information filtration, F_n, and probability distribution, P, if
1. E^P[|X_n|] < ∞ for all n ≥ 0
2. E^P[X_{n+m} | F_n] = X_n for all n, m ≥ 0.
Martingales are used to model fair games and have a rich history in the modeling
of gambling problems.
We define a submartingale by replacing condition #2 with

E^P[X_{n+m} | F_n] ≥ X_n for all n, m ≥ 0,

and a supermartingale by replacing it with

E^P[X_{n+m} | F_n] ≤ X_n for all n, m ≥ 0.
Example: let X_1, X_2, ... be IID random variables with mean μ, and define
M_n = \sum_{i=1}^{n} X_i − nμ. Then M_n is a martingale since

E_n[M_{n+m}] = E_n[\sum_{i=1}^{n+m} X_i] − (n + m)μ
            = \sum_{i=1}^{n} X_i + E_n[\sum_{i=n+1}^{n+m} X_i] − (n + m)μ
            = \sum_{i=1}^{n} X_i + mμ − (n + m)μ = M_n.
Example: the doubling strategy. A gambler bets 1 initially, doubling the stake
after each loss and stopping at the first win; each bet is won or lost with
probability 1/2. After n successive losses the total amount lost is

1 + 2 + ⋯ + 2^{n−2} + 2^{n−1} = (2^{n−1} − 1) + 2^{n−1} = 2^n − 1

so the gambler's wealth is W_n = −2^n + 1.   (6)

The (n+1)-st bet is 2^n, so

P(W_{n+1} = 1 | W_n = −2^n + 1) = 1/2
P(W_{n+1} = −2^{n+1} + 1 | W_n = −2^n + 1) = 1/2

so that

E[W_{n+1} | W_n = −2^n + 1] = (1/2)(1) + (1/2)(−2^{n+1} + 1) = −2^n + 1 = W_n.   (7)

The wealth process is therefore a martingale.
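A short simulation (not from the slides) of the doubling strategy: every run ends with a profit of exactly +1, even though the wealth process is a fair-game martingale. There is no contradiction because the stopping time is unbounded, and so are the interim losses:

```python
import random

def play(rng):
    """Run the doubling strategy until the first win; return final wealth."""
    wealth, bet = 0, 1
    while True:
        if rng.random() < 0.5:   # win: recover all losses plus 1
            return wealth + bet
        wealth -= bet            # lose: wealth is now -(2^k - 1) after k losses
        bet *= 2                 # double the next bet

rng = random.Random(42)
results = [play(rng) for _ in range(10_000)]
print(set(results))   # {1}
```

After k losses the wealth is −(2^k − 1) and the stake is 2^k, so the first win always nets exactly 1 regardless of when it occurs.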
Pólya's Urn
Consider an urn which contains red balls and green balls.
Initially there is just one green ball and one red ball in the urn.
At each time step a ball is chosen randomly from the urn:
1. If the ball is red, then it is returned to the urn with an additional red ball.
2. If the ball is green, then it is returned to the urn with an additional green ball.
Let X_n denote the number of red balls in the urn after n draws (so the urn then
contains n + 2 balls). Then

P(X_{n+1} = k + 1 | X_n = k) = \frac{k}{n + 2}
P(X_{n+1} = k | X_n = k) = \frac{n + 2 − k}{n + 2}.
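From these transition probabilities one can verify exactly (this check is an addition, not on the slides) that the fraction of red balls, M_n = X_n / (n + 2), is a martingale:

```python
from fractions import Fraction

def cond_mean_fraction(n, k):
    """E[X_{n+1}/(n+3) | X_n = k] from the Polya urn transition probabilities."""
    p_up = Fraction(k, n + 2)             # draw red: X goes to k + 1
    p_stay = Fraction(n + 2 - k, n + 2)   # draw green: X stays at k
    return (Fraction(k + 1) * p_up + Fraction(k) * p_stay) / (n + 3)

# Martingale property: the conditional mean fraction equals k/(n+2) exactly.
for n in range(20):
    for k in range(1, n + 2):
        assert cond_mean_fraction(n, k) == Fraction(k, n + 2)
print("fraction of red balls is a martingale")
```

Algebraically, ((k+1)k + k(n+2−k)) / ((n+2)(n+3)) = k(n+3) / ((n+2)(n+3)) = k/(n+2), which is what the exact rational arithmetic confirms.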
Brownian Motion
Definition. We say that a random process, {X_t : t ≥ 0}, is a Brownian motion
with parameters (μ, σ) if
1. For 0 < t_1 < t_2 < ... < t_{n−1} < t_n, the increments
(X_{t_2} − X_{t_1}), (X_{t_3} − X_{t_2}), ..., (X_{t_n} − X_{t_{n−1}})
are mutually independent.
2. For s > 0, X_{t+s} − X_t ∼ N(μs, σ²s), and
3. X_t is a continuous function of t.
We say that X_t is a B(μ, σ) Brownian motion with drift μ and volatility σ.
Property #1 is often called the independent increments property.
Remark. Bachelier (1900) and Einstein (1905) were the first to explore Brownian
motion from a mathematical viewpoint whereas Wiener (1920s) was the first to
show that it actually exists as a well-defined mathematical entity.
When μ = 0 and σ = 1 we have a standard Brownian motion (SBM); we will use W_t
to denote an SBM, so that a B(μ, σ) Brownian motion starting at x can be written as

X_t = x + μt + σW_t ∼ N(x + μt, σ²t).   (8)
Information Filtrations
For any random process we will use F_t to denote the information available
at time t
- the set {F_t}_{t≥0} is then the information filtration
- so E[· | F_t] denotes an expectation conditional on the time t information available.
As an example, consider computing E_0[W_{t+s} W_s] for a standard Brownian
motion. Writing W_{t+s} = (W_{t+s} − W_s) + W_s,

E_0[W_{t+s} W_s] = E_0[(W_{t+s} − W_s + W_s) W_s]
                = E_0[(W_{t+s} − W_s) W_s] + E_0[W_s²].   (9)

Conditioning the first term on the time s information,

E_0[(W_{t+s} − W_s) W_s] = E_0[E_s[(W_{t+s} − W_s) W_s]]
                         = E_0[W_s E_s[W_{t+s} − W_s]]
                         = E_0[W_s · 0]
                         = 0

so E_0[W_{t+s} W_s] = E_0[W_s²] = s.
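A Monte Carlo sanity check that E_0[W_{t+s} W_s] = s for a standard Brownian motion, simulating W_s and the independent increment W_{t+s} − W_s directly (the parameter values and sample size below are arbitrary choices):

```python
import math
import random

rng = random.Random(0)
s, t, n = 1.0, 2.0, 200_000
total = 0.0
for _ in range(n):
    w_s = math.sqrt(s) * rng.gauss(0.0, 1.0)           # W_s ~ N(0, s)
    w_ts = w_s + math.sqrt(t) * rng.gauss(0.0, 1.0)    # add independent N(0, t) increment
    total += w_ts * w_s
print(total / n)   # ~= s = 1.0
```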
Suppose X_t is a geometric Brownian motion, GBM(μ, σ), so that
X_t = X_0 e^{(μ − σ²/2)t + σW_t}. Then

X_{t+s} = X_0 e^{(μ − σ²/2)(t+s) + σW_{t+s}}
        = X_0 e^{(μ − σ²/2)t + σW_t} · e^{(μ − σ²/2)s + σ(W_{t+s} − W_t)}
        = X_t e^{(μ − σ²/2)s + σ(W_{t+s} − W_t)}.   (10)

Taking conditional expectations,

E_t[X_{t+s}] = E_t[X_t e^{(μ − σ²/2)s + σ(W_{t+s} − W_t)}]
            = X_t e^{(μ − σ²/2)s} E_t[e^{σ(W_{t+s} − W_t)}]
            = X_t e^{(μ − σ²/2)s} e^{σ²s/2}
            = e^{μs} X_t

so the expected growth rate of X_t is μ.
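The growth-rate result E[X_s] = e^{μs} X_0 can be checked by Monte Carlo, simulating X_s = X_0 exp((μ − σ²/2)s + σW_s) directly (the parameters μ = 0.05, σ = 0.2, s = 1, X_0 = 100 are arbitrary choices):

```python
import math
import random

rng = random.Random(1)
mu, sigma, s, x0, n = 0.05, 0.2, 1.0, 100.0, 200_000
total = 0.0
for _ in range(n):
    w_s = math.sqrt(s) * rng.gauss(0.0, 1.0)   # W_s ~ N(0, s)
    total += x0 * math.exp((mu - 0.5 * sigma**2) * s + sigma * w_s)
est = total / n
print(est, x0 * math.exp(mu * s))   # both ~= 105.13
```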
The following properties of GBM follow from the definition:
1. For 0 < t_1 < t_2 < ⋯ < t_n, the ratios

X_{t_2}/X_{t_1}, X_{t_3}/X_{t_2}, ..., X_{t_n}/X_{t_{n−1}}

are mutually independent.
2. For s > 0, X_{t+s}/X_t is lognormally distributed:

log(X_{t+s}/X_t) ∼ N((μ − σ²/2)s, σ²s).