Lectnotemat 5
for
Timo Koski
Department of Mathematics
KTH Royal Institute of Technology
Stockholm, Sweden
Contents
Foreword

2 Probability Distributions
2.1 Introduction
2.2 Continuous Distributions
2.2.1 Univariate Continuous Distributions
2.2.2 Continuous Bivariate Distributions
2.2.3 Mean, Variance and Covariance of Linear Combinations of R.V.s
2.3 Discrete Distributions
2.3.1 Univariate
2.3.2 Bivariate Discrete Distributions
2.4 Transformations of Continuous Distributions
2.4.1 The Probability Density of a Function of a Random Variable
2.4.2 Change of Variable in a Joint Probability Density
2.5 Appendix: Decompositions of Probability Measures on the Real Line
2.5.1 Introduction
2.5.2 Decompositions of µ on (R, B)
2.5.3 Continuous, Discrete and Singular Random Variables
2.6 Exercises
2.6.1 Distribution Functions
2.6.2 Univariate Probability Density Functions
2.6.3 Multivariate P.d.f.'s
2.6.4 Expectations and Variances
2.6.5 Additional Exercises

Bibliography
Index
Foreword
This text corresponds to the material in the course on intermediate probability calculus for master's students that has been taught at the Department of Mathematics at KTH during the last decades. Here this material is organized into one document. Some topics have been added that were not included in the earlier courses, and a few previous items have been omitted. The author is obviously indebted to Prof.em. Bengt Rosén and Prof.em. Lars Holst, who built up the course.
Boualem Djehiche, Gunnar Englund, Davit Karagulyan, Gaultier Lambert, Harald Lang, Pierre Nyquist
and several others are thanked for having suggested improvements and for having pointed out several errors
and mistakes in the earlier editions.
Chapter 1

Probability Spaces and Random Variables
1.1 Introduction
In the first courses on probability given at most universities of technology, see, e.g., [12, 16, 101] for a few
excellent items in this educational genre, as well as in courses involving probability and random processes in
physics and statistical physics, see [17, 58, 62, 73, 78] or reliability of structures [32, 77] or civil engineering
[4], one seemingly considers all subsets, called events, of a space of outcomes. Then one treats a (in practice, finitely additive) probability as a positive total mass = 1 distributed on these events. When the goal is to train students in the use of explicit probability distributions and in statistical modelling for engineering, physics and economics problems, this approach is necessary and has definite didactic advantages, and need not be questioned (and the cited authors are, of course, well aware of the simplifications imposed).
There is, however, a need to introduce the language1 and viewpoint of rigorous mathematical analysis, as
argued in [43]. The precise (and abstract) mathematical theory requires a more restricted set of events than
all the subsets. This leads us to introduce algebras of sets and sigma algebras of sets. The material below has
approximately the same level of mathematical completeness as [20, 43, 44, 95] and [103, chapter 1].
. . . adds little that is of value to (physicists)’. That notwithstanding, in [62, chapter 10] the merits of surveying ’the concepts and
jargon of modern probability theory’ (that is, what corresponds to chapters 1 and 2 in these notes) are recognized. The rationale
is that a natural scientist or an engineer will learn how to interpret the basic points of a mathematical discourse in a preferred
intuitive idiom.
1.2 Terminology and Notations in Elementary Set Theory

The universal set of all elements currently under consideration is denoted by Ω. Later on we shall refer to Ω as the outcome space or sample space and to ω as an elementary outcome.
Example 1.2.1 The examples of Ω first encountered in courses on probability theory are simple. The outcomes ω of a toss of a coin are heads and tails, and we write the universal set as
Ω = {heads, tails}.
Let now Ω be an abstract universal set and let A, B, etc. denote sets, i.e., collections of elements in Ω.
• A^c is the complement set of A. It consists of all elements ω that do not belong to A. It follows that (A^c)^c = A.
• A ⊆ B denotes the inclusion of sets. It means that A is a subset of B: if ω ∈ A, then ω ∈ B. In addition, we have A ⊆ Ω for any set A.
Note that A ⊆ B and B ⊆ A if and only if A = B.
We also use on occasion the notation of strict inclusion A ⊂ B, which means that A ⊆ B and A ≠ B.
• If A ⊆ B, then B^c ⊆ A^c.
• P(A) denotes the family of all subsets of A and is known as the power set of A.
• A ∪ B is the union of the sets A and B. The union consists of all elements ω such that ω ∈ A or ω ∈ B
or both. We have thus
A ∪ Ω = Ω
A ∪ ∅ = A
A ∪ A^c = Ω
A ∪ B = B ∪ A
and
A ∪ A = A.
The countable union
∪_{i=1}^∞ A_i = A_1 ∪ A_2 ∪ . . .
consists of the elements ω such that there is at least one A_i such that ω ∈ A_i.
1.2. TERMINOLOGY AND NOTATIONS IN ELEMENTARY SET THEORY 13
• A ∩ B is the intersection of the sets A and B. The intersection consists of all elements ω such that ω ∈ A
and ω ∈ B. It is seen that
A ∩ Ω = A
A ∩ ∅ = ∅
A ∩ A^c = ∅
A ∩ B = B ∩ A
and
A ∩ A = A.
The countable intersection
∩_{i=1}^∞ A_i = A_1 ∩ A_2 ∩ . . .
consists of the elements ω such that ω ∈ A_i for every A_i.
• The sets A and B are said to be disjoint if A ∩ B = ∅. The sets A_1, A_2, . . . , A_n are pairwise disjoint if all pairs A_i, A_j are disjoint for i ≠ j.
• A \ B is the set difference of the sets A and B. It is the complement of B in A, and thus contains all elements in A that are not in B: ω ∈ A and ω ∉ B. Therefore we get
A \ B = A ∩ B^c.
• De Morgan’s Rules
The following rules of computation are frequently useful in probability calculus and are easy to memorize.
(A ∪ B)^c = A^c ∩ B^c
(A ∩ B)^c = A^c ∪ B^c
(∪_{i=1}^∞ A_i)^c = ∩_{i=1}^∞ A_i^c
and
(∩_{i=1}^∞ A_i)^c = ∪_{i=1}^∞ A_i^c.
• Distributive Laws
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
and
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C).
• A × B is the (Cartesian) product of the sets A and B. It consists of all pairs (ω1 , ω2 ) such that ω1 ∈ A
and ω2 ∈ B.
The product A1 × A2 × . . . × An consists of ordered n-tuples (ω1 , ω2 , . . . , ωn ) such that ωi ∈ Ai for each
i = 1, . . . , n.
If Ai = A for all i, then we write
An = A × A × . . . × A
as a product of n copies of A.
• Intervals
If a and b are real numbers, a < b, then
(a, b), [a, b), (a, b] and [a, b]
are intervals with endpoints a and b. These are subsets of the real line R, here taken as a universal set
with elements denoted by x (=a real number) such that (a, b) = {x ∈ R | a < x < b}, [a, b) = {x ∈ R |
a ≤ x < b}, (a, b] = {x ∈ R | a < x ≤ b} and [a, b] = {x ∈ R | a ≤ x ≤ b}. We take [a, a) = ∅. For
(a, b) and (a, b] we can let a = −∞ and for [a, b) and (a, b) we can allow b = +∞. Hence we can write
(−∞, ∞) = {x ∈ R | −∞ < x < ∞}. The set operations are, e.g., (a, b)c = (−∞, a] ∪ [b, ∞).
1.3 Algebras of Sets

Definition 1.3.1 (Algebra) A collection A of subsets of Ω is called an algebra if it satisfies
1. Ω ∈ A
2. If A ∈ A, then A^c ∈ A.
3. If A ∈ A and B ∈ A, then A ∪ B ∈ A.
Condition 1. above is known as non-emptiness, condition 2. as closure under complement, and condition 3. as closure under union. Note that if A ∈ A and B ∈ A, then there is closure under intersection, A ∩ B ∈ A, too. This follows since A^c, B^c ∈ A, hence A^c ∪ B^c ∈ A, and A ∩ B = (A^c ∪ B^c)^c by De Morgan's rule. Since ∅ = Ω^c, we also have ∅ ∈ A.
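As a small illustration of this closure argument, the following Python sketch verifies that a four-element family of the form {∅, F, F^c, Ω} is closed under complement and union, and is therefore an algebra (indeed a sigma-field, being finite). Here Ω and F are arbitrarily chosen for illustration:

```python
# Check that {emptyset, F, F^c, Omega} is closed under complement and union.
Omega = frozenset(range(6))     # illustrative finite universal set
F = frozenset({0, 1, 2})
Fc = Omega - F
algebra = {frozenset(), F, Fc, Omega}

# Closure under complement: Omega - S stays inside the family.
assert all(Omega - S in algebra for S in algebra)

# Closure under (pairwise) union: S | T stays inside the family.
assert all(S | T in algebra for S in algebra for T in algebra)
```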
Example 1.3.2 If Ω is a finite set, then the power set P(Ω) is an algebra.
Definition 1.3.2 (Sigma-Algebra, a.k.a. Sigma-Field, a.k.a. σ-Field) A collection A of subsets of Ω is called a σ-algebra/field if it satisfies
1. Ω ∈ A
2. If A ∈ A, then A^c ∈ A.
3. If A_i ∈ A for i = 1, 2, . . ., then ∪_{i=1}^∞ A_i ∈ A.
Here condition 3. is referred to as closure under countable union. If an algebra is finite, then it is also a sigma-algebra. A σ-algebra A is usually constructed by first choosing an algebra, say C, of subsets of Ω that generates A. By this we mean that we augment C by all possible countable unions of sets in C, their complements, all possible countable unions of these complements, ad infinitum. We shall describe this procedure in some more detail in the sequel, when Ω = the real line, denoted by R.
is NOT a sigma-field.
Suppose that for all n ≥ 1
A_n ⊂ A_{n+1},
i.e., (A_n)_{n=1}^∞ is an increasing sequence of events in A. Then we can define
lim_{n→∞} A_n := ∪_{n=1}^∞ A_n.   (1.1)
Then lim_{n→∞} A_n ∈ A. In words, ω is in the limit of an increasing sequence of events if ω belongs to some A_n and thereby to infinitely many sets in the collection.
Suppose that for all n ≥ 1
A_{n+1} ⊂ A_n,
and we say that (A_n)_{n=1}^∞ is decreasing. Then we can define
lim_{n→∞} A_n := ∩_{n=1}^∞ A_n,   (1.2)
and lim_{n→∞} A_n ∈ A. In other words, ω is in the limit of a decreasing sequence of events if ω belongs to all A_n.
Example 1.3.6 Let Ω = R and suppose that we have a sigma field A such that all intervals of the form
[1, 2 − 1/n] ∈ A.
Then the sequence of events is increasing,
[1, 2 − 1/n] ⊂ [1, 2 − 1/(n + 1)],
and [1, 2) ∈ A, since
[1, 2) = lim_{n→∞} [1, 2 − 1/n].
Note that a σ algebra is clearly an algebra, but the converse is not always true, as the following example shows:
Example 1.3.7 Let Ω = R and let A denote the collection of subsets of the form:
Example 1.3.8 Suppose that we have a sigma field A such that all intervals of the form
(a, b) ∈ A,
Theorem 1.3.9 Given any collection C of subsets of a set Ω, there is a smallest algebra A containing C. That
is, there is an algebra A containing C such that if B is any algebra containing C then B contains A.
Proof Let F denote the family of all algebras of subsets of Ω which contain C. The axioms of set theory are
required to justify the existence of this family; it is a subset of P(P(Ω)) where P denotes taking the power set.
Let A = ∩{D | D ∈ F}. Then, for any A ∈ A and B ∈ A, we have A ∪ B ∈ D for every D ∈ F, and hence A ∪ B ∈ A.
Similarly, if A ∈ A, then Ac ∈ A. It follows that A is an algebra and that C ⊆ A. Furthermore, A ⊆ B for any
algebra B containing C.
Lemma 1.3.10 Let C denote an indexing set. If (A_c)_{c∈C} is a collection of σ-algebras, then A = ∩_{c∈C} A_c (that is, the collection of sets that are in A_c for all c ∈ C) is a σ-algebra.
Corollary 1.3.11 Given a collection of sets C, there exists a smallest σ-algebra B containing each set in C. That is, there exists a sigma algebra B such that if A is any other sigma algebra containing each set in C, then B ⊆ A.
Proof The proof follows in exactly the same way as the proof of the existence of a smallest algebra containing a given collection of sets. The set of all possible sigma algebras containing C exists by the power set axiom² (applied twice). Take the intersection; this exists by De Morgan's laws. It is easy to check the hypotheses to see that the resulting set is a σ-algebra: if A is in all the σ-algebras, then so is A^c, and if (A_j)_{j=1}^∞ are in all the σ-algebras, then so is ∪_{j=1}^∞ A_j. The resulting collection is a σ-algebra and is contained in any other σ-algebra containing each set in C.
Referring to corollary 1.3.11 we say again that B is generated by C. In addition, we launch the notation
F ⊆ G,
which says that any set in the sigma field F lies also in the sigma field G.
The collection {∅, {ω_1}, {ω_1}^c, Ω} is a sigma-field generated by the collection of sets {{ω_1}}, or, generated by the set {ω_1}.
²The power set axiom states that for any set A there exists a set P(A) such that a set B is an element of P(A) if and only if B is a subset of A. Or, every set has a power set. Here one is stepping outside the realm of naïve set theory and considering axiomatic set theory with the Zermelo-Fraenkel axioms.
Definition 1.3.3 (The Borel Sigma Algebra) The Borel σ algebra B over R is generated by intervals of
the form (a, b).
Thereby the Borel σ algebra B contains all sets of the form (−n, b), where n is a positive integer, and thus
(−∞, b) = lim_{n→∞} (−n, b)
is in B. In addition
{a} = lim_{n→∞} (a − 1/n, a + 1/n),
and we see that all singleton sets belong to B. Furthermore,
Theorem 1.3.14 The Borel σ algebra B over R is generated by each and every of
7. open sets of R
8. closed subsets of R
Definition 1.3.4 (Borel function) A function f : R → R is called a Borel function if for every set A in B, the Borel σ algebra, we have that
f^{-1}(A) = {x ∈ R | f(x) ∈ A}
belongs to the Borel σ algebra, i.e.,
f^{-1}(A) ∈ B.
1.4 Probability Space

Definition 1.4.1 (Measure) A measure over a σ-algebra F of subsets of Ω is a non-negative set function µ : F → R_+ satisfying, for every sequence (A_i)_{i=1}^∞ of pairwise disjoint sets in F,
µ(∪_{i=1}^∞ A_i) = Σ_{i=1}^∞ µ(A_i).
This is known as countable additivity.
If µ(Ω) = 1, then µ is said to be a probability measure and we use the notation P for the generic probability
measure.
The definition above is postulated for further mathematical developments of probability calculus.
For real world applications of probability the main problem is the choice of the sample space Ω of
events and the assignment of probability on the events.
We quote the following fundamental theorem of probability [81, ch. 1.5]. It tells us that one can construct a probability measure on the sigma algebra generated by an algebra by first giving the measure on the generating algebra.
Theorem 1.4.1 Let A be a set algebra and let σ(A) be the (smallest) sigma algebra generated by A. If P is a probability measure defined on A, then there exists one and only one probability measure P̃ defined on σ(A) such that if A ∈ A, then P̃(A) = P(A).
We shall next find a few direct consequences of the axiomatic definition of a probability measure P.
Theorem 1.4.2
P(∅) = 0.
Proof Consider Ω = Ω ∪ (∪_{i=1}^∞ A_i), where A_i = ∅ for i = 1, 2, . . .. Then A_i ∩ A_j = ∅ and Ω ∩ A_j = ∅, i.e., the sets in the union Ω ∪ (∪_{i=1}^∞ A_i) are disjoint. We set a = P(∅). Then countable additivity yields
1 = P(Ω) = P(Ω ∪ (∪_{i=1}^∞ A_i)) = P(Ω) + Σ_{i=1}^∞ P(A_i) = 1 + Σ_{i=1}^∞ P(A_i) = 1 + a + a + a + . . . ,
which is possible only if a = P(∅) = 0.
Theorem 1.4.3 (Finite Additivity) Any countably additive probability measure is finitely additive, i.e., for all A_i in the collection (A_i)_{i=1}^n of pairwise disjoint sets,
P(∪_{i=1}^n A_i) = Σ_{i=1}^n P(A_i).
Moreover, a probability measure is monotone: if A ⊆ B, then
P(A) ≤ P(B).
Example 1.4.7 (Probability measure on a countable outcome space) We consider the special case Ω = {ω = (ω_i)_{i=1}^n | ω_i ∈ {0, 1}}. In words, the elementary outcomes are finite sequences of digital bits. Ω is countable. The sigma field F_o is generated by the collection of sets A_k (a.k.a. cylinders) of the form
A_k = {ω ∈ Ω | ω_{l_1} = x_{l_1}, . . . , ω_{l_k} = x_{l_k}}
for any integer k ≤ n and an arbitrary string of bits x_{l_1} x_{l_2} . . . x_{l_k}. We assign the weight p(ω) ≥ 0 to every elementary outcome ω and require that Σ_ω p(ω) = 1. Then the probability of any set A in F_o is defined by
P(A) := Σ_{ω∈A} p(ω).   (1.3)
It can be shown (an exercise to this section) that P is a countably additive probability measure, and therefore (Ω, F_o, P) is a probability space. The measure P can be extended to the σ-field of measurable subsets F of the uncountable {(ω_i)_{i=1}^∞ | ω_i ∈ {0, 1}}.
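A hedged sketch of this construction in Python, with an arbitrarily chosen product weight p (independent biased bits with p(1) = 0.3, which is an illustrative choice and not prescribed by the example):

```python
from itertools import product

# Outcome space: all bit strings of length n, with weights p(omega) summing to 1.
n = 4
Omega = list(product([0, 1], repeat=n))

def p(omega, q=0.3):
    """Weight of one elementary outcome: product of per-bit probabilities."""
    out = 1.0
    for bit in omega:
        out *= q if bit == 1 else 1 - q
    return out

def P(A):
    """Probability of an event A (a collection of outcomes), as in (1.3)."""
    return sum(p(omega) for omega in A)

# The weights sum to one over the whole outcome space.
assert abs(P(Omega) - 1.0) < 1e-12

# Additivity on disjoint events: 'first bit is 0' and 'first bit is 1'.
A0 = {w for w in Omega if w[0] == 0}
A1 = {w for w in Omega if w[0] == 1}
assert A0.isdisjoint(A1)
assert abs(P(A0 | A1) - (P(A0) + P(A1))) < 1e-12
```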
The proofs of the continuity properties are given below. One needs to recall (1.1) and (1.2).
Theorem 1.4.8 If B_n ↑ ∪_{k=1}^∞ B_k = B, then P(B) = lim_{n→∞} P(B_n).
Proof Recall that A \ B = A ∩ B^c. We can write
P(B) = P(∪_{k=1}^∞ B_k) = P((∪_{k=2}^∞ (B_k \ B_{k-1})) ∪ B_1).
But the sets in the decomposition are seen to be pairwise disjoint, and hence the countable additivity yields
P((∪_{k=2}^∞ (B_k \ B_{k-1})) ∪ B_1) = Σ_{k=2}^∞ P(B_k \ B_{k-1}) + P(B_1)
= lim_{n→∞} [Σ_{k=2}^n P(B_k \ B_{k-1}) + P(B_1)] = lim_{n→∞} P(B_n),
where the last equality holds because the sum telescopes: since the B_k are increasing, P(B_k \ B_{k-1}) = P(B_k) − P(B_{k-1}), so Σ_{k=2}^n P(B_k \ B_{k-1}) + P(B_1) = P(B_n).
Theorem 1.4.9 If B_n ↓ ∩_{k=1}^∞ B_k, then P(∩_{k=1}^∞ B_k) = lim_{n→∞} P(B_n).
Proof The complements B_n^c form an increasing sequence, B_n^c ↑ ∪_{k=1}^∞ B_k^c, so theorem 1.4.8 gives
P(∪_{k=1}^∞ B_k^c) = lim_{n→∞} P(B_n^c).
By De Morgan's rule, ∩_{k=1}^∞ B_k = (∪_{k=1}^∞ B_k^c)^c, and hence
P(∩_{k=1}^∞ B_k) = 1 − P(∪_{k=1}^∞ B_k^c) = 1 − lim_{n→∞} P(B_n^c) = 1 − lim_{n→∞} (1 − P(B_n)) = lim_{n→∞} P(B_n).
If we take a = b, we need to have the singleton sets {a} in F, and their probability is zero. If F is to be a sigma-field, then the open interval (a, b) = ∪_{i=1}^∞ [a + 1/i, b − 1/i] must be in F, and the probability of such an open interval is by continuity from below (see condition 2. in section 1.4.2 above)
P((a, b)) = lim_{i→∞} P([a + 1/i, b − 1/i]) = lim_{i→∞} (b − a − 2/i) = b − a.
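The computation can be replayed numerically: the probabilities b − a − 2/i increase monotonically to b − a, which is continuity from below in action (the endpoints 0.2 and 0.9 are arbitrary illustrative choices):

```python
# Continuity from below, numerically: P([a + 1/i, b - 1/i]) = b - a - 2/i
# increases to P((a, b)) = b - a as i grows.
a, b = 0.2, 0.9
probs = [b - a - 2.0 / i for i in range(3, 1000)]

assert all(p2 >= p1 for p1, p2 in zip(probs, probs[1:]))  # monotone increase
assert abs(probs[-1] - (b - a)) < 3e-3                    # close to b - a
```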
Any open subset of Ω is the union of a finite or countably infinite set of open intervals, so F should contain all open and closed subsets of Ω. Hence F must contain any set that is the intersection of countably many open sets, and so on.
The specification (1.5) of probability must therefore be extended from all intervals to all of F . We cannot
figure out a priori how large F will be. One might think that F should be the set of all subsets of Ω. However,
this does not work:
Suppose that we wish to define a measure, to be called length, length(A), for all subsets A of R such that
length([a, b]) = b − a,  a < b,
and such that the measure satisfies the additional condition of translation invariance,
length(A + y) = length(A) for all y ∈ R,
where A + y = {x + y | x ∈ A}.
This is now shown to lead to a contradiction. Take Q = the set of rational numbers, i.e., Q = {p/q | p ∈ Z, q ∈ Z, q ≠ 0}. For any real x ∈ R let Q_x = Q + x. One can show that for any x ∈ R and y ∈ R either Q_x = Q_y or Q_x and Q_y are disjoint. One can also show that Q_x ∩ [0, 1] ≠ ∅ for all x ∈ R, or in plain words, each Q_x contains at least one element from [0, 1].
Let V be a set obtained by choosing exactly one element from the interval [0, 1] from each Qx . (V
is well defined, if we accept the Axiom of Choice3 .)
Thus V is a subset of [0, 1]. Suppose q_1, q_2, . . . is an enumeration of all the rational numbers in the interval [−1, 1], with no number appearing twice in the list. Let for i ≥ 1
V_i = V + q_i.
Then
[0, 1] ⊂ ∪_{i=1}^∞ V_i ⊂ [−1, 2].
Since the V_i are translations of V, they should all have the same length as V. If the length of V were defined to be zero, then [0, 1] would also have length zero, by monotonicity. If the length of V were strictly positive, then the length of ∪_{i=1}^∞ V_i would by countable additivity be infinite, and hence the interval [−1, 2] would have infinite length. Either way we have a contradiction.
The difficulty will be resolved by taking F to be the Borel sigma algebra, c.f. definition 1.3.3 above, and by
construction of the Lebesgue measure.
For the construction of the Lebesgue measure we refer to [36, chapter 1.] or [91, chapter 11.].
We outline a rudiment of this theory. Lebesgue measure over the real line is defined as follows. The length of an interval [a, b], (a, b), (a, b] or [a, b) is given by b − a (c.f. the measure length above). The outer measure of a set A is given as the infimum over coverings of A by open intervals (I_n)_{n=1}^∞:
m*(A) = inf_{(I_n)_{n=1}^∞ : A ⊂ ∪_n I_n} Σ_{n=1}^∞ |I_n|,
where |I_n| denotes the length of the interval I_n. A set B is said to be measurable, with measure λ(B) = m*(B), if for any set A ⊂ R it holds that
m*(A) = m*(A ∩ B) + m*(A ∩ B^c).
The Heine-Borel lemma states that every covering of a closed, bounded interval by open sets has a finite subcovering.
One then uses the Carathéodory Extension Theorem to show that Lebesgue measure is well defined over the Borel σ algebra.
Finally, why not be content with probability measures only on set algebras? The answer is that a good theory of probability needs limits of random variables and infinite sums of random variables, and these require events outside a set algebra.
³http://en.wikipedia.org/wiki/Axiom_of_choice
Example 1.4.10 Let F be the Borel sigma algebra, c.f. definition 1.3.3, restricted to [0, 1]. Let A = (a, b] with 0 ≤ a ≤ b ≤ 1 and P(A) = b − a. By definition 1.3.3 we know that singleton sets {a} belong to F and thus P({a}) = 0. Hence, e.g., the set of rational numbers in [0, 1], i.e., the numbers p/q with p and q positive integers, p ≤ q, being a countable disjoint union of singleton sets, is by countable additivity P-negligible.
1.5 Random Variables and Distribution Functions

. . . (statistics) has never produced any definition of the term 'random variable' that could actually be used in practice to decide whether some specific quantity, such as the number of beans in a can, is or is not 'random'.
Random variables and later random processes are in a very useful manner seen as mathematical models of
physical noise. As examples an engineer might quote thermal noise (a.k.a. Nyquist-Johnson noise, produced by
the thermal motion of electrons inside an electrical conductor), quantum noise and shot noise, see [11, 33, 71, ?].
Does this provide grounds for claiming a physical countably additive probability measure? The foundational question of how to define randomness is certainly not resolved by this manoeuvre, at any rate not if one in a circular manner describes the physical noise as the result of many random events happening at the microscopic level.
Remark 1.5.1 Physical noise, in particular measurement error (mätfel), is described as follows in [52, p.13]:
. . . felens storlek och tecken (kan) inte individuellt påvisas någon lag och de kan alltså inte i förväg
beräknas eller individuellt korrigeras. . . . Vanligen antas en viss relation föreligga emellan de oregel-
bundna felens storlek och deras frekvens.
As an interpretation in English, the quoted Swedish author describes random measurement errors as quantities whose magnitude and sign do not follow any known law and cannot be compensated for in advance as individual items. One assumes, however, that there is a statistical relation or regularity between the magnitudes and their frequencies.
One foundational approach to discussing randomness is due to G. Chaitin [21]. Chaitin argues that randomness has to do with complexity. Or: a random object cannot be compressed at all. Since in randomness there is no structure or pattern ('lag', a known law, in the Swedish quote above), you cannot give a more concise or less complex description (by a computer program or a rule) than the object itself. For a statistical modelling theory with complexity as its platform we refer to the lectures by J. Rissanen [86].
In the sequel we shall not pursue the foundational topics or the related critical discourses any further, but
continue by presenting probabilistic tools for modelling of noise and for modelling by means of noise.
The condition in (1.6) means in words that the pre-image of any A ∈ B is in F, and X is called measurable, or a measurable function from Ω to R. We can also write
Example 1.5.1 Let (Ω, F, P) be a probability space. Take F ∈ F, and introduce the indicator function of F, to be denoted by χ_F, as the real valued function defined by
χ_F(ω) = 1 if ω ∈ F,  χ_F(ω) = 0 if ω ∉ F.   (1.7)
We show now that χ_F is a random variable. We take any A ∈ B and find that
χ_F^{-1}(A) = {ω : χ_F(ω) ∈ A} = ∅ if 0 ∉ A, 1 ∉ A;  F if 0 ∉ A, 1 ∈ A;  F^c if 0 ∈ A, 1 ∉ A;  Ω if 0 ∈ A, 1 ∈ A.
In each of the four cases χ_F^{-1}(A) ∈ F, and hence χ_F is a random variable.
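The four cases can be reproduced mechanically. In the following Python sketch Ω and F are arbitrary finite choices made for illustration:

```python
# Compute the preimage chi_F^{-1}(A) for the indicator of F, reproducing
# the four cases: emptyset, F, F^c, Omega.
Omega = set(range(6))      # illustrative finite outcome space
F = {0, 1, 2}

def indicator_preimage(A):
    """Return {omega : chi_F(omega) in A} as a set."""
    return {w for w in Omega if (1 if w in F else 0) in A}

assert indicator_preimage({5})    == set()        # 0 not in A, 1 not in A
assert indicator_preimage({1})    == F            # 0 not in A, 1 in A
assert indicator_preimage({0})    == Omega - F    # 0 in A, 1 not in A
assert indicator_preimage({0, 1}) == Omega        # 0 in A, 1 in A
```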
For the next result one needs to recall the definition of a Borel function in 1.3.4.
Theorem 1.5.2 Let f : R → R be a Borel function, and X be a random variable. Then Y defined by
Y = f (X)
is a random variable.
Proof Take any A ∈ B. Then Y^{-1}(A) = {ω ∈ Ω | f(X(ω)) ∈ A} = {ω ∈ Ω | X(ω) ∈ f^{-1}(A)}, where the inverse image is f^{-1}(A) = {x ∈ R | f(x) ∈ A}. Since f is a Borel function, we have by definition that f^{-1}(A) ∈ B, since A ∈ B. But then
{ω ∈ Ω | X(ω) ∈ f −1 (A)} ∈ F ,
since X is a random variable. But thereby we have established that Y −1 (A) ∈ F for any A in B, which by
definition means that Y is a random variable.
By this theorem we have, amongst other things, provided a slick mathematical explanation of one
basic tenet of statistics, namely that an estimator of a parameter in a probability distribution
is a random variable. Of course, for students of a first course in probability and statistics the
understanding of this fact may require much more effort and pedagogic ingenuity4 .
Definition 1.5.2 A sigma field generated by a real valued random variable X, denoted by FX and/or
σ(X), consists of all events of the form {ω : X(ω) ∈ A} ∈ F , A ∈ B, where B is the Borel σ algebra over R.
Example 1.5.3 In example 1.5.1 it was verified that χ_F is a random variable for any F ∈ F. Then it follows by the same example and the definition above that the sigma-field generated by χ_F is
F_{χ_F} = {∅, F, F^c, Ω}.
In view of example 1.3.13, F_{χ_F} is the sigma-field generated by the set F, as seems natural.
Definition 1.5.3 A sigma field generated by a family {X_i | i ∈ I} of real valued random variables X_i, denoted by F_{X_i, i∈I}, is defined to be the smallest σ algebra containing all events of the form {ω : X_i(ω) ∈ A} ∈ F, A ∈ B, where B is the Borel σ algebra over R and i ∈ I.
If it holds for all events A in a sigma-field H that A ∈ F, then we say that H is a sub-sigma-field of F and write
H ⊆ F.
Example 1.5.4 Let Y = f (X), where X is a random variable and f is a Borel function. Then
FY ⊆ FX .
We shall now establish this inclusion. The sigma field generated by a real valued random variable Y, or F_Y, consists of all events of the form {ω : Y(ω) ∈ A} ∈ F, A ∈ B. Now
{ω : Y(ω) ∈ A} = {ω : f(X(ω)) ∈ A} = {ω : X(ω) ∈ f^{-1}(A)}.
Since f^{-1}(A) is a Borel set, we have by definition of F_X that {ω : X(ω) ∈ f^{-1}(A)} ∈ F_X. Therefore we have shown that every event in F_Y is also in F_X, and this finishes the proof of the inclusion F_Y ⊆ F_X.
The result is natural, as events involving Y are in fact events determined by X. If f(x) is invertible on the whole of its domain of definition, then clearly F_Y = F_X.
Theorem 1.5.5 (Doob-Dynkin) Let X be a real valued random variable and let Y be another real valued random variable such that Y is σ(X)-measurable, or,
{ω : Y(ω) ∈ A} ∈ σ(X)
for all A in the Borel σ algebra over R. Then there is a (Borel) function H(x) (definition 1.3.4) such that
Y = H(X).
Proof is omitted, and is not trivial. The interested student can find one proof in [63, thm 23.2].
Note that our preceding efforts pay a dividend here: (−∞, x] is a Borel event, and as X is a random variable, {ω ∈ Ω | X(ω) ∈ (−∞, x]} is an event in F, and therefore we may rest assured that P({ω ∈ Ω | X(ω) ∈ (−∞, x]}) is defined. In the chapters to follow it will contribute to clarity of thought to indicate the random variable connected to the distribution function, so we shall be writing there
F_X(x) = P({ω ∈ Ω | X(ω) ∈ (−∞, x]}) = P(X ≤ x).
Remark 1.5.2 In statistical physics, see, e.g., [17], a distribution function often pertains⁵ to a different concept. For example, the distribution function of the velocities v of molecules in a gas is the fraction, f(v)dv, of molecules with velocities between v and v + dv, and is shown in [17, p. 48] or [18] to be
f(v)dv ∝ e^{−mv²/k_B T} dv,   (1.9)
where m is the mass, k_B is the Boltzmann constant, and T is the temperature. In probability theory's terms f(v) is obviously the probability density of the velocity. The density above will be re-derived in section 11.3 using an explicit and well defined random process, known as the Ornstein-Uhlenbeck process.
A distribution function F has the following properties:
1. F is non decreasing,
2. lim_{x→−∞} F(x) = 0 and lim_{x→+∞} F(x) = 1,
3. F is right continuous.
Proof Clear.
Theorem 1.5.7 If F satisfies 1., 2. and 3. above, then it is the distribution function of a random variable.
Proof Consider Ω = (0, 1) and P the uniform distribution, which means that P((a, b]) = b − a, for 0 ≤ a < b ≤ 1. Set
X(ω) = sup{y : F(y) < ω}.
Firstly, notice that if ω ≤ F(x), then X(ω) ≤ x, since x ∉ {y : F(y) < ω}. Next, suppose ω > F(x). Since F is right continuous, there is an ε > 0 such that F(x + ε) < ω. Therefore, X(ω) ≥ x + ε > x. Hence {ω : X(ω) ≤ x} = {ω : ω ≤ F(x)}, and thus P(X ≤ x) = F(x), so F is the distribution function of X.
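The construction in the proof is the inverse transform method. The following Python sketch instantiates it for the exponential distribution F(x) = 1 − e^{−x}, x ≥ 0, where sup{y : F(y) < ω} has the explicit form −log(1 − ω); the choice of F is an illustrative assumption, not taken from the text:

```python
import random
import math

# X(omega) = sup{y : F(y) < omega} for F(x) = 1 - exp(-x), evaluated in
# closed form as -log(1 - omega). Sampling omega uniformly on (0, 1) then
# yields a random variable with distribution function F.
def X(omega):
    return -math.log(1.0 - omega)

random.seed(1)
samples = [X(random.random()) for _ in range(100_000)]

# Empirical check: the fraction of samples <= x approximates F(x).
for x in (0.5, 1.0, 2.0):
    empirical = sum(s <= x for s in samples) / len(samples)
    assert abs(empirical - (1 - math.exp(-x))) < 0.01
```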
Next we define lim inf_{n→+∞} X_n and lim sup_{n→+∞} X_n. For the definitions of lim inf_{n→+∞} x_n and lim sup_{n→+∞} x_n for sequences of real numbers (x_n)_{n=1}^∞ we refer to Appendices 1.9 and 1.10. Here
lim inf_{n→+∞} X_n = sup_n inf_{m≥n} X_m   (1.10)
and
lim sup_{n→+∞} X_n = inf_n sup_{m≥n} X_m.   (1.11)
If the X_n are random variables, then inf_n X_n, sup_n X_n, lim inf_{n→+∞} X_n and lim sup_{n→+∞} X_n are random variables as well.
Proof Provided F is a σ-algebra, it follows that {inf_n X_n < a} = ∪_{n=1}^∞ {X_n < a} ∈ F. Now, the sets (−∞, a) are in the Borel sigma algebra, proving that inf_n X_n is measurable. Similarly, {sup_n X_n > a} = ∪_{n=1}^∞ {X_n > a} ∈ F. For the last two statements, the conclusion is clear in view of (1.10) and (1.11) and by what was just found.
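Formulas (1.10) and (1.11) can be evaluated numerically for a concrete sequence. The sketch below uses the deterministic sequence x_m = (−1)^m + 1/m (an illustrative choice), whose lim sup is 1 and lim inf is −1; the index n is restricted to n ≤ n_max so that truncation at N does not distort the tails:

```python
# Numerical illustration of (1.10)-(1.11) for x_m = (-1)^m + 1/m.
N, n_max = 10_000, 5_000
x = [(-1) ** m + 1.0 / m for m in range(1, N + 1)]

# Tail suprema/infima sup_{m>=n} x_m and inf_{m>=n} x_m, computed as
# running max/min from the right end of the (truncated) sequence.
tail_sup, tail_inf = [0.0] * N, [0.0] * N
cur_max, cur_min = float("-inf"), float("inf")
for i in range(N - 1, -1, -1):
    cur_max, cur_min = max(cur_max, x[i]), min(cur_min, x[i])
    tail_sup[i], tail_inf[i] = cur_max, cur_min

lim_sup = min(tail_sup[:n_max])  # inf_n sup_{m>=n}, as in (1.11)
lim_inf = max(tail_inf[:n_max])  # sup_n inf_{m>=n}, as in (1.10)

assert abs(lim_sup - 1.0) < 1e-3
assert abs(lim_inf + 1.0) < 1e-3
```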
Recall that two events A and B are independent if
P(A ∩ B) = P(A) · P(B).
We shall now see that we can exploit this to define independence of random variables and of sigma fields.
Definition 1.6.1 Assume that we have a probability space (Ω, F , P) and random variables X and Y on it.
• Two sigma fields H ⊆ F and G ⊆ F are independent if any two events A ∈ H and B ∈ G are independent,
i.e.,
P(A ∩ B) = P(A)P(B).
• Two random variables X and Y are independent, if the sigma-algebras generated by them, FX and
FY , respectively, are independent.
It follows that two random variables X and Y are independent, if and only if
P (X ∈ A, Y ∈ B) = P (X ∈ A) · P (Y ∈ B)
for all Borel sets A and B. In particular, if we take A = (−∞, x] and B = (−∞, y], we obtain for all x ∈ R,
y∈R
FX,Y (x, y) = P (X ∈ (−∞, x], Y ∈ (−∞, y]) = FX (x) · FY (y), (1.12)
which is the familiar definition of independence for X and Y , see, e.g., [15], in terms of distribution functions.
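The factorization (1.12) can be observed empirically. The following Python sketch draws X and Y independently (uniform on (0, 1), an arbitrary illustrative choice) and compares the empirical joint distribution function with the product of the marginals:

```python
import random

# Empirical check of (1.12): for independent X, Y the joint distribution
# function factorizes, F_{X,Y}(x, y) = F_X(x) * F_Y(y).
random.seed(2)
N = 200_000
xs = [random.random() for _ in range(N)]
ys = [random.random() for _ in range(N)]  # drawn independently of xs

x0, y0 = 0.3, 0.7
F_joint = sum(1 for x, y in zip(xs, ys) if x <= x0 and y <= y0) / N
F_x = sum(1 for x in xs if x <= x0) / N
F_y = sum(1 for y in ys if y <= y0) / N

# F_joint should be close to F_x * F_y (true value 0.3 * 0.7 = 0.21).
assert abs(F_joint - F_x * F_y) < 0.01
```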
Theorem 1.6.1 Let X and Y be independent random variables and f and g be two Borel functions. Then
f (X) and g(Y ) are independent.
Proof Set U = f (X) and V = g(Y ). These are random variables by theorem 1.5.2. Then
FU ⊆ FX , FV ⊆ FY ,
as shown in example 1.5.4. Thus, if we take any A ∈ F_U and any B ∈ F_V, it holds that A ∈ F_X and B ∈ F_Y. But F_X and F_Y are independent sigma fields, since X and Y are independent. Therefore it holds for every set A ∈ F_U and every B ∈ F_V that P(A ∩ B) = P(A)P(B), and therefore F_U and F_V are independent, and this means that U = f(X) and V = g(Y) are independent, as was asserted.
If we have two independent random variables X and Y that have the same distribution (i.e., F_X(x) = F_Y(x) for all x), we say that X and Y are independent, identically distributed r.v.'s and abridge this with I.I.D. The same terminology extends to X_1, . . . , X_n being I.I.D. r.v.'s. Sequences of I.I.D. r.v.'s will be a main theme in the sequel.
1.7 The Borel-Cantelli Lemmas

Let (A_n)_{n=1}^∞ be a sequence of events in F and set
G_n = ∩_{k=n}^∞ A_k,   F_n = ∪_{k=n}^∞ A_k.   (1.13)
If G_n in (1.13) occurs, this means that all A_k for k ≥ n occur. If there is some such n, this means in other words that from this n on all A_k occur for k ≥ n. With
H = ∪_{n=1}^∞ G_n = ∪_{n=1}^∞ ∩_{k=n}^∞ A_k
we have that if H occurs, then there is an n such that all A_k with k ≥ n occur. Sometimes we denote H with lim inf A_k.
The fact that F_n occurs implies that there is some A_k for k ≥ n which occurs. If F_n in (1.13) occurs for all n, this implies that infinitely many of the A_k:s occur. We form therefore
E = ∩_{n=1}^∞ F_n = ∩_{n=1}^∞ ∪_{k=n}^∞ A_k.   (1.14)
If E occurs, then infinitely many of the A_k:s occur. Sometimes we write this as E = {A_n i.o.}, where i.o. is to be read as "infinitely often", i.e., infinitely many times. E is sometimes denoted with lim sup A_k, c.f. Appendix 1.10.
Lemma 1.7.1 (Borel-Cantelli lemma) If Σ_{n=1}^∞ P(A_n) < ∞, then it holds that P(E) = P(A_n i.o.) = 0, i.e., that with probability 1 only finitely many A_n occur.
Proof One notes that F_n is a decreasing sequence of events. This is simply so because
F_n = ∪_{k=n}^∞ A_k = A_n ∪ (∪_{k=n+1}^∞ A_k) = A_n ∪ F_{n+1},
and thus
F_n ⊃ F_{n+1}.
Thus the theorem 1.4.9 above gives
P(E) = P(∩_{n=1}^∞ F_n) = lim_{n→∞} P(F_n) = lim_{n→∞} P(∪_{k=n}^∞ A_k) ≤ lim_{n→∞} Σ_{k=n}^∞ P(A_k) = 0,
since the tail sums of the convergent series Σ_{n=1}^∞ P(A_n) tend to zero.
One can observe that no form of independence is required; the proposition holds in general, i.e., for any sequence of events.
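A simulation may make the lemma concrete. In the sketch below the events are A_n = {U_n ≤ 1/n²} with independent uniforms U_n (an illustrative choice), so that Σ P(A_n) = π²/6 < ∞, and only a handful of events occur in each run:

```python
import random

# Borel-Cantelli, summable case: P(A_n) = 1/n^2, sum over n is pi^2/6.
# The lemma predicts that with probability 1 only finitely many A_n occur.
random.seed(3)
N = 50_000

counts = []
for _ in range(10):
    occurred = [n for n in range(1, N + 1) if random.random() <= 1.0 / n ** 2]
    counts.append(len(occurred))

# The expected number of occurrences per run is about sum 1/n^2 ~ 1.64,
# so counts stay very small.
assert all(c < 50 for c in counts)
```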
A counterpart to the Borel-Cantelli lemma is obtained if we assume that the events A_1, A_2, . . . are independent.

Lemma 1.7.2 If the events A_1, A_2, . . . are independent and
Σ_{n=1}^∞ P(A_n) = ∞,
then it holds that P(E) = P(A_n i.o.) = 1, i.e., it holds with probability 1 that infinitely many A_n occur.

Proof By independence and the elementary inequality 1 − x ≤ e^{−x},
P(∩_{k=n}^N A_k^c) = Π_{k=n}^N (1 − P(A_k)) ≤ exp(−Σ_{k=n}^N P(A_k)).
If now Σ_{n=1}^∞ P(A_n) = ∞, then the sum in the exponent diverges as N → ∞ and we obtain
P(∩_{k=n}^∞ A_k^c) = 0.
Hence P(F_n) = P(∪_{k=n}^∞ A_k) = 1 for every n, and by theorem 1.4.9, P(E) = lim_{n→∞} P(F_n) = 1.
\[ P(X = x_i) \equiv P(\{\omega \in \Omega \mid X(\omega) = x_i\}), \qquad p_X(x_i) = P(X = x_i). \]
The definition in (1.15) clearly depends only on the probability measure P for given X.
Now, we want to interpret E[X] in (1.15) as an integral of X and use this inspirational recipe for non-simple X. For this we must develop a more general or powerful concept of integration than the one incorporated in the Riemann integral treated in basic integral and differential calculus.
Here is an outline. We consider first an arbitrary nonnegative random variable X ≥ 0. Then we can find (see below) an infinite sequence of simple random variables $X_1, X_2, \dots$, such that
• for all ω ∈ Ω
\[ X_1(\omega) \le X_2(\omega) \le \dots \quad \text{and} \quad X_n(\omega) \to X(\omega) \]
as n → ∞.
Then E[Xn] is defined for each n and is non-decreasing, and has a limit E[Xn] ↑ C ∈ [0, ∞] as n → ∞. The limit C defines E[X]. Thus
\[ E[X] \stackrel{\text{def}}{=} \lim_{n\to\infty} E[X_n]. \quad (1.16) \]
This is well defined, and it can happen that E [X] = +∞. Let us take a look at the details of this procedure.
The discussion of these details in the rest of this section can be skipped, as the issues involved will NOT be actively examined, but is recommended for the specially interested.
For an arbitrary nonnegative random variable X ≥ 0 we define the simple random variable Xn, n ≥ 1, as (an electrical engineer might think of this as a digitalized signal)
\[ X_n(\omega) = \begin{cases} \dfrac{k}{2^n} & \text{if } \dfrac{k}{2^n} \le X(\omega) < \dfrac{k+1}{2^n},\ k = 0, 1, 2, \dots, n2^n - 1 \\ n & \text{else.} \end{cases} \]
This means that we partition for each n the range of X (not its domain!), $\mathbf{R}_+ \cup \{0\}$, the nonnegative real line, so that [0, n[ is partitioned into $n2^n$ disjoint intervals of the form
\[ E_{n,k} = \Bigl[\frac{k}{2^n}, \frac{k+1}{2^n}\Bigr), \]
and the rest of the range is in $E_n = [n, \infty]$. Then we see that
\[ |X_n(\omega) - X(\omega)| \le \frac{1}{2^n} \quad \text{if } X(\omega) < n \quad (1.17) \]
and
\[ X_n(\omega) = n, \quad \text{if } X(\omega) \ge n. \quad (1.18) \]
When we next go over to n + 1, [0, n+1[ is partitioned into intervals of the form $E_{n+1,k} = \bigl[\frac{k}{2^{n+1}}, \frac{k+1}{2^{n+1}}\bigr)$. But then it is clear that $X_n \le X_{n+1}$. We show this for each ω. First, if $X_n(\omega) = \frac{k}{2^n}$, then by (1.19) either $X_{n+1}(\omega) = \frac{2k}{2^{n+1}} = \frac{k}{2^n}$ or $X_{n+1}(\omega) = \frac{2k+1}{2^{n+1}} > \frac{2k}{2^{n+1}} = \frac{k}{2^n}$, and thus $X_n(\omega) \le X_{n+1}(\omega)$. If $X_n(\omega) = n$, then $X_n(\omega) \le X_{n+1}(\omega)$. By this and by (1.17)–(1.18) we see that for each ω, $X_n(\omega) \uparrow X(\omega)$ as n → ∞.
Then
\[ E[X_n] = \sum_{k=0}^{n2^n - 1} \frac{k}{2^n}\, P(X \in E_{n,k}) + nP(X \ge n) = \sum_{k=0}^{n2^n - 1} \frac{k}{2^n} \Bigl( F_X\Bigl(\frac{k+1}{2^n}\Bigr) - F_X\Bigl(\frac{k}{2^n}\Bigr) \Bigr) + nP(X \ge n). \]
We write this (for some omitted details see eq. (1.35) in an exercise of section 1.12.3) as
\[ E[X_n] = \sum_{k=0}^{n2^n - 1} \int_{\{\omega \mid X_n(\omega) \in E_{n,k}\}} X_n(\omega)\,P(d\omega) + \int_{\{\omega \mid X_n(\omega) \in E_n\}} X_n(\omega)\,P(d\omega), \]
and since $\{\omega \mid X_n(\omega) \in E_{n,k}\}_{k=0}^{n2^n-1}$, $\{\omega \mid X_n(\omega) \in E_n\}$ is a partition of Ω, we get
\[ E[X_n] = \int_{\Omega} X_n(\omega)\,dP(\omega). \]
Then it is seen, since $X_n(\omega) \le X_{n+1}(\omega)$ for all ω and since $E_{n,k} = E_{n+1,2k} \cup E_{n+1,2k+1}$, that
\[ E[X_n] \le E[X_{n+1}]. \]
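The dyadic staircase construction is easy to compute. The following minimal sketch (illustrative, not from the text) evaluates $X_n$ as a function of the value $x = X(\omega)$ and exhibits the monotone convergence $X_n \uparrow X$ guaranteed by (1.17)–(1.18).

```python
import math

def dyadic_approx(x, n):
    """The staircase value X_n(w) of the construction, as a function of
    x = X(w): floor to the grid k/2^n on [0, n), clip to n when x >= n."""
    if x >= n:
        return float(n)
    return math.floor(x * 2**n) / 2**n

x = math.pi  # an arbitrary nonnegative value of X(w)
approximations = [dyadic_approx(x, n) for n in range(1, 12)]
# The sequence is non-decreasing and within 2^-n of x once n exceeds x:
print(approximations)
```

The first values are clipped at n (since π ≥ n for n ≤ 3); from n = 4 on the approximation is within $2^{-n}$ of π, exactly as (1.17) states.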
For a Borel set $A \subset \mathbf{R}$, the indicator function
\[ \chi_A(x) = \begin{cases} 1 & x \in A \\ 0 & x \notin A \end{cases} \]
is a Borel function. Let X be a random variable. Then $\chi_A(X)$ is a random variable by theorem 1.5.2 and is nonnegative and simple. We get
\[ E[\chi_A(X)] = 0 \cdot P(X \in A^c) + 1 \cdot P(X \in A) = P(X \in A). \]
We shall define E[X] for an arbitrary random variable in the next section.
Let F = F_X be the distribution function associated with X; then this may be rewritten as
\[ E[X] = \int_0^{\infty} x\,dF(x), \]
as follows by the considerations in the preceding example 1.8.1.
For a real valued random variable, its expectation exists if $E[X^+] := \int_0^{\infty} x\,F(dx) < +\infty$ and $E[X^-] := -\int_{-\infty}^{0} x\,F(dx) < +\infty$. Then the expectation is given by
\[ E[X] = E[X^+] - E[X^-]. \]
Theorem 1.8.2 Let X be a random variable and g a Borel function such that E[g(X)] < ∞. Then
\[ E[g(X)] = \int_{\Omega} g(X)\,dP = \int_{-\infty}^{\infty} g(x)\,dF(x). \quad (1.22) \]
Proof We follow [103, p. 317]. We assume that g(x) ≥ 0, since otherwise we can use the decomposition into the positive and negative parts $g^+$ and $g^-$ as shown above. We assume in addition that g is simple, i.e., there are Borel sets $G_1, G_2, \dots, G_m$ that are a partition of $\mathbf{R}$ such that
\[ g(x) = g_k, \quad \text{if } x \in G_k,\ k = 1, 2, \dots, m, \]
and $\cup_{k=1}^{m} G_k = \mathbf{R}$. We can use the construction in (1.15) with Y = g(X):
\[ E[Y] = \sum_{k=1}^{m} g_k P(Y = g_k). \quad (1.23) \]
Here
{Y = gk } = {ω | g(X(ω)) = gk } = {ω | X(ω) ∈ Gk } = {X ∈ Gk }.
And thus
P (Y = gk ) = P (X ∈ Gk ) .
Hence in (1.23)
\[ E[Y] = \sum_{k=1}^{m} g_k P(X \in G_k) = \sum_{k=1}^{m} \int_{\{X \in G_k\}} g(X(\omega))\,dP(\omega) = \int_{\Omega} g(X(\omega))\,dP(\omega), \]
where we used the result (1.35) in the exercises of this chapter, since $(\{X \in G_k\})_{k=1}^{m}$ is a partition of Ω, and thus
\[ E[Y] = \int_{\Omega} g(X(\omega))\,dP(\omega). \quad (1.24) \]
On the other hand, the discussion in example 1.8.1 and the expression (1.21) tell us that
\[ P(X \in G_k) = \int_{G_k} dF_X(x), \]
and thus
\[ E[Y] = \sum_{k=1}^{m} g_k P(X \in G_k) = \sum_{k=1}^{m} g_k \int_{G_k} dF_X(x) = \sum_{k=1}^{m} \int_{G_k} g(x)\,dF_X(x) = \int_{\mathbf{R}} g(x)\,dF_X(x), \]
and thus
\[ E[Y] = \int_{\mathbf{R}} g(x)\,dF_X(x). \]
Hence we have established the law of the unconscious statistician for non negative and simple g. The general
statement follows by approximating a non negative g by simple functions (see the preceding) and then using
g + and g − .
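The point of the law of the unconscious statistician is that $E[g(X)]$ can be computed directly against the distribution of X, without ever finding the distribution of g(X). A small numerical check (illustrative, not from the text; the choice X ~ Exp(1), g(x) = x² is an assumption) compares the integral in (1.22) with a direct Monte Carlo estimate.

```python
import numpy as np

# Check of E[g(X)] = int g(x) f_X(x) dx for X ~ Exp(1), f(x) = e^{-x},
# and g(x) = x^2; the exact value is E[X^2] = 2.
g = lambda x: x**2

xs = np.linspace(0.0, 50.0, 500_001)          # grid for numerical integration
lhs = np.sum(g(xs) * np.exp(-xs)) * (xs[1] - xs[0])

rng = np.random.default_rng(0)
mc = g(rng.exponential(size=200_000)).mean()  # direct Monte Carlo E[g(X)]
print(lhs, mc)
```

Both values agree with E[X²] = 2, even though the density of X² itself was never computed.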
Theorem 1.8.3 (Jensen's Inequality) Let φ be a convex function and X a random variable with E[|X|] < ∞. Then
\[ E[\varphi(X)] \ge \varphi(E[X]). \]
Proof Let c = E[X] and let l(x) = ax + b, where a and b are such that l(c) = φ(c) and φ(x) ≥ l(x) for all x (such a supporting line exists by convexity). Then
\[ E[\varphi(X)] \ge E[l(X)] = aE[X] + b = l(c) = \varphi(E[X]). \]

Theorem 1.8.4 (Hölder's Inequality) If $p, q \in [1, \infty]$ with $\frac{1}{p} + \frac{1}{q} = 1$, then
\[ E[|XY|] \le E[|X|^p]^{1/p}\, E[|Y|^q]^{1/q}. \]
Proof By dividing through by $E[|X|^p]^{1/p} E[|Y|^q]^{1/q}$, one may consider the case $E[|X|^p]^{1/p} = E[|Y|^q]^{1/q} = 1$. Furthermore, we use the notation
\[ \|X\|_p \stackrel{\text{def}}{=} E[|X|^p]^{1/p}. \]
In chapter 7 we shall be specially interested in $E[|X|^2]^{1/2}$.
Consider, for $x, y \ge 0$, the function $\varphi(x, y) = \frac{1}{p}x^p + \frac{1}{q}y^q - xy$. Then
\[ \varphi_x(x, y) = x^{p-1} - y \]
and
\[ \varphi_{xx}(x, y) = (p-1)x^{p-2}. \]
For fixed y, it follows that φ(x, y) has a minimum (in x) at $x_0 = y^{1/(p-1)}$. Note that $x_0^p = y^{p/(p-1)} = y^q$, so that
\[ \varphi(x_0, y) = \Bigl(\frac{1}{p} + \frac{1}{q}\Bigr) y^q - y^{1/(p-1)}\, y = 0. \]
Since $x_0$ is a minimum, it follows that $xy \le \frac{1}{p}x^p + \frac{1}{q}y^q$. Setting x = |X|, y = |Y| and taking expectations yields
\[ E[|XY|] \le \frac{1}{p} + \frac{1}{q} = 1 = \|X\|_p \|Y\|_q. \]
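The inequality can be checked empirically; in fact it holds exactly under the empirical distribution of any sample. The sketch below (illustrative only; the choice p = 3, q = 3/2 and the deliberately dependent pair is an assumption) compares the two sides.

```python
import numpy as np

# Sanity check of Hoelder's inequality E|XY| <= ||X||_p ||Y||_q,
# with 1/p + 1/q = 1, here p = 3, q = 3/2, for dependent X and Y.
rng = np.random.default_rng(2)
p, q = 3.0, 1.5

z = rng.standard_normal(100_000)
x, y = z, z**2 + 1.0          # deliberately dependent random variables

lhs = np.mean(np.abs(x * y))
rhs = np.mean(np.abs(x)**p)**(1/p) * np.mean(np.abs(y)**q)**(1/q)
print(lhs, rhs)
```

Since the sample average is an expectation under the empirical measure, the inequality holds for the sample exactly, not merely up to sampling error.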
Theorem 1.8.5 (Chebychev's Inequality) Suppose that $\varphi : \mathbf{R} \to \mathbf{R}_+$. Let $i_A = \inf\{\varphi(y) : y \in A\}$. Then for any measurable set A,
\[ i_A \, P(X \in A) \le E[\varphi(X)]. \]
Proof Exercise
One example of Chebychev's inequality as stated above is
\[ P(|X - E[X]| > a) \le \frac{\text{Var}[X]}{a^2}. \quad (1.27) \]
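The bound (1.27) is easy to examine numerically. The sketch below (illustrative, not from the text; the choice X ~ U(0, 1) is an assumption) compares the empirical tail probability with the bound for several values of a.

```python
import numpy as np

# Empirical check of (1.27): P(|X - E X| > a) <= Var[X] / a^2,
# here for X ~ U(0, 1), where Var[X] = 1/12.
rng = np.random.default_rng(3)
x = rng.uniform(size=1_000_000)
mean, var = x.mean(), x.var()

results = [(a, float(np.mean(np.abs(x - mean) > a)), var / a**2)
           for a in (0.2, 0.3, 0.4)]
for a, tail, bound in results:
    print(a, tail, bound)   # for U(0,1) the exact tail is 1 - 2a
```

Because the sample mean and variance are moments of the empirical distribution, the inequality holds exactly for the sample, though the bound is typically far from tight.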
We say that a property, described by an event A, for a random variable X holds almost surely, if P(A) = 1.
Theorem 1.8.8 (Dominated Convergence Theorem) If Xn → X almost surely, and |Xn | < Y for all n
and E[| Y |] < +∞, then E[Xn ] → E[X].
Consider, e.g., the sequence
\[ x_n = \Bigl(1 + \frac{1}{n}\Bigr)^n. \]
The limit superior
\[ b = \limsup_{n\to\infty} x_n \]
is characterized by the following two properties:
1) For every c > b there is an integer N such that
\[ n > N \Rightarrow x_n < c. \]
2) For every c′ < b and for every integer N there is an n > N such that
\[ x_n > c'. \]
In other words, for any ε > 0 only finitely many of the $x_n$ can be larger than b + ε. Also, there are infinitely many $x_n$ larger than b − ε.
The limit inferior
\[ a = \liminf_{n\to\infty} x_n \]
is characterized analogously:
1) For every c′ > a and for every integer N there is an n > N such that
\[ x_n < c'. \]
2) For every c < a there is an integer N such that
\[ n > N \Rightarrow x_n > c. \]
In other words, for any ε > 0 there are infinitely many $x_n$ smaller than a + ε. Only finitely many of the $x_n$ can be smaller than a − ε.
Hence, for any ε > 0, all but finitely many $x_n$ satisfy
\[ a - \varepsilon \le x_n \le b + \varepsilon. \]
But if a = b, then this yields the definition of a limit x (= a = b) of $(x_n)_{n=1}^{\infty}$.
Example 1.9.1
\[ x_n = (-1)^n \Bigl(1 + \frac{1}{n}\Bigr), \quad n = 1, 2, 3, \dots. \]
Then
\[ \liminf_{n\to\infty} x_n = -1, \qquad \limsup_{n\to\infty} x_n = 1. \]
We show the latter.
1) If c > 1, take an integer $N \ge \frac{1}{c-1}$. Then if n > N
\[ x_n \le 1 + \frac{1}{n} < c. \]
2) If c′ < 1 and N is an integer, then if n = 2k and 2k > N, then
\[ x_n = x_{2k} = (-1)^{2k}\Bigl(1 + \frac{1}{2k}\Bigr) = 1 + \frac{1}{2k} > c'. \]
Hence
\[ \limsup_{n\to\infty} x_n = 1. \]
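The characterization by tail suprema and infima can be made concrete numerically. The sketch below (illustrative, not from the text) computes $\sup_{n > m} x_n$ and $\inf_{n > m} x_n$ for the example sequence $x_n = (-1)^n(1 + 1/n)$ and growing m; the former decreases toward 1 and the latter increases toward −1.

```python
import numpy as np

# Tail suprema/infima of x_n = (-1)^n (1 + 1/n):
# lim sup x_n = 1, lim inf x_n = -1.
n = np.arange(1, 10_001)
x = (-1.0)**n * (1.0 + 1.0 / n)

tail_sup = [float(x[m:].max()) for m in (0, 10, 100, 1000)]
tail_inf = [float(x[m:].min()) for m in (0, 10, 100, 1000)]
print(tail_sup)   # decreases toward lim sup = 1
print(tail_inf)   # increases toward lim inf = -1
```

This is of course only a finite-horizon illustration of the definitions, not a proof.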
For a sequence of events $A_1, A_2, \dots$ we define
\[ \limsup_{n\to\infty} A_n \stackrel{\text{def}}{=} \cap_{n=1}^{\infty} \cup_{m=n}^{\infty} A_m \quad (1.28) \]
and
\[ \liminf_{n\to\infty} A_n \stackrel{\text{def}}{=} \cup_{n=1}^{\infty} \cap_{m=n}^{\infty} A_m. \quad (1.29) \]
Then $\limsup_{n\to\infty} A_n \in \mathcal{A}$ and $\liminf_{n\to\infty} A_n \in \mathcal{A}$ (you should convince yourself of this). Clearly
\[ \liminf_{n\to\infty} A_n \subset \limsup_{n\to\infty} A_n. \]
If $\liminf_{n\to\infty} A_n = \limsup_{n\to\infty} A_n$, we say that the limit $\lim_{n\to\infty} A_n$ exists and equals the common event. If $\liminf_{n\to\infty} A_n$ occurs, we say (clearly?) that An happens ultimately (i.e., for all but finitely many n). Then (a quiz for self-studies)
\[ \chi_{\cap_{n=1}^{\infty} \cup_{m=n}^{\infty} A_m} = \limsup_{n\to\infty} \chi_{A_n} \]
and
\[ \chi_{\cup_{n=1}^{\infty} \cap_{m=n}^{\infty} A_m} = \liminf_{n\to\infty} \chi_{A_n}. \]
\[ n! = n \cdot (n-1) \cdots 3 \cdot 2 \cdot 1, \]
with the convention 0! = 1, is the number of ways of ordering n items in a collection. Then we define the expression $(n)_k$ for integers 0 ≤ k ≤ n by
\[ (n)_k \stackrel{\text{def}}{=} n \cdot (n-1) \cdots (n-k+1). \]
It is seen that $(n)_k$ equals the number of ways to pick k items from the collection of n items without replacement and with the order of the items in the subcollection taken into account.
Let P(n, k) be the number of ways to pick k items from the collection of n items without replacement and without the order of the items in the subcollection taken into account. Then by the multiplication principle $(n)_k = P(n, k) \cdot k!$, so that
\[ P(n, k) = \frac{(n)_k}{k!} = \binom{n}{k}. \]
Consider distributing k indistinguishable objects into n cells, and let $k_i$ denote the number of objects in cell i, so that
\[ k_1 + k_2 + \dots + k_n = k. \]
We call the $k_i$'s occupation numbers. We define the occupancy distribution by the n-tuple $(k_1, k_2, \dots, k_n)$. Two ways of distributing k indistinguishable objects into cells are called indistinguishable, if their occupancy distributions are identical. Let us now consider the following fundamental question: how many distinguishable occupancy distributions can be formed by distributing k indistinguishable objects into n cells? The answer is $\binom{n+k-1}{k}$. To prove this, we use a device invented by William Feller. This consists of representing the n cells by the spaces between n + 1 bars and the balls in the cells by stars between the bars. Thus an occupancy distribution corresponds to an arrangement of the k stars and the n − 1 inner bars, which can be chosen in $\binom{n+k-1}{k}$ ways, and we have established the last of the arrays in the tables above.
In statistical physics one thinks of a phase space subdivided into a large number, n, of regions (cells) and k indistinguishable particles each of which falls into one of the cells. One could guess that each of the possible $n^k$ arrangements of the k particles is equally probable; this is called Maxwell–Boltzmann statistics. If, on the other hand, each of the possible occupancy distributions for the particles is considered equally probable, and no restrictions are made on the number of particles in each cell, then the probability of each distinct arrangement is $1/\binom{n+k-1}{k}$. This is called Bose–Einstein statistics [17, p. 354, Exercise 29.6 (b)]. If the k particles are indistinguishable and one imposes the restriction that no more than one particle can be found in a cell, and the arrangements are equally probable, then the probability of an arrangement is $1/\binom{n}{k}$, and one talks about Fermi–Dirac statistics [17, p. 354, Exercise 29.6 (a)]. The reference [17] loc. cit. shows how one derives the Fermi–Dirac and Bose–Einstein distribution functions (which are not distribution functions in the sense defined above) from these expressions. One needs physical experiments to decide which model of statistics holds for a certain system of particles (e.g., hydrogen atoms, electrons, neutrons, protons). In other words, one cannot argue solely from abstract mathematical principles as to what is to be regarded as equally likely events in reality [53].
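The three counts above can be verified by brute-force enumeration for a small instance. The sketch below (illustrative only; n = 3 cells, k = 2 particles is an arbitrary choice) enumerates all arrangements and collapses them to occupancy distributions.

```python
from itertools import product
from math import comb

# Enumeration check of the Maxwell-Boltzmann, Bose-Einstein and
# Fermi-Dirac counts for k = 2 particles in n = 3 cells.
n, k = 3, 2

arrangements = list(product(range(n), repeat=k))        # distinguishable particles
occupancies = {tuple(sorted(a)) for a in arrangements}  # indistinguishable particles
exclusive = [o for o in occupancies if len(set(o)) == len(o)]  # at most one per cell

print(len(arrangements), n**k)                 # Maxwell-Boltzmann: n^k = 9
print(len(occupancies), comb(n + k - 1, k))    # Bose-Einstein: C(4, 2) = 6
print(len(exclusive), comb(n, k))              # Fermi-Dirac: C(3, 2) = 3
```

Collapsing each arrangement to its sorted cell labels is exactly the passage from arrangements to occupancy distributions described in the text.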
1.12 Exercises
1.12.1 Easy Drills
1. (Ω, F, P) is a probability space. A ∈ F and B ∈ F. P((A ∪ B)^c) = 0.5 and P(A ∩ B) = 0.2. What is the probability that either A or B but not both will occur? (Answer: 0.3).
2. (Ω, F , P) is a probability space. A ∈ F and B ∈ F . If the probability that at least one of them occurs is
0.3 and the probability that A occurs but B does not occur is 0.1, what is P(B) ? (Answer: 0.2).
2. Let Ω = [0, 1). For each of the set functions µ defined below, determine whether µ satisfies the axioms of
probability. 0 ≤ a < b ≤ 1.
1. $\mu([a, b)) = \frac{b-a}{b+a}$.
2. $\mu([a, b)) = b^2 - a^2$.
3. Ω = the non negative integers = {0, 1, 2, . . .}. Since Ω is countable, we can take F = all subsets of Ω. Let
0 < θ < 1 be given. For which values of θ is it possible to give a probability measure P on (Ω, F ) such
that $P(\{i\}) = \theta^i$, i = 0, 1, 2, . . . ?
4. (From [43]) Let Ω = [0, ∞). Let F be the sigma field of subsets of Ω generated by sets of the form (n, n + 1) for n = 1, 2, . . ..
If there is no k such that (k + 1/2) ∈ A, then the sum is taken as zero. Is P a probability measure
on (Ω, F ), and if so, what is the value of c?
(c) Repeat part (b) for F replaced by the Borel sigma field.
(d) Repeat part (b) for F replaced by the power set of Ω.
5. Show that P defined in (1.3) in example 1.4.7 is a countably additive probability measure.
6. Show that if A ⊂ B, then P(B ∩ A^c) = P(B) − P(A).
7. Show that the probability that one and only one of the events A and B occurs is
\[ P(A \triangle B) = P(A) + P(B) - 2P(A \cap B), \]
where $A \triangle B = (A \cap B^c) \cup (A^c \cap B)$. This is known as the symmetric difference of A and B. You should convince yourself of the fact that A△B ∈ F.
The sharp-eyed reader will recognize this as a form of the triangle inequality. One can in fact regard
P (A△B) as a distance or metric on events.
10. Show that if P (A) ≥ 1 − δ and P (B) ≥ 1 − δ, then also P (A ∩ B) ≥ 1 − 2δ. In words, if two events have
probability near to one, then their intersection has probability nearly one.
(a)
\[ P\bigl(\cup_{j=1}^{n} A_j\bigr) \le \sum_{j=1}^{n} P(A_j). \]
(b)
\[ P\bigl(\cap_{j=1}^{n} A_j\bigr) \ge 1 - \sum_{j=1}^{n} (1 - P(A_j)). \]
13. $A_1, A_2, A_3, \dots$, and $A_n$ are independent events. Prove that their complements $A_1^c, A_2^c, A_3^c, \dots$, and $A_n^c$ are independent events, too.
14. Suppose that A ∩ B ⊆ C holds for the events A, B and C. Show that
P (C c ) ≤ P (Ac ) + P (B c ) . (1.33)
15. [68] Suppose that P is a finitely additive probability measure defined on a field G of subsets of a space Ω.
Assume that P is continuous at ∅, i.e., if An ∈ G for all n and An ↓ ∅, then P (An ) ↓ 0. Show that P is a
probability measure on G.
This result is useful, since in applications one often encounters a finitely additive measure on a field G rather than a measure on a σ-field F.
1. Let X be a simple random variable with values $x_1, \dots, x_m$ and
\[ G_i = \{\omega \in \Omega \mid X(\omega) = x_i\}, \quad G_i \cap G_j = \emptyset \text{ for } i \ne j, \quad \Omega = \cup_{i=1}^{m} G_i. \]
Then $\chi_A \cdot X$ is a simple random variable with the range $V_{\chi_A \cdot X}$ containing those $x_i$ for which $G_i \cap A \ne \emptyset$; $V_{\chi_A \cdot X}$ is augmented with zero, if needed. Then we define $\int_A X\,dP$ by
\[ \int_A X\,dP = \int_{\Omega} \chi_A \cdot X\,dP = E[\chi_A \cdot X]. \]
2. Let X be a positive random variable, P(X > 0) = 1, with E[X] < ∞. Show that
\[ E\Bigl[\frac{1}{X}\Bigr] \ge \frac{1}{E[X]}. \quad (1.36) \]
Note that this inequality is trivially true if $E\bigl[\frac{1}{X}\bigr] = +\infty$.
3. Let X and Y be independent random variables. Assume that E [|X|] < ∞, E [|Y |] < ∞, E [|XY |] < ∞.
Show that
E [X · Y ] = E [X] · E [Y ] . (1.37)
We do this in steps, c.f. [13, p. 403]. Our tools for this are the small pieces of integration theory in section
1.8 and the definition 1.6.1.
(b) By means of the item (a) check in detail that (1.37) holds for all simple random variables X and Y .
(c) Explain how you can obtain (1.37) for X ≥ 0 and Y ≥ 0.
Show that
\[ \chi_{\cap_{n=1}^{\infty} \cup_{m=n}^{\infty} A_m} = \limsup_{n\to\infty} \chi_{A_n} \]
and
\[ \chi_{\cup_{n=1}^{\infty} \cap_{m=n}^{\infty} A_m} = \liminf_{n\to\infty} \chi_{A_n}. \]
6. (Markov's inequality) Let X be such that P(X ≥ 0) = 1, i.e., X is almost surely nonnegative. Show that for any c > 0
\[ P(X \ge c) \le \frac{E[X]}{c}. \quad (1.38) \]
Aid: Let A = {ω ∈ Ω | X(ω) ≥ c} and let $\chi_A$ be the corresponding indicator function, i.e.,
\[ \chi_A(\omega) = \begin{cases} 1 & \text{if } \omega \in A \\ 0 & \text{if } \omega \notin A. \end{cases} \]
8. [30] Let $\{A_n\}_{n\ge1}$ be a sequence of pairwise disjoint sets. Show that
lim An ∈ M
n→∞
for any increasing or decreasing sequence of sets {An }n≥1 in M. Show that an algebra M is a sigma
algebra if and only if it is a monotone class. Aid: In one direction the assertion is obvious. In the other
direction, consider Bn = ∪nk=1 Ak .
Show that the intersection $F_1 \cap F_2$ of two sigma fields $F_1$ and $F_2$ of subsets of Ω is a sigma field.
Probability Distributions
2.1 Introduction
In this chapter we summarize, for convenience of reference, items of probability calculus that are in the main
supposed to be already familiar. Therefore the discourse will partly be sketchy and akin in style to a chapter
in a handbook like [92].
We shall first introduce the distinction between continuous and discrete r.v.’s. This is done by specifying
the type of the distribution function. Appendix 2.5 provides a theoretical orientation to distribution functions
and can be skipped at a first reading.
We start by defining the continuous random variables. Let first $f_X(x)$ be a function such that
\[ \int_{-\infty}^{\infty} f_X(x)\,dx = 1, \qquad f_X(x) \ge 0 \text{ for all } x \in \mathbf{R}. \]
Then
\[ F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt \]
is the distribution function of a random variable X, as can be checked by theorem 1.5.7, and as was in advance suggested by the notation. We say that X is a continuous (univariate) random variable. The function $f_X(x)$ is called the probability density function (p.d.f.) of X. In fact (c.f. appendix 2.5) we have for any Borel set A that
\[ P(X \in A) = \int_A f_X(x)\,dx. \]
In this chapter an array of continuous random variables will be defined by means of families of probability
densities fX (x). By families of probability densities we mean explicit expressions of fX (x) that depend on a
finite set of parameters, which assume values in suitable (sub)sets of real numbers. Examples are normal
(Gaussian), exponential, Gamma, etc. distributions.
The parameters will be indicated in the symbolical codes for the distributions, e.g., Exp(a) stands for
the exponential distribution with the parameter a, a > 0. The usage is to write, e.g., X ∈ Exp(a),
when saying that the r.v. X has an exponential distribution with parameter a.
Next we say that X is a discrete (univariate) random variable, if there is a countable (finite or infinite) set
of real numbers $\{x_k\}$ (one frequent example is the nonnegative integers) such that
\[ F_X(x) = \sum_{x_k \le x} p_X(x_k), \quad (2.2) \]
where
\[ p_X(x_k) = P(X = x_k). \]
The function $p_X(x_k)$ is called the probability mass function (p.m.f.) of X. Then it must hold that
\[ \sum_{k=-\infty}^{\infty} p_X(x_k) = 1, \qquad p_X(x_k) \ge 0. \]
Again we shall define discrete random variables by parametric families of distributions (Poisson, Binomial,
Geometric, Waring, etc.).
It is found in appendix 2.5 that there are random variables that are neither continuous nor discrete nor mixed cases of continuous and discrete. In other words, there are distribution functions that do not have either a p.d.f. or a p.m.f. or a mixture of those. A well known standard instance is the Cantor distribution, which is the topic of an exercise to this chapter.
In addition, since the calculus of integrals teaches us that $P(X = x) = \int_x^x f_X(t)\,dt = 0$, there is a foundational difficulty with continuous random variables, likely connected to the nature of real numbers as a description of reality.
If the expectation (or mean) of X, as defined in section 1.8.2, exists, it can be computed by
\[ E[X] = \begin{cases} \sum_{k=-\infty}^{\infty} x_k\, p_X(x_k) & \text{discrete r.v.,} \\ \int_{-\infty}^{\infty} x f_X(x)\,dx & \text{continuous r.v.} \end{cases} \quad (2.3) \]
The law of the unconscious statistician proved in theorem 1.8.2, see (1.22), can now be written as
\[ E[H(X)] = \begin{cases} \sum_{k=-\infty}^{\infty} H(x_k)\, p_X(x_k) & \text{discrete r.v.,} \\ \int_{-\infty}^{\infty} H(x) f_X(x)\,dx & \text{continuous r.v.} \end{cases} \quad (2.4) \]
Thereby we have, with $H(x) = (x - E[X])^2$, the variance, when it exists, expressed by
\[ \text{Var}[X] = E[H(X)] = \begin{cases} \sum_{k=-\infty}^{\infty} (x_k - E[X])^2\, p_X(x_k) & \text{discrete r.v.,} \\ \int_{-\infty}^{\infty} (x - E[X])^2 f_X(x)\,dx & \text{continuous r.v.} \end{cases} \quad (2.5) \]
This formula facilitates computations, and is applicable in both discrete and continuous cases.
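The two cases of (2.3) and (2.5) can be sketched side by side. The example below is illustrative only (a fair die for the discrete case, U(0, 1) for the continuous case, with a crude numerical integral standing in for the exact one).

```python
import numpy as np

# Discrete case of (2.3)/(2.5): fair die, E[X] = 3.5, Var[X] = 35/12.
xk = np.arange(1, 7)
pk = np.full(6, 1 / 6)
mean_d = np.sum(xk * pk)
var_d = np.sum((xk - mean_d)**2 * pk)

# Continuous case: f(x) = 1 on (0, 1), E[X] = 1/2, Var[X] = 1/12,
# computed with a simple Riemann sum standing in for the integral.
x = np.linspace(0.0, 1.0, 1_000_001)
dx = x[1] - x[0]
mean_c = np.sum(x * 1.0) * dx
var_c = np.sum((x - mean_c)**2 * 1.0) * dx
print(mean_d, var_d, mean_c, var_c)
```

The same formula pattern, sum against the p.m.f. or integral against the p.d.f., covers both cases.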
Remark 2.1.1 In the sequel we shall frequently come across Γ(z), which is, for z with positive real part, the Euler gamma function, see [93, p. 302] for a quick reference and [54, ch. 6] for a story,
\[ \Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\,dt. \quad (2.7) \]
We say that X has the uniform distribution on the interval (a, b). The parameters are a and b. We have
\[ E[X] = \frac{a+b}{2}, \qquad \text{Var}[X] = \frac{(b-a)^2}{12}. \]
Frequently encountered special cases are U (0, 1) and U (−1, 1). The uniform distribution has been discussed in
terms of ’complete ignorance’.
Example 2.2.3 (General Triangular Distribution) X ∈ Tri(a, b) means that the p.d.f. of X is
\[ f_X(x) = \begin{cases} \dfrac{2}{b-a}\Bigl(1 - \dfrac{2}{b-a}\Bigl| x - \dfrac{a+b}{2} \Bigr|\Bigr) & a < x < b \\ 0 & \text{elsewhere.} \end{cases} \quad (2.13) \]
\[ E[X] = \frac{a+b}{2}, \qquad \text{Var}[X] = \frac{(b-a)^2}{24}. \]
The univariate normal or Gaussian distribution will be the platform on which to construct the multivariate
Gaussian distribution in chapter 8 and then eventually Gaussian processes in section 9.
Example 2.2.5 (Standard Normal Distribution or Standard Gaussian Distribution) The special case X ∈ N(0, 1) of (2.14) is called the standard normal distribution or the standard Gaussian distribution and its p.d.f. is important enough to have a special symbol reserved for it, namely
\[ \phi(x) \stackrel{\text{def}}{=} \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}, \quad -\infty < x < +\infty. \quad (2.15) \]
The corresponding distribution function is designated by Φ(x), i.e.,
\[ \Phi(x) \stackrel{\text{def}}{=} \int_{-\infty}^{x} \phi(t)\,dt, \quad -\infty < x < +\infty. \quad (2.16) \]
Clearly,
\[ \text{erfc}(x) = 1 - \text{erf}(x). \]
By a change of variable in (2.19) we find that
\[ \Phi(x) = \frac{1}{2}\Bigl(1 + \text{erf}\Bigl(\frac{x}{\sqrt{2}}\Bigr)\Bigr) \]
and
\[ \Phi(x) = \frac{1}{2}\,\text{erfc}\Bigl(-\frac{x}{\sqrt{2}}\Bigr). \]
The distribution function of X ∈ N(0, 1), Φ(x), is often numerically calculated for x > 0 by means of the 'Q-function' or the error function:
\[ Q(x) \stackrel{\text{def}}{=} \int_x^{\infty} \phi(t)\,dt, \qquad \Phi(x) = 1 - Q(x), \quad (2.21) \]
where the following approximation is known to be very accurate
\[ Q(x) \approx \frac{1}{\bigl(1 - \frac{1}{\pi}\bigr)x + \frac{1}{\pi}\sqrt{x^2 + 2\pi}} \cdot \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}. \]
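The accuracy of this approximation is easy to probe, since the exact value is available through the complementary error function, $Q(x) = \frac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$. The comparison below is illustrative only.

```python
import math

# Compare the stated Q-function approximation against the exact value
# Q(x) = (1/2) erfc(x / sqrt(2)).
def q_exact(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_approx(x):
    phi = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    denom = (1.0 - 1.0 / math.pi) * x + (1.0 / math.pi) * math.sqrt(x * x + 2.0 * math.pi)
    return phi / denom

for v in (0.5, 1.0, 2.0, 4.0):
    print(v, q_exact(v), q_approx(v))
```

The approximation is exact at x = 0, asymptotically correct as x → ∞, and its relative error stays within roughly a percent in between.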
Example 2.2.6 (Skew-Normal Distribution) A random variable X is said to have a skew-normal distri-
bution, if it has the p.d.f.
fX (x) = 2φ(x)Φ(λx), −∞ < x < ∞, (2.22)
where the parameter −∞ < λ < ∞ and φ(x) is the p.d.f. of N (0, 1), and Φ(x) is the distribution function of
N (0, 1). We write X ∈ SN (λ) and note by (2.18) that SN (0) = N (0, 1). We have two plots of fX (x) in figure
2.1.
If λ → ∞, then $f_X(x)$ converges (pointwise) to
\[ f_X(x) = \begin{cases} 2\phi(x) & \text{if } x \ge 0 \\ 0 & \text{if } x < 0, \end{cases} \]
which is another folded normal distribution.
which is another folded normal distribution. The mean and variance as well as other properties of SN (λ) are
established in the exercises to this chapter and to a later chapter.
Example 2.2.7 (Exponential Distribution) X ∈ Exp(λ), λ > 0, and the p.d.f. is
\[ f_X(x) = \begin{cases} \frac{1}{\lambda}\, e^{-x/\lambda} & 0 \le x \\ 0 & x < 0. \end{cases} \quad (2.23) \]
Figure 2.1: The p.d.f.’s of SN (−3) (the left hand function graph) and SN (3) (the right hand function graph).
Example 2.2.8 (Laplace Distribution) X ∈ L(a), a > 0, means that X is a continuous r.v. and that its p.d.f. is
\[ f_X(x) = \frac{1}{2a}\, e^{-|x|/a}, \quad -\infty < x < +\infty. \quad (2.24) \]
We say that X is Laplace -distributed with parameter a.
The distribution in this example is for obvious reasons also known in the literature as the Double Exponential
distribution. We shall in the sequel provide exercises generating the Laplace distribution as the distribution of
difference between two independent exponential r.v.’s.
Example 2.2.9 (Gamma Distribution) Let X ∈ Γ(p, a), p > 0, a > 0. The p.d.f. is
\[ f_X(x) = \begin{cases} \dfrac{1}{\Gamma(p)\,a^p}\, x^{p-1} e^{-x/a} & 0 \le x \\ 0 & x < 0. \end{cases} \quad (2.25) \]
Example 2.2.10 (Erlang Distribution) The special case Γ (k, a) of the Gamma distribution, where k is a
positive integer, is known as the Erlang distribution, say Erlang (k, a) 3 .
Example 2.2.11 (Weibull Distribution) Let X ∈ Wei(α, β), α > 0, β > 0. The p.d.f. is
\[ f_X(x) = \begin{cases} \dfrac{\alpha}{\beta^{\alpha}}\, x^{\alpha-1} e^{-(x/\beta)^{\alpha}} & 0 \le x \\ 0 & x < 0. \end{cases} \quad (2.26) \]
Here α is the shape parameter, β > 0 is the scale parameter. Note that Exp (a) = Wei (1, a). The
exponential distribution is thus a special case of both the Gamma distribution and the Weibull distribution.
There are, however, Gamma distributions that are not Weibull distributions and vice versa. The distribution
was invented by Waloddi Weibull4 .
" 2 #
1 2 2 1
E [X] = βΓ 1 + , Var [X] = β Γ 1 + − Γ 1+ .
α α α
In fracture mechanics one finds the three parameter Weibull distribution Wei(α, β, θ) with the p.d.f.
\[ f_X(x) = \begin{cases} \dfrac{\alpha}{\beta}\Bigl(\dfrac{x-\theta}{\beta}\Bigr)^{\alpha-1} e^{-\bigl(\frac{x-\theta}{\beta}\bigr)^{\alpha}} & x \ge \theta \\ 0 & x < \theta. \end{cases} \]
α is, as above, the shape parameter, β > 0 the scale parameter and θ is the location parameter.
If θ = 0, then Wei (α, β, 0) = Wei (α, β).
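The relation Exp(a) = Wei(1, a) and the moment formulas above can be checked by simulation. The sketch below is illustrative; note that numpy's Weibull sampler draws from the unit-scale distribution, so the scale β is applied by multiplication.

```python
import math
import numpy as np

# Check of the Weibull moment formulas and of the special case
# Wei(1, beta) = Exp(beta), by simulation.
rng = np.random.default_rng(5)
alpha, beta = 2.0, 3.0                       # shape and scale

x = beta * rng.weibull(alpha, size=500_000)  # numpy's weibull has scale 1
mean_th = beta * math.gamma(1.0 + 1.0 / alpha)
var_th = beta**2 * (math.gamma(1.0 + 2.0 / alpha) - math.gamma(1.0 + 1.0 / alpha)**2)
print(x.mean(), mean_th, x.var(), var_th)

# Shape alpha = 1 reduces the density to (1/beta) e^{-x/beta}, i.e. Exp(beta):
y = beta * rng.weibull(1.0, size=500_000)
print(y.mean(), beta)                        # Exp(beta) has mean beta
```

The sample moments agree with $\beta\Gamma(1+1/\alpha)$ and $\beta^2[\Gamma(1+2/\alpha) - \Gamma(1+1/\alpha)^2]$ to within sampling error.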
Example 2.2.12 (χ²(f)-Distribution with f Degrees of Freedom) If the random variable X has the p.d.f., for f = 1, 2, . . .,
\[ f_X(x) = \begin{cases} \dfrac{x^{f/2 - 1}\, e^{-x/2}}{\Gamma(f/2)\, 2^{f/2}} & \text{if } x > 0 \\ 0 & \text{if } x \le 0, \end{cases} \]
then we write X ∈ χ²(f).
The following theorem explains the genesis of χ2 (f ) and is included in section 4.7 as an exercise.
3 This distribution has been named after A.K. Erlang (1878−1929), Danish mathematician and engineer, a pioneer in the
development of statistical models of telephone traffic, see, e.g., [84].
4 (1887−1979), was an engineer, a commissioned officer of coastal artillery, and a mathematician. He was professor in machine
components at KTH. He studied strength of materials, fatigue, bearings, and introduced what we now call the Weibull distribution
based on case studies, i.e., not on generative models.
Example 2.2.14 (Student's t-distribution) If the random variable X has the p.d.f., for n = 1, 2, . . .,
\[ f_X(x) = \frac{\Gamma\bigl(\frac{n+1}{2}\bigr)}{\sqrt{n\pi}\,\Gamma\bigl(\frac{n}{2}\bigr)} \cdot \frac{1}{\bigl(1 + \frac{x^2}{n}\bigr)^{(n+1)/2}}, \quad -\infty < x < \infty, \]
then we write X ∈ t(n).
The following theorem about Student's t-distribution is recognized from courses in statistics. It is in the sequel an exercise on computing the p.d.f. of a ratio of two continuous r.v.'s.
Theorem 2.2.15 X ∈ N(0, 1), Y ∈ χ²(n), where X and Y are independent. Then
\[ \frac{X}{\sqrt{Y/n}} \in t(n). \quad (2.28) \]
Example 2.2.16 (Cauchy Distribution) X ∈ C(m, a), a > 0, means that the p.d.f. of X is
\[ f_X(x) = \frac{1}{\pi} \cdot \frac{a}{a^2 + (x - m)^2}, \quad -\infty < x < +\infty. \quad (2.29) \]
In particle physics, the Cauchy distribution C(m, a) is known as the (non-relativistic) Wigner distribution [37] or the Breit–Wigner distribution [64, p. 85]. An important special case is the standard Cauchy distribution X ∈ C(0, 1), which has the p.d.f.
\[ f_X(x) = \frac{1}{\pi} \cdot \frac{1}{1 + x^2}, \quad -\infty < x < +\infty. \quad (2.30) \]
Example 2.2.17 (Rayleigh Distribution) We say that X is Rayleigh distributed, if it has the density, a > 0,
\[ f_X(x) = \begin{cases} \dfrac{2x}{a}\, e^{-x^2/a} & x \ge 0 \\ 0 & \text{elsewhere.} \end{cases} \]
We write X ∈ Ra(a).
\[ E[X] = \frac{1}{2}\sqrt{\pi a}, \qquad \text{Var}[X] = a\Bigl(1 - \frac{1}{4}\pi\Bigr). \]
The parameter in the Rayleigh p.d.f. as recapitulated in [92] is defined in a slightly different manner. The Rayleigh distribution is a special case of the Rice distribution presented in an exercise, which therefore is a generative model for Ra(a).
Example 2.2.18 (Beta Distribution) The Beta function B(r, s) (see, e.g., [3, pp. 82–86]) is defined for real r > 0 and s > 0 as
\[ B(r, s) = \int_0^1 x^{r-1}(1-x)^{s-1}\,dx = \frac{\Gamma(r)\Gamma(s)}{\Gamma(r+s)}. \quad (2.31) \]
Taking this for granted we have
\[ \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)} \int_0^1 x^{r-1}(1-x)^{s-1}\,dx = 1. \quad (2.32) \]
Since $\frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)}\, x^{r-1}(1-x)^{s-1} \ge 0$ for 0 ≤ x ≤ 1, we have found that
\[ f_X(x) = \begin{cases} \dfrac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)}\, x^{r-1}(1-x)^{s-1} & 0 \le x \le 1 \\ 0 & \text{elsewhere,} \end{cases} \quad (2.33) \]
is a p.d.f., to be called the Beta density. We write X ∈ β(r, s), if X is a random variable that has a Beta density. This p.d.f. plays an important role in Bayesian statistics.
\[ E[X] = \frac{r}{r+s}, \qquad \text{Var}[X] = \frac{rs}{(r+s)^2(r+s+1)}. \]
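The Gamma-function identity in (2.31) can be verified numerically for particular r and s; the sketch below (illustrative only, with an arbitrary choice of parameters) compares a direct Riemann sum with the closed form.

```python
import math
import numpy as np

# Numerical verification of (2.31):
# B(r, s) = int_0^1 x^{r-1}(1-x)^{s-1} dx = Gamma(r)Gamma(s)/Gamma(r+s).
r, s = 2.5, 4.0

x = np.linspace(1e-9, 1.0 - 1e-9, 2_000_001)
dx = x[1] - x[0]
integral = np.sum(x**(r - 1) * (1.0 - x)**(s - 1)) * dx
gamma_form = math.gamma(r) * math.gamma(s) / math.gamma(r + s)
print(integral, gamma_form)
```

Agreement of the two values also confirms that the Beta density (2.33) integrates to one for these parameters.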
The function
\[ B_x(r, s) = \int_0^x u^{r-1}(1-u)^{s-1}\,du, \quad 0 \le x \le 1, \quad (2.34) \]
is known as the incomplete Beta function.
One should check the sufficient conditions of theorem 1.5.7 ensuring that F (x) is the distribution function
of some random variable X. The probability distribution corresponding to (2.35) is known as the Gumbel
distribution, and the compact notation is X ∈ Gumbel. The Gumbel distribution belongs to the family of
extreme value distributions. This indicates that it emerges as a model for the distribution of the maximum
(or the minimum) of a number of samples of various distributions. This will be demonstrated for sequences of
independent and identically exponentially distributed X in chapter 6 below.
E[X] = γ, where γ ≈ 0.5772 is the Euler–Mascheroni constant.
Example 2.2.20 (Continuous Pareto Distribution) A continuous random variable X has the p.d.f.
\[ f_X(x) = \begin{cases} \dfrac{\alpha k^{\alpha}}{x^{\alpha+1}} & x > k, \\ 0 & x \le k, \end{cases} \quad (2.36) \]
where k > 0, α > 0, which is called a Pareto density with parameters k and α. We write X ∈ Pa(k, α).
\[ E[X] = \frac{\alpha k}{\alpha - 1},\ \alpha > 1, \qquad \text{Var}[X] = \frac{\alpha k^2}{(\alpha - 2)(\alpha - 1)^2},\ \alpha > 2. \]
This distribution was found by and named after the economist and sociologist Vilfredo Pareto (1848–1923), as a model for the frequency of wealth as a function of income category (above a certain bottom level). In plain words this means: most success seems to migrate to those people or companies who are already popular.
Example 2.2.21 (Inverse Gaussian Distribution) A continuous random variable X with the p.d.f.
\[ f_X(x) = \Bigl(\frac{\lambda}{2\pi x^3}\Bigr)^{1/2} e^{-\frac{\lambda(x-\mu)^2}{2\mu^2 x}}, \quad x > 0, \quad (2.37) \]
is said to have the inverse Gaussian distribution, a.k.a. the Wald distribution. We write X ∈ IG(µ, λ), where
\[ E[X] = \mu > 0, \qquad \text{Var}[X] = \frac{\mu^3}{\lambda}. \]
The inverse Gaussian distribution is the distribution of the time a Wiener process with positive drift takes to
reach a fixed positive level.
is said to have the K-distribution. Here $I_{\nu-\mu}(z)$ is the modified Bessel function of the second kind. We write X ∈ K(L, µ, ν). It holds that
\[ E[X] = \mu, \qquad \text{Var}[X] = \mu^2\, \frac{\nu + L + 1}{L\nu}. \]
X ∈ K(L, µ, ν) is the distribution of the product of two independent random variables
\[ X = X_1 \cdot X_2, \]
where $X_1 \in \Gamma(L, 1/L)$ and $X_2 \in \Gamma(\nu, \mu/\nu)$. The K-distribution is used as a probabilistic model in Synthetic Aperture Radar (SAR) imagery.
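The product representation can be checked by simulation. The sketch below assumes that, in the Γ(p, a) notation of example 2.2.9 (p shape, a scale), the factors are Gamma with shape L, scale 1/L and shape ν, scale µ/ν; under that reading the simulated mean and variance match the stated formulas.

```python
import numpy as np

# Simulate X = X1 * X2 with X1 ~ Gamma(shape=L, scale=1/L) and
# X2 ~ Gamma(shape=nu, scale=mu/nu), and compare against the stated
# K-distribution moments E[X] = mu, Var[X] = mu^2 (nu + L + 1)/(L nu).
rng = np.random.default_rng(7)
L, mu, nu = 4.0, 2.0, 3.0

x1 = rng.gamma(shape=L, scale=1.0 / L, size=1_000_000)
x2 = rng.gamma(shape=nu, scale=mu / nu, size=1_000_000)
x = x1 * x2

print(x.mean(), mu)
print(x.var(), mu**2 * (nu + L + 1) / (L * nu))
```

Since E[X₁] = 1 and E[X₂] = µ with X₁, X₂ independent, the mean is immediate; the variance follows from the second moments of the two Gamma factors.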
Example 2.2.23 (Logistic Distribution) We say that X has a logistic distribution, X ∈ logistic(0, 1), if its p.d.f. is
\[ f_X(x) = \frac{e^x}{(1 + e^x)^2}, \quad -\infty < x < +\infty. \quad (2.39) \]
The corresponding distribution function is
\[ F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt = \sigma(x). \]
The function $\sigma(x) = \frac{1}{1 + e^{-x}}$ is known as the logistic function, hence the name of the probability distribution. The function σ(x) appears also, e.g., in mathematical biology and artificial neural networks.
\[ E[X] = 0, \qquad \text{Var}[X] = \frac{\pi^2}{3}. \]
If
\[ F_{X,Y}(x, y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f_{X,Y}(u, v)\,dv\,du, \]
where
\[ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx\,dy = 1, \qquad f_{X,Y}(x, y) \ge 0, \]
then (X, Y) is a continuous bivariate random variable. $f_{X,Y}(x, y)$ is called the joint probability density for (X, Y). The main explicit case of a continuous bivariate (X, Y) to be treated in the sequel is the bivariate Gaussian in chapter 8. The marginal distribution function for Y is
\[ F_Y(y) = F_{X,Y}(\infty, y) = \int_{-\infty}^{y} \int_{-\infty}^{\infty} f_{X,Y}(x, v)\,dx\,dv \]
and
\[ f_Y(y) = \frac{d}{dy} F_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx \]
is the marginal probability density for Y. Then, of course, the marginal distribution function for X is
\[ F_X(x) = F_{X,Y}(x, \infty) = \int_{-\infty}^{x} \int_{-\infty}^{\infty} f_{X,Y}(u, y)\,dy\,du \]
and
\[ f_X(x) = \frac{d}{dx} F_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy \]
is the marginal probability density for X. It follows in view of (1.12) that X and Y are independent random
variables, if and only if
fX,Y (x, y) = fX (x)fY (y), for all (x, y). (2.40)
We have even the bivariate version of the law of the unconscious statistician, for an integrable Borel function H(x, y), as
\[ E[H(X, Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(x, y)\, f_{X,Y}(x, y)\,dx\,dy. \quad (2.41) \]
This is in the first place applied to H(x, y) = x · y, i.e., to computing covariances, which are defined or recalled next. The covariance Cov(X, Y) of the r.v.'s X and Y is
\[ \text{Cov}(X, Y) \stackrel{\text{def}}{=} E\bigl[(X - E[X])(Y - E[Y])\bigr]. \quad (2.42) \]
Here E[X] and E[Y] are computed as in (2.3) using the respective marginal p.d.f.'s. It follows by properties of integrals that
\[ \text{Cov}(X, Y) = E[X \cdot Y] - E[X] \cdot E[Y]. \quad (2.43) \]
If X and Y are independent, then Cov(X, Y) = 0. The converse implication is not true in general, as shown in the next example.
Example 2.2.24 Let X ∈ N(0, 1) and set Y = X². Then Y is clearly functionally dependent on X. But we have
\[ \text{Cov}(X, Y) = E[X \cdot Y] - E[X] \cdot E[Y] = E[X^3] - 0 \cdot E[Y] = E[X^3] = 0. \]
The last equality holds, since with (2.15) one has $g(x) = x^3\phi(x)$, so that g(−x) = −g(x). Hence $E[X^3] = \int_{-\infty}^{+\infty} g(x)\,dx = 0$, c.f. (4.50) in the sequel, too.
It will be shown in chapter 8 that the converse implication holds for bivariate Gaussian (X, Y ).
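The example is easy to reproduce by simulation (an illustration only, not a proof): the sample covariance of X and X² is an estimate of $E[X^3] = 0$, even though Y is a deterministic function of X.

```python
import numpy as np

# Simulation of example 2.2.24: X ~ N(0, 1) and Y = X^2 are
# functionally dependent, yet uncorrelated since E[X^3] = 0.
rng = np.random.default_rng(11)
x = rng.standard_normal(2_000_000)
y = x**2

cov = np.mean(x * y) - x.mean() * y.mean()   # estimates E[X^3] = 0
print(cov)
```

Zero covariance therefore does not imply independence; for this pair it merely reflects the symmetry of φ.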
We standardize the covariance⁸ to get the coefficient of correlation between X and Y:
\[ \rho_{X,Y} \stackrel{\text{def}}{=} \frac{\text{Cov}(X, Y)}{\sqrt{\text{Var}[X]}\,\sqrt{\text{Var}[Y]}}. \quad (2.45) \]
\[ |\rho_{X,Y}| \le 1. \quad (2.46) \]
The cases $\rho_{X,Y} = \pm 1$ correspond to Y and X being affine functions (e.g., Y = aX + b) of each other, the topic of another exercise.
⁸ in order to measure dependence in the common unit of the variables.
\[ \text{Var}\Bigl[\sum_{i=1}^{n} a_i X_i\Bigr] = \sum_{i=1}^{n} a_i^2\, \text{Var}(X_i) + 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} a_i a_j\, \text{Cov}(X_i, X_j), \quad (2.48) \]
\[ \text{Cov}\Bigl(\sum_{i=1}^{n} a_i X_i, \sum_{j=1}^{m} b_j X_j\Bigr) = \sum_{i=1}^{n} \sum_{j=1}^{m} a_i b_j\, \text{Cov}(X_i, X_j). \]
These expressions are valid for both continuous and discrete distributions.
Example 2.3.2 (Symmetric Bernoulli Distribution) We say that X ∈ SymBe, if the p.m.f. is
\[ p_X(x) = \begin{cases} \frac{1}{2} & x = -1 \\ \frac{1}{2} & x = 1. \end{cases} \quad (2.50) \]
Then
\[ E[X] = 0, \qquad \text{Var}[X] = 1. \]
Example 2.3.3 (Discrete Uniform Distribution) X ∈ U(1, 2, . . . , n), where n > 1. The p.m.f. is
\[ p_X(x) = \begin{cases} \frac{1}{n} & x = 1, 2, \dots, n \\ 0 & \text{else.} \end{cases} \quad (2.51) \]
Example 2.3.4 (Geometric Distribution) X ∈ Ge(p), 0 < p < 1, q = 1 − p. The p.m.f. is
\[ p_X(k) = q^k p, \quad k = 0, 1, 2, \dots. \]
Suppose p is the probability of an event occurring in a trial. Consider the trial of tossing a coin. Let us say that the event of interest is 'heads'. We are interested in the probability of the number of independent trials we perform before we see the event 'heads' occurring for the first time, NOT INCLUDING the successful trial. Let X be this random number. Then we write X ∈ Ge(p).
\[ E[X] = \frac{q}{p}, \qquad \text{Var}[X] = \frac{q}{p^2}. \]
Example 2.3.5 (First Success Distribution) X ∈ Fs(p), 0 < p < 1, q = 1 − p. The p.m.f. is
\[
p_X(k) = q^{k-1} p, \qquad k = 1, 2, \ldots
\]
Suppose again p is the probability of an event occurring in a trial. Consider the trial of tossing a coin (modelled as an outcome of a r.v. ∈ Be(p)). Let us say again that the event of interest is 'heads'. We are interested in the number of independent trials we perform before we see the event 'heads' occurring for the first time, INCLUDING the successful trial. Let X be this random number. Then we write X ∈ Fs(p).
\[
E[X] = \frac{1}{p}, \qquad \operatorname{Var}[X] = \frac{q}{p^2}.
\]
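The two conventions can be checked by simulation. The following Python sketch (an illustration, not part of the original notes) uses NumPy, whose geometric sampler counts the trials including the first success, i.e., it samples Fs(p); subtracting one gives Ge(p):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 0.3, 200_000
q = 1 - p

# numpy's geometric counts the trials INCLUDING the first success,
# i.e. Fs(p) with support {1, 2, ...}
fs = rng.geometric(p, size=n)
# excluding the successful trial gives Ge(p) with support {0, 1, 2, ...}
ge = fs - 1

print(fs.mean(), 1 / p)        # E[Fs] = 1/p
print(ge.mean(), q / p)        # E[Ge] = q/p
print(ge.var(), q / p**2)      # Var[Ge] = q/p^2
```

The empirical moments agree with the formulas above up to Monte Carlo error.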
∞
Condsider the measurable space (Ω = {(ωi )i=1 |ωi ∈ {0, 1}}, F ), c.f., example 1.4.7 above, Fo ⊂ F .
Then X above is defined as a map on Ω as
The difficulty is that we have defined r.v.’s as measurable maps from (Ω, F ) to the real numbers,
and +∞ is not a real number. Hence X is in principle an extended random variable with values
in {+∞} ∪ R. However, if we are computing with the probability model of an infinite sequence of
independent Be(p) trials, we have X ∈ Fs(p). Then we must have
P (X = +∞) = 0,
P∞ c
since k=1 q k−1 p = 1 and {X = +∞} = (∪∞ k=1 {X = k}) . Therefore we can define X(ω0 ) in any
preferred way, since this choice has no impact whatsoever on the computations of probabilities.
The literature in probability calculus is not unanimous about the terminology regarding the geometric distribution. It occurs frequently (perhaps even mostly) that Fs(p) in our sense above is called the geometric distribution, see, e.g., [48, p. 61], [55, p. 62].
Example 2.3.6 (Binomial Distribution) X ∈ Bin(n, p), 0 ≤ p ≤ 1, q = 1 − p, and the p.m.f. is
\[
p_X(k) = \binom{n}{k} p^k q^{n-k}, \qquad k = 0, 1, \ldots, n,
\]
where we used the beta function and the incomplete beta function in (2.31) and (2.34), respectively.
\[
p_X(k) = e^{-\lambda}\frac{\lambda^k}{k!}, \qquad k = 0, 1, 2, \ldots \tag{2.53}
\]
\[
E[X] = \lambda, \qquad \operatorname{Var}[X] = \lambda.
\]
We shall in example 6.6.2 below derive the Poisson distribution as an approximation of Bin (n, p) for large n
and small p.
Example 2.3.9 (Compound Poisson Distribution) X ∈ ComPo(λ, µ), λ > 0, µ > 0, then its p.m.f. is
\[
p_X(k) = \sum_{r=0}^{\infty} e^{-r\mu}\frac{(r\mu)^k}{k!}\, e^{-\lambda}\frac{\lambda^r}{r!}, \qquad k = 0, 1, 2, \ldots \tag{2.54}
\]
This expression is somewhat unwieldy, and the Compound Poisson distribution is more naturally treated by
the methods of probability generating functions developed in chapter 5.
The Compound Poisson distribution has many applications, e.g., in particle physics [37, 64] and queuing theory.
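The series (2.54) is the marginal p.m.f. of the generative model in which N ∈ Po(λ) and, given N = r, X is Poisson with mean rµ (a sum of r i.i.d. Po(µ) variables). A small Python sketch (an illustration, not part of the notes; the parameter values are arbitrary) compares simulation with a truncation of (2.54):

```python
import numpy as np
from math import exp, factorial

lam, mu = 2.0, 1.5
rng = np.random.default_rng(2)

# Generative model behind (2.54): N ~ Po(lam) and, given N = r, X ~ Po(r*mu)
N = rng.poisson(lam, size=400_000)
X = rng.poisson(mu * N)

def pmf(k, terms=60):
    # truncation of the series in (2.54); the Po(lam) tail beyond r = 60 is negligible
    return sum(exp(-r * mu) * (r * mu) ** k / factorial(k)
               * exp(-lam) * lam ** r / factorial(r)
               for r in range(terms))

for k in range(5):
    print(k, (X == k).mean(), round(pmf(k), 5))
```

The empirical frequencies match the truncated series up to Monte Carlo error.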
9 p. xv in H.G. Romig: 50−100 Binomial Tables, John Wiley & Sons, Inc., New York, 1947.
62 CHAPTER 2. PROBABILITY DISTRIBUTIONS
Example 2.3.10 (Pascal Distribution) Suppose p is the probability of an event occurring in a trial. Consider the trial of tossing a coin. Let us say that the event is 'heads'. We are interested in the number of independent trials we perform before we see the event 'heads' occurring n times, INCLUDING the nth success.
Texts in engineering statistics suggest the Pascal Distribution as a model of, e.g., the number of days a certain machine works before it breaks down for the nth time. Or, a text can insist that 'the number of days a certain machine works before it breaks down for the nth time' is Pascal distributed. One can remark that 'applications of probability calculus are based on analogies, which are to a certain degree halting' (J.W. Lindeberg, [75, p. 120]). One could once again repeat the words 'mind projection fallacy', too.
Let now X be the number of independent trials we perform before we have seen the event occurring n times. The random variable X is then said to have the Pascal Distribution, X ∈ Pascal(n, p), n = 1, 2, 3, \ldots, 0 < p < 1 and q = 1 − p. Its p.m.f. can be found, using the same kind of reasoning that underlies the Binomial distribution, [101, p. 58], as
\[
p_X(k) = P(X = k) = \binom{k-1}{n-1} p^n q^{k-n}, \qquad k = n, n+1, n+2, \ldots \tag{2.55}
\]
Note that we must understand $\binom{k-1}{n-1} = 0$ for $k = 0, 1, 2, \ldots, n-1$. Note also that Pascal(1, p) = Fs(p).
\[
E[X] = \frac{n}{p}, \qquad \operatorname{Var}[X] = n\,\frac{1-p}{p^2}.
\]
Example 2.3.11 (Negative Binomial Distribution) X is said to follow the Negative Binomial distribution, X ∈ NBin(n, p), 0 < p < 1, q = 1 − p, if its p.m.f. is
\[
p_X(k) = \binom{n+k-1}{k} p^n q^k, \qquad k = 0, 1, 2, \ldots \tag{2.56}
\]
\[
E[X] = n\,\frac{q}{p}, \qquad \operatorname{Var}[X] = n\,\frac{q}{p^2}.
\]
Observe that Ge(p) = NBin(1, p). The p.m.f. in (2.56) can be established using the same kind of reasoning
that underlies the Binomial distribution, where one needs the interpretation of the coefficient (1.31) as given in
appendix 1.11.
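The moments of NBin(n, p) can be checked by simulation; the following Python sketch (an illustration, not part of the notes) uses NumPy's negative binomial sampler, which counts the failures before the n-th success, i.e., exactly the support of (2.56):

```python
import numpy as np

n_succ, p = 4, 0.4
q = 1 - p
rng = np.random.default_rng(8)

# numpy's negative_binomial counts the failures before the n-th success,
# matching the support {0, 1, 2, ...} of (2.56)
x = rng.negative_binomial(n_succ, p, size=300_000)

print(x.mean(), n_succ * q / p)       # theoretical mean  n q / p   = 6
print(x.var(), n_succ * q / p**2)     # theoretical var   n q / p^2 = 15
```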
There is a good deal of confusing variation in the literature w.r.t. the terminology in the two examples above. Sometimes the Pascal distribution defined as above and, e.g., in [101], is called the negative binomial distribution. In some textbooks, e.g., in [49], the negative binomial distribution is as above, but in others it is known as the Pascal distribution. The handbook [92] compromises with (2.55) as 'Negative Binomial or Pascal' (!).
A p.m.f. $p_X(k)$ has a power-law tail, or is a power law, if it holds that
\[
p_X(k) = P(X = k) \sim k^{-\gamma}, \qquad \text{as } k \to \infty. \tag{2.57}
\]
Remark 2.3.1 The notation $f(x) \sim g(x)$ (at $x = a$) has the following meaning:
\[
\lim_{x \to a} \frac{f(x)}{g(x)} = 1. \tag{2.58}
\]
This means that the functions grow at the same rate at a. For example, if
\[
f(x) = x^2, \qquad g(x) = x^2 + x,
\]
then
\[
\lim_{x\to\infty} \frac{f(x)}{g(x)} = \lim_{x\to\infty} \frac{1}{1 + \frac{1}{x}} = 1,
\]
but at the same time $g(x) - f(x) = x$.
Example 2.3.12 (Benford’s Law) We say that a random variable X follows Benford’s Law if it has the p.m.f.
\[
p_X(k) = P(X = k) = \log_{10}\Bigl(1 + \frac{1}{k}\Bigr), \qquad k = 1, \ldots, 9. \tag{2.59}
\]
This law is, by empirical experience, found as the distribution of the first digit in a large material of numbers. Note that this is not the uniform distribution $p(k) = \frac{1}{9}$, for $k = 1, 2, \ldots, 9$, that might have been expected.
Benford’s Law is known to be valid for many sources of numerical data.
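A classical illustration (a sketch added here, not part of the notes): the first significant digits of the powers $2^n$ follow Benford's Law, by equidistribution of $n\log_{10}2$ modulo 1:

```python
import math
from collections import Counter

# First significant digits of 2^1, ..., 2^N compared with (2.59)
N = 3000
counts = Counter(int(str(2 ** n)[0]) for n in range(1, N + 1))

for k in range(1, 10):
    print(k, counts[k] / N, round(math.log10(1 + 1 / k), 4))
```

The empirical frequencies agree with $\log_{10}(1 + 1/k)$ to within a couple of percent already for this N.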
Example 2.3.13 (Zipf ’s Law (rank-frequency form)) We count the frequencies of occurrence of some N events (e.g., Swedish words in today’s issue of some Stockholm daily, digital or paper edition). Then we determine the rank k of each event by the frequency of occurrence (the most frequent is number one and so on). Then, if we consider $p_X(k)$ as the frequency of a word of rank k, this is very likely found to be
\[
p_X(k) = c \cdot k^{-\gamma}. \tag{2.60}
\]
The probability mass function in (2.60) is known as Zipf ’s law. It is an empirical or experimental assertion, which seems to arise in many situations, and is not based on any theoretical generative model. The case with γ = 2 is known as Zipf-Lotka’s Law, and was discovered as a bibliometric law on the number of authors making k contributions.
Example 2.3.14 (Waring distribution) We write X ∈ War(ρ, α) and say that X is Waring distributed with parameters α > 0 and ρ > 0, if X has the p.m.f.
\[
p_X(k) = \rho\,\frac{\alpha^{(k)}}{(\alpha+\rho)^{(k+1)}}, \qquad k = 0, 1, 2, \ldots \tag{2.61}
\]
Here we invoke the ascending factorials
\[
\alpha^{(k)} = \alpha\cdot(\alpha+1)\cdot\ldots\cdot(\alpha+k-1) = \frac{\Gamma(\alpha+k)}{\Gamma(\alpha)},
\]
and analogously for $(\alpha+\rho)^{(k+1)}$. If ρ > 1, then E[X] exists, and if ρ > 2, then Var[X] exists, too. It can be shown that there is the power-law tail
\[
p_X(k) \sim \frac{1}{k^{1+\rho}}.
\]
We shall in an exercise to chapter 3 derive War(ρ, α) under the name Negative-Binomial Beta distribution.
\[
E[X] = \frac{\alpha}{\rho-1}, \qquad \operatorname{Var}[X] = \frac{\alpha}{\rho-1}\Bigl(\frac{\rho+\alpha}{\rho-2} + \frac{\alpha}{(\rho-1)(\rho-2)}\Bigr).
\]
This distribution was invented and named by J.O. Irwin in 196310. It has applications, e.g., in accident theory and in the measurement of scientific productivity.
Example 2.3.15 (Skellam distribution) We write X ∈ Ske(µ1, µ2) and say that X is Skellam distributed with parameters µ1 > 0 and µ2 > 0, if X has the p.m.f. for any integer k
\[
p_X(k) = e^{-(\mu_1+\mu_2)}\Bigl(\frac{\mu_1}{\mu_2}\Bigr)^{k/2}\, I_{|k|}\bigl(2\sqrt{\mu_1\mu_2}\bigr), \tag{2.62}
\]
where $I_k(z)$ is the modified Bessel function of the first kind of order k.
It can be shown that if X1 ∈ Po(µ1) and X2 ∈ Po(µ2) and X1 and X2 are independent, then X1 − X2 ∈ Ske(µ1, µ2).
The Skellam distribution is applied to the difference of two images with photon noise. It has also been found useful as a model for the point spread distribution in baseball, hockey and soccer, where all scored points are equal.
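The claim that a difference of independent Poisson variables is Skellam distributed can be checked numerically; a Python sketch (an illustration, not part of the notes) using the modified Bessel function from SciPy:

```python
import numpy as np
from scipy.special import iv

mu1, mu2 = 3.0, 2.0
rng = np.random.default_rng(3)

# Difference of two independent Poisson random variables
d = rng.poisson(mu1, 300_000) - rng.poisson(mu2, 300_000)

def skellam_pmf(k):
    # the p.m.f. (2.62); iv is the modified Bessel function of the first kind
    return (np.exp(-(mu1 + mu2)) * (mu1 / mu2) ** (k / 2)
            * iv(abs(k), 2 * np.sqrt(mu1 * mu2)))

for k in (-2, -1, 0, 1, 2, 3):
    print(k, (d == k).mean(), round(float(skellam_pmf(k)), 5))
```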
If
\[
F_{X,Y}(x,y) = \sum_{-\infty < x_j \leq x}\;\sum_{-\infty < y_k \leq y} p_{X,Y}(x_j, y_k),
\]
where
\[
\sum_{-\infty < x_j < \infty}\;\sum_{-\infty < y_k < \infty} p_{X,Y}(x_j, y_k) = 1, \qquad p_{X,Y}(x_j, y_k) \geq 0,
\]
then (X, Y) is a discrete bivariate random variable. The function $p_{X,Y}(x_j, y_k)$ is called the joint probability mass function for (X, Y). Marginal distributions are defined by
\[
p_X(x_j) = \sum_{-\infty < y_k < \infty} p_{X,Y}(x_j, y_k), \qquad p_Y(y_k) = \sum_{-\infty < x_j < \infty} p_{X,Y}(x_j, y_k).
\]
10 J.O. Irwin: The Place of Mathematics in Medical and Biological Sciences. Journal of the Royal Statistical Society, Ser. A, 126, 1963, pp. 1−14.
2.4. TRANSFORMATIONS OF CONTINUOUS DISTRIBUTIONS 65
where we know how to compute with the joint p.m.f. and with the marginal p.m.f.’s and the law of the
unconscious statistician.
Example 2.3.16 (Bivariate Bernoulli distribution) Let (X, Y) be a bivariate random variable, where both X and Y are binary, i.e., their values are 0 or 1. Then we say that (X, Y) has a bivariate Bernoulli distribution, if the p.m.f. is
\[
p_{X,Y}(x,y) = \theta^x(1-\theta)^{1-x}\,\lambda^y(1-\lambda)^{1-y}, \qquad x \in \{0,1\},\; y \in \{0,1\}. \tag{2.64}
\]
Here 0 ≤ θ ≤ 1, 0 ≤ λ ≤ 1.
Here $H^{-1}(y)$ is the inverse of H(x). In case the domain of definition of the function H(x) can be decomposed into disjoint intervals, where H(x) is strictly monotonous, we have
\[
f_Y(y) = \sum_i f_X\bigl(H_i^{-1}(y)\bigr)\cdot\Bigl|\frac{d}{dy}H_i^{-1}(y)\Bigr|\,\chi_{I_i}(y), \tag{2.66}
\]
where $H_i$ indicates the function H restricted to the respective domain $I_i$ of strict monotonicity, and $\chi_{I_i}$ is the corresponding indicator function.
Example 2.4.1 Let $X \in U\bigl(-\frac{\pi}{2}, \frac{\pi}{2}\bigr)$. Set Y = sin(X). We want to find the p.d.f. $f_Y(y)$. When we recall the graph of sin(x), we observe that sin(x) is strictly increasing on $\bigl(-\frac{\pi}{2}, \frac{\pi}{2}\bigr)$. Then for $-1 \leq y \leq 1$ we have
\[
F_Y(y) = P(Y \leq y) = P\bigl(X \leq \arcsin(y)\bigr),
\]
since arcsin(y) is the inverse of sin(x) for x ∈ ]−π/2, π/2[. As $X \in U\bigl(-\frac{\pi}{2}, \frac{\pi}{2}\bigr)$ we have
\[
F_Y(y) = \frac{\arcsin(y) - (-\pi/2)}{\pi}, \qquad -1 \leq y \leq 1. \tag{2.67}
\]
Then
\[
f_Y(y) = \frac{1}{\pi\sqrt{1-y^2}}, \qquad -1 < y < 1. \tag{2.68}
\]
Example 2.4.2 Let X ∈ U(0, 2π) and Y = sin(X). We want again to determine the p.d.f. $f_Y(y)$. The function H(x) = sin(x) is not strictly monotonous in (0, 2π), hence we shall find the p.d.f. $f_Y(y)$ by means of (2.66).
We make the decomposition (0, 2π) = I1 ∪ I2 ∪ I3, where I1 = (0, π/2), I2 = (π/2, 3π/2) and I3 = (3π/2, 2π). Then for i = 1, 2, 3, $H_i(x) = \sin(x)\,|\,I_i$, i.e., the function sin(x) restricted to $I_i$, is strictly monotonous. In fact,
\[
H_1(x) = \sin(x), \qquad 0 \leq x \leq \frac{\pi}{2},
\]
\[
H_2(x) = \widetilde{H}_2(x - \pi/2), \quad \frac{\pi}{2} \leq x \leq \frac{3\pi}{2} \quad\Leftrightarrow\quad \widetilde{H}_2(t) = \cos(t), \quad 0 \leq t \leq \pi,
\]
\[
H_3(x) = \widetilde{H}_3(x - 2\pi), \quad \frac{3\pi}{2} \leq x \leq 2\pi \quad\Leftrightarrow\quad \widetilde{H}_3(t) = \sin(t), \quad -\frac{\pi}{2} \leq t \leq 0.
\]
Then we have two cases (i)-(ii) to consider. For $0 \leq y < 1$:
\[
F_Y(y) = P\bigl(0 \leq X \leq \arcsin(y)\bigr) + P\Bigl(\frac{\pi}{2} + \arccos(y) \leq X \leq \frac{3\pi}{2}\Bigr) + \frac{1}{4}
= \frac{\arcsin(y)}{2\pi} + \frac{\pi - \arccos(y)}{2\pi} + \frac{1}{4}.
\]
Then
\[
f_Y(y) = \frac{d}{dy}F_Y(y) = \frac{1}{2\pi\sqrt{1-y^2}} - \Bigl(\frac{-1}{2\pi\sqrt{1-y^2}}\Bigr)
= \frac{1}{\pi\sqrt{1-y^2}}, \qquad 0 \leq y < 1.
\]
The p.d.f. derived above appears in the introduction to stochastic processes in chapter 9.
The p.d.f.s $f_Y(y)$ derived in examples 2.4.1 and 2.4.2 are identical. Hence, if we were given a sample set of I.I.D. outcomes of Y = sin(X) for $X \in U\bigl(-\frac{\pi}{2}, \frac{\pi}{2}\bigr)$ or of Y = sin(X) for X ∈ U(0, 2π), we would have no statistical way of telling from which of the mentioned sources the observations emanate.
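This indistinguishability is easy to see numerically; the Python sketch below (an illustration, not part of the notes) simulates both sources and compares the empirical distribution functions with the common arcsine law $F_Y(y) = \frac{1}{2} + \frac{\arcsin(y)}{\pi}$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
y1 = np.sin(rng.uniform(-np.pi / 2, np.pi / 2, n))   # as in example 2.4.1
y2 = np.sin(rng.uniform(0, 2 * np.pi, n))            # as in example 2.4.2

# both samples should follow F_Y(y) = 1/2 + arcsin(y)/pi
for y in (-0.9, -0.5, 0.0, 0.5, 0.9):
    F = 0.5 + np.arcsin(y) / np.pi
    print(y, (y1 <= y).mean(), (y2 <= y).mean(), round(F, 4))
```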
where $g_i$ are continuously differentiable and $(g_1, g_2, \ldots, g_m)$ is invertible (in a domain) with the inverse
\[
X_i = h_i(Y_1, \ldots, Y_m), \qquad i = 1, 2, \ldots, m,
\]
where $h_i$ are continuously differentiable. Then the joint p.d.f. of Y is (in the domain of invertibility)
The main point of the proof in loc.cit. may perhaps be said to be the approximation of the domain of invertibility of $(g_1, g_2, \ldots, g_m)$ by intervals $I_k$ in $\mathbf{R}^m$ with volume $V(I_k)$, and then to show that these intervals are mapped by $(g_1, g_2, \ldots, g_m)$ to parallelepipeds $P_k$ with volume $V(P_k)$. The volume change incurred by this mapping is then shown to be
\[
V(P_k) = |J|\, V(I_k).
\]
Example 2.4.3 X has the probability density $f_X(x)$, Y = AX + b, and A is m × m and invertible. In this case one finds that the Jacobian is $J = \det(A^{-1})$, and by general properties of determinants $\det(A^{-1}) = \frac{1}{\det A}$. Then Y has in view of (2.69) the p.d.f.
\[
f_Y(y) = \frac{1}{|\det A|}\, f_X\bigl(A^{-1}(y - b)\bigr). \tag{2.71}
\]
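As a sanity check of (2.71) (a sketch added here, not part of the notes): for X standard normal in two dimensions, the right hand side of (2.71) must coincide with the N(b, AAᵀ) density, which SciPy evaluates directly:

```python
import numpy as np
from scipy.stats import multivariate_normal

# X ~ N(0, I_2), Y = A X + b; compare (2.71) with the N(b, A A^T) density
A = np.array([[2.0, 1.0], [0.5, 1.5]])
b = np.array([1.0, -1.0])
Ainv = np.linalg.inv(A)

fX = multivariate_normal(np.zeros(2), np.eye(2)).pdf

def fY_formula(y):
    # the right hand side of (2.71)
    return fX(Ainv @ (y - b)) / abs(np.linalg.det(A))

fY_direct = multivariate_normal(b, A @ A.T).pdf

y = np.array([0.3, 0.7])
print(fY_formula(y), fY_direct(y))
```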
Example 2.4.4 (Ratio of two random variables) Let X and Y be two independent continuous r.v.'s with p.d.f.'s $f_X(x)$ and $f_Y(y)$, respectively. We are interested in the distribution of $\frac{X}{Y}$. We shall apply (2.69) with the following transformation:
\[
U = \frac{X}{Y}, \qquad V = Y.
\]
This is one typical example of the application of the change of variable formula. We are in fact interested in a single r.v., here U, but in order to find its distribution we need an auxiliary variable, here V, to use the terminology of [34, p. 68]. Then we determine the joint p.d.f., here $f_{U,V}(u, v)$, and marginalize to U to find the desired p.d.f.
Example 2.4.5 (Bivariate Logistic Normal Distribution) (From the exam in sf2940 2010-01-12) X1, X2 are two independent standard normal random variables. We introduce two new random variables by
\[
\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}
= \begin{pmatrix} \frac{e^{X_1}}{1 + e^{X_1} + e^{X_2}} \\[6pt] \frac{e^{X_2}}{1 + e^{X_1} + e^{X_2}} \end{pmatrix}.
\]
We wish to find the probability density of (Y1, Y2). We write first, for clarity of thought,
\[
\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}
= \begin{pmatrix} g_1(X_1, X_2) \\ g_2(X_1, X_2) \end{pmatrix}
= \begin{pmatrix} \frac{e^{X_1}}{1 + e^{X_1} + e^{X_2}} \\[6pt] \frac{e^{X_2}}{1 + e^{X_1} + e^{X_2}} \end{pmatrix}.
\]
Solving for X1 we get
\[
X_1 = h_1(Y_1, Y_2) = \ln Y_1 - \ln\bigl(1 - (Y_1 + Y_2)\bigr),
\]
and similarly
\[
X_2 = h_2(Y_1, Y_2) = \ln Y_2 - \ln\bigl(1 - (Y_1 + Y_2)\bigr).
\]
Then we find the Jacobian, or
\[
J = \begin{vmatrix} \frac{\partial x_1}{\partial y_1} & \frac{\partial x_1}{\partial y_2} \\[4pt] \frac{\partial x_2}{\partial y_1} & \frac{\partial x_2}{\partial y_2} \end{vmatrix}.
\]
Entry by entry we get
\[
\frac{\partial x_1}{\partial y_1} = \frac{1}{y_1} + \frac{1}{1-(y_1+y_2)}, \qquad
\frac{\partial x_1}{\partial y_2} = \frac{1}{1-(y_1+y_2)},
\]
\[
\frac{\partial x_2}{\partial y_1} = \frac{1}{1-(y_1+y_2)}, \qquad
\frac{\partial x_2}{\partial y_2} = \frac{1}{y_2} + \frac{1}{1-(y_1+y_2)}.
\]
Thus, the Jacobian determinant is
\[
J = \frac{\partial x_1}{\partial y_1}\cdot\frac{\partial x_2}{\partial y_2} - \frac{\partial x_1}{\partial y_2}\cdot\frac{\partial x_2}{\partial y_1}
\]
\[
= \Bigl(\frac{1}{y_1} + \frac{1}{1-(y_1+y_2)}\Bigr)\Bigl(\frac{1}{y_2} + \frac{1}{1-(y_1+y_2)}\Bigr) - \Bigl(\frac{1}{1-(y_1+y_2)}\Bigr)^2
\]
\[
= \frac{1}{y_1}\frac{1}{y_2} + \frac{1}{y_1}\frac{1}{1-(y_1+y_2)} + \frac{1}{y_2}\frac{1}{1-(y_1+y_2)} + \Bigl(\frac{1}{1-(y_1+y_2)}\Bigr)^2 - \Bigl(\frac{1}{1-(y_1+y_2)}\Bigr)^2
\]
\[
= \frac{1}{y_1 y_2} + \frac{1}{1-(y_1+y_2)}\Bigl(\frac{1}{y_1} + \frac{1}{y_2}\Bigr)
\]
\[
= \frac{1}{y_1 y_2} + \frac{1}{1-(y_1+y_2)}\cdot\frac{y_1+y_2}{y_1 y_2}
\]
\[
= \frac{1-(y_1+y_2) + y_1 + y_2}{y_1 y_2\bigl(1-(y_1+y_2)\bigr)}
\]
\[
= \frac{1}{y_1 y_2\bigl(1-(y_1+y_2)\bigr)}.
\]
Let us note that by construction J > 0. Thus we get by (2.69) that
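The closed form of J can be checked against a finite-difference Jacobian of the inverse map; a small Python sketch (an illustration, not part of the notes):

```python
import numpy as np

# Inverse map of example 2.4.5: x_i = h_i(y_1, y_2)
def h(y):
    y1, y2 = y
    r = np.log(1 - (y1 + y2))
    return np.array([np.log(y1) - r, np.log(y2) - r])

y = np.array([0.2, 0.3])
eps = 1e-6

# central-difference approximation of the Jacobian matrix (dx_i / dy_j)
Jnum = np.column_stack([(h(y + eps * e) - h(y - eps * e)) / (2 * eps)
                        for e in np.eye(2)])

print(np.linalg.det(Jnum))
print(1 / (y[0] * y[1] * (1 - y.sum())))   # the closed form derived above
```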
Example 2.4.6 (Exponential Order Statistics) Let $X_1, \ldots, X_n$ be I.I.D. random variables with a continuous distribution. The order statistic of $X_1, \ldots, X_n$ is the ordered sample. Here
\[
X_{(1)} = \min(X_1, \ldots, X_n)
\]
and
\[
X_{(k)} = k\text{th smallest of } X_1, \ldots, X_n.
\]
The variable $X_{(k)}$ is called the kth order variable. The following theorem has been proved in, e.g., [49, section 4.3, theorem 3.1].
Theorem 2.4.7 Assume that $X_1, \ldots, X_n$ are I.I.D. random variables with the p.d.f. f(x). The joint p.d.f. of the order statistic is
\[
f_{X_{(1)},X_{(2)},\ldots,X_{(n)}}(y_1, \ldots, y_n) = \begin{cases} n!\,\prod_{k=1}^{n} f(y_k) & \text{if } y_1 < y_2 < \ldots < y_n, \\ 0 & \text{elsewhere.} \end{cases} \tag{2.74}
\]
Let X1 , . . . , Xn be I.I.D. random variables with the distribution Exp(1). We are interested in the differences of
the order variables
X(1) , X(i) − X(i−1) , i = 2, . . . , n.
Note that we may consider X(1) = X(1) − X(0) , if X(0) = 0. The result of interest in this section is the following
theorem.
Theorem 2.4.8 Assume that $X_1, \ldots, X_n$ are I.I.D. random variables $X_i \in \mathrm{Exp}(1)$, $i = 1, 2, \ldots, n$. Then
(a)
\[
X_{(1)} \in \mathrm{Exp}\Bigl(\frac{1}{n}\Bigr), \qquad X_{(i)} - X_{(i-1)} \in \mathrm{Exp}\Bigl(\frac{1}{n+1-i}\Bigr),
\]
Then we introduce
\[
A = \begin{pmatrix}
1 & 0 & 0 & \ldots & 0 & 0 \\
-1 & 1 & 0 & \ldots & 0 & 0 \\
0 & -1 & 1 & \ldots & 0 & 0 \\
\vdots & \vdots & \vdots & \ldots & \vdots & \vdots \\
0 & 0 & 0 & \ldots & -1 & 1
\end{pmatrix}, \tag{2.75}
\]
so that if
\[
\mathbf{Y} = \begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \\ \vdots \\ Y_n \end{pmatrix}, \qquad
\mathbf{X} = \begin{pmatrix} X_{(1)} \\ X_{(2)} \\ X_{(3)} \\ \vdots \\ X_{(n)} \end{pmatrix},
\]
we have
\[
\mathbf{Y} = A\mathbf{X}.
\]
It is clear that the inverse matrix $A^{-1}$ exists, because we can uniquely find X from Y by
\[
\mathbf{X} = A^{-1}\mathbf{Y}.
\]
Hence, if we insert the last result in (2.76) and distribute the factors in $n! = n(n-1)\cdots 3\cdot 2\cdot 1$ into the product of exponentials we get
\[
f_{\mathbf{Y}}(\mathbf{y}) = \frac{1}{|\det A|}\, n e^{-n x_{(1)}}\,(n-1)e^{-(n-1)(x_{(2)}-x_{(1)})} \cdots e^{-(x_{(n)}-x_{(n-1)})}. \tag{2.78}
\]
Since A in (2.75) is a triangular matrix, its determinant equals the product of its diagonal terms, c.f. [92, p. 93]. Hence from (2.75) we get $\det A = 1$. In other words, we have obtained the joint p.d.f. $f_{X_{(1)},X_{(2)}-X_{(1)},\ldots,X_{(n)}-X_{(n-1)}}\bigl(x_{(1)}, x_{(2)}-x_{(1)}, \ldots, x_{(n)}-x_{(n-1)}\bigr)$ (which is also seen above), if $X_1, \ldots, X_n$ are I.I.D. random variables under Exp(1). Then one can argue by independence and the lack of memory of the exponential distribution that $X_{(i)} - X_{(i-1)}$ is the minimum of lifetimes of $n + 1 - i$ independent Exp(1)-distributed random variables.
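Theorem 2.4.8(a) is easy to verify by simulation; a Python sketch (an illustration, not part of the notes):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 8, 100_000

# sort each replicated Exp(1) sample to get the order statistics
x = np.sort(rng.exponential(1.0, size=(reps, n)), axis=1)

# spacings X_(1), X_(2)-X_(1), ..., X_(n)-X_(n-1), with X_(0) = 0
spacings = np.diff(x, axis=1, prepend=0.0)

# theorem 2.4.8: the i-th spacing is Exp with mean 1/(n+1-i)
for i in range(1, n + 1):
    print(i, spacings[:, i - 1].mean(), 1 / (n + 1 - i))
```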
sum of an absolutely continuous part, a discrete part and a singular part, in the sense to be made clear in the sequel. Then we check how such decompositions are related to continuous and discrete r.v.'s. We mention here the lecture notes [87], not because these are the authentic source with priority on the results to be discussed, but because we shall follow the good detail of presentation of loc.cit.
We start with a theorem that shows that a probability measure on the real line and its Borel sets can be 'induced' (in the sense given in the proof below) by a random variable.
Theorem 2.5.1 If F satisfies 1., 2. and 3. in theorem 1.5.6, then there is a unique probability measure µ on (R, B) such that µ((a, b]) = F(b) − F(a) for all a, b.
Proof: The sets (a, b] lie in the Borel σ-field. Theorem 1.5.7 gives the existence of a random variable X with distribution F. Consider the measure this X induces on (R, B), which means that for any A ∈ B we define
\[
\mu_X(A) \stackrel{\text{def}}{=} P(X \in A). \tag{2.80}
\]
Then, of course, $\mu((a, b]) \stackrel{\text{def}}{=} \mu_X((a, b]) = F(b) - F(a)$. The uniqueness follows because the sets (a, b] generate the σ-field, and we can hence apply theorem 1.4.1.
We shall return to the result in the theorem above. But first we continue by considering a generic probability measure µ on (R, B).
\[
\{x \mid p(x) > 0\} = B_1 \cup B_2 \cup B_3 \cup \ldots
\]
This shows that $\{x \mid p(x) > 0\}$ is a countable union of finite sets, and such a union is a countable set.
The singletons {x} such that p(x) > 0 are also called atoms of µ. In view of this lemma we may define a discrete part or pure point mass part of µ as a measure on (R, B) by the countable sum
\[
\mu_D(A) = \sum_{x \in A \,\mid\, p(x) > 0} p(x), \qquad A \in \mathcal{B}.
\]
We say that a probability measure µ on (R, B) is continuous, if its pure point mass measure is identically zero, or
\[
p(x) = \mu(\{x\}) = 0, \qquad \text{for all } x \in \mathbf{R}.
\]
Then we define for any A ∈ B the measure
\[
\mu_C(A) \stackrel{\text{def}}{=} \mu(A) - \mu_D(A).
\]
Note that it must hold that $\mu_C(A) \geq 0$ for a measure, so we must, and can, check that $\mu_C$ is a measure. Clearly, $\mu_C$ is a continuous measure.
2.5. APPENDIX: DECOMPOSITIONS OF PROBABILITY MEASURES ON THE REAL LINE 73
Theorem 2.5.3 Every measure µ on (R, B) can be expressed uniquely with an additive decomposition into its continuous part and its discrete part by
\[
\mu = \mu_C + \mu_D. \tag{2.82}
\]
Then it follows that an absolutely continuous measure is a continuous measure. This is plausible, since
\[
\mu(\{x\}) \leq \mu([x-h, x+h]) = \int_{x-h}^{x+h} f(u)\,du \to 0,
\]
as h → 0.
Theorem 2.5.4 For every probability measure µ on (R, B) with density f(x) it holds for almost all x that
\[
\lim_{h\to 0}\frac{1}{2h}\,\mu\bigl([x-h, x+h]\bigr) = f(x). \tag{2.84}
\]
It can be shown that f(x) ≥ 0 and $\int_{-\infty}^{\infty} f(x)\,dx = 1$. Conversely, any function with the last two properties defines an absolutely continuous measure with density f(x).
Theorem 2.5.5 For every probability measure µ on (R, B) it holds for almost all x that
\[
\lim_{h\to 0}\frac{1}{2h}\,\mu\bigl([x-h, x+h]\bigr) = g(x). \tag{2.85}
\]
Proof: is omitted.
Let µ be a probability measure on (R, B) and the corresponding g(x) be defined as in (2.85). By the absolutely continuous part of µ we mean the absolutely continuous measure $\mu_A$ with density g(x). It can be shown that g(x) ≥ 0 and $\int_{-\infty}^{+\infty} g(x)\,dx \leq 1$.
\[
\mu_S = \mu - \mu_A. \tag{2.86}
\]
The proof is omitted. The measure µS is called the singular part of µ. There are trivial examples of singular
measures, like the one that assigns measure zero to every Borel set. One can describe a purely singular
measure µ as follows:
• µS is a continuous measure.
By theorem 2.5.3 we have for any probability measure on (R, B) that µ = µC + µD . By theorem 2.5.6 we have
µC = µA + µS for any continuous measure µC on (R, B). This we summarise in the theorem below.
Theorem 2.5.7 Every probability measure µ on (R, B) can be expressed uniquely with an additive decomposition into its absolutely continuous part, its singular part and its discrete part
\[
\mu = \mu_A + \mu_S + \mu_D. \tag{2.87}
\]
Now we start a journey backwards to the familiar notions in the bulk of this chapter.
The measure µ is uniquely determined by $F_{\mu}(x)$. It follows that $F_{\mu}(x)$ satisfies theorem 1.5.6, and by 5. of theorem 1.5.6 we find
\[
p(x) = F_{\mu}(x) - F_{\mu}(x-),
\]
where p(x) is the point mass function of µ as defined in (2.81). Indeed, by continuity from above of the probability measure µ we get
Let µ be an absolutely continuous measure with the density $f_{\mu}(x)$. Then in view of (2.83)
\[
F_{\mu}(x) = \int_{-\infty}^{x} f_{\mu}(u)\,du, \qquad -\infty < x < \infty.
\]
Theorem 2.5.8 Let µ be a probability measure on (R, B) and let $F_{\mu}(x)$ be its distribution function. Then $\frac{d}{dx}F_{\mu}(x)$ exists for almost all x and
\[
\frac{d}{dx}F_{\mu}(x) = g(x), \tag{2.89}
\]
where g(x) is given in (2.85). In addition, the absolutely continuous part $\mu_A$ of µ is the measure given by the density g(x).
As a consequence of the theorem above we can describe the distribution function Fµ (x) of a singular measure
µ by
Finally,
Theorem 2.5.9 Let µ be a probability measure on (R, B) and let $F_{\mu}(x)$ be its distribution function. Then µ lacks a purely singular part $\mu_S$, if $F_{\mu}(x) = \int_{-\infty}^{x} \frac{d}{du}F_{\mu}(u)\,du$ except for a countable number of points.
This preceding narrative has amounted to the following. Let, as in the proof of theorem 2.5.1, X be a random variable. The probability measure $\mu_X$, which X induces on (R, B), is
\[
\mu_X(A) \stackrel{\text{def}}{=} P(X \in A).
\]
Then by (2.87),
\[
\mu_X = \mu_A^X + \mu_S^X + \mu_D^X. \tag{2.90}
\]
If the parts $\mu_S^X$ and $\mu_D^X$ are missing in (2.90), we have what was in the preceding called X a continuous r.v. If $\mu_A^X$ and $\mu_S^X$ are missing in (2.90), we have what was in the preceding called X a discrete r.v. In addition, if $\mu_S^X$ is missing in (2.90), we could call X a mixed r.v., and such r.v.'s are not much in evidence in these notes and other texts at the same level 11. If $\mu_A^X$ and $\mu_D^X$ are missing in (2.90), the random variable X is called singular. The most famous example of a singular r.v. is the r.v. with a Cantor distribution.
2.6 Exercises
2.6.1 Distribution Functions
1. A stochastic variable X is said to follow the two-parameter Birnbaum-Saunders distribution, we write X ∈ BS(α, β), if its distribution function is
\[
F_X(x) = \begin{cases} \Phi\Bigl(\frac{1}{\alpha}\Bigl(\sqrt{\frac{x}{\beta}} - \sqrt{\frac{\beta}{x}}\Bigr)\Bigr) & \text{if } 0 < x < \infty \\[4pt] 0 & \text{elsewhere.} \end{cases}
\]
The two-parameter Birnbaum-Saunders distribution is a lifetime distribution and has been derived from basic assumptions as a probabilistic generative model of failure times of material specimens.
2. Let X ∈ BS(α, β) (c.f. the exercise above). Let Y = ln(X). Show that the distribution function of Y is
\[
F_Y(y) = \Phi\Bigl(\frac{2}{\alpha}\sinh\Bigl(\frac{y-\mu}{2}\Bigr)\Bigr), \qquad -\infty < y < \infty,
\]
where Φ is the cumulative distribution function of N(0, 1) and where µ = ln(β). This is known as the distribution function of the sinh-normal distribution with parameters α, µ and 2.
11 We would need the Lebesgue-Stieltjes theory of integration to compute, e.g., the expectations and variances of such X.
i.e., there is neither p.d.f. nor p.m.f., is the distribution function of a singular probability measure on the real line. One example of such a distribution function is the Cantor function. We require first the construction of the Cantor set, or more precisely the Cantor ternary set.
One starts by deleting the open middle third E1 = (1/3, 2/3) from the interval [0, 1]. This leaves the union
of two intervals: [0, 1/3] ∪ [2/3, 1]. Next, the open middle third of each of these two remaining intervals
is deleted. The deleted open intervals are E2 = (1/9, 2/9) ∪ (7/9, 8/9) and the remaining closed ones are:
[0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1]. This construction is continued: $E_n$ is the union of the open middle thirds of the intervals that remain after $E_1, E_2, \ldots, E_{n-1}$ have been removed. The Cantor set C contains all points in the interval [0, 1] that are not deleted at any step in this infinite construction, or
\[
C \stackrel{\text{def}}{=} [0,1] \setminus \cup_{i=1}^{\infty} E_i.
\]
It follows that C is uncountable and it has length (= Lebesgue measure) zero, see [91, pp. 41, 81, 138, 168, 309]. Let now $A_1, A_2, \ldots, A_{2^n-1}$ be the subintervals of $\cup_{i=1}^{n} E_i$, ordered from left to right. Then we define
\[
F_n(x) = \begin{cases} 0 & x \leq 0, \\ \frac{k}{2^n} & x \in A_k, \quad k = 1, 2, \ldots, 2^n - 1, \\ 1 & 1 \leq x, \end{cases}
\]
with linear interpolation in between. Draw graphs of $F_n(x)$ for n = 2 and for n = 3 in the same picture.
Show that $F_n(x) \to F(x)$ for every x. The limiting function F(x) is the Cantor function. Then F(x) is continuous and increasing, and F(x) is a distribution function of some random variable according to theorem 1.5.7. $\frac{d}{dx}F(x) = 0$ for almost every x, and F(x) has no p.d.f. Discuss this challenge for understanding continuity and distribution functions with your fellow student12.
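As a computational aside (a sketch added here, not part of the exercise), the Cantor function can be evaluated digit by digit from the ternary expansion of its argument: ternary digit 0 contributes binary digit 0, digit 2 contributes binary digit 1, and the expansion stops at the first ternary digit 1 (a removed middle third, on which F is constant):

```python
def cantor(x, depth=40):
    # Evaluate the Cantor function F(x) from the ternary expansion of x.
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    value, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3
        digit = int(x)
        x -= digit
        if digit == 1:
            return value + scale       # x lies in a removed middle third
        value += scale * (digit // 2)  # ternary 0 -> bit 0, ternary 2 -> bit 1
        scale /= 2
    return value

# e.g. F(1/4) = 1/3, F(3/4) = 2/3, F(1/2) = 1/2
print(cantor(1 / 4), cantor(3 / 4), cantor(0.5))
```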
4. We say that the r.v.'s X and Y are equal in distribution, if $F_X(z) = F_Y(z)$ for all z ∈ R, and write this as
\[
X \stackrel{d}{=} Y.
\]
Note that this is a very special sort of equality, since for the individual outcomes ω, the numbers X(ω) and Y(ω) need not ever be equal.
Let X ∈ N(0, 1). Show that
\[
X \stackrel{d}{=} -X. \tag{2.92}
\]
In this case X(ω) ≠ −X(ω), except when X(ω) = −X(ω) = 0, which has probability zero. In addition, $X \stackrel{d}{=} -X$ means that the distribution of X is symmetric (w.r.t. the origin).
5. Let Z ∈ N(µ, σ²). Let $X = e^{Z}$. Show that the p.d.f. of X is
\[
f_X(x) = \frac{1}{x\sigma\sqrt{2\pi}}\, e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}, \qquad x > 0. \tag{2.93}
\]
This distribution is called the Log-Normal distribution.
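The transformation can be checked against SciPy's log-normal implementation, which parametrizes (2.93) as lognorm(s=σ, scale=e^µ). A Python sketch (an illustration, not part of the exercise):

```python
import numpy as np
from scipy.stats import lognorm

mu, sigma = 0.5, 0.8
rng = np.random.default_rng(6)

# simulate X = exp(Z) for Z ~ N(mu, sigma^2)
x = np.exp(rng.normal(mu, sigma, 200_000))

# scipy parametrizes (2.93) as lognorm(s=sigma, scale=exp(mu))
dist = lognorm(s=sigma, scale=np.exp(mu))
for q in (0.5, 1.0, 2.0, 4.0):
    print(q, (x <= q).mean(), round(dist.cdf(q), 4))
```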
The Hermite polynomials $He_n(x)$, $n = 0, 1, 2, \ldots$, are in [96, p. 273], [3, pp. 204−209] or [92, pp. 266−267] defined by the Rodrigues formula
\[
He_n(x) = (-1)^n e^{x^2}\frac{d^n}{dx^n} e^{-x^2}.
\]
This gives $He_0(x) = 1$, $He_1(x) = 2x$, $He_2(x) = 4x^2 - 2$, $He_3(x) = 8x^3 - 12x$, etc. These polynomials are known as the physicist's Hermite polynomials14. We shall use another definition, to be given next.
In probability theory [24, p. 133], however, one prefers to work with the probabilist's Hermite polynomials defined by
\[
H_n(x) = (-1)^n e^{x^2/2}\frac{d^n}{dx^n} e^{-x^2/2}. \tag{2.97}
\]
13 Carl Vilhelm Ludwig Charlier (1862−1934) was Professor of Astronomy at Lund University. He is also known for the Charlier-
Poisson polynomials.
14 Indeed, see p. 10 of Formelsamling i Fysik, Institutionen för teoretisk fysik, KTH, 2006
http://courses.theophys.kth.se/SI1161/formelsamling.pdf
2.6. EXERCISES 79
Figure 2.3: The p.d.f. of X ∈ HypSech and the p.d.f. of X ∈ N (0, 1) (the thicker function plot).
One can in addition define a system of multivariate Hermite polynomials, see [95, p. 87]. The Hermite polynomials, as given by (2.97), have the orthogonality property
\[
\int_{-\infty}^{\infty} e^{-x^2/2} H_n(x) H_m(x)\,dx = 0, \qquad n \neq m, \tag{2.98}
\]
and for n = m,
\[
\int_{-\infty}^{\infty} e^{-x^2/2} \bigl(H_n(x)\bigr)^2\,dx = n!\,\sqrt{2\pi}. \tag{2.99}
\]
We can now explain the rationale behind the probabilist's definition of Hermite polynomials. Let now X ∈ N(0, 1). Then, by (2.98), if n ≠ m, and the law of the unconscious statistician (2.4) we have
\[
E[H_n(X)\,H_m(X)] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-x^2/2} H_n(x) H_m(x)\,dx = 0, \tag{2.100}
\]
and by (2.99)
\[
E\bigl[H_n^2(X)\bigr] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-x^2/2} \bigl(H_n(x)\bigr)^2\,dx = n!. \tag{2.101}
\]
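The orthogonality relations (2.98)-(2.99) can be verified numerically. NumPy's hermite_e module implements exactly the probabilist's polynomials of (2.97) (denoted $H_n$ in these notes, often $He_n$ elsewhere), and its Gauss quadrature uses the weight $e^{-x^2/2}$; a sketch (not part of the notes):

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss quadrature nodes/weights for the weight exp(-x^2/2) on (-inf, inf)
x, w = He.hermegauss(60)

def H(n, t):
    # probabilist's Hermite polynomial of degree n, as in (2.97)
    return He.hermeval(t, [0] * n + [1])

print(np.sum(w * H(3, x) * H(5, x)))          # orthogonality, c.f. (2.98)
print(np.sum(w * H(4, x) ** 2),               # norm, c.f. (2.99)
      math.factorial(4) * math.sqrt(2 * math.pi))
```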
The technical idea of a Gram-Charlier expansion is as follows, [24, pp. 222−223]. Let $f_X(x)$ be a p.d.f. We consider a symbolic expansion of the form
\[
f_X(x) \leftrightarrow c_0\varphi(x) + \frac{c_1}{1!}\varphi^{(1)}(x) + \frac{c_2}{2!}\varphi^{(2)}(x) + \ldots, \tag{2.102}
\]
where φ(x) is the p.d.f. of N(0, 1) in (2.15) and $\varphi^{(n)}(x) = \frac{d^n}{dx^n}\varphi(x)$. The expansion has the attribute 'symbolic', as we are not assured of convergence.
In view of (2.97) we have
\[
\varphi^{(n)}(x) = (-1)^n \varphi(x) H_n(x). \tag{2.103}
\]
Thus the right hand side of (2.102) is an expansion in terms of orthogonal polynomials of the type (2.98) and (2.99)15. Then we can determine the coefficients $c_n$ by multiplying both sides of (2.102) with $H_n(x)$ and then integrating. The expressions (2.98), (2.99) and (2.103) give
\[
c_n = (-1)^n \int_{-\infty}^{\infty} f_X(x) H_n(x)\,dx. \tag{2.104}
\]
We set
\[
\widehat{f}_n(x) = \sum_{k=0}^{n} \frac{c_k}{k!}\varphi^{(k)}(x). \tag{2.105}
\]
As stated above, we do not claim in general the convergence of $\widehat{f}_n(x)$ to $f_X(x)$ (or to anything), as n → ∞. Even when there is convergence, the speed of convergence can be very slow. But this does not matter for us here. We are interested in an expression like $\widehat{f}_4(x)$ giving us information about the 'near Gaussianness' of $f_X(x)$.
9. Exponential Growth Observed at a Random Time, or a Generative Model for the Pareto Distribution. Let us consider the deterministic (i.e., no random variables involved) exponential growth
\[
x(t) = e^{\mu t}, \qquad t \geq 0,\; \mu > 0.
\]
We stop, or kill, the growth at an exponentially distributed time T ∈ Exp(1/ν). Then we observe the state of the growth at the random time of stopping, or at random age, which is $X = x(T) = e^{\mu T}$. Show that $X \in \mathrm{Pa}\bigl(1, \frac{\nu}{\mu}\bigr)$.
We have here a simple generative model for one of the continuous Pareto distributions in (2.36). Aid: Note that since µ > 0 and T ∈ Exp(1/ν), we have P(X ≤ 1) = 0.
15 The proper expansion in terms of Hermite polynomials is stated in theorem 9.7.1, but this is not Charlier's concept.
10. Prove the law of the unconscious statistician (2.4), when H(x) is strictly monotonous, by means of (2.65).
2. Prove that
\[
P(a < X \leq b,\; c < Y \leq d) = F_{X,Y}(b, d) - F_{X,Y}(a, d) - F_{X,Y}(b, c) + F_{X,Y}(a, c).
\]
Technical Drill
2.1 The four r.v.'s W, X, Y and Z have the joint p.d.f.
\[
f_{W,X,Y,Z}(w, x, y, z) = 16wxyz, \qquad 0 < w < 1,\; 0 < x < 1,\; 0 < y < 1,\; 0 < z < 1.
\]
Find $P\bigl(0 < W \leq \frac{1}{2},\; \frac{1}{2} < X \leq 1\bigr)$. Answer: $\frac{3}{16}$.
3. (From [28]) The continuous bivariate random variable (X, Y) has the p.d.f.
\[
f_{X,Y}(x,y) = \begin{cases} x e^{-x(1+y)} & x \geq 0,\; y \geq 0, \\ 0 & \text{elsewhere.} \end{cases}
\]
(a) Find the marginal p.d.f.'s of X and Y. Are X and Y independent? Answers: $f_X(x) = e^{-x}$, $x \geq 0$, $f_Y(y) = \frac{1}{(1+y)^2}$, $y \geq 0$. No.
(b) What is the probability that at least one of X and Y exceeds a > 0? Aid: Consider P({X ≥ a} ∪ {Y ≥ a}) and switch over to the complementary probability using De Morgan's rules.
Answer: $e^{-a} + \frac{1}{1+a} - \frac{1}{1+a}e^{-a(1+a)}$.
\[
U = \frac{X}{Y}.
\]
Show that U ∈ C(0, 1). Aid: The result (2.73) should be useful here.
5. X ≥ 0 and Y ≥ 0 are independent continuous random variables with probability densities $f_X(x)$ and $f_Y(y)$, respectively. Show that the p.d.f. of their product XY is
\[
f_{XY}(x) = \int_{0}^{\infty} \frac{1}{y}\, f_X\Bigl(\frac{x}{y}\Bigr) f_Y(y)\,dy = \int_{0}^{\infty} \frac{1}{y}\, f_X(y) f_Y\Bigl(\frac{x}{y}\Bigr)\,dy. \tag{2.108}
\]
Technical Drills
6. X and Y are independent random variables with p.d.f.'s $f_X(x)$ and $f_Y(y)$, respectively. Show that their sum Z = X + Y has the p.d.f.
\[
f_Z(z) = \int_{-\infty}^{\infty} f_X(x) f_Y(z-x)\,dx = \int_{-\infty}^{\infty} f_Y(y) f_X(z-y)\,dy. \tag{2.110}
\]
The integrals in the right hand side are known as convolutions of $f_X$ and $f_Y$. A convolution sum is valid for the probability mass function of a sum of two independent discrete random variables.
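The convolution formula (2.110) can be illustrated numerically: for two Exp(1) densities the convolution should give the Γ(2, 1) density $z e^{-z}$. A Python sketch (not part of the exercise) using a discrete Riemann-sum convolution:

```python
import numpy as np

# f_Z = f_X * f_Y for X, Y i.i.d. Exp(1); by (2.110) this is the
# Gamma(2, 1) density z * exp(-z)
dz = 0.001
grid = np.arange(0, 20, dz)
f = np.exp(-grid)                 # Exp(1) density sampled on the grid
conv = np.convolve(f, f) * dz     # discrete approximation of the integral

for z in (0.5, 1.0, 2.0, 5.0):
    i = int(z / dz)
    print(z, conv[i], z * np.exp(-z))
```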
7. X ∈ Exp(1/λ) and Y ∈ Exp(1/µ) are independent, λ > 0, µ > 0. Let $Z \stackrel{\text{def}}{=} X - Y$.
(a) Show that $E[Z] = \frac{1}{\lambda} - \frac{1}{\mu}$, and $\operatorname{Var}[Z] = \frac{1}{\lambda^2} + \frac{1}{\mu^2}$.
(c) In probabilistic reliability theory of structures, [32], X would denote the random stress resulting in a bar of constant cross section subjected to an axial random force. Y would denote the resistance, the allowable stress, which is also random. Then R, the reliability of the structure, is
\[
R \stackrel{\text{def}}{=} P(X \leq Y).
\]
Show that
\[
R = \frac{\lambda}{\lambda+\mu}.
\]
(d) If λ = µ, which known distribution is obtained for Z?
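The reliability formula in (c) is easy to check by simulation; a Python sketch (an illustration, not part of the exercise), recalling that Exp(1/λ) in the notation of these notes has mean 1/λ:

```python
import numpy as np

lam, mu = 2.0, 3.0
rng = np.random.default_rng(7)

# Exp(1/lam) has mean 1/lam, i.e. rate lam; numpy takes the mean as scale
X = rng.exponential(1 / lam, 500_000)   # stress
Y = rng.exponential(1 / mu, 500_000)    # resistance

print((X <= Y).mean(), lam / (lam + mu))
```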
8. (From [6]) (X, Y) is a continuous bivariate r.v., and its joint p.d.f. is
\[
f_{X,Y}(x,y) = \frac{6}{7}x, \qquad 0 \leq x,\; 0 \leq y,\; 1 \leq x + y \leq 2.
\]
Find the marginal p.d.f.'s $f_X(x)$ and $f_Y(y)$. Answer:
\[
f_X(x) = \begin{cases} \frac{6}{7}x & 0 \leq x \leq 1 \\[2pt] \frac{12}{7}x - \frac{6}{7}x^2 & 1 \leq x \leq 2, \end{cases}
\qquad
f_Y(y) = \begin{cases} \frac{9}{7} - \frac{6}{7}y & 0 \leq y \leq 1 \\[2pt] \frac{3}{7}(2-y)^2 & 1 \leq y \leq 2. \end{cases}
\]
(b) Show that if c = 0, then X and Y are independent, and that if c > 0, X and Y are not independent.
11. (X, Y) is a continuous bivariate r.v., and its joint p.d.f. is
\[
f_{X,Y}(x,y) = \frac{1}{\pi^2(1+x^2)(1+y^2)}, \qquad -\infty < x < \infty,\; -\infty < y < \infty.
\]
(a) Find the distribution function $F_{X,Y}(x, y)$. Aid: Plain to see.
(b) Find the marginal p.d.f.'s $f_X(x)$ and $f_Y(y)$.
12. The continuous bivariate random variable (X, Y ) has the p.d.f.
−y
e 0≤x≤y
fX,Y (x, y) = (2.112)
0 elsewhere.
(a) Find the marginal p.d.f.'s of X and Y. Are X and Y independent? Answers: f_X(x) = e^{−x}, x > 0 and = 0 elsewhere, f_Y(y) = y e^{−y}, y > 0, and = 0 elsewhere.
(b) Show that X/Y and Y are independent.
(c) Give a generative model for (X, Y ). Aid: Note that Y ∈ Γ(2, 1).
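The generative model asked for in (c) can be probed numerically; the following sketch (ours, not part of the notes) draws Y ∈ Γ(2, 1) as a sum of two Exp(1) variables and then X | Y = y ∈ U(0, y), and checks the claimed marginal f_X(x) = e^{−x} via P(X > 1) ≈ e^{−1}:

```python
import random, math

random.seed(2)
n = 200000
count_x_gt_1 = 0
for _ in range(n):
    y = random.expovariate(1.0) + random.expovariate(1.0)  # Y ∈ Γ(2, 1)
    x = random.uniform(0.0, y)                             # X | Y = y ∈ U(0, y)
    if x > 1.0:
        count_x_gt_1 += 1
p_mc = count_x_gt_1 / n
p_exact = math.exp(-1.0)  # if the marginal is f_X(x) = e^{-x}, then P(X > 1) = e^{-1}
print(p_mc, p_exact)
```

Indeed f_Y(y) f_{X|Y}(x | y) = y e^{−y} · (1/y) = e^{−y} on 0 ≤ x ≤ y, which is (2.112).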
13. The t-distribution X ∈ N(0, 1), Y ∈ χ^2(n). X and Y are independent. Show that
\frac{X}{\sqrt{Y/n}} ∈ t(n).    (2.113)
14. The F-distribution Let X_1 ∈ χ^2(f_1), X_2 ∈ χ^2(f_2). X_1 and X_2 are independent. Consider the ratio
U \stackrel{\text{def}}{=} \frac{X_1 / f_1}{X_2 / f_2}.
Show that
f_U(u) = \frac{Γ((f_1 + f_2)/2)}{Γ(f_1/2)\, Γ(f_2/2)} \left(\frac{f_1}{f_2}\right)^{f_1/2} \frac{u^{f_1/2 − 1}}{(1 + f_1 u / f_2)^{(f_1 + f_2)/2}},    u > 0.
This is the p.d.f. of what is known as the F-distribution or Fisher-Snedecor distribution. The distribution is important in the analysis of variance and econometrics (F-test). Aid: You need the technique of an auxiliary variable; take V = X_2. Then consider (U, V) as a transformation of (X_1, X_2). The Jacobian of the transformation is J = \frac{f_1}{f_2} V. Find the joint p.d.f. f_{(U,V)}(u, v), and marginalize to get f_U(u).
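A quick simulation of the ratio U (a sketch of ours; the degrees of freedom f_1 = 5, f_2 = 10 are arbitrary) reproduces the known mean f_2/(f_2 − 2) of the F(f_1, f_2) distribution, valid for f_2 > 2:

```python
import random

random.seed(3)

def chi2(df):
    # χ²(df) simulated as a sum of df squared standard normals
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))

f1, f2 = 5, 10
n = 100000
total = 0.0
for _ in range(n):
    u = (chi2(f1) / f1) / (chi2(f2) / f2)  # the ratio U of the exercise
    total += u
mean_mc = total / n
mean_exact = f2 / (f2 - 2)  # known mean of F(f1, f2) for f2 > 2
print(mean_mc, mean_exact)
```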
16. (From [49]) X_1 and X_2 are independent and have the common p.d.f.
f_X(x) = \begin{cases} 4x^3 & 0 ≤ x ≤ 1 \\ 0 & \text{elsewhere.} \end{cases}
Set Y_1 = X_1 \sqrt{X_2}, Y_2 = X_2 \sqrt{X_1}. Find the joint p.d.f. of (Y_1, Y_2). Are Y_1 and Y_2 independent? Answer:
f_{Y_1,Y_2}(y_1, y_2) = \begin{cases} \frac{64}{3} (y_1 y_2)^{5/3} & 0 < y_1^2 < y_2 < \sqrt{y_1} < 1 \\ 0 & \text{elsewhere.} \end{cases}
Show that
(a) f_{X+Y}(u) = \frac{2u}{(1 + u)^3},    0 < u.
(b) f_{X−Y}(v) = \frac{1}{2(1 + |v|)^2},    −∞ < v < ∞.
18. In this exercise we study the bivariate Bernoulli distribution in example 2.3.16.
19. (X, Y) is a discrete bivariate r.v., such that their joint p.m.f. is
p_{X,Y}(j, k) = c\, \frac{(j + k) a^{j+k}}{j!\, k!},    j, k = 0, 1, 2, . . . ,
where a > 0.
(a) Determine c. Answer: c = \frac{e^{−2a}}{2a}.
(b) Find the marginal p.m.f. p_X(j). Answer: p_X(0) = \frac{e^{−a}}{2}, and p_X(j) = c\, \frac{a^j}{j!}\, e^{a} (j + a) for j ≥ 1.
(c) Find P(X + Y = r). Answer: P(X + Y = r) = c\, \frac{(2a)^r}{(r − 1)!}, r ≥ 1, and P(X + Y = 0) = 0.
(d) Find E[X]. Answer: E[X] = a + \frac{1}{2}.
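The normalizing constant and the formula in (c) can be verified by truncating the double sum numerically (a sketch of ours; the value a = 0.7 is arbitrary and the truncation at 60 terms is far beyond machine precision here):

```python
import math

a = 0.7
# truncated double sum of (j + k) a^(j + k) / (j! k!) over j, k ≥ 0
S = 0.0
for j in range(60):
    for k in range(60):
        S += (j + k) * a ** (j + k) / (math.factorial(j) * math.factorial(k))
c_numeric = 1.0 / S
c_formula = math.exp(-2 * a) / (2 * a)  # the claimed constant c = e^{-2a}/(2a)
print(c_numeric, c_formula)

# check P(X + Y = r) = c (2a)^r / (r - 1)! by direct summation at r = 3
r = 3
p_direct = sum(c_formula * r * a ** r / (math.factorial(j) * math.factorial(r - j))
               for j in range(r + 1))
p_formula = c_formula * (2 * a) ** r / math.factorial(r - 1)
print(p_direct, p_formula)
```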
20. Let X_1 ∈ Γ(a_1, b) and X_2 ∈ Γ(a_2, b) be independent. Show that \frac{X_1}{X_2} and X_1 + X_2 are independent.
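A Monte Carlo sketch (ours; a_1 = 3, a_2 = 5, b = 2 are arbitrary, with a_2 large enough that the ratio has finite variance) is consistent with the independence claim in exercise 20: the sample correlation between X_1/X_2 and X_1 + X_2 is close to zero, which does not prove independence but would detect a linear dependence:

```python
import random, math

random.seed(4)
a1, a2, b = 3.0, 5.0, 2.0
n = 100000
ratios, sums = [], []
for _ in range(n):
    x1 = random.gammavariate(a1, b)  # Γ(a1, b); the shared scale b is essential
    x2 = random.gammavariate(a2, b)
    ratios.append(x1 / x2)
    sums.append(x1 + x2)

def corr(u, v):
    m = len(u)
    mu_u = sum(u) / m
    mu_v = sum(v) / m
    cov = sum((ui - mu_u) * (vi - mu_v) for ui, vi in zip(u, v)) / m
    sd_u = math.sqrt(sum((ui - mu_u) ** 2 for ui in u) / m)
    sd_v = math.sqrt(sum((vi - mu_v) ** 2 for vi in v) / m)
    return cov / (sd_u * sd_v)

rho = corr(ratios, sums)
print(rho)  # close to 0
```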
f_R(r) = \frac{r}{σ^2}\, e^{−\frac{r^2 + v^2}{2σ^2}}\, I_0\!\left(\frac{rv}{σ^2}\right),    r ≥ 0,    (2.114)
where I_0(x) is a modified Bessel function of the first kind with order 0. The distribution in this exercise is known as the Rice distribution. We write
R ∈ Rice(v, σ).
The Rice distribution of R is the distribution of the envelope of the narrowband Gaussian noise [3, section 8.3.1.]. The ratio \frac{v^2}{2σ^2} is known as the signal-to-noise ratio (SNR).
23. The Marcum Q-function^{16} is a special function important in communications engineering and radar detection and is defined as
Q_m(v, b) \stackrel{\text{def}}{=} \frac{1}{v^{m−1}} \int_b^{\infty} r^m e^{−\frac{r^2 + v^2}{2}} I_{m−1}(rv)\, dr,    (2.115)
where I_{m−1}(z) is a modified Bessel function of the first kind with order m − 1.
Q_m(v, b) = e^{−\frac{b^2 + v^2}{2}} \sum_{k=1−m}^{\infty} \left(\frac{v}{b}\right)^k I_k(vb).    (2.116)
Show that
F_R(r) = 1 − Q_1\!\left(\frac{v}{σ}, \frac{r}{σ}\right).
This is a useful statement, since there are effective algorithms for numerical computation of the Marcum Q-function.
(c) Let R_i ∈ Rice(v_i, σ_i) for i = 1, 2, be independent. Show that
P(R_2 > R_1) = Q_1(\sqrt{α}, \sqrt{β}) − \frac{ν^2}{1 + ν^2}\, e^{−\frac{α + β}{2}}\, I_0(\sqrt{αβ}),
where α = \frac{v_2^2}{σ_1^2 + σ_2^2}, β = \frac{v_1^2}{σ_1^2 + σ_2^2} and ν = \frac{σ_1}{σ_2}.
24. Marcum Q-function and the Poisson distribution This exercise is found in the technical report in the footnote above. The results are instrumental for computation of Q_1(v/σ, r/σ). Let X ∈ Po(λ), Y ∈ Po(λ), where X and Y are independent.
and associated radar detection probabilities. Australian Government. Department of Defence. Defence Science and Technology Organisation. DSTO-RR-0304 (approved for public release), 2005.
which can be established by means of the appropriate generating function of modified Bessel functions of the first kind.
Then in view of (2.117) and (2.116) we obtain that
Q_1(\sqrt{2λ}, \sqrt{2λ}) = \frac{1}{2}\left(1 + e^{−2λ} I_0(2λ)\right).
25. (From [14]) X1 , X2 , . . . , Xn are independent and identically distributed random variables with the distri-
bution function F (x) and p.d.f. f (x). We consider the range R = R (X1 , X2 , . . . , Xn ) defined by
R \stackrel{\text{def}}{=} \max_{1≤i≤n} X_i − \min_{1≤i≤n} X_i.
This is a function of the n r.v.’s that equals the distance between the largest and the smallest. The text
[51, ch. 12] discusses the range as applied in control charts of statistical quality engineering.
The task here is to show that the probability distribution function of R is
F_R(x) = n \int_{-\infty}^{\infty} [F(t + x) − F(t)]^{n−1} f(t)\, dt.    (2.118)
In general, FR (x) cannot be evaluated in a closed form and is computed by numerical quadratures. Next
we find the formula in (2.118) by the sequence of steps (a)-(d).
(a) Set Z \stackrel{\text{def}}{=} \max_{1≤i≤n} X_i and Y \stackrel{\text{def}}{=} \min_{1≤i≤n} X_i. Let
F(y, z) = P(Y ≤ y, Z ≤ z),    \frac{∂^2}{∂y∂z} F(y, z) = f(y, z).
Now show that
F_R(x) = \int_{-\infty}^{\infty} \frac{∂}{∂y} F(y, z) \Big|_{z=y}^{z=y+x} dy.    (2.119)
(b) Show next that
P(Y ≥ y, Z ≤ z) = [F(z) − F(y)]^n.    (2.120)
(c) Show next using (2.120) that
F(y, z) = P(Y ≤ y, Z ≤ z) = P(Z ≤ z) − [F(z) − F(y)]^n.    (2.121)
(a) It needs first to be checked that the p.d.f. of X ∈ SN(λ) as given in (2.22) is in fact a p.d.f. The serious challenge is to show that
\int_{-\infty}^{\infty} f_X(x)\, dx = 1 for all λ.
Note that the chosen notation hides the fact that f_X(x) is also a function of λ. Aid: Define Ψ(λ) \stackrel{\text{def}}{=} \int_{-\infty}^{\infty} f_X(x)\, dx. Then we have Ψ(0) = 1 and \frac{d}{dλ} Ψ(λ) = 0 for all λ (check this), and thus the claim is proved.
(b) Show that
E[X] = \sqrt{\frac{2}{π}}\, \frac{λ}{\sqrt{1 + λ^2}}.
Aid: Introduce the auxiliary function Ψ(λ) \stackrel{\text{def}}{=} \int_{-\infty}^{\infty} x f_X(x)\, dx and find that
\frac{d}{dλ} Ψ(λ) = \sqrt{\frac{2}{π}}\, \frac{1}{(1 + λ^2)^{3/2}}.
Then
E[X] = \int \sqrt{\frac{2}{π}}\, \frac{1}{(1 + λ^2)^{3/2}}\, dλ + C,
and the constant of integration C can be determined from Ψ(0).
(c) Show that
Var[X] = 1 − \frac{2}{π}\, \frac{λ^2}{1 + λ^2}.
Aid: Use Steiner's formula (2.6) and the fact that X^2 ∈ χ^2(1).
5. Chebychev's inequality Let X_1, X_2, . . . , X_n be independent r.v.'s, identically distributed with X_i ∈ U(−1, 1). Set \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i. Use the Chebychev inequality (1.27) to estimate how large n should be so that we have
P(|\bar{X}| > 0.05) ≤ 0.05.
Answer: n ≥ 2667.
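The bound behind the answer can be computed directly: Var[X_i] = (b − a)^2/12 = 1/3 for U(−1, 1), so Chebychev gives P(|\bar{X}| > ε) ≤ Var[\bar{X}]/ε^2 = 1/(3nε^2), and this is ≤ 0.05 exactly when n ≥ 1/(3 · 0.05 · 0.05^2). A two-line check:

```python
import math

var_xi = (1 - (-1)) ** 2 / 12          # Var of U(-1, 1) is (b - a)^2 / 12 = 1/3
eps, delta = 0.05, 0.05
# Chebychev: P(|Xbar| > eps) <= var_xi / (n eps^2) <= delta
n_min = math.ceil(var_xi / (eps ** 2 * delta))
print(n_min)  # 2667
```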
6. |Coefficient of Correlation| ≤ 1 The coefficient of correlation is defined in (2.45). The topic of this
exercise is to show that (2.46), i.e., |ρX,Y | ≤ 1 holds true.
(a) Let now X and Y be two r.v.'s, dependent or not. Assume that E[X] = E[Y] = 0 and Var[X] = Var[Y] = 1. Show that E[XY] ≤ 1. Aid: Since (X − Y)^2 ≥ 0, we get that E[(X − Y)^2] ≥ 0. Expand (X − Y)^2 to show the claim.
(b) The r.v.'s are as in (a). Show that E[XY] ≥ −1. Aid: Consider (X + Y)^2, and apply steps of argument similar to the one in case (a).
(c) We conclude by (a) and (b) that |E[XY]| ≤ 1 under the conditions there. Let now X and Y be two r.v.'s, independent or dependent. Assume that E[X] = µ_X and E[Y] = µ_Y and Var[X] = σ_X^2, Var[Y] = σ_Y^2. Set Z_1 = \frac{X − µ_X}{σ_X}, and Z_2 = \frac{Y − µ_Y}{σ_Y}. Now prove that |ρ_{X,Y}| ≤ 1 by applying the conclusion above to Z_1 and Z_2.
7. When is the Coefficient of Correlation = ±1 ? Show for the coefficient of correlation ρX,Y as
defined in (2.45) that
ρX,Y = ±1 ⇔ Y = aX + b.
2. Let X ∈ Exp(1/λ). Invoking again the integer part (2.122) we set
L_m = \left\lfloor \frac{X}{m} \right\rfloor
and
R_m = X − m · L_m.
Show that Lm and Rm are independent r.v.’s. Determine even the marginal distributions of Lm and Rm .
3. X ∈ Exp(1/λ), and
D = X − ⌊X⌋.
D is the fractional part of X, as ⌊X⌋ is the integer part of X. Show that the p.d.f of D is
f_D(d) = \begin{cases} \frac{λ e^{−λd}}{1 − e^{−λ}} & 0 < d < 1 \\ 0 & \text{elsewhere.} \end{cases}
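The density of the fractional part D is a truncated exponential, and its distribution function F_D(d) = (1 − e^{−λd})/(1 − e^{−λ}) can be checked by simulation (a sketch of ours; λ = 1.3 and the test point d = 0.5 are arbitrary):

```python
import random, math

random.seed(5)
lam = 1.3
n = 200000
hits = sum(1 for _ in range(n)
           if (random.expovariate(lam) % 1.0) < 0.5)  # D = X - floor(X)
p_mc = hits / n
p_exact = (1 - math.exp(-lam * 0.5)) / (1 - math.exp(-lam))  # F_D(0.5) from f_D
print(p_mc, p_exact)
```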
4. Let X1 , X2 , . . . , Xn be I.I.D. random variables under a continuous probability distribution with the dis-
tribution function FX (x). Let θ be the median of the distribution, i.e., a number such that
\frac{1}{2} = F_X(θ).
Find the probability distribution of the number of the variables X1 , X2 , . . . , Xn that are less than θ.
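Since each X_i is below θ independently with probability F_X(θ) = 1/2, the count is Bin(n, 1/2); a Monte Carlo sketch (ours, with Exp(1) and n = 7 chosen arbitrarily) confirms this against the binomial p.m.f. at one point:

```python
import random, math

random.seed(6)
n, trials = 7, 100000
theta = math.log(2.0)   # median of Exp(1): F(θ) = 1 - e^{-θ} = 1/2
counts = [0] * (n + 1)
for _ in range(trials):
    below = sum(1 for _ in range(n) if random.expovariate(1.0) < theta)
    counts[below] += 1
k = 3
p_mc = counts[k] / trials
p_binom = math.comb(n, k) * 0.5 ** n  # Bin(7, 1/2) p.m.f. at k = 3, i.e. 35/128
print(p_mc, p_binom)
```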
5. Chen's Lemma X ∈ Po(λ). H(x) is a locally bounded Borel function. Show that
E[X H(X)] = λ E[H(X + 1)].    (2.123)
Chen's lemma is found, e.g., in [9]. The cited reference develops a whole theory of Poisson approximation as a consequence of (2.123).
7. Skewness and Kurtosis of Poisson r.v.’s Recall again (2.95) and (2.96). Show that if X ∈ Po(λ),
λ > 0, then
(a)
κ_1 = \frac{1}{\sqrt{λ}},
(b)
κ_2 = 3 + \frac{1}{λ}.
8. X ∈ Po(λ), λ > 0. Find
E\left[\frac{1}{1 + X}\right].
Answer: \frac{1}{λ}\left(1 − e^{−λ}\right). Why can we not compute E\left[\frac{1}{X}\right]?
9. Let X_1, X_2, . . . , X_n be I.I.D. and positive r.v.'s. Show that for any k ≤ n
E\left[\frac{X_1 + X_2 + . . . + X_k}{X_1 + X_2 + . . . + X_n}\right] = \frac{k}{n}.
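The exchangeability argument behind the k/n answer does not depend on the distribution; a simulation sketch (ours, with Exp(1) variables and n = 5, k = 2 chosen arbitrarily):

```python
import random

random.seed(7)
n, k, trials = 5, 2, 200000
total = 0.0
for _ in range(trials):
    xs = [random.expovariate(1.0) for _ in range(n)]  # any positive I.I.D. law works
    total += sum(xs[:k]) / sum(xs)
mean_mc = total / trials
print(mean_mc, k / n)  # exchangeability gives k/n regardless of the distribution
```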
12. Mill's inequality and Chebychev's Inequality Let X_1, . . . , X_n be I.I.D. and ∈ N(0, 1). Set \bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i. Use Mill's inequality (2.125) to find an upper bound for P(|\bar{X}| > t) and make a comparison with the bound by Chebychev's Inequality. Aid: The reader is assumed to know that \bar{X} ∈ N(0, \frac{1}{n}).
Chapter 3
Conditional Probability and Expectation w.r.t. a Sigma Field
3.1 Introduction
Conditional probability and conditional expectation are fundamental in probability and random processes in
the sense that all probabilities referring to the real world are necessarily conditional on the information at hand.
The notion of conditioning does not seem to play any significant role in general measure and integration theory,
as developed by pure mathematicians, who need not necessarily regard applied processing of information as a
concern of their theoretical work.
The conditional probability is in a standard fashion introduced as
P(A | B) \stackrel{\text{def}}{=} \frac{P(A ∩ B)}{P(B)},    (3.1)
which is called the conditional probability of the event A given the event B, if P(B) > 0. P (B | A) is defined
analogously. We shall in this chapter expand the mathematical understanding of the concepts inherent in the
definition of conditional probability of an event in (3.1).
92 CHAPTER 3. CONDITIONAL PROBABILITY AND EXPECTATION W.R.T. A SIGMA FIELD
In the sequel we shall use a special symbolic notation for conditional distributions P (Y ≤ y | X = x)
using the earlier distribution codes. For example, suppose that for any x > 0
f_{Y|X=x}(y) = \begin{cases} \frac{1}{x} e^{−y/x} & 0 ≤ y \\ 0 & \text{elsewhere.} \end{cases}
Our calculus should be compatible with (3.1). One difficulty is that the event {X = x} has the probability = 0,
since X is a continuous random variable. We can think heuristically that the conditioning event {X = x} is
more or less {x ≤ X ≤ x + dx} for an infinitesimal dx. We obtain (c.f., (3.1))
and thus
f_{Y|X=x}(y) = \frac{∂}{∂y} F_{Y|X=x}(y) = \frac{\frac{∂^2}{∂x∂y} F_{X,Y}(x, y)}{f_X(x)} = \frac{f_{X,Y}(x, y)}{f_X(x)}.
In the same way as in (3.2) we can write
p_{Y|X=x_k}(y_j) = \frac{p_{X,Y}(x_k, y_j)}{p_X(x_k)}    for j = 0, 1, 2, . . . .
Example 3.2.1 The operation in (3.4) is used in Bayesian statistics and elsewhere in the following way. Let
fY |Θ=θ (y) be a probability density with the parameter θ, which is regarded as an outcome of the r.v. Θ. Then
fΘ (θ) is the prior p.d.f. of Θ.
To illustrate the idea precisely, think here of, e.g., the r.v. Y | Θ = θ ∈ Exp(θ), with the p.d.f. f_{Y|Θ=θ}(y) = \frac{1}{θ} e^{−y/θ}, where θ is an outcome of Θ, which is a positive r.v. with, e.g., Θ ∈ Exp(λ).
The marginalization
f_Y(y) = \int_{-\infty}^{\infty} f_{Y|Θ=θ}(y) f_Θ(θ)\, dθ    (3.5)
defines a new p.d.f. f_Y(y) (sometimes known as a predictive p.d.f.), which may depend on the so called hyperparameters (like λ in the preceding discussion) from f_Θ(θ). Following the rules for conditional densities we obtain also
f_{Θ|Y=y}(θ) = \frac{f_{Θ,Y}(θ, y)}{f_Y(y)}
3.2. CONDITIONAL PROBABILITY DENSITIES AND CONDITIONAL EXPECTATIONS 93
and furthermore
f_{Θ|Y=y}(θ) = \frac{f_{Y|Θ=θ}(y) f_Θ(θ)}{\int_{-\infty}^{\infty} f_{Y|Θ=θ}(y) f_Θ(θ)\, dθ}.    (3.6)
This is Bayes’ rule (with p.d.f.’s), and constitutes an expression for the posterior p.d.f of Θ given Y = y.
In view of the preceding we define quite naturally the conditional expectation for Y given X = xk by
E(Y | X = x_k) \stackrel{\text{def}}{=} \sum_{j=-\infty}^{\infty} y_j · p_{Y|X=x_k}(y_j).
Proof We deal with the continuous case. The conditional expectation E(Y | X = x) is a function of x, which we denote by H(x). Then we have the random variable H(X) = E(Y | X) for some (what has to be a Borel) function H. We have by the law of the unconscious statistician (2.4) that
E[H(X)] = \int_{-\infty}^{\infty} H(x) f_X(x)\, dx = \int_{-\infty}^{\infty} E(Y | X = x) f_X(x)\, dx,
where µY |X=x = E(Y | X = x). In addition Var(Y | X = x) = H(x) for some Borel function H(x). There is
no rule equally simple as (3.9) for variances, but we have the following theorem.
The law of total variance above will be proved by means of (3.46) in the exercises to this chapter. We shall next
develop a deeper and more abstract (and at the same time more expedient) theory of conditional expectation
(and probability) that relieves us from heuristics of the type ’{x ≤ X ≤ x + dx} for an infinitesimal dx’, and
yields the results above as special cases. We start with the simplest case, where we condition w.r.t. an event.
Definition 3.3.1 Let X be a random variable with E[|X|] < ∞ and let A ∈ F be such that P(A) > 0. The conditional expectation of X given A is defined by
E[X | A] = \frac{1}{P(A)} \int_A X\, dP.    (3.11)
Then χA is a random variable on (Ω, F , P). We take B ∈ F with P(B) > 0. Then we have
E[χ_A | B] = \frac{1}{P(B)} \int_B χ_A\, dP
and by an exercise in chapter 1,
= \frac{1}{P(B)} \int_Ω χ_A · χ_B\, dP.
It holds that
χA · χB = χA∩B
(check this!). Then
\int_Ω χ_A · χ_B\, dP = \int_Ω χ_{A∩B}\, dP = 0 · P((A ∩ B)^c) + 1 · P(A ∩ B) = P(A ∩ B).
Thus
E[χ_A | B] = \frac{P(A ∩ B)}{P(B)}.    (3.13)
The alert reader cannot but recognize the expression in the right hand side of (3.13) as the conditional probability
of A given B, for which the symbol P (A | B) has been assigned in (3.1).
3.4. CONDITIONING W.R.T. A PARTITION 95
We can thus say that having access to a partition means having access to a piece of information. The partitions
are ordered by inclusion (are a lattice), in the sense that
P1 ⊂ P2
means that all cells in P2 have been obtained by partitioning of cells in P1 . P1 is coarser than P2 , and P2 is
finer than P1 , and P2 contains more information than P1 .
Then we can compute the conditional expectation of X given Ai using (3.11) or
E[X | A_i] = \frac{1}{P(A_i)} \int_{A_i} X\, dP.    (3.14)
Ω ∋ ω 7→ E [X | Ai ] , if ω ∈ Ai .
Then we can define, see [13, p. 495], the conditional expectation w.r.t. a partition. We remind once more about the definition of the indicator function in (3.12).
Definition 3.4.1 The conditional expectation given the information in partition P is denoted by
E [X | P], and is defined by
E[X | P](ω) \stackrel{\text{def}}{=} \sum_{i=1}^{k} χ_{A_i}(ω) E[X | A_i].    (3.15)
The point to be harnessed from this is that E [X | P] is not a real number, but a random variable. In fact it
is a simple random variable in the sense of section 1.8.1. We shall next pay attention to a few properties of
E [X | P], which foreshadow more general conditional expectations.
= \sum_{i=1}^{k} E[X | A_i] \int_Ω χ_{A_j}(ω) · χ_{A_i}(ω)\, dP(ω) = E[X | A_j] \int_Ω χ_{A_j}\, dP(ω),
since χ_{A_i}(ω) · χ_{A_j}(ω) = 0 for all ω, unless i = j, and χ_{A_j}(ω)^2 = χ_{A_j}(ω) for all ω, and thus we get
= E[X | A_j] \int_{A_j} dP(ω) = E[X | A_j]\, P(A_j).
Our strategy is now to define conditional expectation in more general cases by extending the findings (a) and
(b) (i.e., (3.16)) about E [X | P]. The way of proceeding in the next section is necessary, because the restriction
to P(Ai ) > 0 will make it impossible to construct conditional expectation by an approach, where the mesh of
cells of the partition gets successively smaller (and the partition becomes finer and finer).
1. E [Y | X] is FX -measurable.
We shall say later a few words about the existence of E [Y | X] as defined here.
We can define conditional probability of an event A given X by
P(A | X) \stackrel{\text{def}}{=} E[χ_A | X],
Lemma 3.5.1 Let (Ω, F, P) be a probability space and let G be a sigma field contained in F. If X is a G-measurable random variable and
\int_B X\, dP = 0    (3.17)
for any B ∈ G, then P(X = 0) = 1.
The last equality is true by assumption, since {X ≥ ε} ∈ G. In the same way we have that P (X ≤ −ε) = 0.
Therefore
P (−ε ≤ X ≤ ε) = 1
3.6. A CASE WITH AN EXPLICIT RULE FOR CONDITIONAL EXPECTATION 97
as was to be proved.
Note that the Doob-Dynkin theorem 1.5.5 in Chapter 1 implies that there is a Borel function H such that
E [Y | X] = H(X).
We can every now and then give more or less explicit formulas for H. One such case is investigated in section
3.6 that comes next.
We assume that E [Y | X] exists. We shall show that the preceding definition 3.5.1 checks with the formula
(3.7).
Theorem 3.6.1 Let Y be a r.v. such that E[|Y|] < ∞, and let X be a random variable such that (X, Y) has the joint density f_{X,Y} on all R × R. Then
E[Y | X = x] = \frac{\int_{-\infty}^{+\infty} y f_{X,Y}(x, y)\, dy}{\int_{-\infty}^{+\infty} f_{X,Y}(x, y)\, dy}.    (3.18)
Proof By virtue of definition 3.5.1 we need to find a Borel function, say H(x), such that for any Borel event A we have
\int_{X∈A} H(X)\, dP = \int_{X∈A} Y\, dP.    (3.19)
Note that {X ∈ A} is an event in F_X. Let us start with the right hand side of (3.19). Since A ∈ B,
\int_{X∈A} Y\, dP = \int_Ω χ_A(X(ω)) Y(ω)\, dP(ω),
where χ_A(x) is the indicator of A ∈ B, see (1.26), and one may compare with the idea in an exercise in Chapter 1. But we can write this in the usual notation
= \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} χ_A(x)\, y\, dF_{X,Y}(x, y) = \int_A \left( \int_{-\infty}^{+\infty} y f_{X,Y}(x, y)\, dy \right) dx.    (3.20)
= \int_{-\infty}^{+\infty} χ_A(x) H(x)\, dF_X(x)
and as dF_X(x) = f_X(x)\, dx = \left( \int_{-\infty}^{+\infty} f_{X,Y}(x, y)\, dy \right) dx
= \int_{-\infty}^{+\infty} χ_A(x) H(x) \left( \int_{-\infty}^{+\infty} f_{X,Y}(x, y)\, dy \right) dx = \int_A H(x) \left( \int_{-\infty}^{+\infty} f_{X,Y}(x, y)\, dy \right) dx.    (3.21)
Now, (3.19) requires that we can choose a Borel function H(x) so that the expressions in (3.20) and (3.21) are equal, i.e.,
\int_A \left( \int_{-\infty}^{+\infty} y f_{X,Y}(x, y)\, dy \right) dx = \int_A H(x) \left( \int_{-\infty}^{+\infty} f_{X,Y}(x, y)\, dy \right) dx.
If these integrals are to coincide for each Borel set A, then we must take
H(x) = \frac{\int_{-\infty}^{+\infty} y f_{X,Y}(x, y)\, dy}{\int_{-\infty}^{+\infty} f_{X,Y}(x, y)\, dy},
1. E [Y | G] is G-measurable.
We do not prove that the random variable E [Y | G] exists, as the proof is beyond the scope of these notes/this
course. The interested student can check, e.g., [103, p. 27] or [63, p. 200].
We have, when FX is the σ field generated by X,
E [Y | FX ] = E [Y | X] ,
P(A | G) \stackrel{\text{def}}{=} E[χ_A | G].    (3.23)
3.7. CONDITIONING W.R.T. A σ -FIELD 99
Theorem 3.7.1 a and b are real numbers, E[|Y|] < ∞, E[|Z|] < ∞, E[|X|] < ∞ and H ⊂ F, G ⊂ F.
1. Linearity:
E[aX + bY | G] = aE[X | G] + bE[Y | G]
2. Double expectation:
E[E[Y | G]] = E[Y]
3. Taking out what is known: If Z is G-measurable and E[|ZY|] < ∞,
E[ZY | G] = Z E[Y | G]
4. Independence: If Y is independent of G,
E[Y | G] = E[Y]
5. Tower Property: If H ⊂ G,
E[E[Y | G] | H] = E[Y | H]
6. Positivity: If Y ≥ 0,
E[Y | G] ≥ 0.
2. To prove the rule of double expectation we observe that by assumption the condition in (3.22) is to hold
for all A in G, hence it must hold for Ω. This means that
\int_Ω E[Y | G]\, dP = \int_Ω Y\, dP = E[Y],
as claimed.
3. We start by verifying the result for Z = χ_B (see (3.12)), where B ∈ G. In this special case we get
\int_A Z E[Y | G]\, dP = \int_A χ_B E[Y | G]\, dP = \int_{A∩B} E[Y | G]\, dP = \int_{A∩B} Y\, dP,
for all A ∈ G, and hence the lemma 3.5.1 about uniqueness says that
\int_A (Z E[Y | G] − E[ZY | G])\, dP = 0
implies
Z E[Y | G] = E[ZY | G]
almost surely.
We can proceed in the same manner to prove that the result holds for step functions
Z = \sum_{j=1}^{n} a_j χ_{A_j},
4. Since we assume that Y is independent of G, Y is independent of the random variable χA for all A ∈ G.
Due to (2.44) Y and χA have zero covariance, and this means by (2.42)
E [Y χA ] = E [Y ] E [χA ] .
Therefore
\int_A Y\, dP = \int_Ω χ_A Y\, dP = E[Y χ_A] = E[Y] E[χ_A]
= E[Y] \int_Ω χ_A\, dP = E[Y] \int_A dP = \int_A E[Y]\, dP,
since E[Y] is a number, whereby, if we read this chain of equalities from right to left,
\int_A E[Y]\, dP = \int_A Y\, dP,
E[Y] = E[Y | G].
5. We shall play a game with the definition. By the condition (3.22) we have again for all A ∈ G that
\int_A E[Y | G]\, dP = \int_A Y\, dP.    (3.24)
Since H ⊂ G, (3.24) holds in particular for all A ∈ H. But when we check and apply the definition (3.22) once again, we get from
this the conclusion that
E [E [Y | G] | H] = E [Y | H] ,
as was to be proved.
6. We omit this.
Example 3.7.2 Taking out what is known This example is encountered in many situations. Let H(x) be
a Borel function. X and Y are random variables. Then the rule of taking out what is known gives
E [H(X) · Y | FX ] = H(X) · E [Y | FX ] .
To get a better intuitive feeling for the tower property, which is an enormously versatile tool of computation, we recall example 1.5.4. There X is a random variable and Y = f(X) for a Borel function f. It was shown in loc.cit. that
F_Y ⊆ F_X.
Then the tower property tells us that for a random variable Z
E [E [Z | FX ] | FY ] = E [Z | FY ] .
How do we interpret this? We provide an answer to this question in section 3.7.4 below by using the interpre-
tation of a conditional expectation E [Y | X] as an estimator of Y by means of X.
\hat{Y} = E[Y | F_X]
and
\tilde{Y} \stackrel{\text{def}}{=} Y − \hat{Y}.
Then it holds that
Var(Y) = Var(\hat{Y}) + Var(\tilde{Y}).
Proof We recall the well known formula, see, e.g., [15, p. 125],
We must investigate the covariance, which obviously must be equal to zero, if our statement is to be true. We have
Cov(\hat{Y}, \tilde{Y}) = E[\hat{Y} · \tilde{Y}] − E[\hat{Y}] · E[\tilde{Y}].    (3.26)
Therefore in (3.26)
E[\hat{Y} · \tilde{Y}] = 0.
Furthermore
E[\tilde{Y}] = E[Y − \hat{Y}] = E[Y] − E[\hat{Y}] = E[Y] − E[E[Y | F_X]] = E[Y] − E[Y] = 0.
\hat{Y} = E[Y | F_X] = E[Y | X]
\tilde{Y} = Y − \hat{Y}.
In fact we should pay attention to the result in (3.46) in the exercises. This says that if E[Y^2] < ∞ and E[(H(X))^2] < ∞, where H(x) is a Borel function, then
E[(Y − H(X))^2] = E[Var(Y | X)] + E[(E[Y | X] − H(X))^2].    (3.27)
This implies, since both terms in the right hand side are ≥ 0, that for all H(x)
E[(Y − E[Y | X])^2] ≤ E[(Y − H(X))^2].    (3.28)
In other words, H^*(X) = \hat{Y} = E[Y | X] is the optimal estimator of Y based on X, in the sense of minimizing the mean square error. The proof of lemma 3.7.3 above contains the following facts about optimal mean square estimation:
• E[\tilde{Y}] = 0.    (3.29)
• Cov(\hat{Y}, \tilde{Y}) = 0.    (3.30)
This framework yields a particularly effective theory of estimation (prediction, filtering, e.t.c. [90]), when later
combined with the properties of Gaussian vectors and Gaussian stochastic processes.
E[E[Z | F_X] | F_Y] = E[Z | F_Y].    (3.32)
Now we recall from the preceding section that E[Z | F_Y] is our best mean square estimate of Z based on Y. By the same account \hat{Z} = E[Z | F_X] is the best mean square estimate of Z based on X. But then, of course, we have in the left hand side of (3.32)
\hat{\hat{Z}} = E[\hat{Z} | F_Y],
or, in other words, our best mean square estimate of Z based on Y is in fact an estimate of \hat{Z}! This is what is lost when being forced to estimate Z using Y rather than X. The loss of information is also manifest in the inclusion F_Y ⊂ F_X.
Proof: omitted, since it can be done as the proof of theorem 1.8.3 in chapter 1.
3.8 Exercises
3.8.1 Easy Drills
1. A and B are two events with P(A) > 0 and P(B) > 0. A ∩ B = ∅. Are A and B independent?
(a) Is A ∩ B = ∅?
(b) Are A and B independent?
(c) Find P(Ac ∪ B c ).
3. Given P(A ∩ B c ) = 0.3, P((A ∪ B)c ) = 0.2 and P(A ∩ B) = 0.1, find P(A | B).
5. A and B are two events with P((A ∪ B)c ) = 0.6 and P(A ∩ B) = 0.1. Let E be the event that either A
or B but not both will occur. Find P(E | A ∪ B).
P^†(A) \stackrel{\text{def}}{=} \frac{P(A ∩ B)}{P(B)},
or, P^†(A) = P(A | B) in (3.1). Show that (Ω, F, P^†) is a probability space.
2. The Chain Rule of Probability Let A_1, A_2, . . . , A_n, n ≥ 2, be events such that P(∩_{i=1}^{n−1} A_i) > 0. Show that
P(∩_{i=1}^{n} A_i) = P(A_n | ∩_{i=1}^{n−1} A_i) · · · P(A_3 | A_2 ∩ A_1)\, P(A_2 | A_1)\, P(A_1).    (3.34)
This rule is easy to prove and often omitted from courses in probability, but has its merits, as will be seen.
The expression is known as the law of total probability . How is this related to the expression in (3.15) ?
P(A_l | B) = \frac{P(B | A_l) P(A_l)}{\sum_{i=1}^{k} P(B | A_i) P(A_i)},    l = 1, 2, . . . , k.    (3.36)
The expression is nowadays known as Bayes’ Formula or Rule, c.f. (3.6), but was in the past centuries
called the rule of inverse probability.
6. X ∈ Fs(p), 0 < p < 1. Show that for every pair (k, m), k = 0, 1, . . . , m = 0, 1, 2, . . . ,
P(X > k + m | X > m) = P(X > k).
This is known as the lack of memory property of the first success distribution.
7. Let X_1 and X_2 be two independent r.v.'s with the same p.m.f. p_X(k) on the positive integers, k = 1, 2, . . .. We know that p_X(k) ≤ c (< 1) for every k. Show that P(X_1 + X_2 = n) ≤ c.
The same formula will be derived using generating functions in an exercise of chapter 5.
9. The following is an idea in molecular biotechnology about a p.d.f. of p-values, when testing hypotheses of gene expressions in microarrays:
f(p) = \begin{cases} λ + (1 − λ) · a p^{a−1} & 0 < p < 1 \\ 0 & \text{elsewhere.} \end{cases}    (3.40)
Here 0 < λ < 1, and 0 < a < 1. This distribution has been called the BUM distribution. The acronym BUM stands for Beta-Uniform Mixture. Find a generative model for the BUM distribution.
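One generative model (a sketch of ours, and not the only possible answer): with probability λ draw from U(0, 1), otherwise draw from β(a, 1), whose density is a p^{a−1}; a β(a, 1) variable can be obtained as U^{1/a} for U ∈ U(0, 1). A Monte Carlo check of the mixture CDF at p = 0.5:

```python
import random

random.seed(8)
lam, a = 0.4, 0.3
n = 200000
hits = 0
for _ in range(n):
    if random.random() < lam:
        p = random.random()               # Uniform(0,1) component
    else:
        p = random.random() ** (1.0 / a)  # Beta(a,1) component: density a p^(a-1)
    if p < 0.5:
        hits += 1
p_mc = hits / n
p_exact = lam * 0.5 + (1 - lam) * 0.5 ** a  # integral of (3.40) over (0, 0.5)
print(p_mc, p_exact)
```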
10. X ∈ U (0, 1). Find P X ≤ x | X 2 = y .
11. Poisson Plus Gauss Distribution [33, p.327] Let N ∈ Po (λ), X ∈ N 0, σ 2 , N and X are independent.
Set
U = N + X.
Show that the p.d.f. of U is
f_U(u) = \sum_{k=0}^{\infty} \frac{e^{−λ} λ^k}{k!} \frac{1}{σ\sqrt{2π}}\, e^{−\frac{(u−k)^2}{2σ^2}}.    (3.41)
12. This exercise is excerpted from [84, p. 145−146]1, and is in loc.cit. a small step in developing methods for
treating measurements of real telephone traffic.
Let N ∈ Po(λt). Let T | N = n ∈ Erlang(n, 1/s), see example 2.2.10. Show that
E[T] = λst.
The model discussed in loc.cit. is the following. N is the number of phone calls coming to a telephone
exchange during (0, t]. If N = n , then the total length of the n calls is T . Hence E [T ] is the expected
size of the telephone traffic started during (0, t].
^1 [84] is the Ph.D.-thesis (teknologie doktorsavhandling) from 1943 at KTH by Conrad 'Conny' Palm (1907−1951). Palm was an electrical engineer and statistician, recognized for several pioneering contributions to teletraffic engineering and queueing theory.
13. X ∈ Exp(1), Y ∈ Exp(1) are independent. Find the distribution of X | X + Y = c, where c > 0 is a
constant.
Y_1 = X_1 + X_2 + . . . + X_N;    Y_1 = 0 if N = 0,    Y_2 = N − Y_1.
Show that Y_1 ∈ Po(λ/2) and Y_2 ∈ Po(λ/2) and that they are independent.
16. (From [30]) Given P(A) = a and P(B) = b, show that \frac{a + b − 1}{b} ≤ P(A | B) ≤ \frac{a}{b}.
2
Answer: λ.
Answer: Y ∈ Exp(1/λ).
(e) Find
pX|Y (k|y) = P (X = k|Y = y) , k = 0, 1, 2, . . . , .
2. (From [35]) Let X ∈ Po(λ) and Y ∈ Po(µ). X and Y are independent. Set Z = X + Y.
(a) Find the conditional distribution X | Z = z. Answer: X | Z = z ∈ Bin\left(z, \frac{λ}{λ+µ}\right).
(b) Find E[X | Z = z], E[X | Z], Var[X | Z = z], Var[X | Z]. Answer: E[X | Z = z] = z\frac{λ}{λ+µ}, E[X | Z] = Z\frac{λ}{λ+µ}, Var[X | Z = z] = z\frac{λ}{λ+µ}\left(1 − \frac{λ}{λ+µ}\right), Var[X | Z] = Z\frac{λ}{λ+µ}\left(1 − \frac{λ}{λ+µ}\right).
(c) Find the coefficient of correlation ρ_{X,Z},
ρ_{X,Z} = \frac{Cov(X, Z)}{\sqrt{Var[X]}\sqrt{Var[Z]}}.
Answer: \sqrt{\frac{λ}{λ+µ}}.
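The binomial conditional law in (a) can be checked empirically; the sketch below (ours; λ = 2, µ = 3, z = 4 are arbitrary) samples Poisson variables by Knuth's inversion-free method and compares the conditional mean with z λ/(λ+µ):

```python
import random, math

random.seed(9)

def poisson(lam):
    # Knuth's method (adequate for small λ)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

lam, mu = 2.0, 3.0
z_target = 4
xs = []
for _ in range(200000):
    x, y = poisson(lam), poisson(mu)
    if x + y == z_target:
        xs.append(x)        # samples of X conditioned on Z = 4
cond_mean = sum(xs) / len(xs)
exact = z_target * lam / (lam + mu)  # mean of Bin(z, λ/(λ+µ)) = 1.6
print(cond_mean, exact)
```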
3. (From [35]) X ∈ Exp(λ) and Y ∈ U(0, θ). X and Y are independent. Find P(X > Y). Answer: \frac{λ}{θ}\left(1 − e^{−θ/λ}\right).
4. (From [35]) The joint distribution of (X, Y) is, for β > −1 and α > −1,
f_{X,Y}(x, y) = \begin{cases} c(α, β)\, y^β (1 − x)^α & 0 ≤ x ≤ 1, 0 ≤ y ≤ x, \\ 0 & \text{elsewhere.} \end{cases}
(a) Determine c (α, β). Aid: Consider a suitable beta function, c.f., (2.31).
(b) Find the marginal distributions and compute E [X], Var [X].
(c) Determine E [X | Y = y], Var [X | Y = y], E [Y | X = x], Var [Y | X = x].
Answers
(a) c(α, β) = \frac{(β+1)Γ(α+β+3)}{Γ(α+1)Γ(β+2)}.
(b)
f_X(x) = \frac{c(α, β)}{β+1}\, x^{β+1} (1 − x)^α,    0 ≤ x ≤ 1,
f_Y(y) = \frac{c(α, β)}{α+1}\, y^β (1 − y)^{α+1},    0 ≤ y ≤ 1,
E[X] = \frac{β+2}{α+β+3},
Var[X] = \frac{(α+1)(β+2)}{(α+β+4)(α+β+3)^2}.
(c)
E[X | Y = y] = 1 − \frac{α+1}{α+2}(1 − y),
Var[X | Y = y] = \frac{(α+1)(1 − y)^2}{(α+3)(α+2)^2}.
You obtain E [Y | X = x], Var [Y | X = x] from this by replacing y with 1 − x and α with β and β
with α.
5. (From [35]) Let X_1, X_2, . . . , X_n be independent and Po(λ_i)-distributed random variables, respectively. Let the r.v. I ∈ U(1, 2, . . . , n), c.f., Example 2.3.3. Find E[X_I] and Var[X_I]. Answer: Let λ = \frac{1}{n}\sum_{i=1}^{n} λ_i. Then
E[X_I] = λ,
and
Var[X_I] = λ − λ^2 + \frac{1}{n}\sum_{i=1}^{n} λ_i^2.
6. (From [35]) Let X_1, X_2, . . . , X_n be independent and identically Exp(1/λ)-distributed random variables. Let in addition S_0 = 0 and S_n = X_1 + X_2 + . . . + X_n. Set
N = max{n | S_n ≤ x}.
N is a random time: the largest index n for which the partial sum S_n still stays below the level x. Then show that N ∈ Po(λx).
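This is the counting variable of a Poisson process observed on (0, x]; a short simulation (a sketch of ours, with λ = 2 and x = 3 chosen arbitrarily) checks the mean λx of the claimed Po(λx) law:

```python
import random

random.seed(10)
lam, x = 2.0, 3.0
trials = 100000
total = 0
for _ in range(trials):
    s, n = 0.0, 0
    while True:
        s += random.expovariate(lam)  # Exp(1/λ) in the notation of the notes: mean 1/λ
        if s > x:
            break
        n += 1                        # count partial sums that stay below x
    total += n
mean_mc = total / trials
print(mean_mc, lam * x)  # Po(λx) has mean λx = 6
```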
7. Show using the properties of conditional expectation, that if X and Y are independent and the expectations
exist, then
E [X · Y ] = E [X] · E [Y ] . (3.42)
(a) Find c.
(b) Find fX (x) and fY (y).
(c) Find E [X].
(d) Find E [X | Y = y].
Answers: (a) c = 2, (b) f_X(x) = 1 + 2x − 3x^2, 0 < x < 1, f_Y(y) = 3y^2, 0 < y < 1, (c) E[X] = \frac{5}{12}, (d) E[X | Y = y] = \frac{5}{9} y.
9. Let (X, Y) be a continuous bivariate r.v. with the joint p.d.f. in (2.112). Find f_{Y|X=x}(y). Answer:
f_{Y|X=x}(y) = \begin{cases} e^{x−y} & x < y \\ 0 & \text{elsewhere.} \end{cases}
10. Let X ∈ Exp (1/a), Y ∈ Exp (1/a) are independent. Show that X | X + Y = z ∈ U (0, z).
11. Let X ∈ Exp(1) and Y ∈ Exp(1) be independent. Show that \frac{X}{X+Y} ∈ U(0, 1).
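The uniformity claim in exercise 11 is easy to probe by simulation (a sketch of ours; the test threshold 0.25 is arbitrary):

```python
import random

random.seed(11)
n = 200000
hits = 0
for _ in range(n):
    x = random.expovariate(1.0)
    y = random.expovariate(1.0)
    if x / (x + y) < 0.25:
        hits += 1
p_mc = hits / n
print(p_mc)  # close to 0.25 if X/(X+Y) is U(0, 1)
```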
12. Rosenblatt Transformation, PIT^2 Let X = (X_1, . . . , X_n) be a continuous random vector with the joint distribution F_X(x_1, . . . , x_n). We transform (X_1, . . . , X_n) to (Y_1, . . . , Y_n) by
Y_1 = F_{X_1}(X_1),    Y_i = F_{X_i | X_1, . . . , X_{i−1}}(X_i | X_1, . . . , X_{i−1}),    i = 2, . . . , n.
Note that we are using here an application of the chain rule (3.34).
Show that (Y_1, . . . , Y_n) are independent and that Y_i ∈ U(0, 1), i = 1, 2, . . . , n.
2 The author thanks Dr. Thomas Dersjö from Scania, Södertälje for pointing out this.
In structural safety and solid mechanics this transformation is an instance of the isoprobabilistic transformations. In econometrics and risk management^3 this transformation is known as PIT = probability integral transform. PIT is applied for evaluating density forecasts^4 and assessing a model's validity. Thus the PIT is used for transforming joint probabilities for stochastic processes in discrete time. Here the arbitrariness of the ordering in X_1, . . . , X_n, which is regarded as a difficulty of the Rosenblatt transformation, is automatically absent.
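As an illustration (our own choice of example), apply the transformation to the bivariate density (2.112), f_{X,Y}(x, y) = e^{−y} on 0 ≤ x ≤ y, for which F_X(x) = 1 − e^{−x} and F_{Y|X=x}(y) = 1 − e^{−(y−x)}; simulation then shows the transformed pair behaving like two independent U(0, 1) variables:

```python
import random, math

random.seed(12)
n = 200000
hits = 0
for _ in range(n):
    x = random.expovariate(1.0)       # X ∈ Exp(1)
    y = x + random.expovariate(1.0)   # Y - X ∈ Exp(1) gives joint density e^{-y}, 0 ≤ x ≤ y
    u1 = 1.0 - math.exp(-x)           # Y1 = F_X(X)
    u2 = 1.0 - math.exp(-(y - x))     # Y2 = F_{Y|X=x}(Y)
    if u1 < 0.5 and u2 < 0.5:
        hits += 1
p_mc = hits / n
print(p_mc)  # close to 0.25 = 0.5 * 0.5 if (Y1, Y2) are independent U(0, 1)
```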
13. Let (X, Y) be a bivariate random variable, where both X and Y are binary, i.e., their values are 0 or 1. The p.m.f. of (X, Y) is
p_{X,Y}(x, y) = τ^x (1 − τ)^{1−x} \left(θ^y (1 − θ)^{1−y}\right)^x \left(λ^y (1 − λ)^{1−y}\right)^{1−x},    x ∈ {0, 1}, y ∈ {0, 1}.    (3.44)
Here 0 ≤ τ ≤ 1, 0 ≤ θ ≤ 1, and 0 ≤ λ ≤ 1.
3.8.4 Miscellaneous
1. (From [20]) Let A and B be sets in F and let χA and χB be the respective indicator functions, see equation
(3.12). Assume that 0 < P(B) < 1. Show that
E[χ_A | χ_B](ω) = \begin{cases} P(A | B) & \text{if } ω ∈ B \\ P(A | B^c) & \text{if } ω ∉ B. \end{cases}    (3.45)
2. (From [20]) Let B ∈ G, P (B) > 0 and let X be such that E [|X|] < ∞. Show that
E [E [X | G] | B] = E [X | B] .
3. (From the Exam in sf2940, 23rd of October 2007) X ∈ Po(λ). Show that E[e^{tX} | X > 0] = \frac{e^{−λ}}{1 − e^{−λ}}\left(e^{λe^t} − 1\right).
4. Mean Square Error Let H(x) be a Borel function and X a random variable such that E[(H(X))^2] < ∞. Then show that
E[(Y − H(X))^2] = E[Var(Y | X)] + E[(E[Y | X] − H(X))^2].    (3.46)
3 Hampus Engsner, when writing his M.Sc-thesis, pointed out PIT for the author.
4 Diebold F.X., Gunther T.A & Tay A.S.: Evaluating density forecasts, 1997, National Bureau of Economic Research Cambridge,
Mass., USA
where
A = E[(Y − E[Y | X])^2],
and
C = E[(E[Y | X] − H(X))^2].
Now use double expectation for the three terms in the right hand side of (3.47). For A we get
E[E[(Y − E[Y | X])^2 | X]],
for B
E[E[(Y − E[Y | X])(E[Y | X] − H(X)) | X]],
and for C,
E[E[(E[Y | X] − H(X))^2 | X]] = E[(E[Y | X] − H(X))^2],
where the known condition dropped out. Now show that the term B is = 0, and then draw the desired conclusion.
When one takes H(X) = E[Y], a constant function of X, (3.46) yields the law of total variance in (3.10):
Var[Y] = E[(Y − E[Y])^2] = E[Var(Y | X)] + Var[E[Y | X]].    (3.48)
5. (From [12]) Let X and Y be independent random variables and assume that E[(XY)^2] < ∞. Show that
Var[XY] = (E[X])^2 Var[Y] + (E[Y])^2 Var[X] + Var[Y] Var[X].
Aid: Set Z = XY , and then use the law of total variance, equation (3.48) above, via
6. The linear estimator \hat{Y}_L of Y by means of X, optimal in the mean square sense, is given (as will be shown in section 7.5) by
\hat{Y}_L = µ_Y + ρ\, \frac{σ_Y}{σ_X}(X − µ_X),
where µ_Y = E[Y], µ_X = E[X], σ_Y^2 = Var[Y], σ_X^2 = Var[X], ρ = \frac{Cov(Y, X)}{σ_Y · σ_X}.
7. (From [12])
   (a) Let X1 , X2 , . . . , Xn be independent and identically distributed (I.I.D.) random variables and let
       S = X1 + X2 + . . . + Xn .
       Show that
       E [X1 | S] = S/n.    (3.51)
   (b) Let X ∈ N (0, k) and W ∈ N (0, m) be independent, where k and m are positive integers. Show
       that
       E [X | X + W ] = (k/(k + m)) (X + W ).
       Aid: The result in (a) can turn out to be helpful.
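The identity (3.51) can be confirmed by brute-force enumeration for a small I.I.D. family; the Bernoulli(p) law with n = 3 below is an arbitrary illustrative choice:

```python
from itertools import product

# Direct check of (3.51), E[X1 | S] = S/n, for three I.I.D. Bernoulli(p)
# variables, by enumerating all outcomes.
p, n = 0.3, 3

for s in range(n + 1):
    num = 0.0   # contribution E[X1 ; S = s]
    den = 0.0   # P(S = s)
    for xs in product([0, 1], repeat=n):
        prob = 1.0
        for x in xs:
            prob *= p if x == 1 else 1 - p
        if sum(xs) == s:
            den += prob
            num += prob * xs[0]
    assert abs(num / den - s / n) < 1e-12
```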
P (X > b | X > a)
depends only on the ratio a/b and not on the individual scales a and b. Zipf’s law is also scale-free in this
sense.
Recently the scale-free property has been observed for the degree distribution of many networks, where
it is associated with the so-called small world phenomenon5 . Examples are the World Wide Web, the
human web of sexual contacts, and many networks of interaction in molecular biology.
9. Let N ∈ Po (v²/(2σ²)). Let
   X | N = n ∈ χ² (2n + 2).
   Set R = σ√X. Compute directly the density of R and show that you obtain (2.114), i.e., R ∈ Rice (v, σ).
   Aid: You will eventually need a series expansion of a modified Bessel function of the first kind with order
   0, as a real function; see, e.g., [92, section 12.4]6 or [3, p. 288].
10. Assume that X | P = p ∈ Ge(p) (= NBin(1, p)) and P ∈ β (α, ρ). Show that X ∈ War(ρ, α), as defined in
example 2.3.14. We apply here the Bayesian integral of (3.5). This fact should explain why the Waring
distribution is known under the name Negative-Binomial Beta distribution.
5 A small world network is a graph in which the distribution of connectivity is not confined to any scale and where every node
can be reached from every other node by a small number of steps.
6 Or, see p.9 of Formelsamling i Fysik, Institutionen för teoretisk fysik, KTH, 2006
http://courses.theophys.kth.se/SI1161/formelsamling.pdf.
11. Let X ∈ N (0, 1) and Y ∈ N (0, 1) and X and Y be independent. Take a real number λ. Set
(
Y, if λY ≥ X
Z=
−Y, if λY < X.
12. X ∈ N (0, σx²) and f (x) is the p.d.f. of N (0, σc²). U ∈ U (0, f (0)) and is independent of X.
    Show that X | U ≤ f (X) ∈ N (0, s²), where 1/s² = 1/σx² + 1/σc².
3.8.5 Martingales
The exercises below are straightforward applications of the rules of computation in theorem 3.7.1 on a sequence
of random variables with an associated sequence of sigma fields, to be called martingales, and defined next.
Definition 3.8.1 Let F be a sigma field of subsets of Ω. Let, for each integer n > 0, Fn ⊂ F be a sigma field
such that
Fn ⊂ Fn+1 . (3.52)
Then we call the family of sigma fields (Fn )n≥1 a filtration.
A standard example of a filtration is obtained from a sequence of random variables X1 , X2 , . . . by taking

Fn = σ (X1 , . . . , Xn ) ,

i.e., the sigma field generated by X1 , . . . , Xn . This means intuitively that if A ∈ Fn , then we are able to decide
whether ω ∈ A or ω ∉ A by observing X1 , . . . , Xn .
Definition 3.8.2 Let X = (Xn )∞n=1 be a sequence of random variables on (Ω, F , P). Then we call X a
martingale with respect to the filtration (Fn )n≥1 , if
E [Xn+1 | Fn ] = Xn . (3.53)
The word martingale can designate several different things, besides the definition above. Martingale is, see
figure 3.1 7 , a piece of equipment that keeps a horse from raising its head too high, or, keeps the head in a
constant position, a special collar for dogs and other animals and a betting system.
It is likely that the preceding nomenclature of probability theory is influenced by the betting system (which
may have received its name from the martingale for horses . . .).
7 http://commons.wikimedia.org/wiki/User:Malene
3.8. EXERCISES 113
Figure 3.1: Shannon Mejnert riding on Sandy in Baltic Cup Show on 28th of May 2006 at Kallehavegaard
Rideklub, Randers in Denmark. The horse, Sandy, is wearing a martingale, which, quoting the experts,
consists of: ..’ a strap attached to the girth and passes between the horse’s front legs before dividing into two
pieces. At the end of each of these straps is a small metal ring through which the reins pass.’
1. Let X be a random variable on (Ω, F , P) with E [|X|] < ∞, let (Fn )n≥1 be a filtration, and set

   Xn = E [X | Fn ] .

   Show that (Xn )n≥1 is a martingale with respect to the filtration (Fn )n≥1 .
2. Let (Xn )∞n=1 be a sequence of independent, nonnegative random variables with E [Xn ] = 1 for every n.
Let
M0 = 1, F0 = {∅, Ω} ,
M n = X1 · X2 · . . . · Xn ,
and
Fn = σ (X1 , . . . , Xn ) .
Show that (Mn )n≥0 is a martingale with respect to the filtration (Fn )n≥0 .
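Since the factors are independent, the martingale property here reduces to E [Mn+1 | Fn ] = Mn · E [Xn+1 ] = Mn . A short enumeration check, with the two-point factor law {0.5, 1.5} (an arbitrary mean-one choice), is:

```python
from itertools import product

# Check E[M_{n+1} | F_n] = M_n for M_n = X_1 * ... * X_n, where the X_k are
# I.I.D. taking values 0.5 and 1.5 with probability 1/2 each (so E[X_k] = 1).
vals, prob = (0.5, 1.5), 0.5
n = 3

for path in product(vals, repeat=n):
    M_n = 1.0
    for x in path:
        M_n *= x
    # conditional expectation of M_{n+1} given the first n factors
    cond = sum(prob * M_n * x for x in vals)
    assert abs(cond - M_n) < 1e-12
```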
3. Let X be a martingale with respect to the filtration (Fn )n≥1 . Show that then for every n

   E [Xn ] = E [X1 ] .

   (Recall that a martingale in the sense of figure 3.1 keeps the horse’s head in a constant position.)
4. Let X be a martingale with respect to the filtration (Fn )n≥1 . Show that then for every n ≥ m ≥ 1
E [Xn | Fm ] = Xm .
5. {Xn }∞n=1 are independent and identically distributed with E [Xn ] = µ and Var [Xn ] = σ². Define

   W0 = 0,  Wn = X1 + X2 + . . . + Xn ,

   Fn = σ (X1 , . . . , Xn ) ,

   and

   Sn = (Wn − nµ)² − nσ².

   Show that {Sn }∞n=0 is a martingale w.r.t. {Fn }∞n=0 .
6. Let {Xn }∞n=0 be a sequence of independent random variables. In many questions of statistical inference,
   signal detection etc. there are two different probability distributions for {Xn }∞n=0 . Let now f and g be
   two distinct probability densities on the real line. The likelihood ratio Ln is defined as

   Ln  def=  ( f (X0 ) · f (X1 ) · . . . · f (Xn ) ) / ( g (X0 ) · g (X1 ) · . . . · g (Xn ) ),    (3.54)

   where we assume that g (x) > 0 for all x.
(a) Show that Ln is a martingale with respect to Fn = σ (X1 , . . . , Xn ), if we think that g is the p.d.f.
    of the true probability distribution for {Xn }∞n=0 . That g is the p.d.f. of the true probability
    distribution is here simply to be interpreted as the instruction to compute the required expectations
using g. For example, for any Borel function H of X
E [H (X)] = ∫_{−∞}^{∞} H(x)g(x)dx.
(b) Let
ln = − ln Ln .
    The quantity ln is known as minus the log-likelihood ratio. Show that (w.r.t. the p.d.f. of the
    true distribution g)

    E [ln+1 | Fn ] ≥ ln .
Aid: Consider Jensen’s inequality for conditional expectation in theorem 3.7.4.
7. Stochastic Integrals We say that (Xn )n≥0 is predictable, if Xn is Fn−1 -measurable, where (Fn )n≥0
   is a filtration. Let us define the increment process △X by

   (△X)n  def=  Xn − Xn−1 .

   (a) Show that a sequence of random variables (Xn )n≥0 is a martingale if and only if

       E [(△X)n+1 | Fn ] = 0    (3.55)

       for n = 0, 1, . . ..
   (b) For any two random sequences X = (Xn )n≥0 and M = (Mn )n≥0 the discrete stochastic integral
       is a sequence defined by

       (X ⋆ M)n  def=  Σ_{k=0}^{n} Xk (△M )k .    (3.56)
Assume that X is predictable, M is a martingale and that E [| Xk (△M )k |] < ∞. Show that (X ⋆ M)
is a martingale.
       Aid: Set Zn = Σ_{k=0}^{n} Xk (△M )k , find the expression for (△Z)n and use (3.55).
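The martingale property of the discrete stochastic integral can be illustrated numerically: with M a simple ±1 random walk and a predictable integrand (the counting rule below is an arbitrary illustrative choice), E [(X ⋆ M)n ] stays at 0 for every n:

```python
from itertools import product

# With (dM)_k = eps_k = +-1 (fair coin) and X_k depending only on
# eps_1, ..., eps_{k-1} (predictable), E[sum_k X_k (dM)_k] = 0 for all n.
n = 4
total = {m: 0.0 for m in range(1, n + 1)}
for eps in product([-1, 1], repeat=n):   # increments (dM)_1, ..., (dM)_n
    p = 0.5 ** n
    Z = 0.0
    for k in range(1, n + 1):
        # X_k = 1 + number of earlier up-steps: F_{k-1}-measurable
        X_k = 1 + sum(1 for e in eps[:k - 1] if e == 1)
        Z += X_k * eps[k - 1]
        total[k] += p * Z
for m in range(1, n + 1):
    assert abs(total[m]) < 1e-12
```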
8. Why Martingales? We have above worked on some examples of martingales with the verification of
the martingale property as the main activity. Apart from the potential charm of applying the rules of
conditional expectation, why are martingales worthy of this degree of attention? The answer is that there
are several general results (the stopping theorem, maximal inequalities, convergence theorems etc.) that
hold for martingales. Thus, it follows by martingale convergence, e.g., that in (3.54) the likelihood ratio
Ln → 0 almost surely, as n → ∞.
What is the ’practical’ benefit of knowing the convergence Ln → 0? Aid: Think of how you would use
Ln to decide between H0 : Xi ∈ g, H1 : Xi ∈ f .
More about martingales and their applications in statistics can be studied, e.g., in [102, ch. 9.2.]. Appli-
cations of martingales in computer science are presented in [79].
Chapter 4
Characteristic Functions
The Fourier transform of a function f is defined by

fb(t) = ∫_{−∞}^{∞} e^{−itx} f (x)dx,    (4.1)

which means that a function of x is transformed to a (transform) function of t (= the transform variable).
Remark 4.1.1 The literature in mathematical physics and mathematical analysis often uses the definition
fb(t) = (1/(2π)) ∫_{−∞}^{∞} e^{−itx} f (x)dx.
There is also

fb(t) = ∫_{−∞}^{∞} e^{−2πitx} f (x)dx,
widely used, with j in place of i, in electrical engineering. This state of affairs is without doubt a bit
confusing. Of course, any variant of the definition can be converted to another by multiplying by the
appropriate power of 2π, or by replacing t with 2πt. When confronted with any document or activity
involving the Fourier transform one should immediately identify which particular definition is being used.
We are, moreover, going to add to confusion by modifying (4.1) to define the characteristic function of a
random variable.
This is a simplified formal expression, we are neglecting considerations of existence and the region of
convergence, c.f., [100, p. 39].
An important desideratum is that we should be able to uniquely recover f from fb, or, that there should be
an inverse transform. There is, under some conditions, see [100, p.171], the Fourier inversion formula given
by
f (x) = (1/(2π)) ∫_{−∞}^{∞} e^{itx} fb(t)dt.    (4.2)
2π −∞
Transform pairs have in the past been collected in printed volumes of tables of Fourier transforms.
Since the distribution function
FX (x) = P ({X ≤ x}) .
completely determines the probabilistic behaviour and properties of a random variable X, we are obviously led
to work with transforms of FX (x), or more precisely, we deal with the transform of its p.d.f. fX (x), when it
exists, or with the transforms of the probability mass function pX (xk ).
The Fourier transform exercises its impact by the fact that, e.g., differentiation and integration of f corre-
spond to simple algebraic operations on fb , see [100, Appendix C 4.]. Hence we can in many cases easily solve,
e.g., differential equations in f by algebraic equations in the transform fb and then invert back to obtain the
desired solution f . We shall meet with several applications of this interplay between the transformed function
and its original function in probability theory, as soon as a suitable transform has been agreed upon.
For another illustration of the same point, the Mellin transform is important in probability theory for the
fact that if X and Y are two independent non-negative random variables, then the Mellin transform of the
density of the product XY is equal to the product of the Mellin transforms of the probability densities of X and
of Y . Or, if the Mellin transform of a probability density fX (x) of a r.v. X ≥ 0 is

fbMX (t) = ∫_0^∞ x^{t−1} fX (x)dx,

then

fbMXY (t) = fbMX (t) fbMY (t).
4.2. CHARACTERISTIC FUNCTIONS: DEFINITION AND EXAMPLES 119
This is the complex conjugate of the Fourier transform, needless to say. Let us recall that e^{itx} = cos(tx) +
i sin(tx). Then we have

E [e^{itX}] = E [cos(tX)] + iE [sin(tX)] .

We can regard the right hand side of the last expression as giving meaning to the expectation of the complex
random variable e^{itX} in terms of expectations of two real random variables. By definition of the modulus of a
complex number,

| e^{itx} | = √(cos²(tx) + sin²(tx)) = 1.

Therefore

E [| e^{itX} |] = 1,  E [| e^{itX} |²] = 1.
Hence the function eitx is integrable (w.r.t. dFX ), and ϕX (t) exists for all t. In other words, every
distribution function/random variable has a characteristic function.
We are thus dealing with an operation that transforms, e.g., a probability density fX (or probability mass
function) to a complex function ϕX (t),
Ch
fX 7→ ϕX .
The following theorem deals with the inverse of a characteristic function.
Theorem 4.2.1 If the random variable X has the characteristic function ϕX (t), then for any interval (a, b]

P (a < X < b) + (P (X = a) + P (X = b))/2 = lim_{T→+∞} (1/(2π)) ∫_{−T}^{T} ((e^{−ita} − e^{−itb})/(it)) ϕX (t)dt.

Proof: The interested reader can prove this by a modification of the proof of the Fourier inversion theorem
found in [100, p. 172-173].
Here we have in other words established that there is the operation
Ch−1
ϕX 7→ fX .
The following theorem is nothing but a simple consequence of the preceding explicit construction of the inverse.
Theorem 4.2.2 (Uniqueness) If two random variables X1 and X2 have the same characteristic functions,
i.e.,
ϕX1 (t) = ϕX2 (t) for all t ∈ R,
then they have the same distribution functions
which we write as X1 =d X2 .
There are several additional properties that follow immediately from the definition of the characteristic function.

(b) ϕX (0) = 1.
(c) | ϕX (t) | ≤ 1.
(d) ϕX (t) is uniformly continuous in R.
(e) The characteristic function of a + bX, where a and b are real numbers, is

    ϕa+bX (t) = e^{iat} ϕX (bt).

(f) ϕ−X (t) = ϕ̄X (t), the complex conjugate of ϕX (t).
(g) ϕX (t) is real valued if and only if X =d −X.
(h) For any n, any complex numbers zl , l = 1, 2, . . . , n, and any real tl , l = 1, 2, . . . , n we have

    Σ_{l=1}^{n} Σ_{k=1}^{n} zl z̄k ϕX (tl − tk ) ≥ 0.    (4.5)
Proof:
(d) Let us pause to think what we are supposed to prove. A function ϕX (t) is by definition uniformly
continuous in R [69, p. 68], if it holds that for all ǫ > 0, there exists a δ > 0 such that |ϕX (t+h)−ϕX (t)| ≤
ǫ for all |h| ≤ δ and all t ∈ R. The point is that δ is independent of t, i.e., that δ depends only on ǫ. In
order to prove this let us assume, without restriction of generality, that h > 0. Then we have

|ϕX (t + h) − ϕX (t)| = | E [e^{itX} (e^{ihX} − 1)] | ≤ E [| e^{itX} | · | e^{ihX} − 1 |] = E [| e^{ihX} − 1 |],

since | e^{itX} | = 1.
From the expression in the right hand side the claim about uniform continuity is obvious, if E [| e^{ihX} − 1 |] →
0, as h → 0, since we can then make E [| e^{ihX} − 1 |] arbitrarily small by choosing h sufficiently small,
independently of t.
It is clear that e^{ihX} − 1 → 0 (almost surely), as h → 0. Since | e^{ihX} − 1 | ≤ 2, we can apply the dominated
convergence theorem 1.8.7 to establish E [| e^{ihX} − 1 |] → 0. Hence we have proved the assertion in part
(d).
(f) We compute

ϕ−X (t) = E [e^{−itX}] = E [cos(tX)] − iE [sin(tX)],

which is the complex conjugate of

E [cos(tX)] + iE [sin(tX)] = E [e^{itX}] = ϕX (t).

Hence ϕ−X (t) = ϕ̄X (t), where z̄ stands for the conjugate of the complex number z.
(g) Let us first suppose that the characteristic function of X is real valued, which implies that ϕX (t) = ϕ̄X (t).
But we have found in the proof of (f) that ϕ̄X (t) is the characteristic function of −X. By uniqueness of
the characteristic functions, theorem 4.2.2 above, this means that X =d −X, as was to be shown.
Let us next suppose that X =d −X. Then ϕX (t) = ϕ−X (t) and by (f) ϕ−X (t) = ϕ̄X (t), and therefore
ϕX (t) = ϕ̄X (t), and the characteristic function of X is real valued.
(h) Take any n and complex numbers zl and real tl , l = 1, 2, . . . , n. Then we write, using the properties of
complex numbers and the definition of ϕX,

Σ_{l=1}^{n} Σ_{k=1}^{n} zl z̄k ϕX (tl − tk ) = Σ_{l=1}^{n} Σ_{k=1}^{n} zl z̄k E [e^{i(tl − tk)X}]

= E [ Σ_{l=1}^{n} zl e^{itl X} Σ_{k=1}^{n} z̄k e^{−itk X} ] = E [ | Σ_{l=1}^{n} zl e^{itl X} |² ] ≥ 0,

since the second sum is the complex conjugate of the first.
The properties (a)-(h) in the preceding theorem are necessary conditions, i.e., they will be fulfilled if a
function is a characteristic function of a random variable. The condition (h), i.e., (4.5), says that a characteristic
function is nonnegative definite.
There are several sets of necessary and sufficient conditions for a complex valued function to be a
characteristic function of some random variable. One of these is known as Bochner’s theorem. This theorem
states that an arbitrary complex valued function ϕ is the characteristic function of some random variable if
and only if (i) -(iii) hold, where (i) ϕ is non-negative definite, (ii) ϕ is continuous at the origin, (iii) ϕ(0) = 1.
Unfortunately the condition (i), i.e., (4.5), is in practice rather difficult to verify.
For X ∈ N (0, 1), if we are allowed to move differentiation w.r.t. t inside the integral sign, we get

ϕX^{(1)} (t) = (d/dt) ϕX (t) = (d/dt) ∫_{−∞}^{∞} e^{itx} (1/√(2π)) e^{−x²/2} dx

= ∫_{−∞}^{∞} ix e^{itx} (1/√(2π)) e^{−x²/2} dx = ∫_{−∞}^{∞} (−i) e^{itx} (1/√(2π)) (−x e^{−x²/2}) dx

= 0 − tϕX (t),

where the last step is an integration by parts. In other words we have encountered the differential equation
ϕX^{(1)} (t) + tϕX (t) = 0. This equation has the integrating factor e^{t²/2}, or, in other words we have the equation

(d/dt) ( e^{t²/2} ϕX (t) ) = 0.

We solve this with ϕX (t) = C e^{−t²/2}. Since ϕX (0) = 1 by (b) in theorem 4.2.3 above, we get C = 1. Thus we
have obtained the result

X ∈ N (0, 1) ⇔ ϕX (t) = e^{−t²/2}.    (4.7)
We observe that e^{−t²/2} is a real valued function. Hence theorem 4.2.3 (g) shows that if X ∈ N (0, 1),
then −X ∈ N (0, 1), which is also readily checked without transforms.
Example 4.2.5 (Normal Distribution X ∈ N (µ, σ²)) Let Z ∈ N (0, 1) and set X = σZ + µ, where σ > 0
and µ is an arbitrary real number. Then we find that

FX (x) = P (X ≤ x) = P (Z ≤ (x − µ)/σ) = Φ ((x − µ)/σ),

where Φ(x) is the distribution function of Z ∈ N (0, 1) and (d/dx) Φ(x) = φ (x). Thus we obtain by (4.6) that

fX (x) = (d/dx) FX (x) = (1/σ) φ ((x − µ)/σ) = (1/(σ√(2π))) e^{−(x−µ)²/(2σ²)}.

Hence X ∈ N (µ, σ²). But by (e) in theorem 4.2.3 we have

ϕX (t) = ϕσZ+µ (t) = e^{iµt} ϕZ (σt) = e^{iµt} e^{−σ²t²/2},

so that

X ∈ N (µ, σ²) ⇔ ϕX (t) = e^{iµt − σ²t²/2}.    (4.8)
Example 4.2.6 (Poisson Distribution) Let X ∈ Po (λ), λ > 0. Due to definition (4.4)

ϕX (t) = Σ_{k=0}^{∞} e^{itk} e^{−λ} λ^k/k! = e^{−λ} Σ_{k=0}^{∞} (e^{it} λ)^k/k! = e^{−λ} e^{e^{it} λ} = e^{λ(e^{it} −1)},

where we invoked the standard series expansion of e^z for any complex z. In other words, we have found the
following:

X ∈ Po (λ) ⇔ ϕX (t) = e^{λ(e^{it} −1)}.    (4.9)
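The closed form (4.9) can be checked numerically against a truncated version of the defining series; the parameter values below are arbitrary illustrative choices:

```python
import cmath
import math

# Numerical check of (4.9): for X in Po(lam), the truncated series
# sum_k e^{itk} e^{-lam} lam^k / k! should match e^{lam (e^{it} - 1)}.
lam, t = 2.5, 1.3
direct = sum(cmath.exp(1j * t * k) * math.exp(-lam) * lam**k / math.factorial(k)
             for k in range(80))
closed = cmath.exp(lam * (cmath.exp(1j * t) - 1))
assert abs(direct - closed) < 1e-10
```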
Some of the next few examples are concerned with the continuous case of the definition by evaluating the integral
in (4.4). The reader with a taste for mathematical rigor may be disconcerted by the fact that we will be
proceeding as if everything was real valued. This is a pragmatic simplification of the presentation, and the
results in the cases below will equal those obtained when using a more rigorous approach. The computation of
Fourier transforms and inverse Fourier transforms can then, of course, require contour integration and residue
calculus, which we do not enter upon in the main body of the text. An exception is the section on Mellin
transforms.
Example 4.2.7 (Exponential Distribution) Let X ∈ Exp (λ), λ > 0. By definition (4.4)

ϕX (t) = E [e^{itX}] = ∫_0^∞ e^{itx} (1/λ) e^{−x/λ} dx = (1/λ) ∫_0^∞ e^{−x((1/λ)−it)} dx

= (1/λ) [ (−1/((1/λ) − it)) e^{−x((1/λ)−it)} ]_0^∞ = (1/λ) · 1/((1/λ) − it) = 1/(1 − iλt).

Thus we have

X ∈ Exp(λ) ⇔ ϕX (t) = 1/(1 − iλt).    (4.10)
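A numerical Riemann-sum integration confirms (4.10); the parameter values and the truncation of the integration range are arbitrary choices:

```python
import cmath
import math

# Midpoint-rule check of (4.10):
# integral_0^inf e^{itx} (1/lam) e^{-x/lam} dx = 1/(1 - i lam t).
lam, t = 0.7, 2.0
h, N = 0.001, 60000           # step size; range [0, 60], tail is negligible
integral = sum(
    cmath.exp(1j * t * (k + 0.5) * h) * math.exp(-(k + 0.5) * h / lam) / lam
    for k in range(N)
) * h
assert abs(integral - 1 / (1 - 1j * lam * t)) < 1e-3
```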
Example 4.2.8 (Laplace Distribution) X ∈ L (1) says that X has the p.d.f.

fX (x) = (1/2) e^{−|x|},  −∞ < x < +∞.    (4.11)

The definition in (4.4) gives

ϕX (t) = (1/2) ∫_{−∞}^{∞} e^{itx} e^{−|x|} dx,

where

∫_{−∞}^{∞} e^{itx} e^{−|x|} dx = ∫_{−∞}^{0} e^{itx} e^{x} dx + ∫_{0}^{∞} e^{itx} e^{−x} dx.    (4.12)

We change the variable x = −u in the first integral in the right hand side of (4.12), which yields

∫_{−∞}^{0} e^{itx} e^{x} dx = ∫_{∞}^{0} e^{−itu} e^{−u} (−1)du = ∫_{0}^{∞} e^{−itu} e^{−u} du,

which is seen to be the complex conjugate of the second integral in the right hand side of (4.12). This second
integral is in its turn recognized from the directly preceding example as the characteristic function of Exp(1).
Thus we get by (4.10)

∫_{0}^{∞} e^{itx} e^{−x} dx = 1/(1 − it).

Hence

(1/2) ∫_{−∞}^{∞} e^{itx} e^{−|x|} dx = (1/2) ( 1/(1 + it) + 1/(1 − it) ) = (1/2) · (1 + it + 1 − it)/(1 + t²) = 1/(1 + t²).

In summary,

X ∈ L (1) ⇔ ϕX (t) = 1/(1 + t²).    (4.13)

The theorem 4.2.3 (g) shows that if X ∈ L (1), then X =d −X.
Now consider X ∈ Exp(1) and Y ∈ Exp(1), X and Y independent. Then

ϕX−Y (t) = ϕX (t) ϕ−Y (t) = ϕX (t) ϕY (−t),

and by (4.10)

ϕX−Y (t) = (1/(1 − it)) · (1/(1 − i(−t))) = (1/(1 − it)) · (1/(1 + it)) = 1/(1 + t²).

Here a reference to (4.13) gives that if X ∈ Exp(1), Y ∈ Exp(1), X and Y independent, then

X − Y ∈ L (1) .    (4.14)

We have X − Y =d Y − X, too.
Example 4.2.10 (Gamma Distribution) Let X ∈ Γ (p, a), p > 0, a > 0. The p.d.f. is

fX (x) = (1/Γ(p)) (x^{p−1}/a^p) e^{−x/a} for 0 ≤ x, and fX (x) = 0 for x < 0.    (4.15)

By definition (4.4)

ϕX (t) = ∫_0^∞ e^{itx} (1/Γ(p)) (x^{p−1}/a^p) e^{−x/a} dx = (1/Γ(p)) (1/a^p) ∫_0^∞ x^{p−1} e^{−x((1/a)−it)} dx.

The change of variable u = x((1/a) − it) gives

ϕX (t) = (1/a^p) (1/Γ(p)) (1/((1/a) − it)^p) ∫_0^∞ u^{p−1} e^{−u} du.

By definition of the Gamma function Γ(p) = ∫_0^∞ u^{p−1} e^{−u} du, and the desired characteristic function is

ϕX (t) = (1/a^p) (1/((1/a) − it)^p) = 1/(1 − iat)^p.    (4.16)
Example 4.2.11 (Standard Cauchy) X ∈ C (0, 1) says that X is a continuous r.v., and has the p.d.f.

fX (x) = (1/π) · 1/(1 + x²),  −∞ < x < ∞.    (4.17)

We are going to find the characteristic function of X ∈ C (0, 1) by the duality argument or the symmetry
property of the Fourier transforms, see [100, p. 252]. Since all transforms involved are real, we have no
difficulty with the fact that the characteristic function is the complex conjugate of the Fourier transform.

Remark 4.2.1 The symmetry or duality property of the Fourier transform in (4.1) is as follows.
If f (x) →F fb(t), then fb(x) →F 2πf (−t). Applied to the Laplace pair in (4.11) and (4.13), this gives

X ∈ C (0, 1) ⇔ ϕX (t) = e^{−|t|}.    (4.18)
Example 4.2.12 (Point Mass Distribution) For the purposes of several statements in the sequel we intro-
duce a probability mass function with a notation reminiscent of the Dirac pulse:

δc (x) = 1 if x = c, and δc (x) = 0 if x ≠ c.    (4.19)

Then δc is a distribution such that all mass is located at c. In the terminology of appendix 2.5, δc defines a
purely discrete measure with one atom at c. Then, if X ∈ δc ,

ϕX (t) = E [e^{itX}] = e^{itc}.
Example 4.2.13 (Bernoulli Distribution) Let X ∈ Be (p). Here p = P (X = 1). Then we apply again the
discrete case of the definition (4.4) and get

ϕX (t) = E [e^{itX}] = e^{it·0} (1 − p) + e^{it} p = (1 − p) + e^{it} p.

Example 4.2.14 (Symmetric Bernoulli Distribution) The characteristic function of X ∈ SymBe with
p.m.f. in (2.50) is computed as

ϕX (t) = E [e^{itX}] = (1/2) e^{−it} + (1/2) e^{it} = cos(t).

X ∈ SymBe ⇔ ϕX (t) = cos(t).    (4.22)
4.3. CHARACTERISTIC FUNCTIONS AND MOMENTS OF RANDOM VARIABLES 127
Example 4.2.15 (Binomial Distribution) Let X ∈ Bin (n, p). The discrete case of the definition (4.4)
yields

ϕX (t) = Σ_{k=0}^{n} e^{itk} P (X = k) = Σ_{k=0}^{n} e^{itk} (n choose k) p^k (1 − p)^{n−k} = Σ_{k=0}^{n} (n choose k) (e^{it} p)^k (1 − p)^{n−k}

= (e^{it} p + (1 − p))^n.
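The binomial characteristic function above can be checked by comparing the defining sum with the closed form; the parameter values are arbitrary:

```python
import cmath
import math

# Check: sum_k e^{itk} C(n,k) p^k (1-p)^{n-k} = (p e^{it} + (1-p))^n.
n, p, t = 10, 0.4, 0.9
direct = sum(cmath.exp(1j * t * k) * math.comb(n, k) * p**k * (1 - p)**(n - k)
             for k in range(n + 1))
closed = (p * cmath.exp(1j * t) + (1 - p)) ** n
assert abs(direct - closed) < 1e-12
```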
Theorem 4.3.1 If the random variable X has the expectation, E [| X |] < ∞, then

(d/dt) ϕX (t) |_{t=0} = (d/dt) ϕX (0) = iE [X] .    (4.24)

If E [| X |^k] < ∞, then

(d^k/dt^k) ϕX (0) = i^k E [X^k] .    (4.25)

Proof: Formally, (d/dt) ϕX (t) = E [(d/dt) e^{itX}] = E [iX e^{itX}]. Hence (d/dt) ϕX (0) = iE [X]. The legitimacy of changing
the order of differentiation and expectation is taken for granted.
We can do some simple examples.
Example 4.3.2 (The Cauchy Distribution) In (4.18) X ∈ C (0, 1) was shown to have the characteristic
function ϕX (t) = e−|t| . Let us note that |t| does not have a derivative at t = 0.
Example 4.3.3 (Mean and Variance of the Poisson Distribution) We have in (4.9)

X ∈ Po (λ) ⇔ ϕX (t) = e^{λ(e^{it}−1)}.

Then

(d/dt) ϕX (t) = e^{λ(e^{it}−1)} · iλe^{it},

and by (4.24)

E [X] = (1/i) (d/dt) ϕX (0) = λ,

as is familiar from any first course in probability and/or statistics. Further,

(d²/dt²) ϕX (t) = e^{λ(e^{it}−1)} · i²λ²e^{i2t} + e^{λ(e^{it}−1)} i²λe^{it},

so that by (4.25) E [X²] = λ² + λ, and hence Var [X] = E [X²] − (E [X])² = λ.
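The moment formula (4.24) is easy to illustrate numerically: a central difference of the Poisson characteristic function at t = 0 should return iλ (the step size and λ are arbitrary choices):

```python
import cmath

# Numerical check of (4.24) for X in Po(lam): the derivative of
# phi_X(t) = e^{lam (e^{it} - 1)} at t = 0 should equal i * E[X] = i * lam.
lam, h = 3.0, 1e-6
phi = lambda t: cmath.exp(lam * (cmath.exp(1j * t) - 1))
deriv = (phi(h) - phi(-h)) / (2 * h)   # central difference, error O(h^2)
assert abs(deriv - 1j * lam) < 1e-6
```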
Theorem 4.4.1 X1 , X2 , . . . , Xn are independent random variables with respective characteristic functions
ϕXk (t), k = 1, 2, . . . , n. Then the characteristic function ϕSn (t) of their sum Sn = Σ_{k=1}^{n} Xk is given by

ϕSn (t) = Π_{k=1}^{n} ϕXk (t).    (4.26)

Corollary 4.4.2 X1 , X2 , . . . , Xn are independent and identically distributed random variables with the char-
acteristic function ϕX (t), X =d Xk . Then the characteristic function ϕSn (t) of their sum Sn = Σ_{i=1}^{n} Xi is given
by

ϕSn (t) = (ϕX (t))^n.    (4.27)

If X and Y are independent random variables with probability densities fX and fY , respectively, then their
sum Z = X + Y has, as is checked in (2.110), the p.d.f. given by the convolutions

fZ (z) = ∫_{−∞}^{∞} fX (x)fY (z − x)dx = ∫_{−∞}^{∞} fY (y)fX (z − y)dy,

written as

fZ = fX ∗ fY ,

while (4.26) gives ϕZ (t) = ϕX (t) ϕY (t).
This is plainly nothing but a well known and important property (convolution theorem) of Fourier transforms,
[100, p. 177].
As applications of the preceding we can prove a couple of essential theorems.
Theorem 4.4.3 X1 , X2 , . . . , Xn are independent and Xk ∈ N (µk , σk²) for k = 1, 2, . . . , n. Then for any real
constants a1 , . . . , an

Sn = Σ_{k=1}^{n} ak Xk ∈ N ( Σ_{k=1}^{n} ak µk , Σ_{k=1}^{n} ak² σk² ).    (4.28)

Proof: By (4.26), (4.8) and (e) in theorem 4.2.3,

ϕSn (t) = Π_{k=1}^{n} ϕak Xk (t) = Π_{k=1}^{n} e^{iµk ak t − ak² σk² t²/2},

and after some elementary rearrangements using the properties of the exponential function we find

ϕSn (t) = e^{i Σ_{k=1}^{n} µk ak t − (Σ_{k=1}^{n} ak² σk²) t²/2}.

A comparison with (4.8) identifies ϕSn (t) as the characteristic function of N ( Σ_{k=1}^{n} ak µk , Σ_{k=1}^{n} ak² σk² ). By
uniqueness of the characteristic function we have shown the assertion as claimed.

Example 4.4.4 Let X1 , . . . , Xn be I.I.D. and ∈ N (µ, σ²). Set X̄ = (1/n) Σ_{i=1}^{n} Xi . Thus X̄ ∈ N (µ, σ²/n).
Example 4.4.7 (Sum of Two Independent Binomial Random Variables with the same p) X1 ∈ Bin (n1 , p),
X2 ∈ Bin (n2 , p), X1 and X2 are independent. Then by (4.26) and example 4.2.15, ϕX1+X2 (t) = (e^{it} p + (1 − p))^{n1+n2},
so that X1 + X2 ∈ Bin (n1 + n2 , p).
Example 4.4.9 (Gamma Distribution a Sum of Independent Exp(λ) Variables) Let X ∈ Γ (n, λ), where
n is a positive integer. Then the finding in (4.16) shows in view of (4.10) that X is in distribution equal to
a sum of n independent Exp(λ)-distributed variables. In view of (2.2.10) we can also state that a sum of n
independent Exp(λ)-distributed variables has an Erlang distribution.
Example 4.4.10 (Sum of Two Independent Gamma Distributed Random Variables) Let X1 ∈ Γ (n1 , λ)
and X2 ∈ Γ (n2 , λ). Then in view of (4.16) and (4.26) we get that
X1 + X2 ∈ Γ (n1 + n2 , λ) .
4.5. EXPANSIONS OF CHARACTERISTIC FUNCTIONS 131
The case n = 0 of (4.32), i.e.,

| e^{ix} − 1 | ≤ min (|x|, 2),

is easily seen by drawing a picture of the complex unit circle and a chord of it to depict e^{ix} − 1.
We make the induction assumption that (4.32) holds for n. We wish to prove the assertion for n + 1. By
complex conjugation we find that it suffices to consider x > 0. We proceed by expressing the function to be
bounded as a definite integral:

e^{ix} − Σ_{k=0}^{n+1} (ix)^k/k! = e^{ix} − 1 − Σ_{k=1}^{n+1} (ix)^k/k! = e^{ix} − 1 − Σ_{k=0}^{n} (ix)^{k+1}/(k + 1)! = ∫_0^x ( e^{it} − Σ_{k=0}^{n} (it)^k/k! ) d(it).

Thus

| e^{ix} − Σ_{k=0}^{n+1} (ix)^k/k! | ≤ ∫_0^x | e^{it} − Σ_{k=0}^{n} (it)^k/k! | dt.

At this point of the argument we use the induction hypothesis, i.e., (4.32) holds for n. This yields

∫_0^x | e^{it} − Σ_{k=0}^{n} (it)^k/k! | dt ≤ ∫_0^x min ( |t|^{n+1}/(n + 1)!, 2|t|^n/n! ) dt ≤ min ( |x|^{n+2}/(n + 2)!, 2|x|^{n+1}/(n + 1)! ).

(Why does the last inequality hold?) In summary, we have shown that

| e^{ix} − Σ_{k=0}^{n+1} (ix)^k/k! | ≤ min ( |x|^{n+2}/(n + 2)!, 2|x|^{n+1}/(n + 1)! ),

which evidently tells us that the assertion (4.32) holds for n + 1. The proof by induction is complete.
The bound (4.32) leads immediately to the next bound for the expansion of a characteristic function: if the
moments involved exist, then

| ϕX (t) − Σ_{k=0}^{n} ((it)^k/k!) E [X^k] | ≤ E [ min ( |tX|^{n+1}/(n + 1)!, 2|tX|^n/n! ) ].    (4.33)

Now we apply the error bound in (4.32) on the expression inside the expectation, and the upper bound in (4.33)
follows.
For ease of effort in the sequel we isolate an important special case of the preceding. With n = 2 we have in
(4.33) the error bound

E [ min ( |tX|³/3!, 2|tX|²/2! ) ] = |t|² E [ min ( |t||X|³/3!, |X|² ) ],

and by dominated convergence (the integrand is bounded by |X|²) the expectation in the right hand side tends
to 0 as t → 0. Hence the left hand side is of the form o(t²), where o(t²) fulfills lim_{t→0} o(t²)/t² = 0. Thus we can write

ϕX (t) = 1 + itE [X] − (t²/2) E [X²] + o (t²).    (4.34)
In view of the preceding there is also the following series expansion.
Theorem 4.5.3 Suppose that the random variable X has the nth moment E [| X |^n] < ∞ for some n. Then

ϕX (t) = 1 + Σ_{k=1}^{n} ((it)^k/k!) E [X^k] + o(|t|^n).    (4.35)
Let Y1 , Y2 , . . . be I.I.D. with E [Yk ] = 0 and Var [Yk ] = 1, and set

Wn  def=  Σ_{k=1}^{n} Yk /√n = (1/√n) Σ_{k=1}^{n} Yk .

We shall now compute the characteristic function of Wn and then see what happens to this function, as n → ∞.
It turns out that the scaling must be taken exactly as 1/√n for anything useful to emerge.
4.6. AN APPENDIX: A LIMIT 133
By (4.27) with Y =d Yk it follows that

ϕWn (t) = ϕ_{Σ_{k=1}^{n} Yk/√n} (t) = ( ϕ_{Y/√n} (t) )^n.

By property (e) in theorem 4.2.3 we have that ϕ_{Y/√n} (t) = ϕY (t/√n). Thus

ϕWn (t) = ( ϕY (t/√n) )^n.

When we expand ϕY (t/√n) as in (4.34) we obtain, as E [Yk ] = 0 and Var [Yk ] = 1,

ϕY (t/√n) = 1 − t²/(2n) + o (t²/n).

Thereby we get

ϕWn (t) = ( 1 − t²/(2n) + o (t²/n) )^n.

It is shown in the Appendix 4.6, see (4.42), that now

lim_{n→∞} ϕWn (t) = e^{−t²/2}.    (4.36)

In view of (4.7) we observe that the characteristic function of the scaled sum Wn of random variables converges
by the above for all t to the characteristic function of N (0, 1). We have now in essence proved a version of the
Central Limit Theorem, but the full setting of convergence of sequences of random variables will be treated
in chapter 6.
The proof above is strictly speaking valid for sequences of real numbers. We shall next present two additional
arguments.
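The limit (4.36) is easy to illustrate numerically, here with the o(t²/n) term omitted; the value of t is an arbitrary choice:

```python
import math

# Numerical illustration of (4.36): (1 - t^2/(2n))^n approaches e^{-t^2/2}
# for large n.
t = 1.5
n = 10**6
target = math.exp(-t**2 / 2)
approx = (1 - t**2 / (2 * n)) ** n
assert abs(approx - target) < 1e-5
```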
≤ | zn − wn | + | Π_{i=1}^{n−1} zi − Π_{i=1}^{n−1} wi |,

and then

|u^n − v^n| ≤ |u − v||u^{n−1}| + |v||u^{n−1} − v^{n−1}|.    (4.41)

We note that

|u^{n−1}| ≤ max(|u|, |v|)^{n−1}

and that

|v| · max(|u|, |v|)^{n−2} ≤ max(|u|, |v|) · max(|u|, |v|)^{n−2} = max(|u|, |v|)^{n−1}.
4.6.3 Applications
The situation corresponding to (4.37) is often encountered as

lim_{n→∞} ( 1 − t²/(2n) + o (t²/n) )^n = e^{−t²/2}.    (4.42)

1. Let us set
   cn = − t²/2 + n · o (t²/n),
   so that
   cn/n = − t²/(2n) + o (t²/n).
   Then n · o (t²/n) = t² · ( o (t²/n) / (t²/n) ) → 0, as n → ∞. Thus cn → −t²/2, as n → ∞, and (4.42)
   follows by (4.37).

2. Let us now check (4.42) using the inequalities in the preceding section. With regard to lemma 4.6.2 we
   take for all i

   zi = 1 − t²/(2n) + o (t²/n)  and  wi = 1 − t²/(2n).

   Then |wi | ≤ 1 and |zi | ≤ 1 for all sufficiently large n, and

   | Π_{i=1}^{n} zi − Π_{i=1}^{n} wi | = | ( 1 − t²/(2n) + o (t²/n) )^n − ( 1 − t²/(2n) )^n |.

   By the lemma, in the form |u^n − v^n| ≤ |u − v| · n for |u| ≤ 1, |v| ≤ 1, the right hand side is bounded by
   n · |o (t²/n)| → 0 as n → ∞, so that the two sequences have the same limit.
4.7 Exercises
4.7.1 Additional Examples of Characteristic Functions
1. Let a < b. Show that

   X ∈ U (a, b) ⇔ ϕX (t) = (e^{itb} − e^{ita}) / (it(b − a)).    (4.43)

2. Let X ∈ Tri(−1, 1), which means that the p.d.f. of X is

   fX (x) = 1 − |x| for |x| < 1, and 0 elsewhere.    (4.44)

   Show that

   X ∈ Tri(−1, 1) ⇔ ϕX (t) = ( sin(t/2) / (t/2) )².    (4.45)

3. Let X1 ∈ U (−1/2, 1/2) and X2 ∈ U (−1/2, 1/2). Assume that X1 and X2 are independent. Find the distribution
   of X1 + X2 .
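Exercise 3 connects (4.43) and (4.45): the characteristic function of the sum of two independent U (−1/2, 1/2) variables is the square of the uniform characteristic function, which equals the Tri(−1, 1) characteristic function. A quick numerical check at an arbitrary point t:

```python
import cmath
import math

# phi of U(-1/2, 1/2) from (4.43) with a = -1/2, b = 1/2, squared (by the
# independence/product rule), compared with the Tri(-1,1) formula (4.45).
t = 2.7
phi_U = (cmath.exp(1j * t / 2) - cmath.exp(-1j * t / 2)) / (1j * t)
phi_tri = (math.sin(t / 2) / (t / 2)) ** 2
assert abs(phi_U ** 2 - phi_tri) < 1e-12
```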
6. Stable Distributions
then the distribution of X is stable. (It can be verified that ϕX (t) is in fact a characteristic function.)
Interpret (a) and (b) in terms of (4.48).
Note that the r.v.’s Sk /k are not independent.
8. (From [35]) Here we study the product of two independent standard Gaussian variables. More on products
of independent random variables is given in section 4.7.4.
(a) X1 ∈ N (0, 1) and X2 ∈ N (0, 1) are independent. Show that the characteristic function of their
product Y = X1 · X2 is
ϕY (t) = 1/√(1 + t²).    (4.49)
(b) Z1 ∈ Γ(a, b) and Z2 ∈ Γ(a, b) are independent. We set
U = Z1 − Z2 ,
and suppose we know that U =d Y , where Y has the distribution in part (a) of this exercise. What
are the values of a and b ? Answer: a = 1/2, b = 1.
9. (From [35]) The r.v. X has the characteristic function ϕ(t). Show that |ϕ(t)|2 is a characteristic function.
Aid: Take X1 and X2 as independent with X1 =d X as well as X2 =d X. Check the characteristic function
of Y = X1 − X2 .
10. X ∈ IG(µ, λ) with p.d.f. given in (2.37). Find its characteristic function as

    ϕX (t) = e^{ (λ/µ) ( 1 − √(1 − 2µ²it/λ) ) }.
11. X ∈ K(L, µ, ν) as in example 2.2.22. What could ϕX (t) be ? Aid: None known.
N = min{n | Xn = 0},
1
(a) Show that N ∈ Fs 2 .
PN
(b) Show that the characteristic function of SN = k=1 Xk is ϕSN (t) = 1/(2 − cos t).
(c) Find Var (SN ) (Answer: Var (SN ) = 1).
2. (5B1540 04-05-26) The random variable Yn is uniformly distributed on the numbers {j/2^n | j = 0, 1, 2, . . . , 2^n −
   1}. The r.v. Xn+1 ∈ Be (1/2) is independent of Yn .
3. (5B1540 00-08-29) X ∈ Exp(1), Y ∈ Exp(1) are independent random variables. Show by means of
characteristic functions that
X + Y/2 =d max(X, Y ).
4. (an intermediate step of an exam question in FA 181 1981-02-06) Let X1 , X2 , . . . , Xn be independent and
identically distributed. Furthermore, a1 , a2 , . . . , an are arbitrary real numbers. Set
Y1 = a1 X1 + a2 X2 + . . . + an Xn
and
Y2 = an X1 + an−1 X2 + . . . + a1 Xn .
Show that Y1 =d Y2 .
Let X ∈ N (0, σ²) and set

ϕX^{(n)} (t) = (d^n/dt^n) ϕX (t),

where ϕX^{(0)} (t) = ϕX (t), so that ϕX^{(1)} (t) = −tσ² ϕX (t). Show by induction that for n ≥ 2,

ϕX^{(n)} (t) = −(n − 1)σ² ϕX^{(n−2)} (t) − tσ² ϕX^{(n−1)} (t).

Then we get

ϕX^{(n)} (0) = −(n − 1)σ² ϕX^{(n−2)} (0), n ≥ 2,    (4.51)

which is regarded as a difference equation with the initial value ϕX^{(1)} (0) = 0. Solve (4.51).
2. The Rice Method is a technique of computing moments of nonlinear transformations of random variables
by means of characteristic functions [104, pp. 378-]. Let H(x) be a (Borel) function such that its Fourier
transform Hb(t) exists. X is a random variable such that E [H(X)] exists. Then we recall the formula for
the inverse Fourier transform in (4.2) as

H(x) = (1/(2π)) ∫_{−∞}^{∞} e^{itx} Hb(t)dt.
Then it follows, if the interchange of integral and expectation is taken for granted, that
$$E[H(X)] = E\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{itX}\widehat{H}(t)\,dt\right] = \frac{1}{2\pi}\int_{-\infty}^{\infty} E\left[e^{itX}\right]\widehat{H}(t)\,dt,$$
and by definition of the characteristic function
$$E[H(X)] = \frac{1}{2\pi}\int_{-\infty}^{\infty} \varphi_X(t)\widehat{H}(t)\,dt. \quad (4.52)$$
This is the tool of the Rice method. It may turn out that the integration in the right hand side can be
performed straightforwardly (often by means of contour integration and residue calculus).
Assume that $X \in N(0, \sigma^2)$. Show that
$$E[\cos(X)] = e^{-\frac{\sigma^2}{2}}.$$
Aid 1.: An engineering formula for the Fourier transform of $\cos(x)$, in the convention matching the inversion formula (4.2), is, [101, p. 413],
$$H(x) = \cos(x) \stackrel{\mathcal{F}}{\mapsto} \widehat{H}(t) = \pi\left(\delta(t - 1) + \delta(t + 1)\right),$$
where $\delta(t)$ is Dirac's delta 'function'.
Aid 2.: If you do not feel comfortable with Dirac's delta, write $\cos(x)$ using Euler's formula; in that approach you do not really need (4.52).
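As an added illustrative check (not part of the original text), the identity $E[\cos(X)] = e^{-\sigma^2/2}$ for $X \in N(0, \sigma^2)$ is easily verified by Monte Carlo simulation; the sample size and $\sigma$ below are arbitrary choices:

```python
import math
import random

random.seed(1)
sigma = 1.5                      # assumed standard deviation
n = 200_000                      # Monte Carlo sample size

# Sample X ~ N(0, sigma^2) and average cos(X)
est = sum(math.cos(random.gauss(0.0, sigma)) for _ in range(n)) / n
exact = math.exp(-sigma**2 / 2)  # E[cos X] = e^{-sigma^2/2}

print(round(est, 3), round(exact, 3))
assert abs(est - exact) < 0.01
```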
Example 4.7.1 A financial portfolio is valued in a domestic currency (e.g., SEK). The prices of shares and
other instruments are uncertain and are modeled as random variables. In addition the exchange rates are
uncertain, hence the value of the portfolio in, say, JPY may be modelled by a product of two random variables.
Example 4.7.2 In statistical methodology an important role is played by the following result. Suppose $X \in N(0, 1)$, $Y \in \chi^2(f)$, and $X$ and $Y$ are independent. Then we know (presumably without a proof) from any first course in statistics that
$$\frac{X}{\sqrt{Y/f}} \in t(f), \quad (4.53)$$
1 Robert Hjalmar Mellin (1854-1933) studied at the University of Helsinki, where his teacher was Gösta Mittag-Leffler, who left Helsinki upon being appointed professor of mathematics at the University of Stockholm. Mellin did post-doctoral studies in Berlin under Karl Weierstrass and in Stockholm. From 1908 until retirement Mellin served as professor of mathematics at the Polytechnic Institute in Helsinki, which subsequently became Helsinki University of Technology and is currently a constituent of Aalto University.
i.e., the ratio follows Student's t-distribution. We hint thereby that this can be shown by a Mellin transformation.
For a random variable $X \geq 0$ with the probability density $f_X(x)$ we define the Mellin transform as
$$\widehat{f}_{M_X}(z) = \int_0^{\infty} x^{z-1} f_X(x)\,dx. \quad (4.54)$$
Considered as a function of the complex variable $z$, $\widehat{f}_{M_X}(z)$ is a function of exponential type and is analytic in a strip parallel to the imaginary axis. The inverse transformation is
$$f_X(x) = \frac{1}{2\pi i}\int_L x^{-z} \widehat{f}_{M_X}(z)\,dz, \quad (4.55)$$
for all $x$ where $f_X(x)$ is continuous. The contour of integration is usually $L = \{c - i\infty, c + i\infty\}$ and lies in the strip of analyticity of $\widehat{f}_{M_X}(z)$.
Several of the exercises below require proficiency in complex analysis to the extent provided in [93].
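As a hedged illustration (an addition, not in the original text): for $X \in \mathrm{Exp}(1)$, i.e. $f_X(x) = e^{-x}$, the Mellin transform (4.54) is $\widehat{f}_{M_X}(z) = \int_0^\infty x^{z-1}e^{-x}\,dx = \Gamma(z)$. The sketch below checks this numerically on the real axis with a simple midpoint rule:

```python
import math

def mellin_exp1(z, upper=50.0, n=200_000):
    """Midpoint-rule approximation of (4.54) for f_X(x) = e^{-x}."""
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x ** (z - 1) * math.exp(-x)
    return total * h

# The Mellin transform of Exp(1) is the Gamma function
for z in (1.5, 2.0, 3.5):
    assert abs(mellin_exp1(z) - math.gamma(z)) < 1e-3
print(round(mellin_exp1(2.0), 4))  # Gamma(2) = 1
```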
4. Let $f_X(x)$ and $f_Y(y)$ be two probability densities on $(0, \infty)$. Let
$$h(x) = \int_0^{\infty} \frac{1}{y} f_X\left(\frac{x}{y}\right) f_Y(y)\,dy = \int_0^{\infty} f_X(y)\,\frac{1}{y} f_Y\left(\frac{x}{y}\right)dy. \quad (4.61)$$
Compute the Mellin transform of $h(x)$. Aid: Recall (2.108) in the preceding.
The function $h(x)$ is called the Mellin convolution of $f_X(x)$ and $f_Y(y)$.
where the strip of analyticity is the half-plane $\mathrm{Re}(z) > 0$. Find $f_X(x)$. A piece of residue calculus is required for the inversion in (4.55).
The requirement $X \geq 0$ would seem to be a serious impediment to the usefulness of the Mellin transform in probability calculus. However, let $X^+ = \max\{X, 0\}$ denote the positive part of $X$ and $X^- = \max\{-X, 0\}$ its negative part. Thus $X = X^+ - X^-$, and
$$XY = X^+Y^+ - X^+Y^- - X^-Y^+ + X^-Y^-,$$
so that the Mellin transform, applied to each of the non negative terms, can be defined for $XY$. This or other similar tricks enable us to extend the transform to the general case2. Then we can show, e.g., that the probability density of the product of $n$ independent $N(0, 1)$ variables is (the student is not required to do this)
$$f_{\prod_{k=1}^n X_k}(x) = \frac{1}{(2\pi)^{n/2}}\,\frac{1}{2\pi i}\int_{c - i\infty}^{c + i\infty} \Gamma^n(z)\left(\frac{x^2}{2^n}\right)^{-z} dz, \quad (4.67)$$
where the contour of integration is a line parallel to the imaginary axis and to the right of the origin. The integral may be evaluated by residue calculus to give
$$f_{\prod_{k=1}^n X_k}(x) = \sum_{j=0}^{\infty} \frac{1}{(2\pi)^{n/2}}\,R(z, n, j),$$
where $R(z, n, j)$ denotes the residue of $\left(\frac{x^2}{2^n}\right)^{-z}\Gamma^n(z)$ at the $n$th order pole at $z = -j$. People knowledgeable in special functions recognize by (4.67) also that $f_{\prod_{k=1}^n X_k}(x)$ is an instance of what is called Meijer's G-function (or H-function) [3, pp. 419-425], which is a generalization of the hypergeometric function. The residues can be evaluated by numerical algorithms, and therefore the probability density and the corresponding distribution function are available computationally and, by virtue of compilation efforts in the past, in tables of function values.
Make the change of variable $x = e^u$ and $z = c - it$ in (4.54). Then we get the transform
$$\widehat{f}_{M_X}(c - it) = \int_{-\infty}^{\infty} e^{u(c - it)} f_X(e^u)\,du. \quad (4.69)$$
This shows in view of (4.1) and (4.2) that we have in fact a pair consisting of a function and its Fourier transform as in (4.3),
$$\left(f_X(e^u)\,e^{uc},\ \widehat{f}_{M_X}(c - it)\right).$$
2 M. D. Springer & W. E. Thompson: The Distribution of Products of Independent Random Variables. SIAM Journal on Applied Mathematics, Vol. 14, No. 3, 1966, pp. 511-526.
Chapter 5

Generating Functions in Probability
5.1 Introduction
The topic in this chapter will be the probability generating functions and moment generating functions in
probability theory. Generating functions are encountered in many areas of mathematics, physics, finance and
engineering. For example, in [3] one finds the generating functions for Hermite, Laguerre, Legendre polynomials
and other systems of polynomials. The calculus of generating functions for problems of discrete mathematics
(e.g., combinatorics) is expounded in [41]. In control engineering and signal processing generating functions are
known plainly as z-transforms, see [93, 100]. The generic concept is as follows.
Consider a sequence of real numbers $(a_k)_{k=0}^{\infty}$; e.g., $a_k$ could be the value of the $k$th Hermite polynomial $H_k$ at $x$. The (ordinary) generating function of $(a_k)_{k=0}^{\infty}$ is defined as
$$G(t) = \sum_{k=0}^{\infty} a_k t^k$$
for those values of $t$ where the sum converges. For a given series there exists a radius of convergence $R$ such that the sum converges absolutely for $|t| < R$ and diverges for $|t| > R$. $G(t)$ can be differentiated or integrated term by term any number of times when $|t| < R$, [69, section 5.4]. We recall also Abel's Theorem: if $\sum_{k=0}^{\infty} a_k$ converges (so that $R \geq 1$), then $\lim_{t \uparrow 1} G(t) = \sum_{k=0}^{\infty} a_k$. In the sequel limits with $t \uparrow 1$ will often be written as $t \to 1$.
In many cases $G(t)$ can be evaluated in closed form. For example, the generating function of the (probabilist's) Hermite polynomials $H_k(x)$ in (2.97) is
$$G(t) = e^{xt - \frac{1}{2}t^2} = \sum_{k=0}^{\infty} H_k(x)\frac{t^k}{k!}.$$
The individual polynomials $H_k(x)$ in the sequence can be recovered (generated) from the explicit $G(t)$ by differentiation.
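As an added illustration (using the computer algebra package sympy, an assumption on available tooling): expanding $e^{xt - t^2/2}$ as a power series in $t$ and multiplying the coefficient of $t^k$ by $k!$ recovers the probabilist's Hermite polynomials, e.g. $H_2(x) = x^2 - 1$ and $H_3(x) = x^3 - 3x$:

```python
import sympy as sp

x, t = sp.symbols('x t')
G = sp.exp(x*t - t**2/2)          # generating function of the Hermite polynomials

# Taylor-expand G in t and read off H_k(x) = k! * [t^k] G(t)
series = sp.series(G, t, 0, 5).removeO()
H = [sp.expand(sp.factorial(k) * series.coeff(t, k)) for k in range(5)]
print(H)
```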
5.2 Probability Generating Functions

Consider a random variable $X$ whose values are non negative integers, with the probability mass function
$$p_X(k) = P(X = k), \quad k = 0, 1, 2, \ldots$$
The first example that comes to mind is $X \in \mathrm{Po}(\lambda)$, see example 2.3.8. When $X$ takes only a finite number of values, we set $p_X(k) = 0$ for those non negative integers that cannot occur.
Definition 5.2.1 (Probability generating function) $X$ is a non negative integer valued random variable. The probability generating function (p.g.f.) $g_X(t)$ of $X$ is defined by
$$g_X(t) \stackrel{\text{def}}{=} E\left[t^X\right] = \sum_{k=0}^{\infty} t^k p_X(k). \quad (5.1)$$
We could be more precise and talk about the p.g.f. of the probability mass function $\{p_X(k)\}_{k=0}^{\infty}$, but it is customary and acceptable to use phrases like 'p.g.f. of a random variable' or 'p.g.f. of a distribution'.
Example 5.2.1 (P.g.f. for Poisson distributed random variables) $X \in \mathrm{Po}(\lambda)$, $\lambda > 0$. The p.g.f. is
$$g_X(t) = \sum_{k=0}^{\infty} t^k e^{-\lambda}\frac{\lambda^k}{k!} = e^{-\lambda}\sum_{k=0}^{\infty}\frac{(t\lambda)^k}{k!} = e^{-\lambda}\cdot e^{t\lambda} = e^{\lambda(t-1)},$$
where we used the series expansion of $e^{t\lambda}$, which converges for all $t$. In summary,
$$X \in \mathrm{Po}(\lambda) \Rightarrow g_X(t) = e^{\lambda(t-1)}. \quad (5.2)$$
We write also
$$g_{\mathrm{Po}}(t) = e^{\lambda(t-1)}, \quad t \in \mathbb{R}.$$
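A quick numerical sanity check (an added sketch, not in the original text): truncating the series (5.1) for the Poisson p.m.f. reproduces $e^{\lambda(t-1)}$; $\lambda$ and $t$ below are arbitrary:

```python
import math

lam, t = 2.0, 0.7          # arbitrary illustrative parameter values

# Truncated version of g_X(t) = sum_k t^k e^{-lam} lam^k / k!
pgf = sum(t**k * math.exp(-lam) * lam**k / math.factorial(k) for k in range(60))
closed_form = math.exp(lam * (t - 1))

print(round(pgf, 10), round(closed_form, 10))
assert abs(pgf - closed_form) < 1e-12
```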
Note that $g_X(1) = \sum_{k=0}^{\infty} p_X(k) = 1$, so the series converges absolutely at least for $|t| \leq 1$. In addition, $g_X(0) = p_X(0)$. By termwise differentiation we get
$$g_X^{(1)}(t) = \frac{d}{dt}g_X(t) = \sum_{k=0}^{\infty} k t^{k-1} p_X(k) = \sum_{k=1}^{\infty} k t^{k-1} p_X(k).$$
Theorem 5.2.3 (Uniqueness) If $X$ and $Y$ are two non negative integer valued random variables such that
$$g_X(t) = g_Y(t)$$
for all $t$ in some interval around the origin, then
$$p_X(k) = p_Y(k), \quad k = 0, 1, 2, \ldots$$
We write this as
$$X \stackrel{d}{=} Y.$$
Proof: Since $g_X(t) = g_Y(t)$ holds in a region of convergence, we can take the equality to hold in some region around the origin. Then we have by (5.3) for all $k$
$$p_X(k) = \frac{g_X^{(k)}(0)}{k!}, \quad p_Y(k) = \frac{g_Y^{(k)}(0)}{k!}.$$
But the assumption implies that $g_X^{(k)}(0) = g_Y^{(k)}(0)$, and hence $p_X(k) = p_Y(k)$ for all $k$.
The uniqueness theorem means in the example above that (5.2) can be strengthened to an equivalence: $X \in \mathrm{Po}(\lambda) \Leftrightarrow g_X(t) = e^{\lambda(t-1)}$.
We can also consider generating functions of functions of $X$. The 'p.g.f. of $Y = H(X)$' would then be
$$g_Y(t) = g_{H(X)}(t) = E\left[t^{H(X)}\right] = \sum_{k=0}^{\infty} t^{H(k)} p_X(k).$$
Example 5.2.4 Let $Y = a + bX$, where $X$ is a non negative integer valued random variable and $a$ and $b$ are non negative integers. Then
$$g_Y(t) = E\left[t^{a + bX}\right] = t^a E\left[t^{bX}\right] = t^a E\left[\left(t^b\right)^X\right] = t^a g_X\left(t^b\right). \quad (5.5)$$
Example 5.2.5 (P.g.f. for Bernoulli random variables) $X \in \mathrm{Be}(p)$, $0 < p < 1$. Here $P(X = 1) = p$ and $P(X = 0) = 1 - p = q$. The p.g.f. is
$$g_X(t) = t^0(1 - p) + tp = q + pt.$$
Hence we have
X ∈ Be(p) ⇔ gX (t) = q + pt. (5.6)
We write also
gBe (t) = q + pt.
We note that $g_X(0) = q = P(X = 0)$, $g_X^{(1)}(t) = p$ and thus $g_X^{(1)}(0) = p = P(X = 1)$, and $g_X^{(k)}(0) = 0$ for $k = 2, 3, \ldots$, as it should be.
Example 5.2.6 (P.g.f. for Binomial random variables) $X \in \mathrm{Bin}(n, p)$, $0 < p < 1$, $q = 1 - p$. The p.g.f. is
$$g_X(t) = \sum_{k=0}^{n} t^k \binom{n}{k} p^k (1 - p)^{n - k} = \sum_{k=0}^{n} \binom{n}{k} (tp)^k (1 - p)^{n - k} = ((1 - p) + tp)^n = (q + tp)^n,$$
by the binomial theorem. Hence
$$X \in \mathrm{Bin}(n, p) \Leftrightarrow g_X(t) = (q + tp)^n. \quad (5.7)$$
We write also
$$g_{\mathrm{Bin}}(t) = (q + tp)^n.$$
When both (5.6) and (5.7) are taken into account, we find
$$g_{\mathrm{Bin}}(t) = (g_{\mathrm{Be}}(t))^n. \quad (5.8)$$
Example 5.2.7 (P.g.f. for Geometric random variables) $X \in \mathrm{Ge}(p)$, $0 < p < 1$, $q = 1 - p$, $p_X(k) = q^k p$, $k = 0, 1, 2, \ldots$. The p.g.f. is
$$g_X(t) = \sum_{k=0}^{\infty} t^k p(1 - p)^k = p\sum_{k=0}^{\infty} (tq)^k,$$
and summing the geometric series gives
$$X \in \mathrm{Ge}(p) \Leftrightarrow g_X(t) = g_{\mathrm{Ge}}(t) = \frac{p}{1 - qt}, \quad |t| < \frac{1}{q}. \quad (5.9)$$
Example 5.2.8 (P.g.f. for First Success random variables) $X \in \mathrm{Fs}(p)$, $0 < p < 1$, $q = 1 - p$, $p_X(k) = q^{k-1}p$, $k = 1, 2, \ldots$. The p.g.f. is
$$g_X(t) = \sum_{k=1}^{\infty} t^k p q^{k-1} = \frac{p}{q}\sum_{k=1}^{\infty} (tq)^k = \frac{p}{q}\left(\sum_{k=0}^{\infty} (tq)^k - 1\right) = \frac{p}{q}\left(\frac{1}{1 - qt} - 1\right) = \frac{pt}{1 - qt},$$
so that
$$X \in \mathrm{Fs}(p) \Leftrightarrow g_X(t) = g_{\mathrm{Fs}}(t) = \frac{pt}{1 - qt}, \quad |t| < \frac{1}{q}. \quad (5.10)$$
Example 5.2.9 (P.g.f. for $X + 1$, $X \in \mathrm{Ge}(p)$) Let $X \in \mathrm{Ge}(p)$, $0 < p < 1$, $q = 1 - p$. We set $Y = X + 1$. Since $X$ has the values $k = 0, 1, 2, \ldots$, the values of $Y$ are $k = 1, 2, \ldots$. To compute the p.g.f. of $Y$ we can use (5.5) with $a = 1$ and $b = 1$ and apply (5.9):
$$g_Y(t) = t \cdot g_X(t) = t \cdot \frac{p}{1 - qt} = \frac{pt}{1 - qt}.$$
Here a look at (5.10) and the uniqueness of p.g.f.s entails
$$X + 1 \in \mathrm{Fs}(p).$$
This makes perfect sense by our definitions. If $X \in \mathrm{Ge}(p)$, then $X$ is the number of independent attempts in a binary trial before one gets the first success, NOT INCLUDING the successful attempt. The first success distribution $\mathrm{Fs}(p)$ is the distribution of the number of independent attempts in a binary trial until one gets the first success, INCLUDING the successful attempt. Clearly these very conceptions imply that $X + 1 \in \mathrm{Fs}(p)$ if $X \in \mathrm{Ge}(p)$. Hence we have re-established this fact by a mechanical calculation. Or, we have checked that the p.g.f. corresponds to the right thing.
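As an added numerical sketch: sampling $X \in \mathrm{Ge}(p)$ and shifting by one, the empirical p.g.f. of $X + 1$ matches $pt/(1 - qt)$; $p$ and $t$ below are arbitrary choices:

```python
import random

random.seed(7)
p, q, t = 0.4, 0.6, 0.5
n = 100_000

# Sample X ~ Ge(p): number of failures before the first success; Y = X + 1
samples = []
for _ in range(n):
    k = 0
    while random.random() >= p:   # failure with probability q
        k += 1
    samples.append(k + 1)

# Empirical p.g.f. of Y = X + 1 versus pt/(1 - qt)
emp_pgf = sum(t**y for y in samples) / n
exact = p * t / (1 - q * t)
print(round(emp_pgf, 3), round(exact, 3))
assert abs(emp_pgf - exact) < 0.01
```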
The factorial moment of order $r$ of $X$ is
$$E[X(X - 1) \cdot \ldots \cdot (X - (r - 1))].$$
Theorem 5.3.1 (Factorial Moments by p.g.f.) $X$ is a non negative integer valued random variable with $E[X^r] < \infty$ for some $r > 0$. Then
$$g_X^{(r)}(1) = E[X(X - 1) \cdot \ldots \cdot (X - (r - 1))]. \quad (5.11)$$
Differentiating (5.1) $r$ times termwise gives $g_X^{(r)}(t) = \sum_{k=0}^{\infty} k(k - 1)\cdots(k - (r - 1))\,t^{k - r} p_X(k)$, but the terms corresponding to $k = 0, 1, \ldots, r - 1$ obviously contribute zero to the sum in the right hand side, and hence the claim in the theorem is true upon letting $t \uparrow 1$.
A number of special cases of the preceding result are of interest as well as of importance. We assume that the moments required below exist.
•
$$g_X^{(1)}(1) = E[X]. \quad (5.12)$$
•
$$g_X^{(2)}(1) = E[X(X - 1)] = E\left[X^2\right] - E[X].$$
As we have
$$\mathrm{Var}[X] = E\left[X^2\right] - (E[X])^2,$$
it follows that
$$\mathrm{Var}[X] = g_X^{(2)}(1) + E[X] - (E[X])^2,$$
or
$$\mathrm{Var}[X] = g_X^{(2)}(1) + g_X^{(1)}(1) - \left(g_X^{(1)}(1)\right)^2. \quad (5.13)$$
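The moment formulas (5.12)-(5.13) are easy to check symbolically; the sketch below (an addition, assuming sympy is available) does so for the Poisson p.g.f. $g(t) = e^{\lambda(t-1)}$, for which $E[X] = \mathrm{Var}[X] = \lambda$:

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
g = sp.exp(lam * (t - 1))              # p.g.f. of Po(lambda)

g1 = sp.diff(g, t).subs(t, 1)          # (5.12): E[X]
g2 = sp.diff(g, t, 2).subs(t, 1)       # second factorial moment
mean = sp.simplify(g1)
var = sp.simplify(g2 + g1 - g1**2)     # (5.13)

print(mean, var)
assert mean == lam and var == lam
```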
Then clearly $S_n$ has, by basic algebra, the non negative integers as values. The results about the p.g.f. of the sum follow exactly as the analogous results for characteristic functions of a sum.
Theorem 5.4.1 If $X_1, X_2, \ldots, X_n$ are independent non negative integer valued random variables with respective p.g.f.s $g_{X_k}(t)$, $k = 1, 2, \ldots, n$, then the p.g.f. $g_{S_n}(t)$ of their sum $S_n = \sum_{k=1}^{n} X_k$ is given by
$$g_{S_n}(t) = g_{X_1}(t) \cdot g_{X_2}(t) \cdot \ldots \cdot g_{X_n}(t). \quad (5.14)$$
For example, if $X_k \in \mathrm{Po}(\lambda_k)$ are independent, then by (5.4)
$$g_{S_n}(t) = e^{\lambda_1(t-1)} \cdot e^{\lambda_2(t-1)} \cdot \ldots \cdot e^{\lambda_n(t-1)} = e^{(\lambda_1 + \lambda_2 + \ldots + \lambda_n)(t-1)},$$
so that by uniqueness $S_n \in \mathrm{Po}(\lambda_1 + \ldots + \lambda_n)$.
Corollary 5.4.3 $X_1, X_2, \ldots, X_n$ are independent and identically distributed non negative integer valued random variables with the p.g.f. $g_X(t)$, $X \stackrel{d}{=} X_k$. Then the p.g.f. $g_{S_n}(t)$ of their sum $S_n = \sum_{k=1}^{n} X_k$ is given by
$$g_{S_n}(t) = (g_X(t))^n. \quad (5.15)$$
The assertions in (5.15) and (5.8) give another proof of the fact in Example 4.4.6.
where we took advantage of the assumed independence between the r.v.'s in the sum and the variable $N$ (an independence condition drops out). But then (5.15) yields
$$= \sum_{n=0}^{\infty} (g_X(t))^n p_N(n).$$
In view of the definition of the p.g.f. of $N$ the last expression is seen to equal
$$= g_N(g_X(t)).$$
We refer to gSN (t) = gN (gX (t)) as the composition formula. An inspection of the preceding proof shows
that the following more general composition formula is also true.
Example 5.5.3 Let $N \in \mathrm{Po}(\lambda)$, $X_k \in \mathrm{Be}(p)$ for $k = 1, 2, \ldots$. From (5.6) $g_X(t) = q + pt$ and from (5.4) $g_N(t) = e^{\lambda(t-1)}$. Then (5.17) becomes
$$g_{S_N}(t) = g_N(g_X(t)) = e^{\lambda(q + pt - 1)},$$
i.e., since $q - 1 = -p$,
$$g_{S_N}(t) = e^{\lambda p(t-1)}.$$
By uniqueness of p.g.f.s we have thus obtained that $S_N \in \mathrm{Po}(\lambda p)$. The result is intuitive: we can think of first generating $N$ ones (1) and then deciding for each of these ones, whether to keep it or not, by drawing independently from a Bernoulli random variable. Then we add the ones that remain. This can be called 'thinning' of the initial Poisson r.v. Therefore thinning of $\mathrm{Po}(\lambda)$ is probabilistically nothing else but drawing an integer from a Poisson r.v. with the intensity $\lambda$ modulated by $p$, i.e., $\mathrm{Po}(\lambda p)$.
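The thinning interpretation is easy to test by simulation; the added sketch below draws $N \in \mathrm{Po}(\lambda)$, keeps each of the $N$ ones independently with probability $p$, and compares the sample mean and variance with those of $\mathrm{Po}(\lambda p)$ (parameters are arbitrary choices):

```python
import math
import random

random.seed(3)
lam, p, n = 6.0, 0.3, 200_000

def sample_poisson(lam):
    """Knuth's method for Po(lam); adequate for moderate lam."""
    L, k, prod = math.exp(-lam), 0, random.random()
    while prod > L:
        k += 1
        prod *= random.random()
    return k

thinned = []
for _ in range(n):
    N = sample_poisson(lam)
    thinned.append(sum(1 for _ in range(N) if random.random() < p))

mean = sum(thinned) / n
var = sum((x - mean) ** 2 for x in thinned) / n
print(round(mean, 2), round(var, 2))   # both should be close to lam * p = 1.8
assert abs(mean - lam * p) < 0.05 and abs(var - lam * p) < 0.05
```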
The result in theorem 5.5.1 has many nice consequences when combined with the moment formulas in section 5.3. Let us assume that all required moments exist.
• By (5.12) $g_X^{(1)}(1) = E[X]$, and thus, since $g_X(1) = 1$,
$$E[S_N] = g_{S_N}^{(1)}(1) = g_N^{(1)}(g_X(1)) \cdot g_X^{(1)}(1) = g_N^{(1)}(1) \cdot g_X^{(1)}(1) = E[N] \cdot E[X]. \quad (5.19)$$
Lemma 5.5.4
$$\mathrm{Var}[S_N] = \mathrm{Var}[N]\,(E[X])^2 + E[N]\,\mathrm{Var}[X]. \quad (5.20)$$
Proof: By the chain rule applied to $g_{S_N}(t) = g_N(g_X(t))$ we have $g_{S_N}^{(2)}(1) = g_N^{(2)}(1)\left(g_X^{(1)}(1)\right)^2 + g_N^{(1)}(1)\,g_X^{(2)}(1)$, where
$$g_N^{(2)}(1) = E\left[N^2\right] - E[N] \quad \text{and} \quad g_X^{(2)}(1) = E\left[X^2\right] - E[X].$$
In addition, by (5.19),
$$g_{S_N}^{(1)}(1) = E[N] \cdot E[X].$$
Inserting these into (5.13) we get
$$\mathrm{Var}[S_N] = \left(E\left[N^2\right] - E[N]\right)(E[X])^2 + E[N]\left(E\left[X^2\right] - E[X]\right) + E[N] \cdot E[X] - \left(E[N] \cdot E[X]\right)^2.$$
We simplify this step-by-step (one simplification per line), e.g., with eventual applications of Steiner's formula (2.6):
$$= E\left[N^2\right](E[X])^2 - E[N](E[X])^2 + E[N]\left(E\left[X^2\right] - E[X]\right) + E[N] \cdot E[X] - \left(E[N] \cdot E[X]\right)^2$$
$$= \left(E\left[N^2\right] - (E[N])^2\right)(E[X])^2 - E[N](E[X])^2 + E[N]\left(E\left[X^2\right] - E[X]\right) + E[N] \cdot E[X]$$
$$= \mathrm{Var}[N](E[X])^2 - E[N](E[X])^2 + E[N]\left(E\left[X^2\right] - E[X]\right) + E[N] \cdot E[X]$$
$$= \mathrm{Var}[N](E[X])^2 - E[N](E[X])^2 + E[N]E\left[X^2\right] - E[N]E[X] + E[N] \cdot E[X]$$
$$= \mathrm{Var}[N](E[X])^2 - E[N](E[X])^2 + E[N]E\left[X^2\right]$$
$$= \mathrm{Var}[N](E[X])^2 + E[N]\left(E\left[X^2\right] - (E[X])^2\right)$$
$$= \mathrm{Var}[N](E[X])^2 + E[N]\,\mathrm{Var}[X],$$
which is the assertion in (5.20).
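Formula (5.20) can be checked by simulation; in this added sketch $N \in \mathrm{Po}(4)$ and the $X_k$ are uniform on $\{0, 1, 2\}$ (so $E[X] = 1$, $\mathrm{Var}[X] = 2/3$), both arbitrary choices:

```python
import math
import random

random.seed(11)
lam, n = 4.0, 200_000

def sample_poisson(lam):
    # Knuth's method; adequate for moderate lam
    L, k, prod = math.exp(-lam), 0, random.random()
    while prod > L:
        k += 1
        prod *= random.random()
    return k

# X uniform on {0, 1, 2}: E[X] = 1, Var[X] = 2/3
totals = []
for _ in range(n):
    N = sample_poisson(lam)
    totals.append(sum(random.choice((0, 1, 2)) for _ in range(N)))

mean = sum(totals) / n
var = sum((s - mean) ** 2 for s in totals) / n

# (5.20): Var[S_N] = Var[N](E[X])^2 + E[N]Var[X] = 4*1 + 4*(2/3)
exact = lam * 1**2 + lam * (2 / 3)
print(round(var, 2), round(exact, 2))
assert abs(mean - lam * 1) < 0.05
assert abs(var - exact) < 0.15
```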
Since the infinite sequence lacks memory due to independence, we can always drop a finite number of trials in
the beginning and yet, in this new infinite sequence, the probability P (Ek ) is for any k unaffected.
If the first trial is a failure, in order for the outcome to be in Ek , there must be an even number of successes
in the next k − 1 trials (lack of memory), or, in other words the outcome of the next k − 1 trials is in Ek−1 . If
the first trial is a success, then there must be an odd number of successes in the next $k - 1$ trials, or, in other words, the outcome of the next $k - 1$ trials is in the complement $E_{k-1}^c$. Thus we can write
$$E_k = \left(E_{k-1} \cap \{\text{failure in the first trial}\}\right) \cup \left(E_{k-1}^c \cap \{\text{success in the first trial}\}\right). \quad (5.23)$$
We let $p_k$ be defined by
$$p_k \stackrel{\text{def}}{=} P(E_k).$$
Then we can write (5.23) as the difference equation or recursion
$$p_k = q\,p_{k-1} + p\,(1 - p_{k-1}). \quad (5.24)$$
This is actually valid only for k ≥ 2. Namely, if k = 1, an even number of successes can come about in only one
way, namely by making zero successes, and thus we take p1 = q. If the equation in (5.24) is to hold for k = 1,
i.e.,
$$q = p_1 = q\,p_0 + p\,(1 - p_0),$$
we must take $p_0 = 1$. The initial conditions for the equation in (5.24) must therefore be taken as
$$p_1 = q, \quad p_0 = 1. \quad (5.25)$$
Rearranged, (5.24) reads
$$p_k - (q - p)p_{k-1} = p. \quad (5.26)$$
Hence we are dealing with a non homogeneous linear difference equation of first order with constant
coefficients. One can solve (5.26) with the analytic techniques of difference equations [45, pp. 13−14].
We consider first the homogeneous equation
$$p_k - (q - p)p_{k-1} = 0.$$
The standard 'Ansatz' for its solution is $p_k = c_1 z^k$, where $c_1$ is a constant to be determined. This gives clearly the general solution of the homogeneous difference equation as $p_k^H = c_1(q - p)^k$. We need next to find a particular solution of the nonhomogeneous equation
$$p_k - (q - p)p_{k-1} = p.$$
In this situation one seeks a constant as a particular solution. One sees that $p_k^S = c_2\frac{1}{2}$ is a particular solution, where $c_2$ is a constant to be determined. Then we have by linearity the complete solution of (5.24) as the sum of the two solutions
$$p_k = p_k^H + p_k^S = c_1(q - p)^k + c_2\frac{1}{2}.$$
The constants $c_1$ and $c_2$ are next determined by the two initial conditions $p_0 = 1$ and $p_1 = q$. This gives the system of equations $c_1 + \frac{c_2}{2} = 1$, $c_1(1 - 2p) + \frac{c_2}{2} = 1 - p$. Its solution is $c_1 = \frac{1}{2}$ and $c_2 = 1$. Hence we have obtained the complete solution as
$$p_k = \frac{1}{2}(q - p)^k + \frac{1}{2} = \frac{1}{2}\left(1 + (q - p)^k\right).$$
This is the expression we should rediscover by using the generating function.
Let us introduce the (ordinary) generating function, see [55, pp. 86-87] or [35],
$$G(t) = \sum_{k=0}^{\infty} p_k t^k.$$
When we first multiply both sides of (5.24) by $t^k$ and then sum over $k = 1, 2, \ldots$, we get
$$\sum_{k=1}^{\infty} p_k t^k = qt\sum_{k=1}^{\infty} p_{k-1}t^{k-1} + pt\sum_{k=1}^{\infty} t^{k-1} - pt\sum_{k=1}^{\infty} t^{k-1}p_{k-1}$$
$$= qt\sum_{k=0}^{\infty} p_k t^k + pt\sum_{k=0}^{\infty} t^k - pt\sum_{k=0}^{\infty} t^k p_k. \quad (5.27)$$
The left hand side equals $G(t) - p_0 = G(t) - 1$, so (5.27) gives $G(t) - 1 = qtG(t) + \frac{pt}{1-t} - ptG(t)$, and solving for $G(t)$,
$$G(t) = \frac{1}{1 - qt + pt} + \frac{pt}{(1 - t)(1 - qt + pt)}.$$
An expansion by partial fractions yields
$$G(t) = \frac{1}{1 - qt + pt} + \frac{p}{1 - q + p}\,\frac{1}{1 - t} - \frac{p}{1 - q + p}\,\frac{1}{1 - qt + pt}$$
$$= \frac{1}{2}\,\frac{1}{1 - t} + \frac{1}{2}\,\frac{1}{1 - qt + pt},$$
where we used $1 - q + p = 2p$. Thereby
$$2G(t) = \frac{1}{1 - t} + \frac{1}{1 - qt + pt}.$$
If we recast the two terms in the right hand side as sums of respective geometric series, we obtain
$$2\sum_{k=0}^{\infty} p_k t^k = \sum_{k=0}^{\infty} t^k + \sum_{k=0}^{\infty} (q - p)^k t^k = \sum_{k=0}^{\infty} \left(1 + (q - p)^k\right)t^k. \quad (5.28)$$
When we identify the coefficients of $t^k$ in the power series on both sides of (5.28), we get
$$p_k = \frac{1}{2}\left(1 + (q - p)^k\right), \quad k \geq 0, \quad (5.29)$$
which agrees with the expression found by the first method.
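The recursion (5.24) and the closed form (5.29) are quickly compared in code; this added sketch uses an arbitrary $p = 0.3$:

```python
p = 0.3
q = 1 - p

# Iterate the recursion p_k = q p_{k-1} + p (1 - p_{k-1}) with p_0 = 1
pk = 1.0
for k in range(1, 21):
    pk = q * pk + p * (1 - pk)
    closed = 0.5 * (1 + (q - p) ** k)   # closed form (5.29)
    assert abs(pk - closed) < 1e-12
print(round(pk, 6))  # P(even number of successes in 20 trials)
```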
Definition 5.7.1 (Moment generating function) The moment generating function (m.g.f.) $\psi_X(t)$ for a random variable $X$ is defined by
$$\psi_X(t) \stackrel{\text{def}}{=} E\left[e^{tX}\right] = \begin{cases} \sum_{k=-\infty}^{\infty} e^{tx_k}\,p_X(x_k) & \text{discrete r.v.,}\\[1mm] \int_{-\infty}^{\infty} e^{tx} f_X(x)\,dx & \text{continuous r.v.,} \end{cases} \quad (5.30)$$
if there is a positive real number $h$ such that $E\left[e^{tX}\right]$ exists for $|t| < h$.
The requirement of existence of $E\left[e^{tX}\right]$ is not satisfied for any $h > 0$ by, for example, a random variable $X \in C(0, 1)$. Thus $X \in C(0, 1)$ has no m.g.f. and, as has been pointed out in example 2.2.16, has no moments either, for that matter. Having said that, let us note that the analysis of optical fiber communication in [33] is based entirely on m.g.f.s. The pertinent uniqueness theorem is stated next; we omit the proof.
Theorem 5.7.1 (Uniqueness) If $X$ and $Y$ are two random variables such that
$$\psi_X(t) = \psi_Y(t), \quad |t| < h,$$
for some $h > 0$, then
$$X \stackrel{d}{=} Y.$$
The proof of the following theorem should be obvious in view of the proofs of the analogous theorems for characteristic and probability generating functions in the preceding.
Theorem 5.7.2 If $X_1, X_2, \ldots, X_n$ are independent random variables with respective m.g.f.s $\psi_{X_k}(t)$, $k = 1, 2, \ldots, n$, which all exist for $|t| < h$ for some $h > 0$, then the m.g.f. $\psi_{S_n}(t)$ of the sum $S_n = \sum_{k=1}^{n} X_k$ is given by
$$\psi_{S_n}(t) = \psi_{X_1}(t) \cdot \psi_{X_2}(t) \cdot \ldots \cdot \psi_{X_n}(t). \quad (5.31)$$
Corollary 5.7.3 If $X_1, X_2, \ldots, X_n$ are independent and identically distributed random variables with the m.g.f. $\psi_X(t)$, which exists for $|t| < h$, $h > 0$, then the m.g.f. $\psi_{S_n}(t)$ of the sum $S_n = \sum_{k=1}^{n} X_k$ is given by
$$\psi_{S_n}(t) = (\psi_X(t))^n. \quad (5.32)$$
Example 5.7.4 (M.g.f. for Random Variables Taking Values in the Non Negative Integers) If $X$ is a r.v. taking values in the non negative integers, the m.g.f. is by definition (5.30) in the discrete case, assuming existence of $\psi_X(t)$,
$$\psi_X(t) = \sum_{k=0}^{\infty} e^{tk} p_X(k) = \sum_{k=0}^{\infty} \left(e^t\right)^k p_X(k) = g_X\left(e^t\right),$$
where $g_X(e^t)$ is the p.g.f. of $X$ evaluated at $e^t$ in the domain of convergence of the p.g.f. In view of this, several examples of m.g.f.s are immediate. We get by (5.4)
$$X \in \mathrm{Po}(\lambda) \Leftrightarrow \psi_X(t) = e^{\lambda(e^t - 1)},$$
from (5.7)
$$X \in \mathrm{Bin}(n, p) \Leftrightarrow \psi_X(t) = \left(q + e^t p\right)^n,$$
and from (5.10)
$$X \in \mathrm{Fs}(p) \Leftrightarrow \psi_X(t) = \frac{pe^t}{1 - qe^t}, \quad t < -\ln(1 - p).$$
Example 5.7.5 (M.g.f. for $Y = aX + b$) If $X$ is a r.v. with the m.g.f. $\psi_X(t)$, which exists for $|at| < h$, and $Y = aX + b$, where $a$ and $b$ are real numbers, then
$$\psi_Y(t) = E\left[e^{t(aX + b)}\right] = e^{bt}E\left[e^{atX}\right] = e^{bt}\psi_X(at). \quad (5.33)$$
Example 5.7.6 (M.g.f. for $X \in N(0, 1)$) If $X \in N(0, 1)$, we have by the definition (5.30)
$$\psi_X(t) = \int_{-\infty}^{\infty} e^{tx}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}\,dx = e^{t^2/2}\underbrace{\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-(x - t)^2/2}\,dx}_{=1} = e^{t^2/2},$$
by completing the square in the exponent. Here we used the fact that the integrand in the underbraced integral is the probability density of $N(t, 1)$. This m.g.f. exists for all $t$.
$$X \in N(0, 1) \Leftrightarrow \psi_X(t) = e^{\frac{t^2}{2}}. \quad (5.34)$$
Example 5.7.7 (M.g.f. for $X \in N(\mu, \sigma^2)$) If $X \in N(\mu, \sigma^2)$, we have shown in example 4.2.5 above that if $Z \in N(0, 1)$ and $X = \sigma Z + \mu$, then $X \in N(\mu, \sigma^2)$. Then, as in (5.33),
$$X \in N(\mu, \sigma^2) \Leftrightarrow \psi_X(t) = e^{t\mu}\psi_Z(\sigma t) = e^{t\mu + \frac{\sigma^2 t^2}{2}}. \quad (5.35)$$
Example 5.7.8 (M.g.f. for a sum of independent normal random variables) Let $X_1 \in N(\mu_1, \sigma_1^2)$ and $X_2 \in N(\mu_2, \sigma_2^2)$ be independent. Then by (5.31)
$$\psi_{X_1 + X_2}(t) = \psi_{X_1}(t) \cdot \psi_{X_2}(t),$$
and by (5.35)
$$= e^{t\mu_1 + \frac{\sigma_1^2 t^2}{2}} \cdot e^{t\mu_2 + \frac{\sigma_2^2 t^2}{2}} = e^{t(\mu_1 + \mu_2) + \frac{(\sigma_1^2 + \sigma_2^2)t^2}{2}},$$
so that by uniqueness $X_1 + X_2 \in N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$.
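An added symbolic sketch (assuming sympy is available): the product of the two normal m.g.f.s simplifies to the m.g.f. of $N(\mu_1 + \mu_2, \sigma_1^2 + \sigma_2^2)$:

```python
import sympy as sp

t, m1, m2, s1, s2 = sp.symbols('t mu1 mu2 sigma1 sigma2', real=True)

psi1 = sp.exp(t*m1 + s1**2 * t**2 / 2)   # m.g.f. of N(mu1, sigma1^2), cf. (5.35)
psi2 = sp.exp(t*m2 + s2**2 * t**2 / 2)
psi_sum = sp.exp(t*(m1 + m2) + (s1**2 + s2**2) * t**2 / 2)

diff = sp.simplify(psi1 * psi2 - psi_sum)
print(diff)
assert diff == 0
```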
Example 5.7.9 (M.g.f. for an Exponential Random Variable) Let $X \in \mathrm{Exp}(a)$, $a > 0$. The p.d.f. is
$$f_X(x) = \begin{cases} \frac{1}{a}e^{-x/a} & x \geq 0\\ 0 & x < 0. \end{cases}$$
Then
$$\psi_X(t) = \int_0^{\infty} e^{tx}\frac{1}{a}e^{-x/a}\,dx = \frac{1}{a}\int_0^{\infty} e^{-x\left(\frac{1}{a} - t\right)}\,dx = \frac{1}{a}\left[\frac{-1}{\frac{1}{a} - t}\,e^{-x\left(\frac{1}{a} - t\right)}\right]_0^{+\infty},$$
and if $\frac{1}{a} - t > 0$, i.e., if $\frac{1}{a} > t$, we have
$$\psi_X(t) = \frac{1}{a}\,\frac{1}{\frac{1}{a} - t} = \frac{1}{1 - at}.$$
Thereby we have found
$$X \in \mathrm{Exp}(a) \Leftrightarrow \psi_X(t) = \frac{1}{1 - at}, \quad \frac{1}{a} > t. \quad (5.37)$$
Example 5.7.10 (M.g.f. for a Gamma (Erlang) Random Variable) $X \in \Gamma(n, a)$, where $n$ is a positive integer. In other words, we consider an Erlang distribution. Then example 4.4.9 and (5.37) yield
$$X \in \Gamma(n, a) \Leftrightarrow \psi_X(t) = \left(\frac{1}{1 - at}\right)^n, \quad \frac{1}{a} > t. \quad (5.38)$$
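A small Monte Carlo check (an added sketch): the sum of $n$ independent $\mathrm{Exp}(a)$ variables is $\Gamma(n, a)$-distributed, so its empirical m.g.f. should be close to $(1 - at)^{-n}$; the parameters below are arbitrary, with $t$ kept below $1/a$:

```python
import math
import random

random.seed(5)
a, n_terms, t = 1.0, 3, 0.3     # need t < 1/a for the m.g.f. to exist
n = 400_000

acc = 0.0
for _ in range(n):
    s = sum(random.expovariate(1 / a) for _ in range(n_terms))  # Erlang(3, a)
    acc += math.exp(t * s)

emp = acc / n
exact = (1 - a * t) ** (-n_terms)   # (5.38)
print(round(emp, 3), round(exact, 3))
assert abs(emp - exact) < 0.05
```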
The proof of the statement in theorem 5.5.1 can be modified in an obvious manner to establish the following composition rule.
Theorem 5.7.11 (Composition Rule with m.g.f.) $X_1, X_2, \ldots, X_n, \ldots$ are independent and identically distributed random variables with the m.g.f. $\psi_X(t)$ for $|t| < h$. $N$ is independent of the $X_k$s, has the non negative integers as values, and has the p.g.f. $g_N(t)$. The m.g.f. $\psi_{S_N}(t)$ of $S_N$ defined in (5.16) is
$$\psi_{S_N}(t) = g_N(\psi_X(t)).$$
This does not produce the moment generating function as defined in (5.30). Symbolically we have that
$$E\left[\frac{1}{1 - tX}\right] = \sum_{k=0}^{\infty} E\left[X^k\right]t^k,$$
and this is the ordinary generating function of $\left(E\left[X^k\right]\right)_{k=0}^{\infty}$. On the other hand, if we set
$$\sum_{k=0}^{\infty} \frac{E\left[X^k\right]}{k!}\,t^k,$$
we can apply the series expansion of $e^{tx}$ to obtain by termwise expectation
$$E\left[e^{tX}\right] = \sum_{k=0}^{\infty} \frac{E\left[X^k\right]}{k!}\,t^k, \quad (5.40)$$
which equals the moment generating function $\psi_X(t)$, as treated above. However, in mathematics, see, e.g., [41, p. 350], the power series
$$EG(t) = \sum_{k=0}^{\infty} \frac{a_k}{k!}\,t^k$$
is called the exponential generating function of the sequence of real numbers $(a_k)_{k=0}^{\infty}$. In order to adhere to the standard mathematical terminology we should hence call any $\psi_X(t) = E\left[e^{tX}\right]$ the exponential moment generating function (e.m.g.f.).
But the practice of talking about moment generating functions has become well established and is, moreover, time-honoured. In other words, there is neither a pragmatic reason to campaign for a change of terminology to e.m.g.f.'s, nor a realistic hope of any success in that endeavour.
The take-home message is the following theorem.
1 The point is made by J.P. Hoyt in The American Statistician, vol. 26, June 1972, pp. 45−46.
Theorem 5.7.12 Let $X$ be a random variable with m.g.f. $\psi_X(t)$ that exists for $|t| < h$ for some $h > 0$. Then
(i) For all $k > 0$, $E\left[|X|^k\right] < \infty$, i.e., moments of all orders exist.
(ii)
$$E\left[X^k\right] = \psi_X^{(k)}(0). \quad (5.41)$$
By successive differentiation of (5.40) one finds that the coefficient of $\frac{t^k}{k!}$ is equal to $\psi_X^{(k)}(0)$.
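As an added sympy sketch of (5.41): differentiating the standard normal m.g.f. $\psi(t) = e^{t^2/2}$ at $t = 0$ produces the moments $1, 0, 1, 0, 3, \ldots$ of $N(0, 1)$:

```python
import sympy as sp

t = sp.symbols('t')
psi = sp.exp(t**2 / 2)                 # m.g.f. of N(0, 1), cf. (5.34)

# (5.41): E[X^k] = psi^(k)(0)
moments = [sp.diff(psi, t, k).subs(t, 0) for k in range(7)]
print(moments)
assert moments == [1, 0, 1, 0, 3, 0, 15]
```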
5.8 Exercises
5.8.1 Probability Generating Functions
1. Pascal Distribution Let $X \in \mathrm{Pascal}(n, p)$, $n = 1, 2, 3, \ldots$, $0 < p < 1$ and $q = 1 - p$, see Example 2.3.10. Its probability mass function is then
$$p_X(k) = P(X = k) = \binom{k - 1}{n - 1}p^n q^{k - n}, \quad k = n, n + 1, n + 2, \ldots. \quad (5.43)$$
Hence Y has the Negative Binomial distribution, Y ∈ NBin(n, p). Aid: The part (c) does not require
a generating function. Use the finding in (b) and (5.43).
3. $X_1, X_2, \ldots, X_n$ are independent Poisson distributed random variables with $E[X_k] = \frac{1}{k}$. Show that the p.g.f. of $Y_n = \sum_{k=1}^{n} kX_k$ is
$$g_{Y_n}(t) = e^{\sum_{k=1}^{n}\frac{t^k - 1}{k}}.$$
(b) Show that $E[N] = \sum_{k=0}^{\infty} P(N > k)$.
5. ([35]) The r.v.'s $X_1, X_2, \ldots, X_n$ are independent and identically distributed. Their common distribution function is $F_X(x)$. We consider the random variable $N$, which has the positive integers as values and has the p.g.f. $g_N(t)$. $N$ is independent of $X_1, X_2, \ldots, X_n$. Set
$$Y = \max\{X_1, X_2, \ldots, X_N\}.$$
Show that
$$F_Y(y) = g_N(F_X(y)).$$
Aid: The law of total probability (3.35) may turn out to be useful.
6. (From [49]) The r.v. $X$ has the p.g.f. $g_X(t) = \ln\frac{1}{1 - qt}$. Determine $E[X]$, $\mathrm{Var}[X]$, and the p.m.f. of $X$. Answers: $E[X] = \mathrm{Var}[X] = e - 1$, $p_X(k) = \frac{(1 - e^{-1})^k}{k}$, $k \geq 1$.
7. Let us introduce
$$\tilde{g}_{X_1 - X_2}(t) \stackrel{\text{def}}{=} E\left[t^{X_1 - X_2}\right],$$
where X1 and X2 are two independent r.v.’s with non negative integers as values. Hence g̃ is an extension
of the notion of p.g.f. to a random variable with integers as values. Let X1 ∈ Po(µ1 ) and X2 ∈ Po(µ2 ),
X1 and X2 are independent.
E [X] = 1.
Let B be the event B = {X > 0}. We consider X truncated to the positive integers, X|B, i.e., X
conditioned on B (recall section 3.3). We have in addition that
X|B ∈ Fs(p).
(a) Find the m.g.f. of X. What is the region of existence ? Answer: ψX (t) = Γ(1 − t), |t| < 1.
Aid: Find the p.d.f. of X and use the appropriate part of the definition in (5.30). In the resulting
integral make the change of variable u = ex and be sure to find the right limits of integration.
(b) Show that E [X] = γ = Euler’s constant.
Aid: Karl Weierstrass re-expressed (you need not check this) the Gamma function in (2.7) as
$$\frac{1}{\Gamma(x)} = xe^{\gamma x}\prod_{r=1}^{\infty}\left(1 + \frac{x}{r}\right)e^{-x/r},$$
where $\gamma$ is Euler's constant. Show now that
$$\frac{\frac{d}{dx}\Gamma(x)}{\Gamma(x)} = -\frac{1}{x} - \gamma + \sum_{r=1}^{\infty}\left(\frac{1}{r} - \frac{1}{r + x}\right).$$
The function $\frac{\frac{d}{dx}\Gamma(x)}{\Gamma(x)}$ is known as the Digamma function.
(c) Show that $\mathrm{Var}[X] = \frac{\pi^2}{6}$.
4. Find the m.g.f. of the logistic distribution with p.d.f. in (2.39). Answer: B(1 − t, 1 + t), −1 < t < 1,
where B is the Beta function given in (2.31).
5. Difference of two independent Gumbel variables V ∈ Gumbel and W ∈ Gumbel are independent.
In other words their common distribution function is found in (2.35). Show that
U = V − W ∈ logistic(0, 1),
$$p_X(0) = 0, \quad p_X(k) = \int_0^1 u\,(1 - u)^{k-1}\,du, \quad k = 1, 2, \ldots. \quad (5.45)$$
Check that you get gSN (0) = 1. What is the distribution of SN ? Hint: Try the world wide web
with Lea-Coulson Model for Luria -Delbrück Distribution or Lea-Coulson Probability
Generating Function for Luria -Delbrück Distribution as search words.
2. (5B1540 02-08-21, reconsidered) The random variables $X_1, X_2, \ldots, X_n, \ldots$ are I.I.D. and have the probability mass function
$$p_X(-1) = \frac{1}{4}, \quad p_X(0) = \frac{1}{2}, \quad p_X(1) = \frac{1}{4}.$$
Let $N \in \mathrm{Fs}\left(\frac{1}{2}\right)$ and be independent of the $X_k$'s. Find the characteristic function of $S_N = \sum_{k=1}^{N} X_k$. (Aid: Use (5.18).)
In an exercise to chapter 4 we defined for the same r.v.'s $X_n$ the random variable $N'$ by
$$N' = \min\{n \mid X_n = 0\},$$
so that $N'$ is the smallest (or first) $n$ such that $X_n = 0$. It was found that the characteristic function of $S_{N'}$ is $\varphi_{S_{N'}}(t) = 1/(2 - \cos t)$. What is the reason for the difference in the results about $N$ and $N'$?
3. (FA 181 1982-02-05) Let X1 , X2 , . . . , Xn , . . . be independent and identically distributed with Xk ∈ N (0, 1),
k = 1, 2, . . . , n. N is a random variable with values in the positive integers {1, 2, . . .}. N is independent
of the variables X1 , X2 , . . . , Xn , . . .. We set
SN = X 1 + X 2 + . . . + X N .
We assume that
P (N = k) < 1, k = 1, 2, . . . .
Show now that SN cannot be a normal random variable, no matter what distribution N has, as long as
this distribution satisfies our assumptions. Aid: The result in (5.18) should turn out to be useful.
7. (Due to Harald Lang) Let $p$ be the probability that a tossed thumbtack (North American English), or drawing pin (British English),4 falls on its pin (not on its head). A person tosses a thumbtack a number of times, until a toss results in falling on the pin for the first time. Let $X$ be the serial number of the toss at which falling on the pin occurred for the first time. After that the person tosses the thumbtack an additional $X$ times. Let $Y$ be the number of times the thumbtack falls on its pin in the latter sequence of tosses. Find the p.m.f. of $Y$.
SN = X 1 + X 2 + . . . + X N .
(c) In fact a good definition of the compound Poisson distribution is that it is the probability distribution
on the non negative integers with the p.g.f. in (5.47). In example 2.3.9 the compound Poisson
distribution was defined in terms of the p.m.f. in (2.54). Explain why the two definitions actually
deal with one and the same thing, i.e., SN ∈ ComPo(λ, µ) in the sense of example 2.3.9.
SN = X 1 + X 2 + . . . + X N .
Show that $S_N \in \mathrm{Exp}\left(\frac{a}{p}\right)$.
4 A short nail or pin, usually with a circular head, used to fasten items such as documents to a wall or board for display. In Swedish: häftstift.
10. (From [49]) Let $0 < p < 1$, $q = 1 - p$. $X_1, X_2, \ldots, X_n, \ldots$ are independent and identically distributed with $X_k \in \mathrm{Ge}(q)$, $k = 1, 2, \ldots$. $N \in \mathrm{Ge}(p)$. $N$ is independent of the variables $X_1, X_2, \ldots, X_n, \ldots$. We set
$$S_N = X_1 + X_2 + \ldots + X_N.$$
(a) Show that the p.m.f. of $S_N$ is $p_{S_N}(k) = \frac{(1 - p)^2}{(2 - p)^{k+1}}$, $k \geq 1$, $p_{S_N}(0) = \frac{1}{2 - p}$.
(b) Show that $S_N \mid S_N > 0 \in \mathrm{Fs}(a)$, and show that $a = \frac{1 - p}{2 - p}$.
11. (From [49]) Let $X_1, X_2, \ldots, X_n, \ldots$ be independent and identically distributed with $X_k \in L(a)$, $k = 1, 2, \ldots$. $N_p \in \mathrm{Fs}(p)$. $N_p$ is independent of the variables $X_1, X_2, \ldots, X_n, \ldots$. We set
$$S_{N_p} = X_1 + X_2 + \ldots + X_{N_p}.$$
Show that $\sqrt{p}\,S_{N_p} \in L(a)$.
12. (From [49]) Let $X_1, X_2, \ldots, X_n, \ldots$ be independent and identically distributed with $X_k \in \mathrm{Po}(2)$, $k = 1, 2, \ldots$. $N \in \mathrm{Po}(1)$. $N$ is independent of the variables $X_1, X_2, \ldots, X_n, \ldots$. We set $S_0 = 0$, and
$$S_N = X_1 + X_2 + \ldots + X_N.$$
Show that
$$E[S_N] = 2, \quad \mathrm{Var}[S_N] = 6.$$
13. (From [49]) Let X1 , X2 , . . . , Xn , . . . be independent and identically distributed. N is independent of the
variables and has the non negative integers as values.
SN = X 1 + X 2 + . . . + X N .
Show that
Cov (X1 + X2 + . . . + XN , N ) = E [X] · Var [N ] .
$$S_N = X_1 + X_2 + \ldots + X_N.$$
Show that
$$E[X_1 + X_2 + \ldots + X_N] = \frac{m}{m - 1}\left(e^{\lambda(m - 1)} - 1\right).$$
Find E [X], Var [X] and E [ln X] expressed in terms of DX (t) and its derivatives.
Let $X$ be a random variable that assumes values in the non negative integers. Show that the exponential generating function of the factorial moments of $X$ is
$$EG_X(t) = E\left[(1 + t)^X\right].$$
Aid: Try to find a suitable way of using the Markov inequality (1.38). The inequality in (5.48) is known
as the Chernoff Inequality or the Chernoff Bound.
The number D (p|p + ǫ) is called the Kullback distance between the probability distributions
Be (p) and Be (p + ǫ), see [23].
6.1 Introduction
This chapter introduces and deals with the various modes in which a sequence of random variables defined on a common probability space $(\Omega, \mathcal{F}, P)$ can be said to converge. We start with three examples (for as many different senses of convergence; there will be a fourth mode, almost sure convergence, later in this chapter).
Results about convergence of sequences are important for the same reasons as limits are important everywhere in mathematics. In probability theory we can find simple approximations to complicated or analytically inaccessible probability distributions. In section 6.5 we clarify the formulas of propagation of error by convergence of sequences. In section 7.4.2 we will give meaning to a sum of a countably infinite number of random variables that looks like $\sum_{i=0}^{\infty} a_i X_i$. In section 10.5 we will define, by a limit for a Wiener process, an integral that looks like $\int_a^b f(t)\,dW(t)$.
Xmax = max{X1 , X2 , . . . , Xn }.
It is clear that Xmax is a well defined r.v., since it holds for any x ∈ R that {Xmax ≤ x} = ∩_{i=1}^n {Xi ≤ x} ∈ F. We wish to understand or approximate the probabilistic behaviour of Xmax for large values of n, which we study
by letting n → ∞. Let x > 0. By independence
P (Xmax ≤ x) = P (∩_{i=1}^n {Xi ≤ x}) = Π_{i=1}^n P ({Xi ≤ x}) = (FX(x))^n = (1 − e^{−x})^n,
since all Xk ∈ Exp(1). Then
P (Xmax ≤ x) = (1 − e^{−x})^n → 0,
as n → ∞. This is an intuitive result, but it does not contribute much to any purpose of useful approximation we might have had in mind. We need a more refined approach. The trick turns out to be to shift Xmax by a suitable amount depending on n, or precisely by − ln n,
Yn = Xmax − ln n, n = 1, 2, . . . (6.1)
166 CHAPTER 6. CONVERGENCE OF SEQUENCES OF RANDOM VARIABLES
Now we get, as n → ∞,
FYn(x) = (1 − e^{−x}/n)^n → e^{−e^{−x}}.
Let us write
FY(x) = e^{−e^{−x}}, −∞ < x < ∞. (6.2)
This is the distribution function of a Gumbel distributed random variable Y, cf. example 2.2.19. Hence it should be permissible to say that Yn converges to Y in the sense that FYn(x) → FY(x).
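The convergence FYn(x) → FY(x) lends itself to a quick numerical check. The following Python sketch (illustrative only; the function names are ours) estimates FYn(x) for Yn = Xmax − ln n by simulation and compares it with the Gumbel limit e^{−e^{−x}}:

```python
import math
import random

def empirical_cdf_Yn(rng, n, x, trials=10_000):
    """Estimate P(Y_n <= x), Y_n = max(X_1,...,X_n) - ln n, X_i i.i.d. Exp(1)."""
    count = 0
    for _ in range(trials):
        m = max(rng.expovariate(1.0) for _ in range(n))
        if m - math.log(n) <= x:
            count += 1
    return count / trials

rng = random.Random(1)
x = 0.5
gumbel_limit = math.exp(-math.exp(-x))   # F_Y(x) = e^{-e^{-x}}
estimate = empirical_cdf_Yn(rng, 200, x)
```

Already at n = 200 the empirical value is close to the Gumbel limit, which is what makes the shifted maximum useful as an approximation.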
Example 6.1.2 (Weak Law of Large Numbers ) X1 , X2 , . . . are independent, identically distributed (I.I.D.)
random variables with finite expectation µ and with variance σ 2 . We set Sn = X1 + X2 + . . . + Xn , n ≥ 1.
We want to understand the properties of the arithmetic mean (1/n)Sn for large values of n, which we again study by letting n → ∞.
We need to recall that by I.I.D. E [(1/n)Sn] = µ and Var [(1/n)Sn] = (1/n²) · nσ² = σ²/n. Then Chebyshev's inequality in (1.27) yields for any ǫ > 0 that
P (| Sn/n − µ | > ǫ) ≤ (1/ǫ²) Var [Sn/n] = σ²/(ǫ²n).
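The bound σ²/(ǫ²n) already displays the decay of P (| Sn/n − µ | > ǫ). A small Python simulation (an illustration only; we choose U(0, 1) summands, so µ = 1/2 and σ² = 1/12, and the function names are ours) makes both the decay and the bound visible:

```python
import random

def tail_prob(rng, n, eps, trials=5_000):
    """Empirical P(|S_n/n - mu| > eps) for i.i.d. U(0,1) summands, mu = 1/2."""
    hits = 0
    for _ in range(trials):
        sample_mean = sum(rng.random() for _ in range(n)) / n
        if abs(sample_mean - 0.5) > eps:
            hits += 1
    return hits / trials

rng = random.Random(2)
eps = 0.05
p50 = tail_prob(rng, 50, eps)     # moderate n: the event is still common
p500 = tail_prob(rng, 500, eps)   # larger n: the probability has shrunk
chebyshev_bound = (1 / 12) / (eps ** 2 * 500)   # sigma^2/(eps^2 n), n = 500
```

The empirical probability at n = 500 lies well below both the value at n = 50 and the Chebyshev bound; the bound itself is typically far from sharp.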
Example 6.1.3 (Convergence of Second Order Moments) X1, X2, . . . is a sequence of three point random variables such that
P (Xn = −1) = 1/(2n), P (Xn = 0) = 1 − 1/n, P (Xn = +1) = 1/(2n).
It is immediate that E [Xn] = 0 and that
E [Xn²] = (−1)² · 1/(2n) + 0² · (1 − 1/n) + (+1)² · 1/(2n) = 1/n.
Hence
E [Xn²] → 0,
6.2. DEFINITIONS OF MODES OF CONVERGENCE, UNIQUENESS OF THE LIMIT 167
as n → ∞. Again we can regard this convergence of the second moments as a notion of probabilistic convergence
of the sequence X1 , X2 , . . . to 0. To be quite accurate, we are saying that Xn converges to zero in the sense that
E [(Xn − 0)²] → 0,
as n → ∞.
Definition 6.2.1 (Convergence in Distribution) A sequence of random variables (Xn)_{n=0}^{+∞} converges in distribution to the random variable X, if and only if it holds for the sequence of respective distribution functions that
FXn(x) → FX(x) as n → ∞
for every x that is a point of continuity of FX(x).
Remark 6.2.1 We try next to justify the presence of points of continuity in the definition above. Let Xn be
a random variable which induces, see (2.80), on the real line the total mass at the point 1/n,
P (Xn = 1/n) = 1.
Intuitively, Xn should converge to the random variable X with P (X = 0) = 1, so that FX(x) = 1 for x ≥ 0 and FX(x) = 0 for x < 0. Indeed, FXn(x) → FX(x) for every x ≠ 0, but
x = 0 : FXn(0) = 0 does not converge to FX(0) = 1.
But it is reasonable that the convergence FXn(x) → FX(x) for x ≠ 0 should matter, and therefore we require
convergence of the sequence of distribution functions only for the points of continuity of the limit distribution
function.
The notation Xn →d X will be systematically distorted in the sequel, as we are going to write, e.g.,
Xn →d N (0, 1), Xn →d Po (λ),
and so on, if X ∈ N (0, 1), X ∈ Po (λ) and so on. In terms of the assumed licence to distort we have in example 6.1.1 found that
Xmax − ln n →d Gumbel.
Definition 6.2.2 (Convergence in Probability) A sequence of random variables (Xn)_{n=0}^{+∞} converges in probability to the random variable X, written Xn →P X, if and only if for every ǫ > 0
P (| Xn − X | > ǫ) → 0,
as n → ∞.
The limiting random variable may be a degenerate one, i.e., a constant. This is the case in example 6.1.2, where we found that
Sn/n →P µ, as n → ∞.
Definition 6.2.3 (Convergence in r-mean) A sequence of random variables (Xn)_{n=0}^{+∞} converges in r-mean to the random variable X, if and only if it holds that
E [| Xn − X |^r] → 0,
as n → ∞.
The special case r = 2 is called convergence in mean square, written Xn →2 X, i.e.,
E [|Xn − X|²] → 0, as n → ∞.
Chapter 7 below will be devoted to this convergence and its applications. Obviously, Xn →2 X is the case encountered in Example 6.1.3.
The limiting random variable in all of these modes of convergence is unique in distribution. This will now
be proved in the case of convergence in probability.
Theorem 6.2.1 If Xn →P X, as n → ∞, and Xn →P Y, as n → ∞, then
X =d Y.
6.3. RELATIONS BETWEEN CONVERGENCES 169
Proof: We apply in this the inequality in (1.33). For given ǫ > 0 we take C = {|X − Y| ≤ ǫ}, A = {|X − Xn| ≤ ǫ/2} and B = {|Xn − Y| ≤ ǫ/2}. We check first that the condition
A ∩ B ⊆ C,
required for (1.33), is valid here. We note by the triangle inequality that
|X − Y| ≤ |X − Xn| + |Xn − Y|.
But A ∩ B is the event that both A = {|X − Xn| ≤ ǫ/2} and B = {|Xn − Y| ≤ ǫ/2} hold. Hence, if the event A ∩ B holds,
|X − Y| ≤ |X − Xn| + |Xn − Y| ≤ ǫ/2 + ǫ/2 = ǫ,
so that C holds, i.e., A ∩ B ⊆ C, as was desired. Then (1.33) gives
P (C^c) ≤ P (A^c) + P (B^c),
and by assumption
P (A^c) = P ({|X − Xn| > ǫ/2}) → 0 and P (B^c) = P ({|Xn − Y| > ǫ/2}) → 0,
as n → ∞. Since the left hand side does not depend on n, P (|X − Y| > ǫ) = 0 for every ǫ > 0, and thus X = Y almost surely, which implies X =d Y.
Xn →P X ⇒ Xn →d X,
as n → ∞. If c is a constant,
Xn →P c ⇔ Xn →d c.
Theorem 6.3.1
Xn →r X ⇒ Xn →P X (6.3)
as n → ∞.
Proof: We use Markov's inequality, (1.38). We check readily that this implies for a nonnegative random variable U (U ≥ 0) and a > 0, that for r ≥ 1
P (U ≥ a) ≤ E [U^r] / a^r.
Then we apply this to U = | Xn − X | and a = ǫ and get
P (| Xn − X | ≥ ǫ) ≤ E [| Xn − X |^r] / ǫ^r.
Thus the desired conclusion follows.
Theorem 6.3.2
Xn →P X ⇒ Xn →d X (6.4)
as n → ∞.
Proof: Let us set FXn(x) = P (Xn ≤ x). By some basic set operations we get
FXn(x) = P ({Xn ≤ x} ∩ {| Xn − X | ≤ ǫ}) + P ({Xn ≤ x} ∩ {| Xn − X | > ǫ})
(a case of the obvious application of finite additivity: P (A) = P (A ∩ B) + P (A ∩ B^c)). We observe that
| Xn − X | ≤ ǫ ⇔ −ǫ ≤ Xn − X ≤ ǫ ⇔ Xn − ǫ ≤ X ≤ Xn + ǫ.
Hence we can conclude that Xn ≤ x ⇒ X ≤ x + ǫ on the event {Xn ≤ x} ∩ {| Xn − X | ≤ ǫ}, which implies that {Xn ≤ x} ∩ {| Xn − X | ≤ ǫ} ⊆ {X ≤ x + ǫ} ∩ {| Xn − X | ≤ ǫ} and then
FXn(x) ≤ FX(x + ǫ) + P (| Xn − X | > ǫ).
In the same way one finds that
FX(x − ǫ) ≤ FXn(x) + P (| Xn − X | > ǫ).
Since P (| Xn − X | > ǫ) → 0 as n → ∞, these two inequalities give
FX(x − ǫ) ≤ lim inf_{n→∞} FXn(x) ≤ lim sup_{n→∞} FXn(x) ≤ FX(x + ǫ).
If we now let ǫ ↓ 0, we get by existence of left limits and right continuity (theorem 1.5.6) required of distribution functions
FX(x−) ≤ lim inf_{n→∞} FXn(x) ≤ lim sup_{n→∞} FXn(x) ≤ FX(x). (6.8)
But the definition 6.2.1 requires us to consider any point x of continuity of FX (x). For such a point
FX(x−) = FX(x), and thus
FX(x) ≤ lim inf_{n→∞} FXn(x) ≤ lim sup_{n→∞} FXn(x) ≤ FX(x), (6.9)
and therefore
lim inf_{n→∞} FXn(x) = lim sup_{n→∞} FXn(x) = FX(x). (6.10)
Thus the desired limit exists and
lim_{n→∞} FXn(x) = FX(x). (6.11)
This is the assertion that was to be proved.
The proof of the next theorem exploits the point mass distribution δc in (4.19).
Theorem 6.3.3 Let c be a constant. Then
Xn →P c ⇔ Xn →d c (6.12)
as n → ∞.
Proof:
⇒ : Xn →P c ⇒ Xn →d c, as n → ∞, is true by (6.4).
⇐ : We assume in other words that Xn →d δc, as n → ∞. Let us take ǫ > 0 and consider, in view of definition 6.2.2,
P (| Xn − c | > ǫ) = 1 − P (−ǫ ≤ Xn − c ≤ ǫ) = 1 − P (c − ǫ ≤ Xn ≤ c + ǫ),
which by the rule (2.91) equals
= 1 − (FXn(c + ǫ) − FXn(c − ǫ) + P (Xn = c − ǫ))
= 1 − FXn(c + ǫ) + FXn(c − ǫ) − P (Xn = c − ǫ)
≤ 1 − FXn(c + ǫ) + FXn(c − ǫ),
since P (Xn = c − ǫ) ≥ 0. Now, by assumption
FXn (c + ǫ) → FX (c + ǫ) = 1,
since
FX(x) = 1 if x ≥ c, and FX(x) = 0 if x < c,
and c + ǫ is a point of continuity of FX (x). By assumption
FXn(c − ǫ) → FX(c − ǫ) = 0,
since c − ǫ is also a point of continuity of FX(x). Hence
1 − FXn(c + ǫ) + FXn(c − ǫ) → 1 − 1 + 0 = 0,
which shows that P (| Xn − c | > ǫ) → 0, as n → ∞, i.e., Xn →P c.
Theorem 6.4.1 (Xn)n≥1 and (Yn)n≥1 are two sequences such that Xn →P X and Yn →P Y, as n → ∞. Then
Xn + Yn →P X + Y.
Theorem 6.4.2 (Xn)n≥1 and (Yn)n≥1 are two sequences such that Xn →r X and Yn →r Y, as n → ∞, for some r > 0. Then
Xn + Yn →r X + Y.
The following theorem has been accredited to two past researchers in probability1 .
Theorem 6.4.3 (Cramér–Slutzky Theorem) (Xn)n≥1 and (Yn)n≥1 are two sequences such that Xn →d X and Yn →P a, as n → ∞, where a is a constant. Then, as n → ∞,
(i) Xn + Yn →d X + a.
(ii) Xn − Yn →d X − a.
(iii) Xn · Yn →d X · a.
(iv) Xn/Yn →d X/a for a ≠ 0.
The proof of the next assertion is an instructive exercise in probability calculus and the definition of continuity
of a function.
Theorem 6.4.4 (Xn)n≥1 is a sequence such that Xn →P a, as n → ∞, where a is a constant. Suppose that h(x) is a function that is continuous at a. Then
h (Xn) →P h(a), (6.13)
as n → ∞.
Proof: We are to show that P (| h (Xn) − h(a) | > ǫ) → 0, as n → ∞. We shall, as several times above, find an upper bound that converges to zero, if Xn →P a is assumed. We write on this occasion the expression in the complete form
P ({ω ∈ Ω | | h (Xn(ω)) − h(a) | > ǫ}).
Since h(x) is continuous at a we have that for all ǫ > 0 there exists a δ = δ(ǫ) > 0 such that
| x − a |≤ δ ⇒| h(x) − h(a) |≤ ǫ.
Thus we get
P ({ω ∈ Ω| | h (Xn (ω)) − h(a) |> ǫ}) ≤ P ({ω ∈ Ω| | Xn (ω) − a |> δ}) .
But by assumption we have that
P ({ω ∈ Ω| | Xn (ω) − a |> δ}) → 0,
as n → ∞, which proves the claim as asserted.
The next example shows how these results are applied in statistics.
Example 6.4.5 Let (Xn)n≥1 be a sequence of independent r.v.'s, Xn ∈ Be(p), 0 < p < 1. Let Sn = X1 + X2 + . . . + Xn. We wish to study the distribution of
Q = (Sn/n − p) / √( (Sn/n)(1 − Sn/n) / n ),
as n → ∞. The r.v. Q is used for building confidence intervals for p, where p represents an unknown population proportion. In order to handle this expression we note the following. We can write
Sn/n − p = (1/n) Σ_{k=1}^n (Xk − p).
Thus
Q = (1/√n) Σ_{k=1}^n (Xk − p) / √( (Sn/n)(1 − Sn/n) )
= [ (1/√n) Σ_{k=1}^n (Xk − p)/√(p(1 − p)) ] / √( (Sn/n)(1 − Sn/n) / (p(1 − p)) ).
The rationale for introducing this identity will soon become clear. We define for x ∈ [0, 1] the continuous function
h(x) = √( x(1 − x) / (p(1 − p)) ).
Then
Q = [ (1/√n) Σ_{k=1}^n (Xk − p)/√(p(1 − p)) ] / h(Sn/n). (6.14)
By properties of Bernoulli variables, E [Xk] = p and Var [Xk] = p(1 − p). Hence we observe in the numerator of Q that (1/√n) Σ_{k=1}^n (Xk − p)/√(p(1 − p)) is a scaled sum of exactly the same form as the scaled sum in section 4.5.2 above (replace µ → p, σ → √(p(1 − p)) in (Xk − µ)/σ). The provisional argument in loc.cit. entails that
(1/√n) Σ_{k=1}^n (Xk − p)/√(p(1 − p)) →d N (0, 1),
as n → ∞. In the denominator of (6.14) we observe that the weak law of large numbers, example 6.1.2 above, implies
Sn/n →P p,
as n → ∞. Then we get by (ii) in the Cramér–Slutzky theorem that 1 − Sn/n →P 1 − p. Thus (6.13) in the theorem above implies, as 0 < p < 1, that
h(Sn/n) →P h(p) = 1.
But then case (iv) in the Cramér–Slutzky Theorem entails that
Q = [ (1/√n) Σ_{k=1}^n (Xk − p)/√(p(1 − p)) ] / h(Sn/n) →d N (0, 1),
as n → ∞, which resolves the question posed. In section 6.6.3 we ascertain that the central limit theorem suggested in section 4.5.2 by means of characteristic functions is valid.
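The practical content of this example, namely that Q is approximately standard normal, can be illustrated by simulation. The following Python sketch (illustrative only; the function names and the choices n = 400, p = 0.3 are ours) checks that the fraction of simulated Q values inside [−1.96, 1.96] is close to the standard normal probability 0.95:

```python
import math
import random

def sample_Q(rng, n, p):
    """One draw of Q = (S_n/n - p) / sqrt((S_n/n)(1 - S_n/n)/n), S_n ~ Bin(n, p)."""
    s = sum(1 for _ in range(n) if rng.random() < p)
    phat = s / n
    return (phat - p) / math.sqrt(phat * (1 - phat) / n)

rng = random.Random(3)
n, p = 400, 0.3
draws = [sample_Q(rng, n, p) for _ in range(10_000)]
coverage = sum(1 for q in draws if abs(q) <= 1.96) / len(draws)
```

This is exactly the computation behind the usual approximate confidence interval for a population proportion.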
Remark 6.5.1 If we approximate the expectations as in (6.15) with g(x) = 1/x we get
E [1/X] ≈ 1/E [X], (6.16)
as a practically minded rule for computing E [1/X]. But if X ∈ C(0, 1), then 1/X ∈ C(0, 1). For C(0, 1), the expectation does not exist, as shown in example 2.2.16. Hence, an approximation like (6.16) makes no sense in this situation.
²See also H.H. Ku: Notes on the Use of Propagation of Error Formulas. Journal of Research of the National Bureau of Standards – C. Engineering and Instrumentation. Vol 70C, no 4, October–December, 1966.
6.5. ASYMPTOTIC MOMENTS AND PROPAGATION OF ERROR 175
as n → ∞. We say that µ and σ²/n are the asymptotic mean and asymptotic variance, respectively, of the sequence {Xn}n≥1. The obvious example is Xn = (1/n) Σ_{i=1}^n Zi for Zi I.I.D. variables with µ = E [Zi] and σ² = Var [Zi].
Note that we do not suppose that µn = E [Xn] and σn² = Var [Xn] exist and satisfy µn → µ and σn² → σ². In fact, E [Xn] and Var [Xn] are not even required to exist.
Theorem 6.5.1 (Propagation of Error) Let {Xn}n≥1 be a sequence of random variables such that (6.17) holds. Let g(x) be a differentiable function whose first derivative g′(x) is continuous and such that g′(µ) ≠ 0. Then it holds that
√n (g(Xn) − g(µ)) →d N (0, σ² (g′(µ))²), (6.18)
as n → ∞.
Proof: By the mean value theorem of calculus [69, p.100] there exists for every x and µ a number ξ between x
and µ such that
g(x) − g(µ) = g′(ξ)(x − µ). (6.19)
Thus there is a well defined function of ω, Zn, such that | Zn − µ | ≤ | Xn − µ | and by (6.19)
g(Xn) − g(µ) = g′(Zn) (Xn − µ). (6.20)
In fact, Zn is a random variable, a property we must require, but this will be proved after the theorem.
By (i) and (iii) of the Cramér–Slutzky Theorem 6.4.3,
Xn − µ = (1/√n) · √n (Xn − µ) →d 0 · N (0, σ²) = 0,
as n → ∞. Hence Xn − µ →P 0 in view of theorem 6.3.3 and (6.12). Since | Zn − µ | ≤ | Xn − µ |, we get that | Zn − µ | →P 0 (it is here we need the fact that Zn is a sequence of random variables), as n → ∞. But then (6.13) in theorem 6.4.4 implies, as n → ∞, that
g′(Zn) →P g′(µ), (6.21)
by the assumed continuity of the derivative g′(x). Now we have in (6.20)
√n (g(Xn) − g(µ)) = √n g′(Zn) (Xn − µ) = g′(Zn) √n (Xn − µ).
By (6.17) and (iii) of the Cramér–Slutzky Theorem 6.4.3 and by (6.21),
√n (g(Xn) − g(µ)) →d g′(µ) X, where X ∈ N (0, σ²),
which is the distribution asserted in (6.18).
It is, of course, still a matter of judgement to decide whether the approximation in the theorem above can
be used in any given situation.
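One way of exercising such judgement is a quick simulation. The following Python sketch (illustrative only; we choose g(x) = x² and Exp(1) summands, so that µ = 1, σ² = 1 and the asymptotic variance σ² (g′(µ))² in (6.18) equals 4; the function names are ours) compares the sample moments of √n (g(Xn) − g(µ)) with the asymptotic values:

```python
import math
import random

def delta_draw(rng, n):
    """One draw of sqrt(n) * (g(X_bar) - g(mu)) with g(x) = x^2, X_i ~ Exp(1)."""
    xbar = sum(rng.expovariate(1.0) for _ in range(n)) / n
    return math.sqrt(n) * (xbar ** 2 - 1.0)   # mu = 1, so g(mu) = 1

rng = random.Random(4)
draws = [delta_draw(rng, 400) for _ in range(10_000)]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# (6.18) predicts the N(0, sigma^2 * g'(mu)^2) = N(0, 4) limit here
```

At n = 400 the sample mean is near the asymptotic mean 0 and the sample variance is near the asymptotic variance 4, as the theorem leads one to expect.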
The following indented section of text verifies that Zn defined in (6.20) is a random variable and can be
skipped at the first reading.
We have earlier maintained that unmeasurable sets lack importance in practice, as they are very hard to construct. The claim that Zn in (6.20) is a random variable seems innocuous for such a setting of mind. However, we can ask if there is a proof of the claim. The mean value theorem of calculus gives Zn as a well defined function of ω by (6.20). According to the definition 1.5.1 we need in addition to show that Zn is a measurable map from (Ω, F) to the Borel σ-algebra. To that end we fix an arbitrary n and drop it temporarily from the notation. Let us define for each ω ∈ Ω,
H(X(ω)) def= g′(µ) if X(ω) = µ, and H(X(ω)) def= (g(X(ω)) − g(µ)) / (X(ω) − µ) if X(ω) ≠ µ.
H(X(ω)) is a random variable, as g(X(ω)) is a random variable and a ratio of two random variables is a random variable. We set for each ω ∈ Ω
G(z) def= H(X(ω)) − g′(z).
We should actually be writing Gω (z), as there is a different function G(z) for each ω, but we abstain
from this for simplicity. Then G(z) is a random variable and is continuous as a function of z and
(6.20) corresponds to finding for fixed ω a root (at least one exists by the mean value theorem of
calculus) Z(ω) to the equation
G(z) = 0. (6.22)
(i) We assume first that G(X(ω))G(µ) < 0. Then we can apply the method of bisection to
construct a root to (6.22). We assume first that X(ω) < µ. Then we set for k = 1, 2, . . . ,
a0(ω) = X(ω), b0 = µ,
ak(ω) = ak−1(ω), bk(ω) = mk−1(ω), if G(ak−1(ω))G(mk−1(ω)) < 0, (6.23)
ak(ω) = mk−1(ω), bk(ω) = bk−1(ω), if G(ak−1(ω))G(mk−1(ω)) > 0.
Here
mk−1(ω) = (ak−1(ω) + bk−1(ω)) / 2,
which explains the name bisection (draw a picture) given to this iterative method of solving
equations. By the construction it holds that
G(ak(ω)) G(bk(ω)) < 0 for every k. (6.24)
Since X is a random variable, m1 is a random variable, and since G(z) is a random variable, too,
both a1 and b1 are by (6.23) random variables. Therefore, by the steps of construction of the
bisection, each mk is a random variable. It holds that ak−1(ω) ≤ ak(ω) and bk(ω) ≤ bk−1(ω), ak(ω) < bk(ω) and bk(ω) − ak(ω) = (1/2^k)(µ − X(ω)). Thus we have the limit, which we denote
by Z(ω),
Z(ω) = lim_{k→∞} mk(ω) = lim_{k→∞} ak(ω) = lim_{k→∞} bk(ω).
Since Z(ω) is a pointwise limit of random variables, it is a random variable (this can be verified
by writing the statement of convergence by means of unions and intersections of events).
Then it follows by continuity of G(z) and (6.24) that
G(Z(ω)) · G(Z(ω)) ≤ 0,
or that G²(Z(ω)) = 0, i.e., G(Z(ω)) = 0, so that Z(ω) is a root of (6.22) between X(ω) and µ, and Z is a random variable, as was claimed.
If we assume that X(ω) > µ we get the same result by trivial modifications of the proof above.
(ii) The case G(X(ω))G(µ) ≥ 0 contains two special cases. First, there is a unique root to G(z) = 0, which we can find by a hill-climbing technique, as a limit of measurable approximations. Or, we can move over to a subdomain with a root, where the bisection technique of case (i) again applies.
The method of bisection is a simple (and computationally inefficient) algorithm of root solving, but in fact it can be evoked analogously in a constructive proof of the intermediate value theorem [69, pp. 71–73] of differential calculus.
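Stripped of the probabilistic setting, the iteration (6.23) is just ordinary bisection. A minimal Python sketch of the deterministic algorithm (illustrative only; G is here an ordinary continuous function with a sign change on [a, b], and the function name is ours):

```python
def bisect_root(G, a, b, iters=60):
    """Bisection as in (6.23): assumes G continuous with G(a) * G(b) < 0.
    The midpoints m_k converge to a root of G(z) = 0."""
    if G(a) * G(b) >= 0:
        raise ValueError("bisection needs a sign change on [a, b]")
    for _ in range(iters):
        m = 0.5 * (a + b)
        if G(m) == 0.0:
            return m              # landed exactly on a root
        if G(a) * G(m) < 0:
            b = m                 # a root lies in [a, m]
        else:
            a = m                 # a root lies in [m, b]
    return 0.5 * (a + b)

root = bisect_root(lambda z: z ** 3 - 2.0, 0.0, 2.0)   # the cube root of 2
```

Each step halves the bracketing interval, which is exactly the property bk(ω) − ak(ω) = (µ − X(ω))/2^k used in the measurability argument above.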
Example 6.6.1 Recall from example 6.1.3 the sequence X1, X2, . . . with
P (Xn = −1) = 1/(2n), P (Xn = 0) = 1 − 1/n, P (Xn = +1) = 1/(2n).
We have found, loc.cit., that Xn →2 0. Accordingly, the characteristic functions satisfy
ϕXn(t) = (1/(2n)) e^{−it} + (1 − 1/n) + (1/(2n)) e^{it} = 1 − (1 − cos t)/n → 1 = ϕδ0(t),
as n → ∞.
We have in the preceding introduced the distribution δc, cf. (4.19) above. The characteristic function of δc is by (4.20)
ϕδc(t) = e^{itc}.
Example 6.6.2 (Xn)_{n=1}^{+∞} is a sequence of random variables such that Xn ∈ Bin(n, λ/n) for n = 1, 2, . . ., λ > 0. We have by (4.23) that
ϕXn(t) = (1 − λ/n + e^{it} λ/n)^n,
which we rewrite as
= (1 + (λ/n)(e^{it} − 1))^n,
and then by a standard limit as n → ∞,
→ e^{λ(e^{it} − 1)} = ϕPo(λ)(t),
where we recognized the result (4.9). In words, we should be allowed to draw the conclusion that
Xn →d Po(λ).
This result states rigorously that we can approximate X ∈ Bin(n, p) for small p and large n by Po(np).
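The quality of this approximation can be quantified numerically, e.g., by the total variation distance between the two p.m.f.'s. A Python sketch (illustrative only; the function names and the truncation point kmax are our choices, and the Poisson p.m.f. is computed via logarithms to avoid huge factorials):

```python
import math

def binom_pmf(n, p, k):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def poisson_pmf(lam, k):
    # exp(-lam) * lam^k / k!, computed via logs to avoid large factorials
    return math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))

def total_variation(n, lam, kmax=200):
    """Total variation distance between Bin(n, lam/n) and Po(lam), truncated."""
    p = lam / n
    top = min(n, kmax)
    return 0.5 * sum(abs(binom_pmf(n, p, k) - poisson_pmf(lam, k))
                     for k in range(top + 1))

d_small = total_variation(10, 2.0)     # n = 10,   p = 0.2
d_large = total_variation(1000, 2.0)   # n = 1000, p = 0.002
```

The distance shrinks as n grows with λ = np fixed, in line with the convergence of the characteristic functions above.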
In fact these two examples are two respective instances of the workings of the following fundamental theorem.
d
Theorem 6.6.3 (Continuity Theorem for Characteristic Functions) (a) If Xn →d X, and X is a random variable with the characteristic function ϕX(t), then
ϕXn(t) → ϕX(t) for every t,
as n → ∞.
(b) If ϕXn(t) → ϕ(t) for every t, and ϕ(t) is continuous at t = 0, then ϕ(t) is the characteristic function of some random variable X (ϕ(t) = ϕX(t)) and
Xn →d X.
The proof is omitted. We saw an instance of case (a) in example 6.6.1. In addition, we applied correctly the case (b) in example 6.6.2, since e^{λ(e^{it} − 1)} is continuous at t = 0.
With regard to the 'converse statement' in (b) it should be kept in mind that one can construct sequences
of characteristic functions that converge to a function that is not a characteristic function.
By means of characteristic functions we can easily prove (proof omitted) the uniqueness theorem for con-
vergence in distribution.
Theorem 6.6.4 (Uniqueness of convergence in distribution) If Xn →d X, and Xn →d Y, then
X =d Y.
6.6. CONVERGENCE BY TRANSFORMS 179
Theorem 6.6.5 Let {Xn}n≥1 be a sequence of random variables with values in the nonnegative integers and p.g.f.'s gXn(t). If
gXn(t) → gX(t),
as n → ∞, then Xn →d X, as n → ∞.
Theorem 6.6.6 {Xn}n≥1 is a sequence of random variables such that the m.g.f.'s ψXn(t) exist for |t| < h for some h > 0. Suppose that X is a random variable such that its m.g.f. ψX(t) exists for |t| < h1 ≤ h for some h1 > 0. If
ψXn(t) → ψX(t),
as n → ∞, then Xn →d X, as n → ∞.
Then
Wn →d N (0, 1), as n → ∞. (6.25)
Technology. In 1962 his professorship was transferred to mathematical statistics and in 1967, he obtained the first chair in
mathematical statistics at Uppsala University.
Let us set
C = {ω ∈ Ω|Xn (ω) → X(ω) as n → ∞ }.
This means, in the language of real analysis [36], that the sequence of measurable functions Xn converges ’point-
wise’ to the limiting measurable function X on a set of points (=elementary events, ω), which has probability
one. We are thus stating that P (C) = 1 if and only if Xn →a.s. X. We shall next try to write the set C more
transparently.
Convergence of a sequence of numbers (xn )n≥1 to a real number x means by definition that for all ǫ > 0
there exists an n(ǫ) such that for all n > n(ǫ) it holds that |xn − x| < ǫ. By this understanding we can write C
in countable terms, i.e. we replace the arbitrary ǫ’s with 1/k’s, as
C = ∩_{k=1}^∞ ∪_{m=1}^∞ ∩_{n≥m} {ω ∈ Ω | | Xn(ω) − X(ω) | ≤ 1/k}. (6.27)
Theorem 6.7.1
Xn →a.s. X ⇒ Xn →P X (6.28)
as n → ∞.
Proof: Let us look at the complement C^c or, by De Morgan's rules, from (6.27)
C^c = ∪_{k=1}^∞ ∩_{m=1}^∞ ∪_{n≥m} {ω ∈ Ω | | Xn(ω) − X(ω) | > 1/k}. (6.29)
Let us define
An(ǫ) = {ω ∈ Ω | | Xn(ω) − X(ω) | > ǫ}
and
Bm(ǫ) = ∪_{n≥m} An(ǫ). (6.30)
Then we set
A(ǫ) = ∩_{m=1}^∞ Bm(ǫ) = ∩_{m=1}^∞ ∪_{n≥m} An(ǫ).
6.7. ALMOST SURE CONVERGENCE 181
Then clearly
C^c = ∪_{k=1}^∞ A(1/k).
In view of (1.14) and the discussion around it
Theorem 6.7.2 (Xn )n≥1 is a sequence of r.v.’s such that i) and ii) below are satisfied:
Then
Xn →2 X (6.31)
as n → ∞.
The steps of the required proof are the exercise of subsection 6.8.4 below.
We need in other words to prove that for every ω ∈ C and for every ε > 0 there is N (ω, ε) so that if n ≥ N (ω, ε)
holds that |Sn /n − m| ≤ ε.
It suffices to prove that | Sn/n − m | > ε can occur only a finite number of times, i.e.,
lim_{N→∞} P (| Sn/n − m | > ε for some n ≥ N) = 0.
Note the distinction with regard to the law of large numbers in the weak form, which says that for all ε > 0
P (| Sn/n − m | > ε) → 0 as n → ∞.
In words: for the law of large numbers in the strong form |Sn /n − m| must be small for all sufficiently large n
for all ω ∈ C, where P (C) = 1.
In tossing a coin we can code heads and tails with 1 and 0, respectively, and we can identify an ω with
a number in the interval [0, 1] drawn at random, where binary expansion gives the sequence of zeros
and ones. The law of large numbers says in this case that we will obtain with probability 1 a number such that the proportion of 1's in the sequence converges towards 1/2. There can be "exceptional" ω: for example the sequence 000 . . . is possible, but such exceptional sequences have probability 0.
After these deliberations of pedagogic nature let us get on with the proof4 .
Without restriction of generality we can assume that E(Xi ) = m = 0, since we in any case can consider Xi − m.
We have Var [Sn] = nσ². By Chebyshev's inequality (1.27) it holds that
P (|Sn| > nε) ≤ Var [Sn] / (nε)² = nσ² / (nε)² = σ² / (nε²).
Unfortunately the harmonic series Σ_{n=1}^∞ 1/n is divergent, so we cannot use the Borel–Cantelli lemma 1.7.1 directly. But it holds that Σ_{n=1}^∞ 1/n² < ∞, and this means that we can use the lemma for the subsequence n², n = 1, 2, . . .. We have
P (|S_{n²}| > n²ε) ≤ σ² / (n²ε²).
4 Gunnar Englund is thanked for pointing out this argument.
6.8. EXERCISES 183
In other words it holds by the Borel–Cantelli lemma 1.7.1 that P (| S_{n²}/n² | > ε i.o.) = 0, which proves that S_{n²}/n² → 0
almost surely. We have in other words managed to establish that for the subsequence n2 , n = 1, 2, . . . there is
convergence with probability 1. It remains to find out what will happen between these n². We define therefore
Dn = max_{n² ≤ k < (n+1)²} |Sk − S_{n²}|,
i.e., the largest of the deviations from S_{n²} that can occur between n² and (n + 1)². We get
Dn² = max_{n² ≤ k < (n+1)²} (Sk − S_{n²})² ≤ Σ_{k=n²}^{(n+1)²−1} (Sk − S_{n²})²,
where we used the rather crude inequality max(|x|, |y|) ≤ (|x| + |y|). This entails
E [Dn²] ≤ Σ_{k=n²}^{(n+1)²−1} E [(Sk − S_{n²})²].
But E [(Sk − S_{n²})²] = (k − n²)σ² ≤ 2nσ², as n² ≤ k < (n + 1)², and there are 2n terms in the sum, and this entails
E [Dn²] ≤ (2n)(2n)σ² = 4n²σ².
Hence, by Markov's inequality,
P (Dn > n²ε) ≤ 4n²σ² / (n²ε)² = 4σ² / (n²ε²).
In other words, Dn/n² → 0 holds almost surely. Finally this yields for k between n² and (n + 1)² that
| Sk/k | ≤ (|S_{n²}| + Dn)/k ≤ (|S_{n²}| + Dn)/n² → 0.
This means that we have succeeded in proving that Sn /n → 0 with probability 1. We have done this under the
condition that Var(Xi ) = σ 2 < ∞, but with a painstaking effort we can in fact prove that this condition is not
necessary.
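A crude numerical illustration (it cannot, of course, distinguish almost sure convergence from weaker modes, but it shows individual sample paths settling near the mean) is the following Python sketch with symmetric ±1 summands, so that m = 0 (the function name and the choices of seeds and path length are ours):

```python
import random

def path_average(seed, n):
    """S_n/n along one sample path of i.i.d. symmetric +/-1 variables (m = 0)."""
    rng = random.Random(seed)
    s = 0
    for _ in range(n):
        s += 1 if rng.random() < 0.5 else -1
    return s / n

# ten independent sample paths, each followed out to n = 100 000
finals = [abs(path_average(seed, 100_000)) for seed in range(10)]
```

Every one of the simulated paths has |Sn/n| small at n = 100 000, which is what the strong law asserts for almost every ω.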
6.8 Exercises
6.8.1 Convergence in Distribution
1. (5B1540 2003-08-27) Let the random variables X1, X2, . . . be I.I.D. with the p.d.f. fX(x) = (1 − cos x)/(πx²).
(a) Check that fX(x) = (1 − cos x)/(πx²) is a probability density. Aid: First, note 1 − cos x = 2 sin²(x/2). Then recall (4.45), and the inverse transform (4.2), i.e.,
f(x) = (1/2π) ∫_{−∞}^{∞} e^{itx} f̂(t) dt.
(b) Show that (1/n)(X1 + X2 + . . . + Xn) →d C(0, 1), as n → ∞. Aid: Use (4.2) to find ϕX(t).
as p ↓ 0.
as n → ∞.
4. (From [35]) This exercise studies convergence in distribution in relation to convergence of the corresponding
sequence of expectations.
{Xn}n≥1 is a sequence of r.v.'s such that for a real number r
P (Xn = 0) = 1 − 1/n, P (Xn = n^r) = 1/n.
(a) Show that Xn →d 0, as n → ∞.
(b) Investigate lim_{n→∞} E [Xn] for r < 1, r = 1 and r > 1. Is there convergence to the expectation of the limiting distribution δ0?
as n → ∞.
Y/√λ →d N (0, 1),
as λ → ∞.
8. (From [35]) {X_l^{(n)}}_{l≥1} is for each n a sequence of independent r.v.'s such that
P (X_l^{(n)} = 0) = 1 − 1/n, P (X_l^{(n)} = 1) = 1/n.
Let N assume values in the nonnegative integers. N is independent of {X_l^{(n)}}_{l≥1} for each n. Set
S_N^{(n)} = X_1^{(n)} + X_2^{(n)} + . . . + X_{N+n}^{(n)}.
Show that
S_N^{(n)} →d Po(1),
as n → ∞.
9. (From [35]) {Xn }n≥1 is a sequence of independent r.v.’s, Xn ∈ Po(λ) for each n. N is independent of
{Xn }n≥1 , and N ∈ Ge(p). Set
SN = X1 + X2 + . . . + XN, S0 = 0.
Let now λ → 0, while at the same time p → 0, so that λ/p → α, where α is a pre-selected positive number. Show that
SN →d Fs(α/(α + 1)).
10. (From [49]) {Xn}n≥1 is a sequence of independent r.v.'s, Xn ∈ C(0, 1). Show that
Yn def= (1/n) max (X1, . . . , Xn) →d Y, where FY(y) = e^{−1/(πy)}, y > 0,
as n → ∞. Aid: arctan(x) + arctan(1/x) = π/2 and arctan y = y − y³/3 + y⁵/5 − y⁷/7 + . . ..
11. (From [49]) {Xn }n≥1 is a sequence of independent r.v.’s, Xn ∈ Pa(1, 2).
14. (From [49]) {Xn}n≥1 is a sequence of independent and identically distributed r.v.'s, with the characteristic function
ϕ(t) = 1 − √(|t|(2 − |t|)) for |t| ≤ 1, ϕ(t) = 0 for |t| ≥ 1.
Show that
(1/n²) Σ_{k=1}^n Xk →d X,
as n → ∞, where ϕX(t) = e^{−√(2|t|)}, and compare with (4.48).
15. (From [49]) {Xn }n≥1 is a sequence of independent r.v.’s, Xn ∈ La(a) for each n. N is independent of
{Xn }n≥1 , and N ∈ Po(m). Set
SN = X1 + X2 + . . . + XN, S0 = 0.
Let now m → +∞, while at the same time a → 0, so that ma² → 1. Show that then
SN →d N (0, 2).
16. (From [49]) {Xn }n≥1 is a sequence of independent r.v.’s, Xn ∈ Po(µ) for each n. N is independent of
{Xn }n≥1 , and N ∈ Po(λ). Set
SN = X1 + X2 + . . . + XN, S0 = 0.
Let now λ → ∞, while at the same time µ → 0, so that µλ → υ > 0. Show that
SN →d Po(υ).
17. (From [49]) {Xn }n≥1 is a sequence of independent r.v.’s, Xn ∈ Po(µ) for each n. N is independent of
{Xn }n≥1 , and N ∈ Ge(p). Set
SN = X1 + X2 + . . . + XN, S0 = 0.
Let now µ → 0, while at the same time p → 0, so that p/µ → α > 0. Show that then
SN →d Ge(α/(α + 1)).
2. (From [35]) Use the result in the preceding example to show that the χ²(n) distribution with a large number of degrees of freedom, suitably standardized, is approximately N (0, 1).
3. (From [35]) {Xn}n≥1 is a sequence of independent r.v.'s, Xn ∈ U(0, 1) for every n. Take
Yn = e^{√n} (X1 · X2 · . . . · Xn)^{1/√n}.
Show that
Yn →d Log-Normal,
as n → ∞. The Log-Normal distribution is found in (2.93). Here the parameters of the Log-Normal
distribution are µ = 0 and σ 2 = 1.
4. (From [35]) Let {Xn }n≥1 be a sequence of independent r.v.’s, Xn ∈ U (0, e) for every n. Show that
(X1 · X2 · . . . · Xn)^{1/√n} →d Log-Normal,
as n → ∞. The Log-Normal distribution is in (2.93). Here the parameters of the Log-Normal distribution
are µ = 0 and σ 2 = 1.
5. {Xn}n≥1 is a sequence of independent and identically distributed SymBer-r.v.'s, i.e., they have the common p.m.f.
pX(−1) = 1/2, pX(1) = 1/2.
Set
Sn = Σ_{k=1}^n Xk/√k.
Show that the following statements of convergence hold, as n → ∞:
(a) Var[Sn]/ln n → 1. For this statement it is an advantage to know that Σ_{k=1}^n 1/k − ln n → γ, where γ is Euler's constant = 0.577 . . ..
(b)
(Sn − E [Sn]) / √(ln n) →d N (0, 1).
6. (From [49]) {Xn}n≥1 is an I.I.D. sequence of r.v.'s with E [X] = µ < ∞. Nn is independent of {Xn}n≥1, and Nn ∈ Ge(pn).
Let now pn → 0, as n → ∞. Show that
pn (X1 + X2 + . . . + XNn) →d Exp(µ),
as n → ∞.
7. (From [49]) {Xn }n≥1 is a sequence of positive I.I.D. r.v.’s with E [Xn ] = 1 and Var [Xn ] = σ 2 . For n ≥ 1
Sn def= X1 + X2 + . . . + Xn.
Show that
√(Sn) − √n →d N (0, σ²/4),
as n → ∞.
8. {Xi}i≥1 are I.I.D. N (µ, σ²). Find the asymptotic distribution of {e^{Tn}}n≥1, where Tn = (1/n) Σ_{i=1}^n Xi.
Show that
(1/n) (Sn − Σ_{k=1}^n pk) →P 0,
3. (From [35]) Let X1, X2, . . . , X2n+1 be independent and identically distributed r.v.'s. They have a distribution function FX(x) such that the equation FX(m) = 1/2 has a unique solution m. Set Mn = the median of X1, X2, . . . , X2n+1.
The median of an odd number of numerical values is the middle one of the numbers. The median is thus
algorithmically found by sorting the numerical values from the lowest value to the highest value and
picking the middle one, i.e., the one separating the higher half of the list from the lower half. Show that
Mn →P m,
as n → ∞.
4. (sf2940 2012-02-11) X1, X2, . . . , Xn, . . . are independent and identically distributed r.v.'s, Xn =d X, and E [X] = µ, Var [X] = σ² > 0. Set
X̄n = (1/n) Σ_{k=1}^n Xk.
Show that
(1/n) Σ_{k=1}^n (Xk − X̄n)² →P σ², (6.33)
as n → ∞. Aid: Apply the weak law of large numbers and a suitable property of convergence in probability to prove the assertion.
d
5. X1, X2, . . . , Xn, . . . are independent and identically distributed r.v.'s, Xn =d X for each n, and E [X] = µ, Var [X] = σ² > 0. Set
X̄n = (1/n) Σ_{k=1}^n Xk,
and
Sn² = (1/(n − 1)) Σ_{k=1}^n (Xk − X̄n)².
Show that
√n (X̄n − µ) / Sn →d N (0, 1),
as n → ∞. Aid: The result in (6.33) is definitely useable here.
6. Show that the t(n) distribution converges to N (0, 1), as n → ∞. Aid: Consider exercise 5. in this section.
Show that
P
Yn → 0,
as n → ∞.
(a) Show that the limiting r.v. X is also bounded almost surely by L, i.e.,
P (| X | ≤ L) = 1.
and
Bm (ε) = ∪n≥m An (ε) .
Then show that
(a) Xn →a.s. X, as n → ∞ ⇔ P (Bm(ε)) → 0, as m → ∞, for every ε > 0. Aid: Part of this is imbedded in the proof of theorem 6.7.1.
(b) Xn →a.s. X, as n → ∞, if Σ_{n≥1} P (An(ε)) < ∞ for all ε > 0.
Then show that Xn →2 0, but that the sequence Xn does not converge almost surely.
Aid: You need a result in the preceding exercise 1. in this section.
x = 0.a1 a2 a3 a4 a5 . . .
5. {Xn}n≥1 is a sequence of random variables such that there is a sequence of (nonnegative) numbers {ǫn}n≥1 such that Σ_{n=1}^∞ ǫn < ∞ and
Σ_{n=1}^∞ P (| Xn+1 − Xn | > ǫn) < +∞. (6.35)
Show that there is a random variable X such that Xn →a.s. X, as n → ∞.
Chapter 7
Convergence in Mean Square and a Hilbert Space
We write also
Xn →2 X.
This definition is silent about convergence of individual sample paths (Xn(ω))_{n=1}^∞ (for a fixed ω ∈ Ω). By a sample path we mean that we take a fixed ω ∈ Ω and obtain the sequence of outcomes (Xn(ω))_{n=1}^∞. Hence, by the above we cannot in general claim that Xn(ω) → X(ω) for an arbitrarily chosen ω or almost surely, as shown in the preceding.
(iii) haX + bY, Zi = ahX, Zi + bhY, Zi, where Z ∈ L2 (Ω, F , P) and a and b are real constants.
193
194 CHAPTER 7. CONVERGENCE IN MEAN SQUARE AND A HILBERT SPACE
In view of (i)-(iii) we can regard random variables X ∈ L2 (Ω, F , P) as elements in a real linear vector space
with the scalar product hX, Y i. Hence L2 (Ω, F , P) equipped with the scalar product hX, Y i is a pre-Hilbert
space, see e.g., in [96, Appendix H p. 252]1 or [89, ch. 17.7] and [92, pp. 299−301]. Thus we define the norm
(or length)
∥X∥ def= √⟨X, X⟩. (7.3)
In fact one can prove the completeness of our pre-Hilbert space, [63, p. 22]. Completeness means that if
δ(Xn, Xm) → 0, as min(m, n) → ∞,
then there exists X ∈ L²(Ω, F, P) such that Xn →2 X. In other words, L²(Ω, F, P) equipped with the scalar
product hX, Y i is a Hilbert space. Hence several properties in this chapter are nothing but special cases of
general properties of Hilbert spaces.
Hilbert spaces are important, as, amongst other things, they possess natural notions of length, orthogonality
and orthogonal projection, see [36, chapter 6.] for a full account. Active knowledge about Hilbert spaces in
general will NOT be required in the examination of this course.
The inequality (7.5) is known as the Cauchy–Schwarz inequality, and is but a special case of Hölder's inequality in (1.25) for p = q = 2. The inequality (7.6) is known as the triangle inequality.
(a) E[X] = lim_{n→∞} E[Xn]
1 The reference is primarily to this book written in Swedish, as it is the textbook for SI1140 Mathematical Methods in Physics http://www.kth.se/student/kurser/kurs/SI1140?l=en UK, which is a mandatory course for the programme CTFYS at KTH.
7.3. PROPERTIES OF MEAN SQUARE CONVERGENCE 195
(b) E[|X|²] = lim_{n→∞} E[|Xn|²]
(c) E[XY] = lim_{n→∞} E[Xn · Yn]
Proof We prove (c), when (a) and (b) have been proved. First, we see that |E[Xn Yn]| < ∞ and |E[XY]| < ∞ by virtue of the Cauchy–Schwarz inequality and the other assumptions. In order to prove (c) we consider
since |E [Z] | ≤ E [|Z|]. Now we can use the ordinary triangle inequality for real numbers and obtain:
and
E[|(Yn − Y)X|] ≤ √(E[|Yn − Y|²]) √(E[|X|²]).
But by assumption √(E[|Xn − X|²]) → 0, E[|Yn|²] → E[|Y|²] (part (b)), and √(E[|Yn − Y|²]) → 0, and thus the assertion (c) is proved.
We shall often need Cauchy's criterion for mean square convergence, which is the next theorem.
Theorem 7.3.2 Consider the random sequence {Xn}_{n=1}^{∞} with Xn ∈ L2(Ω, F, P) for every n. Then
E[|Xn − Xm|²] → 0   (7.7)
The assertion here is nothing else but that the pre-Hilbert space defined in section 7.1.2 above is complete. A useful form of Cauchy's criterion is known as Loève's criterion:
Theorem 7.3.3
E[|Xn − Xm|²] → 0 ⇐⇒ E[Xn Xm] → C.   (7.8)
Proof of ⇐=: E[|Xn − Xm|²] = E[Xn²] + E[Xm²] − 2E[Xn Xm] → C + C − 2C = 0.
Proof of =⇒: We assume that E |Xn − Xm |2 → 0. Then for any m and n
Here,
E[(Xn − X)Xm] → E[0 · X] = 0,
by theorem 7.3.1 (c), since Xn →² X according to Cauchy's criterion. Also,
E[X Xm] → E[X²] = C
7.4 Applications
7.4.1 Mean Ergodic Theorem
Although the definition of convergence in mean square encompasses convergence to a random variable, in many
applications we shall encounter convergence to a constant.
Theorem 7.4.1 The random sequence {Xn}_{n=1}^{∞} is uncorrelated with E[Xn] = µ and Var[Xn] = σ² < ∞ for every n. Then
(1/n) Σ_{j=1}^{n} Xj →² µ,
as n → ∞.
Proof Let us set Sn = (1/n) Σ_{j=1}^{n} Xj. We have E[Sn] = µ and Var[Sn] = σ²/n, since the variables are uncorrelated.
For the claimed mean square convergence we need to consider
E[|Sn − µ|²] = E[(Sn − E[Sn])²] = Var[Sn] = (1/n)σ²,
so that
E[|Sn − µ|²] = (1/n)σ² → 0
as n → ∞, as was claimed.
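A quick numerical illustration of the mean ergodic theorem (a sketch, not part of the notes; the distribution of the Xn below is an arbitrary choice of uncorrelated variables): the mean square error E|Sn − µ|² should behave like σ²/n.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0
n_paths = 2000  # independent replications used to estimate E|S_n - mu|^2

def mse_of_mean(n):
    # sample mean over n uncorrelated (here: i.i.d. normal) variables,
    # replicated n_paths times to estimate the expectation
    X = rng.normal(mu, sigma, size=(n_paths, n))
    S_n = X.mean(axis=1)
    return np.mean((S_n - mu) ** 2)

mse_100 = mse_of_mean(100)      # theory: sigma^2/100 = 0.09
mse_10000 = mse_of_mean(10000)  # theory: sigma^2/10000 = 0.0009
```

The two estimates track σ²/n closely, consistent with the proof above.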
as n → ∞. The symbol Σ_{i=0}^{∞} ai Xi is a notation for a random variable in L2(Ω, F, P) defined by the converging sequence. The Cauchy criterion in theorem 7.3.2 gives for Yn = Σ_{i=0}^{n} ai Xi and n < m that
E[|Yn − Ym|²] = E[| Σ_{i=n+1}^{m} ai Xi |²] = σ² Σ_{i=n+1}^{m} ai² + µ² (Σ_{i=n+1}^{m} ai)²,   (7.9)
since by Steiner's formula E[Z²] = Var(Z) + (E[Z])² for any random variable that has a variance. We need to recall a topic from mathematical analysis.
Remark 7.4.1 The Cauchy sequence criterion for convergence of sums states that a sum of real numbers ai,
Σ_{i=0}^{∞} ai,
converges if and only if the sequence of partial sums is a Cauchy sequence. By a partial sum we mean a finite sum like
Sn = Σ_{i=0}^{n} ai.
That the sequence of partial sums is a Cauchy sequence says that for every ε > 0, there is a positive integer N such that for all m ≥ n ≥ N we have
|Sm − Sn| = | Σ_{i=n+1}^{m} ai | < ε,
which is equivalent to
lim_{n→∞, k→∞} Σ_{i=n}^{n+k} ai = 0.   (7.10)
This can be proved as in [69, pp. 137−138]. The advantage of checking convergence of Σ_{i=0}^{∞} ai by partial sums is that one does not need to guess the value of the limit in advance.
By the Cauchy sequence criterion for convergence of sums we see on the right hand side of (7.9), by virtue of (7.10), that E[|Yn − Ym|²] converges to zero if and only if
• in case µ ≠ 0: Σ_{i=0}^{∞} |ai| < ∞ (which implies Σ_{i=0}^{∞} ai² < ∞),
• in case µ = 0: Σ_{i=0}^{∞} ai² < ∞.
and, as n → ∞,
Xn →² X.   (7.12)
Thus (7.12) implies in view of Theorem 7.3.1 (a) and (b) that there are numbers µ and σ 2 such that
As an application, we can continue with the sums in section 7.4.2. If the Xi are independent N(µ, σ²), and Σ_{i=0}^{∞} |ai| < ∞, then
Σ_{i=0}^{∞} ai Xi ∈ N( µ Σ_{i=0}^{∞} ai, σ² Σ_{i=0}^{∞} ai² ).   (7.13)
Example 7.5.1 Let M0 = {X ∈ L2 (Ω, F , P) | E [X] = 0}. This is clearly a subspace. By Theorem 7.3.1 (a)
M0 is also a closed subspace. It is also a Hilbert space in its own right.
Example 7.5.2 Let {0} = {X ∈ L2 (Ω, F , P) | X = 0 a.s.}. Then {0} is a subspace, and a subspace of any
other subspace.
Let X = (X1, X2, . . .) be a sequence of random variables in L2(Ω, F, P). We define the subspace spanned by X1, X2, . . . , Xn, which is the subspace LXn consisting of all linear combinations Σ_{i=1}^{n} ai Xi of the random variables, and their limits in mean square, or
LXn = sp{X1, X2, . . . , Xn}.   (7.14)
Since we here keep the number of random variables fixed and finite, the limits in mean square are limits of
Ym = Σ_{i=1}^{n} ai(m) Xi, as m → ∞.
Definition 7.5.1 Two random variables X ∈ L2(Ω, F, P) and Y ∈ L2(Ω, F, P) are said to be orthogonal, if
⟨X, Y⟩ = 0.   (7.15)
⟨X, Y⟩ = E[XY] = 0,
X ⊥ M. (7.16)
One might also want to check that M ⊥ is actually a subspace, as is claimed above.
The following theorem is fundamental for many applications, and holds, of course, for any Hilbert space,
not just for L2 (Ω, F , P), where we desire to take advantage of it.
Theorem 7.5.3 Let M be a closed subspace of L2(Ω, F, P). Then any X ∈ L2(Ω, F, P) has a unique decomposition
X = ProjM(X) + Z   (7.18)
where ProjM(X) ∈ M and Z ∈ M⊥. In addition it holds that
Proof is omitted, and can be found in many texts and monographs, see, e.g., [26, pp. 35−36] or [36, p.204−206].
The theorem and proof in [96, p. 262] deals with a special case of the result above.
For our immediate purposes the interpretation of theorem 7.5.3 is of a higher priority than expediting its proof.
We can think of ProjM (X) as an orthogonal projection of X to M or as an estimate of X by means of M .
Then Z is the estimation error. ProjM(X) is optimal in the sense that it minimizes, over V ∈ M, the mean squared error
‖X − V‖² = E[(X − V)²].
This interpretation becomes more obvious if we take M = LXn as in (7.14). Then ProjM(X) ∈ M must be of the form
ProjM(X) = Σ_{i=1}^{n} ai Xi,   (7.20)
which is an optimal linear mean square error estimate of X by means of X1, X2, . . . , Xn. The coefficients ai can be found as a solution to a system of linear equations, see the exercises below.
Example 7.5.4 We reconsider M0 in example 7.5.1 above. Then the random variable 1, i.e., 1(ω) = 1 for
almost all ω, is orthogonal to M0 , since
E [X · 1] = E [X] = 0,
for any X ∈ M0 . The orthogonal subspace M0⊥ is in fact spanned by 1,
M0⊥ = {Z | Z = c · 1, c ∈ R}.
Example 7.5.5 X and Y are random variables in L2 (Ω, F , P). Let us consider the subspace M = LY1 ⊂ M0
(example 7.5.1 above) spanned by Y − µY , where µY = E [Y ]. Thus ProjM (X − µX ), µX = E [X], is a random
variable that is of the form
ProjM (X − µX ) = a (Y − µY )
for some real number a. Let
Z = (X − µX ) − a (Y − µY ) .
Then we know by theorem 7.5.3 that for the optimal error Z
Z ⊥ LY1 ,
hZ, a (Y − µY )i = 0.
E[((X − µX) − a(Y − µY)) · a(Y − µY)] = 0   (7.21)
⇔
aE[(X − µX)(Y − µY)] − a²E[(Y − µY)²] = 0
⇔
aCov(X, Y) = a²Var(Y),
which gives
a = Cov(X, Y)/Var(Y).   (7.22)
This makes good sense, since if X and Y are independent, then Cov (X, Y ) = 0, and ProjM (X − µX ) = 0 (=
the random variable 0(ω) = 0 for all ω ∈ Ω). Clearly, if X and Y are independent, there is no information
about X in Y (and vice versa), and there is no effective estimate that would depend on Y . Let us write
a = Cov(X, Y)/Var(Y) = [Cov(X, Y)/(√Var(X) √Var(Y))] · √Var(X)/√Var(Y) = ρX,Y · √Var(X)/√Var(Y),
where ρX,Y is the coefficient of correlation between X and Y . Then we have
X − µX = ρX,Y · (√Var(X)/√Var(Y)) · (Y − µY) + Z
⇔
X = µX + ρX,Y · (√Var(X)/√Var(Y)) · (Y − µY) + Z.
Therefore, the best linear mean square estimator of X by means of Y is
X̂ = µX + ρX,Y · (√Var(X)/√Var(Y)) · (Y − µY).   (7.23)
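A small numerical check of (7.22)−(7.23) (a sketch; the joint distribution of (X, Y) below is an arbitrary dependent pair, not one from the notes): the estimation error Z = (X − µX) − a(Y − µY) should be empirically uncorrelated with Y, as theorem 7.5.3 predicts.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200000
# an arbitrary dependent pair: Y standard normal, X = 0.5*Y + independent noise
Y = rng.normal(size=n)
X = 0.5 * Y + rng.normal(size=n)

a = np.cov(X, Y)[0, 1] / np.var(Y, ddof=1)   # eq. (7.22), sample version
X_hat = X.mean() + a * (Y - Y.mean())        # eq. (7.23): best linear estimator
Z = X - X_hat                                # estimation error
orth = np.cov(Z, Y)[0, 1]                    # should be ~ 0, i.e. Z ⊥ L^Y_1
```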
7.6 Exercises
7.6.1 Mean Square Convergence
1. Assume Xn ∈ L2(Ω, F, P) for all n and Yn ∈ L2(Ω, F, P) for all n, and that
Xn →² X,  Yn →² Y.
Show that
aXn + bYn →² aX + bY,
as n → ∞. You should use the definition of mean square convergence and suitable properties of ‖X‖ as defined in (7.3).
2. Consider
Xn = Σ_{k=1}^{n} (1/k) Wk, n ≥ 1.
Show that
Xn →² X as n → ∞,
and that X ∈ N(0, σ²π²/6).
3. The sequence {Xn}_{n=1}^{∞} of random variables is such that E[Xi] = µ for all i, Cov(Xi, Xj) = 0 if i ≠ j, and Var(Xi) ≤ c for all i. Observe that the variances are thus uniformly bounded but not necessarily equal to each other for all i. This changes the setting from that in theorem 7.4.1 above. Show that
(1/n) Σ_{j=1}^{n} Xj →² µ,
as n → ∞.
γmk = ⟨Ym, Yk⟩, m = 1, . . . , N; k = 1, . . . , N,
γom = ⟨Ym, X⟩, m = 1, . . . , N.   (7.24)
(a) Show first that if a1, . . . , aN are solutions to the linear system of equations
Σ_{k=1}^{N} ak γmk = γom, m = 1, . . . , N,   (7.25)
then
X − ProjM(X) ⊥ LYN,   (7.26)
for a1, . . . , aN that satisfy the system of equations (7.25). Then we can write the estimation error ε using an arbitrary linear estimator b1Y1 + . . . + bNYN in LYN as
ε = X − (b1Y1 + . . . + bNYN) = (X − ProjM(X)) + Σ_{k=1}^{N} (ak − bk) Yk.
Expand now E[ε²] and recall (7.26).
In other words, LPn is spanned by random variables of the form
Σ_{i=1}^{k} ci χAi(ω),
Let Z ∈ U (0, 1) (=the uniform distribution on (0, 1)). X and Z are independent. We multiply these to
get
Y = Z · X.
(a) Consider M = LY1, the subspace spanned by Y − E[Y]. Find that the best linear estimator in mean square sense, ProjM(X − E[X]), is
X̂ = σ√(π/2) + [(1 − π/4)/(2/3 − π/8)] · (Y − (σ/2)√(π/2)).
Aid: This is an application of the results in example 7.5.5, see (7.23). Preferably use the expression for a from (7.22).
(b) Show that
E[X | Y] = (σ/√(2π)) · e^{−Y²/(2σ²)} / Q(Y/σ),
where the Q-function Q(x) = (1/√(2π)) ∫_x^∞ e^{−t²/2} dt is the complementary distribution function for the standard normal distribution, i.e., Φ(x) = 1 − Q(x).
Aid: Find the joint p.d.f. fX,Y (x, y) and the marginal p.d.f. fY (y) and compute E [X | Y = y] by
its definition.
Remark 7.6.1 This exercise shows that the best estimator in the mean square sense, E[X | Y], see section 3.7.3 in chapter 2, and the best linear estimator in the mean square sense, ProjM(X − E[X]), by no means have to be identical.
Chapter 8
Gaussian Vectors
• Notation: X ∈ N (µ, σ 2 ).
• X ∈ N (µ, σ 2 ) ⇒ Y = aX + b ∈ N (aµ + b, a2 σ 2 ).
• X ∈ N(µ, σ²) ⇒ Z = (X − µ)/σ ∈ N(0, 1).
We shall next see that all of these properties are special cases of the corresponding properties of a multivariate
normal/Gaussian random variable as defined below, which bears witness to the statement that the normal
distribution is central in probability theory.
8.1.2 Notation for Vectors, Mean Vector, Covariance Matrix & Characteristic
Functions
An n × 1 random vector or a multivariate random variable is denoted by
X = (X1, X2, . . . , Xn)′,
where ′ is the vector transpose. A vector in Rn is designated by
x = (x1, x2, . . . , xn)′.
FX (x) = P (X ≤ x) = P (X1 ≤ x1 , X2 ≤ x2 , . . . , Xn ≤ xn ) .
that is, the covariance of Xi and Xj. Every covariance matrix, now designated by C, is by construction symmetric,
C = C′.   (8.3)
It is shown in courses on linear algebra that nonnegative definiteness implies det C ≥ 0. In terms of the entries ci,j of a covariance matrix C = (ci,j)_{i=1,j=1}^{n,n} the preceding implies the following necessary properties.
2. ci,i = Var (Xi ) = σi2 ≥ 0 (the elements in the main diagonal are the variances, and thus all elements in
the main diagonal are nonnegative).
3. c²i,j ≤ ci,i · cj,j (the Cauchy–Schwarz inequality, c.f. (7.5)). Note that this yields another proof of the fact that the absolute value of a coefficient of correlation is ≤ 1.
Example 8.1.1 The covariance matrix of a bivariate random variable X = (X1, X2)′ is often written in the following form
C = ( σ1²     ρσ1σ2
      ρσ1σ2   σ2²  ),   (8.5)
where σ1² = Var(X1), σ2² = Var(X2) and ρ = Cov(X1, X2)/(σ1σ2) is the coefficient of correlation of X1 and X2. C is invertible (⇒ positive definite) if and only if ρ² ≠ 1.
Linear transformations of random vectors are Borel functions Rn 7→ Rm of random vectors. The rules for
finding the mean vector and the covariance matrix of a transformed vector are simple.
Proposition 8.1.2 X is a random vector with mean vector µX and covariance matrix CX. B is an m × n matrix. If Y = BX + b, then
E[Y] = BµX + b,   (8.6)
CY = B CX B′.   (8.7)
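Proposition 8.1.2 is easy to verify by simulation (a sketch; µX, CX and B below are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
mu_X = np.array([1.0, -2.0, 0.5])
C_X = np.array([[2.0, 0.5, 0.0],
                [0.5, 1.0, 0.3],
                [0.0, 0.3, 1.5]])          # a positive definite covariance matrix
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])           # 2x3, so Y = BX + b is 2-dimensional
b = np.array([3.0, 4.0])

X = rng.multivariate_normal(mu_X, C_X, size=400000)
Y = X @ B.T + b

mean_Y = Y.mean(axis=0)                    # should approach B mu_X + b, eq. (8.6)
cov_Y = np.cov(Y, rowvar=False)            # should approach B C_X B', eq. (8.7)
```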
We have
Definition 8.1.1
φX(s) := E[e^{is′X}] = ∫_{Rn} e^{is′x} dFX(x).   (8.8)
In (8.8), s′x is the scalar product in Rn,
s′x = Σ_{i=1}^{n} si xi.
As FX is a joint distribution function on Rn and ∫_{Rn} is a notation for a multiple integral over Rn, we know that
∫_{Rn} dFX(x) = 1,
Theorem 8.1.3 [Kac's theorem] X = (X1, X2, · · · , Xn)′. The components X1, X2, · · · , Xn are independent if and only if
φX(s) = E[e^{is′X}] = Π_{i=1}^{n} φXi(si),
Proof Assume that X = (X1, X2, · · · , Xn)′ is a vector with independent Xi, i = 1, . . . , n, that have, for convenience of writing, the joint p.d.f. fX(x). We have in (8.8)
φX(s) = ∫_{Rn} e^{is′x} fX(x) dx
= ∫_{−∞}^{∞} · · · ∫_{−∞}^{∞} e^{i(s1x1 + ... + snxn)} Π_{i=1}^{n} fXi(xi) dx1 · · · dxn   (8.9)
= ∫_{−∞}^{∞} e^{is1x1} fX1(x1) dx1 · · · ∫_{−∞}^{∞} e^{isnxn} fXn(xn) dxn = φX1(s1) · · · φXn(sn),
The next statement is a manifestation of the Cramér-Wold theorem1 or the Cramér-Wold device, [67,
p. 87], which states that a probability measure on (Rn , B (Rn )) is uniquely determined by the totality of its
one-dimensional projections. Seen from this angle a multivariate normal distribution is characterized by the
totality of its one dimensional linear projections.
1 Herman Wold, 1908−1992, was a doctoral student of Harald Cramér, then Professor of statistics at Uppsala University and later at Gothenburg University http://en.wikipedia.org/wiki/Herman Wold
8.1. MULTIVARIATE GAUSSIAN DISTRIBUTION 209
Theorem 8.1.4 X ∈ N(µ, C) if and only if a′X has a normal distribution for all vectors a = (a1, a2, . . . , an)′.
Proof Assume that a′X has a normal distribution for all a and that µ and C are the mean vector and covariance matrix of X, respectively. Here (8.6) and (8.7) with B = a′ give
E[a′X] = a′µ,  Var[a′X] = a′Ca.
Hence, if we set Y = a′X, then by assumption Y ∈ N(a′µ, a′Ca) and the characteristic function of Y is by (8.2)
ϕY(t) = e^{ita′µ − (1/2)t²a′Ca}.
Thus
ϕX(a) = E[e^{ia′X}] = ϕY(1) = e^{ia′µ − (1/2)a′Ca}.
In view of definition 8.1.2 this shows that X ∈ N(µ, C). The proof of the statement in the other direction is obvious.
Example 8.1.5 In this example we study a bivariate random variable (X, Y)′ such that both X and Y have normal marginal distributions but there is a linear combination (in fact, X + Y) which does not have a normal distribution. Therefore (X, Y)′ is not a bivariate normal random variable. This is an exercise stated in [80]. Let X ∈ N(0, σ²). Let U ∈ Be(1/2) be independent of X. Define
Y = X if U = 0, and Y = −X if U = 1.
Let us find the distribution of Y. We compute the characteristic function by double expectation:
ϕY(t) = E[e^{itY}] = E[E[e^{itY} | U]]
= E[e^{itY} | U = 0] · (1/2) + E[e^{itY} | U = 1] · (1/2)
= E[e^{itX} | U = 0] · (1/2) + E[e^{−itX} | U = 1] · (1/2),
and since X and U are independent the conditioning drops out, and, as X ∈ N(0, σ²),
= E[e^{itX}] · (1/2) + E[e^{−itX}] · (1/2) = (1/2) e^{−t²σ²/2} + (1/2) e^{−t²σ²/2} = e^{−t²σ²/2},
which by uniqueness of characteristic functions says that Y ∈ N(0, σ²). Hence both marginal distributions of the bivariate random variable (X, Y) are normal distributions. Yet, the sum
X + Y = 2X if U = 0, and X + Y = 0 if U = 1,
is not a normal random variable. Hence (X, Y) is according to theorem 8.1.4 not a bivariate Gaussian random variable. Clearly we have
(X, Y)′ = ( 1   0
            0   (−1)^U ) (X, X)′.   (8.12)
Hence we multiply (X, X)′ once by a random matrix to get (X, Y)′ and therefore should not expect (X, Y)′ to have a joint Gaussian distribution. We take next a look at the details. If U = 1, then
(X, Y)′ = ( 1   0
            0  −1 ) (X, X)′ = A1 (X, X)′,
and if U = 0,
(X, Y)′ = ( 1   0
            0   1 ) (X, X)′ = A0 (X, X)′.
The covariance matrix of (X, X)′ is clearly
CX = σ² ( 1  1
          1  1 ).
We set
C1 = (  1  −1
       −1   1 ),   C0 = ( 1  1
                          1  1 ).
One can verify, c.f. (8.7), that σ²C1 = A1 CX A1′ and σ²C0 = A0 CX A0′. Hence σ²C1 is the covariance matrix of (X, Y) if U = 1, and σ²C0 is the covariance matrix of (X, Y) if U = 0.
It is clear by the above that the joint distribution FX,Y should actually be a mixture of two distributions F(1)X,Y and F(0)X,Y with mixture coefficients 1/2, 1/2:
FX,Y(x, y) = (1/2) F(1)X,Y(x, y) + (1/2) F(0)X,Y(x, y).
We understand this as follows. We draw first a value u from Be(1/2), which points out one of the distributions, F(u)X,Y, and then draw a sample of (X, Y) from F(u)X,Y. We can explore these facts further.
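The example can also be checked by simulation (a sketch): both marginals of (X, Y) have the N(0, σ²) moments, yet X + Y carries a point mass of probability 1/2 at 0, so it cannot be normal.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, n = 2.0, 500000
X = rng.normal(0.0, sigma, size=n)
U = rng.integers(0, 2, size=n)          # Be(1/2)
Y = np.where(U == 0, X, -X)             # Y = X if U = 0, Y = -X if U = 1

# both marginals look N(0, sigma^2) ...
var_X, var_Y = X.var(), Y.var()
# ... but X + Y equals exactly 0 whenever U = 1: an atom, so not Gaussian
frac_zero = np.mean(X + Y == 0.0)
```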
Let us determine the joint distribution of (X, Y)′ by means of the joint characteristic function, see eq. (8.8). We get
ϕX,Y(t, s) = E[e^{i(tX+sY)}] = E[e^{i(tX+sY)} | U = 0] · (1/2) + E[e^{i(tX+sY)} | U = 1] · (1/2)
= E[e^{i(t+s)X}] · (1/2) + E[e^{i(t−s)X}] · (1/2)
= (1/2) e^{−(t+s)²σ²/2} + (1/2) e^{−(t−s)²σ²/2}.
From the above,
(t − s)² = (t, s) C1 (t, s)′  and  (t + s)² = (t, s) C0 (t, s)′.
We see that C1 and C0 are non-negative definite matrices. (It holds also that det C1 = det C0 = 0.)
Therefore
(1/2) e^{−(t+s)²σ²/2} + (1/2) e^{−(t−s)²σ²/2} = (1/2) e^{−σ² s′C0 s/2} + (1/2) e^{−σ² s′C1 s/2},
where s = (t, s)′. This shows by uniqueness of characteristic functions that the joint distribution of (X, Y) is a mixture of N((0, 0)′, σ²C0) and N((0, 0)′, σ²C1) with the mixture coefficients 1/2, 1/2.
Theorem 8.1.6 Let X ∈ N(µ, C) and Y = BX + b, where B is an m × n matrix and b ∈ Rm. Then
Y ∈ N(Bµ + b, BCB′).
or
ϕY(s) = e^{is′b} E[e^{i(B′s)′X}].   (8.13)
Here
E[e^{i(B′s)′X}] = ϕX(B′s).
Furthermore
ϕX(B′s) = e^{i(B′s)′µ − (1/2)(B′s)′C(B′s)}.
Since
(B′s)′µ = s′Bµ,  (B′s)′C(B′s) = s′BCB′s,
we get
e^{i(B′s)′µ − (1/2)(B′s)′C(B′s)} = e^{is′Bµ − (1/2)s′BCB′s}.
Therefore
ϕY(s) = e^{is′b} ϕX(B′s) = e^{is′(b+Bµ) − (1/2)s′BCB′s},   (8.14)
2. Theorem 8.1.7 A Gaussian multivariate random variable has independent components if and only if the covariance matrix is diagonal.
Proof Let Λ be a diagonal covariance matrix with the λi's on the main diagonal, i.e., Λ = diag(λ1, λ2, . . . , λn). Then
ϕX(t) = e^{it′µ − (1/2)t′Λt} = e^{i Σ_{i=1}^{n} µi ti − (1/2) Σ_{i=1}^{n} λi ti²} = e^{iµ1t1 − (1/2)λ1t1²} e^{iµ2t2 − (1/2)λ2t2²} · · · e^{iµntn − (1/2)λntn²}
is the product of the characteristic functions of Xi ∈ N(µi, λi), which are by theorem 8.1.3 seen to be independent.
3. Theorem 8.1.8 If C is positive definite (⇒ det C > 0), then it can be shown that there is a joint p.d.f. of the form
fX(x) = (1/((2π)^{n/2} √(det C))) e^{−(1/2)(x−µX)′ C^{−1} (x−µX)}.   (8.15)
Proof It can be checked by a lengthy but straightforward computation that
e^{is′µ − (1/2)s′Cs} = ∫_{Rn} e^{is′x} (1/((2π)^{n/2} √(det C))) e^{−(1/2)(x−µ)′ C^{−1} (x−µ)} dx.
4. Theorem 8.1.9 (X1, X2)′ is a bivariate Gaussian random variable. The conditional distribution for X2 given X1 = x1 is
N( µ2 + ρ(σ2/σ1)(x1 − µ1), σ2²(1 − ρ²) ),   (8.16)
where µ2 = E(X2), µ1 = E(X1), σ2 = √Var(X2), σ1 = √Var(X1) and ρ = Cov(X1, X2)/(σ1 · σ2).
Proof is done by an explicit evaluation of (8.15) followed by an explicit evaluation of the pertinent conditional p.d.f. and is deferred to Appendix 8.4.
Hence for bivariate Gaussian variables the best estimator in the mean square sense, E [X2 | X1 ],
and the best linear estimator in the mean square sense are one and the same random variable,
c.f., example 7.5.5 and remark 7.6.1.
for an n × n matrix A, where A is lower triangular, see [80, Appendix 1]. Actually we can always decompose
C = LDL′,
where L is a unique n × n lower triangular matrix and D is diagonal with positive elements on the main diagonal, and we write A = L√D. Then A−1 is lower triangular, and
Z = A−1(X − µX)
is a standard Gaussian vector. In some applications, like, e.g., in time series analysis and signal processing, one refers to A−1 as a whitening matrix. Since A−1 is lower triangular, we have obtained Z by a causal operation, in the sense that Zi is a function of X1, . . . , Xi. Z is known as the innovations of X. Conversely, one goes from the innovations to X through another causal operation by X = AZ + b, and then
X ∈ N(b, AA′).
Let Z1 and Z2 be independent N(0, 1). We consider the lower triangular matrix
B = ( σ1             0
      ρσ2   σ2√(1 − ρ²) ),   (8.17)
which clearly has an inverse as soon as ρ ≠ ±1. Moreover, one verifies that C = B · B′, when we write C as in (8.5). Then we get
(X1, X2)′ = µ + B (Z1, Z2)′,   (8.18)
where, of course,
(Z1, Z2)′ ∈ N( (0, 0)′, I ).
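The factorization C = BB′ for (8.17) can be verified directly (a sketch with arbitrary parameter values):

```python
import numpy as np

sigma1, sigma2, rho = 1.5, 0.8, 0.6
B = np.array([[sigma1, 0.0],
              [rho * sigma2, sigma2 * np.sqrt(1.0 - rho ** 2)]])  # eq. (8.17)
C = np.array([[sigma1 ** 2, rho * sigma1 * sigma2],
              [rho * sigma1 * sigma2, sigma2 ** 2]])              # eq. (8.5)

ok = np.allclose(B @ B.T, C)   # C = B B'
```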
where X1 is p × 1 and X2 is q × 1, n = q + p. Let the covariance matrix C be partitioned in the sense that
C = ( Σ11  Σ12
      Σ21  Σ22 ),   (8.19)
Let X ∈ Nn(µ, C), where Nn refers to a normal distribution in n variables, and C and µ are partitioned as in (8.19)−(8.20). Then the marginal distribution of X2 is
X2 ∈ Nq(µ2, Σ22).
Let X ∈ Nn(µ, C), where C and µ are partitioned as in (8.19)−(8.20), and assume that the inverse Σ22^{−1} exists. Then the conditional distribution of X1 given X2 = x2 is normal, or,
X1 | X2 = x2 ∈ Np(µ1|2, Σ1|2),   (8.21)
where
µ1|2 = µ1 + Σ12 Σ22^{−1} (x2 − µ2)   (8.22)
and
Σ1|2 = Σ11 − Σ12 Σ22^{−1} Σ21.
By virtue of (8.21) and (8.22) the best estimator in the mean square sense and the best linear estimator in the mean square sense are one and the same random variable.
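The formulas (8.21)−(8.22) are straightforward to evaluate numerically (a sketch; the partitioned C and the conditioning value x2 below are arbitrary illustrative choices):

```python
import numpy as np

# a 3x3 covariance matrix partitioned with p = 1 (X1 scalar) and q = 2
C = np.array([[4.0, 1.0, 0.5],
              [1.0, 2.0, 0.3],
              [0.5, 0.3, 1.0]])
mu = np.array([1.0, 0.0, -1.0])

S11, S12 = C[:1, :1], C[:1, 1:]
S21, S22 = C[1:, :1], C[1:, 1:]
mu1, mu2 = mu[:1], mu[1:]

x2 = np.array([0.5, -0.5])                              # conditioning value X2 = x2
mu_1g2 = mu1 + S12 @ np.linalg.solve(S22, x2 - mu2)     # eq. (8.22)
Sigma_1g2 = S11 - S12 @ np.linalg.solve(S22, S21)       # conditional covariance
```

Conditioning can only reduce the variance, so Σ1|2 should be positive but smaller than the unconditional Var(X1) = 4.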
(iii) A is symmetric.
Since covariance matrices are symmetric, we have by the theorem above that all covariance matrices are
orthogonally diagonalizable.
That is, all eigenvalues of a covariance matrix are real. Hence we have for any covariance matrix the spectral decomposition
C = Σ_{i=1}^{n} λi ei ei′,   (8.23)
where C ei = λi ei. Since C is nonnegative definite, and its eigenvectors are orthonormal,
0 ≤ ei′ C ei = λi ei′ ei = λi,
and X ∈ N(0, CX), i.e., CX is a covariance matrix and Λ is diagonal (with the eigenvalues of CX on the main diagonal). Then if Y = P′X, we have by theorem 8.1.6 that
Y ∈ N(0, Λ).
In other words, Y is a Gaussian vector and has by theorem 8.1.7 independent components. This method of producing independent Gaussians has several important applications. One of these is principal component analysis, c.f. [59, p. 74]. In addition, the operation is invertible, as
X = PY
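A numeric sketch of this diagonalization (the covariance CX below is an assumed example): with P from the eigendecomposition of CX, the transformed vector Y = P′X has (empirically) diagonal covariance, hence independent Gaussian components.

```python
import numpy as np

rng = np.random.default_rng(4)
C_X = np.array([[3.0, 1.0],
                [1.0, 2.0]])                # an assumed covariance matrix
lam, P = np.linalg.eigh(C_X)                # C_X = P diag(lam) P'

X = rng.multivariate_normal([0.0, 0.0], C_X, size=300000)
Y = X @ P                                   # each row is (P' x)' for a sample x

cov_Y = np.cov(Y, rowvar=False)             # should be ~ diag(lam)
off_diag = cov_Y[0, 1]
```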
where
Q(x1, x2) = (1/(1 − ρ²)) · [ ((x1 − µ1)/σ1)² − 2ρ(x1 − µ1)(x2 − µ2)/(σ1σ2) + ((x2 − µ2)/σ2)² ].
Now we claim that
fX2|X1=x1(x2) = (1/(σ̃2√(2π))) e^{−(x2 − µ̃2(x1))²/(2σ̃2²)},
a p.d.f. of a Gaussian random variable X2 | X1 = x1 with the (conditional) expectation µ̃2(x1) and the (conditional) standard deviation σ̃2,
µ̃2(x1) = µ2 + ρ(σ2/σ1)(x1 − µ1),  σ̃2 = σ2√(1 − ρ²).
To prove these assertions about fX2|X1=x1(x2) we set
fX1(x1) = (1/(σ1√(2π))) e^{−(x1 − µ1)²/(2σ1²)},   (8.25)
and by (8.16)
= E[ (X1 − µ1)( µ2 + ρ(σ2/σ1)(X1 − µ1) − µ2 ) ]
= ρ(σ2/σ1) E[(X1 − µ1)(X1 − µ1)]
= ρ(σ2/σ1) E[(X1 − µ1)²] = ρ(σ2/σ1) σ1² = ρσ2σ1.
In other words, we have established that
ρ = E[(X1 − µ1)(X2 − µ2)] / (σ2σ1),
which says that ρ is the coefficient of correlation of (X1, X2)′.
8.5 Exercises
8.5.1 Bivariate Gaussian Variables
1. (From [42]) Let (X1, X2)′ ∈ N(µ, C), where
µ = (0, 0)′
and
C = ( 1  ρ
      ρ  1 ).
(a) Set Y = X1 − X2. Show that Y ∈ N(0, 2 − 2ρ).
(b) Show that for any ε > 0
P(|Y| ≤ ε) → 1,
if ρ ↑ 1.
2. (From [42]) Let (X1, X2)′ ∈ N(µ, C), where
µ = (0, 0)′
and
C = ( 1  ρ
      ρ  1 ).
(a) We want to find the distribution of the random variable X1 | X2 ≤ a. Show that
P(X1 ≤ x | X2 ≤ a) = (1/Φ(a)) ∫_{−∞}^{x} φ(u) Φ( (a − ρu)/√(1 − ρ²) ) du,   (8.26)
where Φ(x) is the distribution function of N(0, 1) and φ(x) the p.d.f. of N(0, 1), i.e., (d/dx)Φ(x) = φ(x).
We sketch two different solutions.
Aid 1. We need to find
P(X1 ≤ x | X2 ≤ a) = P({X1 ≤ x} ∩ {X2 ≤ a}) / P(X2 ≤ a).
Then
P({X1 ≤ x} ∩ {X2 ≤ a}) = ∫_{−∞}^{x} ∫_{−∞}^{a} fX1,X2(u, v) du dv = ∫_{−∞}^{x} ∫_{−∞}^{a} fX2(v) fX1|X2=v(u) du dv.
Now find fX2(v) and fX1|X2=v(u) and make a change of variable in ∫_{−∞}^{a} fX1|X2=v(u) du.
Aid 2. Use (8.18), which shows how to write (X1, X2)′ as a linear transformation of (Z1, Z2)′ with N(0, I), or as
(X1, X2)′ = B (Z1, Z2)′.
Then you can, since B is invertible, write the event
{X1 ≤ x} ∩ {X2 ≤ a}
as an event using (the innovations) Z1 and Z2 and then compute the desired probability using the joint distribution of Z1 and Z2.
becomes the p.d.f. of a bivariate normal distribution, and determine its parameters, that is, its mean vector and covariance matrix.
Answer: c = √3/(2π), N( (0, 0)′, C ) with C = ( 2/3  1/3
                                                1/3  2/3 ).
4. (From [42]) (X1, X2)′ ∈ N(0, C), where 0 = (0, 0)′.
Ŷ = E[Y | FX] = E[Y | X]
z1 = FX1(x1)   (8.28)
z2 = FX2|X1=x1(x2)   (8.29)
E [Z|Y = 1]
8. In the mathematical theory of communication, see [23], (communication in the sense of transmission of messages via systems designed by electrical and computer engineers, not in the sense of social competence and human relations or human-computer interaction (HCI)) one introduces the mutual information I(X, Y) between two continuous random variables X and Y by
I(X, Y) := ∫_{−∞}^{∞} ∫_{−∞}^{∞} fX,Y(x, y) log [ fX,Y(x, y) / (fX(x)fY(y)) ] dx dy,   (8.30)
where fX,Y(x, y) is the joint p.d.f. of (X, Y), and fX(x) and fY(y) are the marginal p.d.f.s of X and Y, respectively. I(X, Y) is in fact a measure of dependence between random variables, and is theoretically speaking superior to correlation, as with I(X, Y) we measure more than the mere degree of linear dependence between X and Y.
Assume now that (X, Y) ∈ N( (0, 0)′, ( σ²   ρσ²
                                       ρσ²  σ²  ) ). Check that
I(X, Y) = −(1/2) log(1 − ρ²).   (8.31)
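Before working the exercise analytically, a Monte Carlo sanity check is possible (a sketch, using the closed-form Gaussian densities in the log-ratio):

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, rho, n = 1.0, 0.7, 400000
C = sigma ** 2 * np.array([[1.0, rho], [rho, 1.0]])
XY = rng.multivariate_normal([0.0, 0.0], C, size=n)
x, y = XY[:, 0], XY[:, 1]

# log f_{X,Y}(x,y) - log f_X(x) - log f_Y(y) for this centered Gaussian pair
q = (x ** 2 - 2 * rho * x * y + y ** 2) / ((1 - rho ** 2) * sigma ** 2)
log_joint = -0.5 * np.log((2 * np.pi * sigma ** 2) ** 2 * (1 - rho ** 2)) - 0.5 * q
log_marg = -np.log(2 * np.pi * sigma ** 2) - (x ** 2 + y ** 2) / (2 * sigma ** 2)

I_mc = np.mean(log_joint - log_marg)        # Monte Carlo estimate of (8.30)
I_exact = -0.5 * np.log(1 - rho ** 2)       # eq. (8.31)
```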
Aid: The following steps of solution are in a sense instructive, as they rely on the explicit conditional distribution of Y | X = x, and provide an interesting decomposition of I(X, Y) as an intermediate step. Someone may prefer other ways. Use
fX,Y(x, y)/(fX(x)fY(y)) = fY|X=x(y)/fY(y),
and then
I(X, Y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} fX,Y(x, y) log fY|X=x(y) dx dy − ∫_{−∞}^{∞} ∫_{−∞}^{∞} fX,Y(x, y) log fY(y) dx dy.
Then one inserts in the first term on the right hand side
and let
(Y1, Y2)′ = Q (X1, X2)′
and σ2² ≥ σ1².
(i) Find Cov(Y1, Y2) and show that Y1 and Y2 are independent for all θ if and only if σ2² = σ1².
(ii) Suppose σ2² > σ1². For which values of θ are Y1 and Y2 independent?
Hence we see that by rotating two independent Gaussian variables with variances 1 + ρ and 1 − ρ, ρ ≠ 0, by 45 degrees, we get a bivariate Gaussian vector, where the covariance of the two variables is equal to ρ.
11. (X, Y) is a bivariate Gaussian r.v. with Var[X] = Var[Y]. Show that X + Y and X − Y are independent r.v.'s.
12. Let
(X1, X2)′ ∈ N( (0, 0)′, ( σ1²     ρσ1σ2
                          ρσ1σ2   σ2²  ) ).
Show that Var[X1X2] = σ1²σ2²(1 + ρ²).
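A Monte Carlo check of the identity in exercise 12 (a sketch with assumed parameter values):

```python
import numpy as np

rng = np.random.default_rng(6)
s1, s2, rho, n = 1.2, 0.7, -0.4, 1000000
C = np.array([[s1 ** 2, rho * s1 * s2],
              [rho * s1 * s2, s2 ** 2]])
X = rng.multivariate_normal([0.0, 0.0], C, size=n)

var_mc = np.var(X[:, 0] * X[:, 1])             # sample Var[X1 X2]
var_exact = s1 ** 2 * s2 ** 2 * (1 + rho ** 2) # claimed closed form
```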
2 y = Qx is a rotation of x by the angle θ, as explained in any text on linear algebra, see, e.g., [5, p. 187].
U = X1 + X2 + X3 , V = X1 + 2X2 + 3X3 .
(a) X2 = 0.
(b) X2 = 2.
Assume that |ρ| < 1. Then (8.33) tells us that the standardized distance between E[X2 | X1] and its mean E[X2] is smaller than the standardized distance between X1 and its mean E[X1]. Here we think of X1 and X2 as a first and second measurement, respectively, of some property, like the height of a parent and the height of an adult child of that parent.
3 Sir Francis Galton, 1822−1911, contributed to statistics, sociology, psychology, anthropology, geography, meteorology, genetics and psychometry, was active as a tropical explorer and inventor, and was one of the first proponents of eugenics.
where ei is a real (i.e., has no complex numbers as elements) n × 1 eigenvector, i.e., Cei = λi ei and λi ≥ 0. The set {ei}_{i=1}^{n} is a complete orthonormal basis in Rn, which amongst other things implies that every x ∈ Rn can be written as
x = Σ_{i=1}^{n} (x′ei) ei,
where the number x′ei is the coordinate of x w.r.t. the basis vector ei. In addition, orthonormality is recalled as the property
ej′ei = 1 if i = j, and ej′ei = 0 if i ≠ j.   (8.34)
We make initially the simplifying assumption that C1 and C2 have the same eigenvectors, so that C1 ei = λi ei and C2 ei = µi ei. Then we can diagonalize the quadratic form x′C2C1x as follows.
C1 x = Σ_{i=1}^{n} (x′ei) C1 ei = Σ_{i=1}^{n} λi (x′ei) ei,   (8.35)
or
x′C2 = Σ_{j=1}^{n} µj (x′ej) ej′.   (8.36)
E[X1X2X3X4] = (∂⁴/(∂s1 ∂s2 ∂s3 ∂s4)) φ(X1,X2,X3,X4)(s) |_{s=0}.
As an additional aid one may say that this requires a lot of manual labour. Note also that we have
(∂^k/∂si^k) φX(s) |_{s=0} = i^k E[Xi^k], i = 1, 2, . . . , n.   (8.38)
1. Bussgang’s Theorem
Let g(y) be a Borel function such that E [|g(Y )|] < ∞. We are interested in finding
2. Bussgang's Theorem and Stein's Lemma Assume next that g(y) is (almost everywhere) differentiable with the first derivative g′(y) such that E[|g′(Y)|] < ∞. Show that
E[g′(Y)] = Cov(Y, g(Y)) / σ2².   (8.40)
Aid: Use an integration by parts in the integral expression for Cov (Y, g(Y )).
In statistics (8.41) is known as Stein's lemma, whereas the (electrical) engineering literature refers to (8.39) and/or (8.40) as Bussgang's theorem5, see, e.g., [85, p. 340]. In the same way one can also prove that if X ∈ N(µ, σ²),
E[g(X)(X − µ)] = σ² E[g′(X)],
which is known as Stein's lemma, too. Stein's lemma has a 'Poissonian' counterpart in Chen's lemma (2.123). A repeated application of Stein's lemma on the function g(x) = x^{2n−1} yields the moment identity (4.50), too.
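Stein's lemma is easy to test numerically (a sketch with g(x) = x³ and µ = 0, for which both sides can also be computed exactly: E[g(X)X] = E[X⁴] = 3σ⁴ = σ²E[3X²]):

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, n = 0.0, 1.5, 1000000
X = rng.normal(mu, sigma, size=n)

g = X ** 3                               # g(x) = x^3, so g'(x) = 3x^2
lhs = np.mean(g * (X - mu))              # E[g(X)(X - mu)]
rhs = sigma ** 2 * np.mean(3 * X ** 2)   # sigma^2 E[g'(X)]
exact = 3 * sigma ** 4                   # both sides, analytically
```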
The formula (8.41) has been applied as a test of Gaussianity in time series and signal analysis.
Can Bussgang's theorem−Stein's lemma from (8.41) be used here, and if yes, how? The formula in (8.44) is known in circuit theory as the 'input/output moment equation for the relay correlator'.
(a) Show that if fX,Y(x, y) is the joint bivariate p.d.f., then
(∂^n/∂ρ^n) fX,Y(x, y) = (∂^{2n}/(∂x^n ∂y^n)) fX,Y(x, y).
(b) Show that if Q(x, y) is a sufficiently differentiable function, integrable with its derivatives w.r.t. x, y, then
(∂^n/∂ρ^n) E[Q(X, Y)] = E[ (∂^{2n}/(∂x^n ∂y^n)) Q(X, Y) ].   (8.45)
This is known as Price's Theorem.
(c) Let Q(x, y) = xg(y), where g(y) is differentiable. Deduce (8.41) by means of (8.45).
Remark 8.5.2 In the applications of Bussgang’s and Price’s theorems the situation is mostly that X ↔ X(t)
and Y ↔ X(t + h) , where X(t) and X(t + h) are random variables in a Gaussian weakly stationary stochastic
process, which is the topic of the next chapter.
Chapter 9
Stochastic Processes: Weakly Stationary and Gaussian
X = {X(t) | t ∈ T },
where T is the index set of the process. All random variables X(t) are defined on the same probability space
(Ω, F , P).
In these lecture notes the set T is R or a subset of R, e.g., T = [0, ∞) or T = (−∞, ∞) or T = [a, b], a < b,
and is not countable. We shall thus talk about stochastic processes in continuous time.1
There are three ways to view a stochastic process;
• For each fixed ω ∈ Ω, T ∋ t 7→ X(t, ω) is a function of t called the sample path (corresponding to ω).
The mathematical theory deals with these questions as follows. Let now t1 , . . . , tn be n points in T and
X(t1 ), . . . , X(tn ) be the n corresponding random variables in X. Then for an arbitrary set of real numbers
x1 , x2 , . . . , xn we have the joint distribution
228 CHAPTER 9. STOCHASTIC PROCESSES: WEAKLY STATIONARY AND GAUSSIAN
Suppose that we have a family F of joint distribution functions or finite dimensional distributions Ft1 ,...,tn given
for all n and all t1 , . . . , tn ∈ T
F = {Ft1 ,...,tn }(t1 ,...,tn )∈T n ,n∈Z+ .
The question is when we can claim that there exists a stochastic process with F as its family of finite dimensional
distributions.
Theorem 9.1.1 (Kolmogorov Consistency Theorem) Suppose that F is given and Ft1 ,...,tn ∈ F, and
Ft1 ,...,ti−1 ,ti+1 ,...,tn ∈ F. If it holds that
Ft1 ,...,ti−1 ,ti+1 ,...,tn (x1 , . . . , xi−1 , xi+1 , . . . , xn ) = lim Ft1 ,...,tn (x1 , . . . , xn ) , (9.1)
xi ↑∞
then there exists a probability space (Ω, F , P) and a stochastic process of random variables X(t), t ∈ T , on
(Ω, F , P) such that F is its family of finite dimensional distributions.
Proof is omitted here. A concise and readable proof is found in [68, Chapter 1.1].
The condition (9.1) says in plain words that if one takes the joint distribution function for n variables from F,
it has to coincide with the marginal distribution for these n variables obtained by marginalization of a joint
distribution function from F for a set of n + 1 (or, any higher number of) variables that contains these n
variables.
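The marginalization condition (9.1) can be illustrated in a discrete toy setting, where letting xi ↑ ∞ in a joint distribution function corresponds to summing the joint p.m.f. over the i-th coordinate; the 3-point state space below is an arbitrary illustrative choice:

```python
import numpy as np

# Discrete analogue of the consistency condition (9.1): marginalizing a joint
# distribution over one coordinate must reproduce the lower-dimensional member
# of the family.  The 3x3x3 state space is an arbitrary choice for the demo.
rng = np.random.default_rng(1)
p123 = rng.random((3, 3, 3))
p123 /= p123.sum()              # joint pmf of (X(t1), X(t2), X(t3))

p13 = p123.sum(axis=1)          # marginalize out X(t2): the analogue of
                                # letting x2 -> infinity in the joint CDF
print(p13.shape, p13.sum())     # a valid joint pmf for (X(t1), X(t3))
```

The one-dimensional marginal of X(t1) is the same whether it is computed from the trivariate family or from the bivariate one, which is exactly the consistency that (9.1) demands.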
Example 9.1.2 Let φ ∈ U (0, 2π) and set
X(t) = A sin(wt + φ), −∞ < t < ∞,
where the amplitude A and the frequency w are fixed. This is a stochastic process, a sine wave with a random
phase. We can specify the joint distributions. Take X = (X(t1 ), X(t2 ), . . . , X(tn ))′; the characteristic function
is
φX (s) = E [e^{is′X}] = E [e^{iR sin(φ+θ)}],
where
R = A √( (Σ_{k=1}^n sk cos(wtk ))² + (Σ_{k=1}^n sk sin(wtk ))² ) = A √( Σ_{k=1}^n Σ_{j=1}^n sk sj cos(w(tk − tj )) )
and
θ = tan⁻¹ ( Σ_{k=1}^n sk sin(wtk ) / Σ_{k=1}^n sk cos(wtk ) ).
The required details are left for the diligent reader. Hence, see [8, pp. 38−39],
φX (s) = (1/2π) ∫_0^{2π} e^{iR sin(φ+θ)} dφ = (1/2π) ∫_0^{2π} e^{iR sin(φ)} dφ = J0 (R),
9.1. STOCHASTIC PROCESSES 229
where J0 is the Bessel function of the first kind of order zero, [3, pp. 248−249, eq. (6.30)], [96, Theorem 8.1,
eq. (12), p. 327] or [92, p. 270]. Needless to say, the joint distribution is not a multivariate Gaussian
distribution.
Figures 9.1 and 9.2 illustrate the ways to view a stochastic process stated above. We have the
probability space (Ω, F , P), where Ω = [0, 2π], F is the restriction of the Borel sigma field B (R) to [0, 2π], and P
is the uniform distribution on Borel sets in [0, 2π]. Thus φ ↔ ω. For one φ drawn from U (0, 2π), we have in
figure 9.1 one sample path (or, a random waveform) of X(t) = sin(0.5t + φ) (w = 0.5 and A = 1). In figure
9.2 the graphs are plots of an ensemble of five sample paths of the process corresponding to five samples from
U (0, 2π). If we focus on the random variable X(20), we see in figure 9.2 five outcomes of X(20). For the third
point of view, we see, e.g., X(20, ω5 ) = 0.9866, the green path at t = 20. In the figure 9.3 we see the histogram
for 1000 outcomes of X(20).
Remark 9.1.1 The histogram in figure 9.3 can be predicted analytically. We have w = 0.5, A = 1, t = 20,
i.e., X(20) = sin(10 + φ) =: H(φ). Since we can by periodicity move to any interval of length 2π, we can consider X(20) =
sin(φ). It is shown in example 2.4.2 that the p.d.f. of X(20) is
fX(20) (x) = 1/(π √(1 − x²)) for −1 < x < 1, and fX(20) (x) = 0 elsewhere.
Alternatively, [8, p. 18], the characteristic function of any random variable like X(20) in the random sine wave
is, since we can by periodicity move to any interval of length 2π,
ϕX(20) (t) = E [e^{it sin(φ)}] = (1/2π) ∫_0^{2π} e^{it sin(φ)} dφ = J0 (t).
Example 9.1.3 We generalize example 9.1.2 above. Let φ ∈ U (0, 2π) and A ∈ Ra(2σ²), which means that
A has the p.d.f.
fA (x) = (x/σ²) e^{−x²/(2σ²)} for x ≥ 0, and fA (x) = 0 elsewhere.
Let φ and A be independent. We have a stochastic process, which is a sine wave with a random amplitude and
a random phase. Then we invoke the sine addition formulas
Figure 9.1: One sample path of X(t) = sin(0.5t + φ) for t ∈ [0, 20], φ ∈ U (0, 2π).
x1 = A sin(φ), x2 = A cos(φ),
solve
A = √(x1² + x2²), tan(φ) = x1 /x2 ,
compute the Jacobian J, and evaluate fA (√(x1² + x2²)) · (1/2π) · |J|.
The characteristic function for X = (X(t1 ), X(t2 ), . . . , X(tn ))′ is
φX (s) = E [e^{i A sin(φ) Σ_{k=1}^n sk cos(wtk )}] · E [e^{i A cos(φ) Σ_{k=1}^n sk sin(wtk )}]
= e^{−(σ²/2) Σ_{j=1}^n Σ_{k=1}^n sj sk cos(w(tk −tj ))}.
A second glance at the formula obtained reveals that this is the characteristic function of a multivariate
normal distribution, where the covariance matrix depends on the time points {tk }_{k=1}^n only through their mutual
differences tk − tj . As will be understood more fully below, this means that the random sine wave {X(t) | −∞ <
t < ∞} in this example is a weakly stationary Gaussian stochastic process with the autocorrelation
function CovX (t, s) = σ² cos(wh) for h = t − s. We shall now define in general terms the autocorrelation functions
and related quantities for stochastic processes.
Figure 9.2: Five sample paths of X(t) = sin(0.5t + φ) for t ∈ [0, 20], for five outcomes of φ ∈ U (0, 2π).
Of the random variable X(20) we see five outcomes, (ωi ≡ φi ): X(20, ω1 ) = 0.5554, X(20, ω2 ) = 0.0167,
X(20, ω3 ) = −0.9805, X(20, ω4 ) = −0.0309, X(20, ω5 ) = 0.9866.
Here, and in the sequel, the computational rules (2.47) and (2.48) find frequent and obvious applications
without explicit reference.
Figure 9.3: The histogram for 1000 outcomes of X(20), X(t) = sin(0.5t + φ), φ ∈ U (0, 2π).
These moment functions depend only on the bivariate joint distributions Ft,s . We talk also about second order
distributions and about second order properties of a stochastic process.
The terminology advocated above is standard in the engineering literature, e.g., [38, 50, 56, 71, 80, 85, 97,
101], but for a statistician the autocorrelation function would rather have to be CovX (t, s)/√(VarX (t) · VarX (s)).
Example 9.1.4 Set X(t) = X1 cos(wt) + X2 sin(wt), where X1 ∈ N (0, σ²) and X2 ∈ N (0, σ²) are independent. Then the mean function is µX (t) = 0, and the autocorrelation function is
RX (t, s) = E [X(t)X(s)] = σ² cos(w(t − s)),
a function of t − s only.
The autocorrelation function has several distinct properties that are necessary for a function to be an autocor-
relation function. For example, if RX (t, s) is an autocorrelation function, then the following Cauchy-Schwarz
inequality holds.
| RX (t, s) | ≤ √(RX (t, t)) · √(RX (s, s)), for all t, s ∈ T . (9.6)
A characterization of autocorrelation functions is given in the next theorem.
Theorem 9.1.5 RX (t, s) is the autocorrelation function of a process X = {X(t) | t ∈ T }, if and only if it has
the following properties.
1. Symmetry
RX (t, s) = RX (s, t), for all t, s ∈ T . (9.7)
2. Nonnegative definiteness
Σ_{i=1}^n Σ_{j=1}^n xi xj RX (ti , tj ) ≥ 0 (9.8)
for all n, all t1 , . . . , tn ∈ T and all real x1 , . . . , xn .
Clearly (9.8) means that every n × n matrix (RX (ti , tj ))_{i=1,j=1}^{n,n} is nonnegative definite as in (8.4).
The important question raised by theorem 9.1.5 above is how to check that a given symmetric function
R(t, s) of (t, s) ∈ T × T is nonnegative definite.
One way to decide the question, in the examples above and elsewhere, is to find a random process that has R(t, s)
as its autocorrelation function. This can, on occasion, require a lot of ingenuity and effort and is prone to
errors. We shall give several examples of autocorrelation functions and corresponding underlying processes. It
should be kept in mind right from the start that there can be many different stochastic processes with the same
autocovariance function.
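A practical, if only partial, numerical test of nonnegative definiteness is to evaluate the matrix (R(ti , tj )) at a batch of time points and inspect its smallest eigenvalue; this can refute a candidate, but of course proves nothing for all n and all time points. A sketch with the two candidates min(t, s) and e^{−|t−s|} from this chapter:

```python
import numpy as np

# Numerical (necessary-condition) check of nonnegative definiteness: build the
# Gram matrix (R(t_i, t_j)) at random time points and look at its smallest
# eigenvalue.  A clearly negative eigenvalue would refute the candidate.
def smallest_eigenvalue(R, ts):
    G = np.array([[R(t, s) for s in ts] for t in ts])
    return np.linalg.eigvalsh(G).min()

rng = np.random.default_rng(3)
ts = np.sort(rng.uniform(0.0, 10.0, size=25))
print(smallest_eigenvalue(lambda t, s: min(t, s), ts))            # >= 0 up to rounding
print(smallest_eigenvalue(lambda t, s: np.exp(-abs(t - s)), ts))  # >= 0 up to rounding
```

Both candidates pass, consistent with their being genuine autocorrelation functions (of the Wiener process and of the Ornstein-Uhlenbeck process, respectively, as discussed below).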
There is a class of processes with random variables X(t) ∈ L²(Ω, F , P) called weakly stationary processes
that has been extensively treated in the textbook and engineering literature and practice, cf. [1, 38,
50, 56, 71, 80, 85, 89, 97, 101, 103]. Weakly stationary processes can be constructed by means of linear analog
filtering of (white) noise, as is found in the exercises of section 9.7.5. The weakly stationary processes are defined
as having a constant mean function and an autocorrelation function that is a function of the difference between t and
s, cf. example 9.1.4. The weakly stationary processes will be defined and treated in section 9.3.
We begin with a few examples of families of autocorrelation functions.
3 The answer may be found in http://arxiv.org/abs/0710.5024
Example 9.1.7 (Bilinear forms of autocorrelation functions) Take any real function f (t), t ∈ T . Then
R(t, s) = f (t) · f (s)
is an autocorrelation function of a stochastic process. In fact, take X ∈ N (0, 1), and set X(t) = f (t)X. Then
R(t, s) is the autocorrelation of the process {X(t) | t ∈ T }. The mean function is the constant = 0. Thus
R(t, s) = Σ_{i=0}^n fi (t) · fi (s)
and even
R(t, s) = Σ_{i=0}^∞ fi (t) · fi (s) (9.10)
are autocorrelation functions, too.
The next example is a construction of a stochastic process that leads to the bilinear R(t, s) as given in (9.10),
see [7, pp. 6−10] or [103, pp. 82−88].
Example 9.1.8 Let Xi ∈ N (0, 1) be I.I.D. for i = 0, 1, . . .. Take for i = 0, 1, . . . the real numbers λi ≥ 0 such
that Σ_{i=0}^∞ λi < ∞. Let ei (t) for i = 0, 1, . . . be a sequence of functions of t ∈ [0, T ] such that
∫_0^T ei (t)ej (t)dt = 1 if i = j, and = 0 if i ≠ j, (9.11)
and that (ei )_{i=0}^∞ is an orthonormal basis in L²([0, T ]), [96, pp. 279−286]. We set
XN (t) := Σ_{i=0}^N √λi Xi ei (t).
Then one can show using the methods in section 7.4.2 that for every t ∈ [0, T ]
XN (t) → X(t) = Σ_{i=0}^∞ √λi Xi ei (t) in mean square, (9.12)
as N → ∞. Clearly, by theorem 7.4.2 X(t) is a Gaussian random variable. The limit is in addition a stochastic
process such that
E [X(t)X(s)] = Σ_{i=0}^∞ √λi ei (t) · √λi ei (s),
where we used theorem 7.3.1. But this is (9.10) with fi (t) = √λi ei (t). This example will be continued in
example 9.2.3 in the sequel and will eventually yield a construction of the Wiener process, see section 10.3
below.
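A minimal numerical sketch of this construction, with the illustrative choices e_n(t) = √(2/T) cos(πnt/T), n = 1, 2, . . . (an orthonormal system on [0, T ]), and λ_n = 1/n², comparing the empirical covariance of the truncated sum with the bilinear form (9.10):

```python
import numpy as np

# Truncated version of the series in example 9.1.8 with the cosine system
# e_n(t) = sqrt(2/T)*cos(pi*n*t/T) (an illustrative orthonormal choice) and
# summable lambda_n = 1/n^2.  The empirical covariance of X_N at two times is
# compared with the bilinear form sum_n lambda_n e_n(t) e_n(s).
rng = np.random.default_rng(4)
T, N, M = 1.0, 8, 400_000
lam = 1.0 / (np.arange(1, N + 1) ** 2)
ts = np.array([0.3, 0.7])
e = np.sqrt(2 / T) * np.cos(np.pi * np.arange(1, N + 1)[:, None] * ts[None, :] / T)

Xi = rng.standard_normal((M, N))            # i.i.d. N(0,1) coefficients
paths = Xi @ (np.sqrt(lam)[:, None] * e)    # X_N(0.3), X_N(0.7) for M samples

emp = np.mean(paths[:, 0] * paths[:, 1])    # empirical E[X(0.3) X(0.7)]
bilinear = np.sum(lam * e[:, 0] * e[:, 1])  # sum_n lambda_n e_n(t) e_n(s)
print(emp, bilinear)
```

The two numbers agree up to Monte Carlo error, which is the content of the covariance computation above.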
Example 9.1.9 (A further bilinear form of autocorrelation functions) By some further extensions of
horizons, [46, chapter 2.3], we can show that integrals of the form
R(t, s) = ∫_a^b f (t, λ) · f (s, λ)dλ
are autocorrelation functions.
Example 9.1.10 [Separable autocorrelation functions] We have here a family of autocorrelation functions
that turn out to correspond to certain important processes.
1. T = [0, ∞) and
R(t, s) = min(t, s) (9.13)
is an autocorrelation function of a stochastic process. How can one claim this? The answer is deferred
till later, when it will be shown that this is the autocorrelation function of a Wiener process.
2. T = [0, 1] and
R(t, s) = s(1 − t) if s ≤ t, and R(t, s) = (1 − s)t if s ≥ t. (9.14)
This is the autocorrelation function of a process known as the Brownian bridge or the tied down Wiener
process.
This is the autocorrelation function of a weakly stationary process, as it is a function of |t − s|. One
process having this autocorrelation function is the stationary Ornstein-Uhlenbeck process in chapter 11,
another is the random telegraph signal in chapter 12.5.
By this we see that all the preceding examples (9.13)−(9.15) are special cases of (9.16) for appropriate
choices of u(·) and v(·). Processes with this kind of autocorrelation function are the so-called Gauss-
Markov processes.
Example 9.1.11 (Periodic Autocorrelation) [97, pp. 272−273] Let {Bi }i≥1 be a sequence of independent
random variables with Bi ∈ Be(1/2) for all i. Define
Θi = π/2 if Bi = 1, and Θi = −π/2 if Bi = 0.
Note that in this example T is the length of a time interval, actually the time for transmission of one bit, not
the overall index set, as elsewhere in this text. Set
X(t) = cos (2πfc t + Θ(t)) , where Θ(t) = Θk for kT ≤ t < (k + 1)T .
This process is known as Phase-Shift Keying (PSK), which is a basic method of modulation in transmission
of binary data.
We determine the mean function and the autocorrelation function of PSK. It is helpful to introduce two auxiliary
functions
sI (t) = cos (2πfc t) if 0 ≤ t < T , and sI (t) = 0 else,
and
sQ (t) = sin (2πfc t) if 0 ≤ t < T , and sQ (t) = 0 else.
Then we get by the cosine addition formula that
cos (2πfc t + Θ(t)) = cos (Θ(t)) cos (2πfc t) − sin (Θ(t)) sin (2πfc t)
= Σ_{k=−∞}^∞ [cos (Θk ) sI (t − kT ) − sin (Θk ) sQ (t − kT )] .
This looks like an infinite sum, but since the supports of sI (· − kT ) and sQ (· − kT ) do not overlap for different
k, at most one term is nonzero for each fixed t, and there is no need to prove any convergence.
The mean function follows easily, since cos (Θk ) = 0 and sin (Θk ) = ±1 with equal probabilities. Hence µX (t) = 0.
Here we have, if k ≠ l, by independence
E [sin (Θk ) sin (Θl )] = 1 · 1 · P (Θk = π/2) P (Θl = π/2) + 1 · (−1) · P (Θk = π/2) P (Θl = −π/2)
+ (−1) · 1 · P (Θk = −π/2) P (Θl = π/2) + (−1) · (−1) · P (Θk = −π/2) P (Θl = −π/2)
= 1/4 − 1/4 − 1/4 + 1/4 = 0.
If k = l, then E [sin²(Θk )] = 1/2 + 1/2 = 1.
Therefore we have
RX (t, s) = Σ_{k=−∞}^∞ sQ (t − kT )sQ (s − kT ).
Since the support4 of sQ (t) is [0, T [, there is no overlap, i.e., for any fixed pair (t, s) only one of the product
terms in the sum can be nonzero. Also, if t and s are not in the same period, then every term, and hence RX (t, s), is zero.
4 By the support of a function f (t) we mean the set of points t where f (t) ≠ 0.
9.2. MEAN SQUARE CALCULUS: THE MEAN SQUARE INTEGRAL 237
If we put
⟨t⟩ = t/T − ⌊t/T ⌋, where ⌊t/T ⌋ is the integer part of t/T ,
Thus the autocorrelation function RX (t, s) of PSK is a periodic function in the sense that RX (t, s) = RX (t +
T, s + T ) (i.e., periodic with the same period in both variables). The textbook [38, chapter 12] and the mono-
graph [60] contain specialized treatments of the theory and applications of stochastic processes with periodic
autocorrelation functions.
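The vanishing of the cross terms can be checked deterministically: implementing s_Q and the sum above (with the illustrative values f_c = 2 and T = 1) confirms the periodicity RX (t, s) = RX (t + T, s + T ) stated above:

```python
import numpy as np

# Deterministic check that the PSK autocorrelation
# R_X(t, s) = sum_k s_Q(t - kT) * s_Q(s - kT) satisfies
# R_X(t, s) = R_X(t + T, s + T).  fc = 2 and T = 1 are illustrative values.
fc, T = 2.0, 1.0

def sQ(t):
    # sin(2*pi*fc*t) on [0, T), zero elsewhere
    return np.sin(2 * np.pi * fc * t) if 0.0 <= t < T else 0.0

def RX(t, s):
    # only k with t - kT or s - kT in [0, T) can contribute
    ks = range(int(min(t, s) // T) - 1, int(max(t, s) // T) + 2)
    return sum(sQ(t - k * T) * sQ(s - k * T) for k in ks)

for (t, s) in [(0.2, 0.4), (1.3, 1.6), (0.9, 1.1)]:
    print(RX(t, s), RX(t + T, s + T))   # each pair is equal
```

Note that for (t, s) = (0.9, 1.1), which lie in different bit periods, the autocorrelation is zero, as argued above.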
One way of constructing stochastic processes that have a given autocorrelation function is by mean square
integrals of stochastic processes in continuous time, as defined next.
where a = t0 < t1 < . . . < tn−1 < tn = b and maxi |ti − ti−1 | → 0, as n → ∞.
The sample paths of a process {X(t)|t ∈ T } need not be integrable in Riemann’s sense5 . Since a mean square
integral does not involve sample paths of {X(t)|t ∈ T }, we are elaborating an easier notion of integration.
Theorem 9.2.1 The mean square integral ∫_a^b X(t)dt of {X(t)|t ∈ T } exists over [a, b] ⊆ T if and only if the
double integral
∫_a^b ∫_a^b E [X(t)X(u)] dtdu
exists and is finite. In that case
E [∫_a^b X(t)dt] = ∫_a^b µX (t)dt
and
Var [∫_a^b X(t)dt] = ∫_a^b ∫_a^b CovX (t, u)dtdu. (9.20)
5 The Riemann integral is the integral handed down by the first courses in calculus, see, e.g., [69, chapter 6].
Proof Let Yn = Σ_{i=1}^n X(ti )(ti − ti−1 ). Evoking Loève's criterion in theorem 7.3.3 we study
E [Yn Ym ] = Σ_{i=1}^n Σ_{j=1}^m E [X(ti )X(uj )] (ti − ti−1 )(uj − uj−1 ),
in case the double integral exists, when a = t0 < t1 < . . . < tn−1 < tn = b and maxi |ti − ti−1 | → 0 as n → ∞.
So the assertion follows, as claimed.
The expectation E [∫_a^b X(t)dt] is obtained as
E [∫_a^b X(t)dt] = E [lim△ Σ_{i=1}^n X(ti )(ti − ti−1 )],
where the auxiliary notation lim△ refers to mean square convergence as a = t0 < t1 < . . . < tn−1 < tn = b and
maxi |ti − ti−1 | → 0 as n → ∞, and by theorem 7.3.1 (a)
E [∫_a^b X(t)dt] = lim_{n→∞} E [Yn ] = lim_{n→∞} Σ_{i=1}^n E [X(ti )] (ti − ti−1 )
= lim_{n→∞} Σ_{i=1}^n µX (ti )(ti − ti−1 ) = ∫_a^b µX (t)dt.
Here
E [(∫_a^b X(t)dt)²] = E [∫_a^b X(t)dt · ∫_a^b X(u)du] = lim_{min(m,n)→∞} E [Yn · Ym ] ,
where E [Yn · Ym ] = Σ_{i=1}^n Σ_{j=1}^m E [X(ti )X(uj )] (ti − ti−1 )(uj − uj−1 ). Thus
Var [∫_a^b X(t)dt] = ∫_a^b ∫_a^b E [X(t)X(u)] dtdu − (∫_a^b µX (t)dt)²
= ∫_a^b ∫_a^b (E [X(t)X(u)] − µX (t)µX (u)) dtdu = ∫_a^b ∫_a^b CovX (t, u)dtdu,
which is (9.20).
Hence we may define a new stochastic process Y = {Y (t) | t ∈ T } by a stochastic integral. For each t ∈ T we set
Y (t) = ∫_a^t X(s)ds.
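Formula (9.20) can be checked numerically for the random cosine wave X(t) = X1 cos(wt) + X2 sin(wt) with X1, X2 i.i.d. N (0, σ²), for which CovX (t, u) = σ² cos(w(t − u)); the parameter values below are illustrative:

```python
import numpy as np

# Check of (9.20) for X(t) = X1*cos(w*t) + X2*sin(w*t), X1, X2 iid N(0, sigma^2),
# for which Cov_X(t, u) = sigma^2 * cos(w*(t - u)).  The variance of the mean
# square integral over [0, b] is computed three ways.
rng = np.random.default_rng(5)
w, sigma, b, n = 1.3, 2.0, 3.0, 1000
t = np.linspace(0.0, b, n)
dt = t[1] - t[0]

# 1) double Riemann sum of the covariance, i.e. the right side of (9.20)
cov = sigma**2 * np.cos(w * (t[:, None] - t[None, :]))
var_double = cov.sum() * dt * dt

# 2) closed form, from cos(w(t-u)) = cos(wt)cos(wu) + sin(wt)sin(wu)
var_exact = sigma**2 * ((np.sin(w * b) / w) ** 2 + ((1 - np.cos(w * b)) / w) ** 2)

# 3) Monte Carlo: integrate sample paths by a Riemann sum, take the variance
ic, isn = np.cos(w * t).sum() * dt, np.sin(w * t).sum() * dt
X1, X2 = sigma * rng.standard_normal((2, 200_000))
var_mc = (X1 * ic + X2 * isn).var()
print(var_double, var_exact, var_mc)
```

All three numbers agree up to discretization and Monte Carlo error, illustrating that the distribution of the integral is governed entirely by the mean and covariance functions.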
Example 9.2.3 We continue with example 9.1.8, where we constructed the random variables
X(t) = Σ_{i=0}^∞ √λi Xi ei (t), t ∈ [0, T ] (9.21)
and found their autocorrelation function R(t, s) as a bilinear form. The expression (9.21) is known as the
Karhunen-Loève expansion of X(t). When we consider the mean square integral
∫_0^T X(t)ei (t)dt,
we obtain by the results on this category of integrals above and by the results on convergence in mean square
underlying (9.21) that
∫_0^T X(t)ej (t)dt = Σ_{i=0}^∞ √λi Xi ∫_0^T ej (t)ei (t)dt = √λj Xj . (9.22)
On the other hand, by (9.22),
∫_0^T R(t, s)ei (s)ds = E [X(t) ∫_0^T X(s)ei (s)ds] = √λi E [X(t)Xi ] = λi ei (t). (9.23)
This is an integral equation, which is to be solved w.r.t. the ei and the λi . It holds in fact that we can regard
the ei 's as eigenfunctions and the λi 's as the corresponding eigenvalues of the autocorrelation function R(t, s). If
R(t, s) is continuous in [0, T ] × [0, T ], we can always first solve (9.23) w.r.t. λi and ei and then construct
X(t) = Σ_{i=0}^∞ √λi Xi ei (t). For the rigorous mathematical details we refer to [46, pp. 62−69]. The insights in
this example will be made use of in section 10.3.
It follows that for a weakly stationary process even the variance function is a constant, say σX², as a function
of t, since
VarX (t) := E [X²(t)] − µX²(t) = RX (0) − µ² = σX²,
and then
CovX (0) = σX².
This is another necessary condition for a function RX (h) to be an autocorrelation function of a weakly stationary
process.
We have already encountered an example of a weakly stationary process in example 9.1.4.
Theorem 9.3.1 [Bochner's Theorem] A function R(h) is nonnegative definite if and only if it can be
represented in the form
R(h) = (1/2π) ∫_{−∞}^∞ e^{ihf} dS(f ), (9.25)
where S(f ) is real, nondecreasing and bounded.
9.3. WEAKLY STATIONARY PROCESSES 241
Proof of ⇐: we assume that we have a function R(h) given by (9.25), and show that R(h) is
nonnegative definite. Assume that the derivative
s(f ) = dS(f )/df
exists; thus s(f ) ≥ 0. Then take any x1 , . . . , xn and t1 , . . . , tn ,
Σ_{i=1}^n Σ_{j=1}^n xi R(ti − tj )xj = (1/2π) ∫_{−∞}^∞ Σ_{i=1}^n Σ_{j=1}^n xi e^{i(ti −tj )f} xj s(f )df
= (1/2π) ∫_{−∞}^∞ Σ_{i=1}^n xi e^{i ti f} · Σ_{j=1}^n xj e^{−i tj f} s(f )df = (1/2π) ∫_{−∞}^∞ | Σ_{i=1}^n xi e^{i ti f} |² s(f )df ≥ 0.
Here |z|² = z · z̄ is the squared modulus of a complex number, where z̄ is the complex conjugate of z.
One elegant and pedagogical proof of the converse statement, namely that if R(h) is a nonnegative definite
function, then we can express it as in (9.25), is due to H. Cramér and can be found in [25, pp. 126−128].
The function S(f ) is called the spectral distribution function. If S(f ) has a derivative,
d
S(f ) = s(f ),
df
then s(f ) is called the spectral density. Clearly s(f ) ≥ 0, as S(f ) is nondecreasing. Since R(h) = R(−h), we
get also that s(f ) = s(−f ) is to be included in the set of necessary conditions for R(h) to be an autocorrelation
function.
Another term used for s(f ) is power spectral density, as
E [(X(t))²] = RX (0) = (1/2π) ∫_{−∞}^∞ sX (f )df.
The electrical engineering6 statement is that sX (f ) is the density of power at the frequency f .
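With this convention, the pair R(h) = e^{−a|h|} and s(f ) = 2a/(a² + f²) is a standard example, and the relation R(h) = (1/2π) ∫ e^{ihf} s(f ) df can be verified by direct numerical integration (the value a = 1.7 and the truncation of the frequency axis are illustrative choices):

```python
import numpy as np

# Numerical check of R(h) = (1/2pi) * int e^{ihf} s(f) df for the pair
# R(h) = exp(-a|h|)  <->  s(f) = 2a/(a^2 + f^2).
# At h = 0 the integral must return R(0) = 1; at h = 0.8 it must return
# exp(-0.8*a).  a = 1.7 and the grid are illustrative choices.
a = 1.7
f = np.linspace(-1000.0, 1000.0, 2_000_001)
df = f[1] - f[0]
s = 2 * a / (a**2 + f**2)                  # spectral density: s(f) >= 0, even

r0 = s.sum() * df / (2 * np.pi)            # should be close to R(0) = 1
h = 0.8
rh = (np.cos(h * f) * s).sum() * df / (2 * np.pi)
print(r0, rh, np.exp(-a * h))              # rh close to exp(-a*h)
```

Since s(f ) is even, only the cosine part of e^{ihf} contributes, which is why R(h) = R(−h) is automatic in this representation.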
Operationally, if one can find a Fourier transform s(f ) of a function R(h) with the properties above, then
R(h) is an autocorrelation function. An example of such a Fourier pair is
R(h) = e^{−a|h|} (cos(bh) + (a/b) sin(b|h|)) ↔ s(f ) = 4a(a² + b²) / ((a² + (f − b)²)(a² + (f + b)²)).
6 If X(t) is a voltage or current developed across a 1-ohm resistor, then (X(t))2 is the instantaneous power absorbed by the
resistor.
Definition 9.3.2 Let X = {X(t)|t ∈ T } be a stochastic process in continuous time. Then the process is mean
square continuous if, when t + τ ∈ T ,
E [(X(t + τ ) − X(t))²] → 0
as τ → 0.
Expanding the square we get
E [(X(t + τ ) − X(t))²] = E [X(t + τ )X(t + τ )] − 2E [X(t)X(t + τ )] + E [X(t)X(t)]
= CovX (t + τ, t + τ ) − 2CovX (t, t + τ ) + CovX (t, t) + (µX (t + τ ) − µX (t))².
We get a neat result from this, if we assume that X is weakly stationary, as then from the above
E [(X(t + τ ) − X(t))²] = CovX (0) − 2CovX (τ ) + CovX (0) = 2 (CovX (0) − CovX (τ )).
Theorem 9.3.2 A weakly stationary process is mean square continuous, if and only if CovX (τ ) is continuous
in the origin.
Continuity in mean square does not without further requirements imply continuity of sample paths.
Definition 9.4.1 A stochastic process X = {X(t) | t ∈ T } is called Gaussian, if every stochastic n-vector
(X(t1 ), X(t2 ), · · · , X(tn ))′ is a multivariate normal vector for all n and all t1 , t2 , · · · , tn .
The mean vector has the entries
µX (ti ) = E(X(ti )), i = 1, . . . , n,
9.4. GAUSSIAN PROCESSES 243
and the n × n covariance matrix C(t1 , t2 , · · · , tn ) = (cij )_{i=1,j=1}^{n,n} has the entries
cij = CovX (ti , tj ),
i.e., the entries in the covariance matrix are the appropriate values of the autocovariance function.
We show next a theorem of existence for Gaussian processes. This sounds perhaps like a difficult thing to
do, but by the Kolmogorov consistency theorem, or (9.1), all we need to show in this case is effectively the
consistency of the finite dimensional distributions. Note also that for any function f (t), the process X(t) =
f (t) + Z(t), where Z is a zero mean Gaussian process, has mean function µX (t) = E(X(t)) = f (t). Hence we
may without loss of generality prove the existence of Gaussian processes by assuming zero as mean function.
Theorem 9.4.1 Let R(t, s) be a symmetric and nonnegative definite function. Then there exists a Gaussian
stochastic process X with R(t, s) as its autocorrelation function and the constant zero as mean function.
Proof Since the mean function is zero, and since R(t, s) is a symmetric and nonnegative definite function, we
find that
C(t1 , t2 , · · · , tn+1 ) = (R(ti , tj ))_{i=1,j=1}^{n+1,n+1}
is the covariance matrix of a random vector (X(t1 ), X(t2 ), · · · , X(tn+1 ))′. We set for ease of writing
Ctn+1 = (R(ti , tj ))_{i=1,j=1}^{n+1,n+1}.
We know by [49, p. 123] that we can take the vector (X(t1 ), X(t2 ), · · · , X(tn+1 ))′ as multivariate normal. We
set for simplicity of writing
Xtn+1 = (X(t1 ), X(t2 ), · · · , X(tn+1 ))′,
and let
Ftn+1 ↔ N (0, Ctn+1 )
denote its distribution function. Then the characteristic function for Xtn+1 is with sn+1 = (s1 , . . . , sn+1 ) given
by
φXtn+1 (sn+1 ) := E [e^{i sn+1′ Xtn+1}] = ∫_{R^{n+1}} e^{i sn+1′ x} dFtn+1 (x) . (9.26)
Let us now take
s(i) = (s1 , . . . , si−1 , si+1 , . . . , sn+1 )′.
1. We show that φXtn+1 ((s1 , . . . , si−1 , 0, si+1 , . . . sn+1 )) gives us the characteristic function of
′
Xt(i) = (X(t1 ), X(t2 ), . . . , X(ti−1 ), X(ti+1 ), . . . , X(tn+1 )) .
2. We show that φXtn+1 ((s1 , . . . , si−1 , 0, si+1 , . . . , sn+1 )) is the characteristic function of a normal distribution
for the n variables in Xt(i) .
1. Set
x(i) = (x1 , . . . , xi−1 , xi+1 , . . . , xn+1 ) .
We get that
φXtn+1 ((s1 , . . . , si−1 , 0, si+1 , . . . , sn+1 ))
= ∫_{−∞}^∞ . . . ∫_{−∞}^∞ ∫_{xi =−∞}^{xi =∞} e^{i(s1 x1 +···+si−1 xi−1 +si+1 xi+1 +···+sn+1 xn+1 )} dFtn+1 (x)
= ∫_{−∞}^∞ . . . ∫_{−∞}^∞ e^{i(s1 x1 +···+si−1 xi−1 +si+1 xi+1 +···+sn+1 xn+1 )} dFt(i) (x(i)) (9.27)
• There exists a Gaussian process for every symmetric nonnegative definite function R(t, s).
• A Gaussian process is uniquely determined by its mean function and its autocorrelation
function.
If the condition
(X(t1 + h), X(t2 + h), . . . , X(tn + h)) =d (X(t1 ), X(t2 ), . . . , X(tn )) (9.28)
holds for all n, all h ∈ R and all t1 , t2 , . . . , tn points in T for a stochastic process (not necessarily only Gaussian),
we call the process strictly stationary. In general, weak stationarity does not imply strict stationarity. But if
the required moment functions exist, strict stationarity obviously implies weak stationarity. Since the required
moment functions exist and uniquely determine the finite dimensional distributions for a Gaussian process,
it turns out that a Gaussian process is weakly stationary if and only if it is strictly stationary, as will be
demonstrated next.
Theorem 9.4.2 A Gaussian process X = {Xt | t ∈ T =] − ∞, ∞[} is weakly stationary if and only if the
property
(X(t1 + h), X(t2 + h), . . . , X(tn + h)) =d (X(t1 ), X(t2 ), . . . , X(tn )) (9.29)
holds for all n, all h and all t1 , t2 , . . . , tn points in T , where =d denotes equality in distribution.
Proof ⇒: The process is weakly stationary, so (µX (t1 ), · · · , µX (tn ))′ is a vector with all entries equal to the same
constant value, say µ. The entries in C(t1 , t2 , · · · , tn ) are of the form
RX (|ti − tj |) − µ².
For the same reasons the entries of the mean vector for (X(t1 + h), . . . , X(tn + h)) are = µ for all h. Hence the
covariance matrix for (X(t1 + h), . . . , X(tn + h)) has the entries
That is, (X(t1 + h), X(t2 + h), . . . , X(tn + h)) and (X(t1 ), X(t2 ), . . . , X(tn )) have the same mean vector and
same covariance matrix. Since these are vectors with multivariate normal distribution, they have the same
distribution.
⇐: If the process is Gaussian, and (9.29) holds, then the desired conclusion follows as above.
The computational apparatus mobilized by Gaussian weakly stationary processes is illustrated by the next two
examples.
Example 9.4.3 The Gaussian weakly stationary process X = {X(t)| − ∞ < t < ∞} has expectation function
= 0 and a.c.f.
RX (h) = σ 2 e−λ|h| , λ > 0.
What is the distribution of (X(t), X(t − 1))′ ? Since X is Gaussian and weakly stationary, (X(t), X(t − 1))′ has
a bivariate normal distribution, and we need to find the mean vector and the covariance matrix.
The mean vector is found by E [X(t)] = E [X(t − 1)] = 0. Furthermore we can read the covariance matrix
from the autocorrelation function RX (h). Thereby E [X(t)X(t − 1)] = E [X(t − 1)X(t)] = RX (1) = σ²e^{−λ}, as
t − (t − 1) = 1, and E [X²(t)] = E [X²(t − 1)] = RX (0) = σ²e^{−λ·0} = σ². This says also that X(t) ∈ N (0, σ²)
and X(t) =d X(t − 1). Thus, the coefficient of correlation is
ρX(t),X(t−1) = RX (1)/RX (0) = e^{−λ}.
Therefore we have established
(X(t), X(t − 1))′ ∈ N ( (0, 0)′ , σ² ( 1, e^{−λ} ; e^{−λ}, 1 ) ).
Example 9.4.4 The Gaussian weakly stationary process X = {X(t)| − ∞ < t < ∞} has expectation function
= 0 and a.c.f.
RX (h) = 1/(1 + h²).
We want to find the probability
P (3X(1) > 1 − X(2)) .
Set Y = 3X(1) + X(2). Then Y is a Gaussian random variable with E [Y ] = 0 and
Var(Y ) = 9 RX (0) + 2 · 3 · RX (1) + RX (0) = 9 + 3 + 1 = 13.
Hence P (3X(1) > 1 − X(2)) = P (Y > 1) = 1 − Φ (1/√13).
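The number 1 − Φ(1/√13) is easy to evaluate and to cross-check by simulation; the sketch below draws the pair (X(1), X(2)) from its bivariate normal distribution with RX (0) = 1 and RX (1) = 1/2:

```python
import math
import numpy as np

# Example 9.4.4 numerically: with Y = 3*X(1) + X(2),
# Var(Y) = 9*R(0) + 6*R(1) + R(0) = 9 + 3 + 1 = 13 and
# P(3X(1) > 1 - X(2)) = P(Y > 1) = 1 - Phi(1/sqrt(13)).
p = 1 - 0.5 * (1 + math.erf((1 / math.sqrt(13)) / math.sqrt(2)))
print(p)                                   # about 0.39

# Monte Carlo confirmation from the bivariate normal of (X(1), X(2))
rng = np.random.default_rng(6)
cov = np.array([[1.0, 0.5], [0.5, 1.0]])   # R_X(0) = 1, R_X(1) = 1/2
x1, x2 = rng.multivariate_normal([0, 0], cov, size=1_000_000).T
mc = np.mean(3 * x1 > 1 - x2)
print(mc)
```

Here Φ is expressed through the error function, Φ(z) = (1 + erf(z/√2))/2.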
Theorem 9.4.5 If the mean square integral ∫_a^b X(t)dt exists for a Gaussian process {X(t)|t ∈ T } for [a, b] ⊆ T ,
then
∫_a^b X(t)dt ∈ N ( ∫_a^b µX (t)dt , ∫_a^b ∫_a^b CovX (t, u)dtdu ). (9.31)
We shall now characterize the Gauss-Markov processes by a simple and natural property [48, p. 382].
Theorem 9.5.1 The Gaussian process X = {X(t) | t ∈ T } is a Markov process if and only if
E [X(tn ) | X(t1 ), . . . , X(tn−1 )] = E [X(tn ) | X(tn−1 )] (9.34)
for all t1 < t2 < . . . < tn in T .
Proof If (9.32) holds, then (9.34) obtains per definition of conditional expectation.
Let us assume conversely that (9.34) holds for a Gaussian process X. By properties of Gaussian vectors we
know that both
P (X(tn ) ≤ xn | X(t1 ) = x1 , . . . , X(tn−1 ) = xn−1 )
and
P (X(tn ) ≤ xn | X(tn−1 ) = xn−1 )
are Gaussian distribution functions, and are thus determined by their respective means and variances. We shall
now show that these are equal to each other. The statement about means is trivial, as this is nothing but (9.34),
which is the assumption.
We introduce some auxiliary notation.
Ỹ := X(tn ) − E [X(tn ) | X(t1 ) = x1 , . . . , X(tn−1 ) = xn−1 ] ,
and this is
Var [Ỹ | G] = Var [Ỹ | Gn−1 ].
This says that the distributions in the right and left hand sides of (9.32) have the same variance, and we have
consequently proved our assertion as claimed.
fX(t0 ),X(s),X(t) (x, u, v) = fX(t)|X(s)=u,X(t0 )=x (v) · fX(s)|X(t0 )=x (u) · fX(t0 ) (x) .
9.5. THE GAUSS-MARKOV PROCESSES AND SEPARABLE AUTOCORRELATION FUNCTIONS 249
fX(t0 ),X(s),X(t) (x, u, v) = fX(t)|X(s)=u (v) · fX(s)|X(t0 )=x (u) · fX(t0 ) (x) .
When we integrate both sides of this equality with respect to u, and divide by fX(t0 ) (x), we get the following
equation for the transition densities
fX(t)|X(t0 )=x (y) = ∫_{−∞}^∞ fX(t)|X(s)=u (y) · fX(s)|X(t0 )=x (u) du. (9.35)
It is hoped that a student familiar with finite Markov chains recognizes in (9.35) a certain similarity with the
Chapman-Kolmogorov equation7 , now valid for probability densities. In statistical physics this equation is
called the Smoluchowski equation, see [58, p.200]. Regardless of the favoured name, the equation (9.35) can
be regarded as a consistency condition.
(RX (t, s)/RX (s, s)) · (RX (s, t0 )/RX (t0 , t0 )) · x0 = (RX (t, t0 )/RX (t0 , t0 )) · x0 ,
or, equivalently
RX (t, t0 ) = RX (t, s)RX (s, t0 )/RX (s, s). (9.37)
Therefore we have found a necessary condition for an autocorrelation function to be the autocorrelation function
of a Gaussian Markov process.
Example 9.5.2 Consider a Gaussian process with the autocorrelation function R(t, s) = min(t, s). It is
shown in an exercise of this chapter that min(t, s) is an autocorrelation function. Then, if t0 < s < t we check
(9.37) by
R(t, s)R(s, t0 )/R(s, s) = min(t, s) min(s, t0 )/ min(s, s) = s · t0 /s = t0 ,
which equals R(t, t0 ) = min(t, t0 ) = t0 . We shall show in the next chapter that min(t, s) corresponds, e.g., to
the Wiener process, and that the indicated process is a Gaussian Markov process.
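The necessary condition (9.37) is easy to probe numerically; the sketch below checks it deterministically for min(t, s) and for the exponential autocorrelation e^{−λ|t−s|} (the time points and λ = 0.9 are illustrative choices):

```python
import numpy as np

# Deterministic check of the Gauss-Markov condition (9.37),
#   R(t, t0) = R(t, s) * R(s, t0) / R(s, s)   for t0 < s < t,
# for the two autocorrelation functions discussed in the text.
def check(R, t0, s, t):
    return R(t, t0), R(t, s) * R(s, t0) / R(s, s)

t0, s, t = 0.5, 1.2, 2.7
print(check(lambda u, v: min(u, v), t0, s, t))                  # (0.5, 0.5)
print(check(lambda u, v: np.exp(-0.9 * abs(u - v)), t0, s, t))  # equal pair
```

For the exponential case the identity reduces to e^{−λ(t−t0)} = e^{−λ(t−s)} e^{−λ(s−t0)}, which foreshadows example 9.5.3 below.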
The equation (9.37) is an example of a functional equation, i.e., an equation that in our case specifies the
function RX (t, s) in implicit form by relating the value of RX (t, s) at a pair of points with its values at other
pairs of points. It can be shown [103, p. 72] that if RX (t, t) > 0, then there are functions v(t) and u(t) such
that
RX (t, s) = v (max(t, s)) u (min(t, s)) . (9.38)
We demonstrate next that (9.38) is sufficient for a Gaussian random process X to be a Markov process. Let us
set
τ (t) = u(t)/v(t).
Because it must hold by the Cauchy-Schwarz inequality (7.5) that RX (t, s) ≤ √(RX (t, t) · RX (s, s)), we get that
τ (s) ≤ τ (t), if s < t. Hence, as soon as X is a Gaussian random process with zero mean and autocovariance
function given by (9.38), we can represent it as a Lamperti transform
X(t) = v(t)W (τ (t)),
where W (t) is a variable in a Gaussian Markov process with autocorrelation function RW (t, s) = min(t, s), as
in example 9.5.2. To see this we compute for s < t
Example 9.5.3 Assume next that the Gaussian Markov process is also weakly stationary and mean square
continuous. Then RX is in fact continuous and (9.37) becomes
RX (t − t0 ) = RX (t − s)RX (s − t0 )/RX (0). (9.40)
Normalizing G(h) = RX (h)/RX (0), (9.40) becomes
G(t − t0 ) = G(t − s)G(s − t0 ), i.e., G(x + y) = G(x)G(y).
This is one of Cauchy's classical functional equations (to be solved w.r.t. G(·)). The requirements of autocorrelation
functions for weakly stationary processes impose the additional condition | G(x) | ≤ G(0). The continuous
autocovariance function that satisfies the functional equation under the extra condition is
RX (h) = RX (0) e^{−α|h|}, α ≥ 0.
In chapter 11 below we shall construct a Gaussian Markov process (the 'Ornstein-Uhlenbeck process') that,
up to a scaling factor, possesses this autocorrelation function.
that is, the time spent at or above a high level b. Or, we might want to find the extreme value distribution
P ( max_{0≤s≤t} X(s) ≤ b ).
To hint at what can be achieved, it can be shown, see, e.g., [2, 76], that if
RX (h) ∼ 1 − (1/2)θh², as h → 0,
then the sojourn times Lb in (9.42) have approximately the distribution
Lb ≈d 2V /(θb), (9.43)
where
V =d √θ · Y,
where Y ∈ Ra(2) (= Rayleigh distribution with parameter 2). One can also show that
P ( max_{0≤s≤t} X(s) ≤ b ) ≈ e^{−λb t},
where there is some explicit expression for λb , [2]. But to pursue this topic any further is beyond the scope and
possibilities of this text.
9.7 Exercises
The bulk of the exercises below consists of specimens of golden oldies from courses related to sf2940 run at KTH
once upon a time.
Aid: The difficulty is to show that the function is nonnegative definite. Use induction. If t1 < t2 , then
the matrix
(min(ti , tj ))_{i=1,j=1}^{2,2} = ( t1 t1 ; t1 t2 )
is nonnegative definite, since its trace and its determinant t1 (t2 − t1 ) are nonnegative. For the induction
step, assume the points ordered so that
t1 = mini ti and ti ≤ ti+1 .
Hence
Σ_{i=1}^{n+1} Σ_{j=1}^{n+1} xi min(ti , tj )xj = Σ_{i=2}^{n+1} Σ_{j=2}^{n+1} xi min(ti , tj )xj + t1 x1² + 2 t1 x1 Σ_{j=2}^{n+1} xj .
For i, j ≥ 2
min(ti , tj ) − t1 = min(ti − t1 , tj − t1 ),
and thus
Σ_{i=2}^{n+1} Σ_{j=2}^{n+1} xi min(ti , tj )xj − Σ_{i=2}^{n+1} Σ_{j=2}^{n+1} xi t1 xj = Σ_{i=2}^{n+1} Σ_{j=2}^{n+1} xi min(ti − t1 , tj − t1 )xj ≥ 0
(a) Let R(h) be the autocovariance function of a weakly stationary process and (R(ti − tj ))_{i=1,j=1}^{n,n} be the
covariance matrix (assume zero means) corresponding to equidistant times, i.e., ti − ti−1 = h > 0.
Convince yourself of the fact that (R(ti − tj ))_{i=1,j=1}^{n,n} is a Toeplitz matrix. E.g., take one of the
autocovariance functions for a weakly stationary process in the text above, and write down the
corresponding covariance matrix for n = 4.
(b) An n × n matrix A = (aij )_{i=1,j=1}^{n,n} is called centrosymmetric8 , when its entries aij satisfy
aij = an+1−i,n+1−j . (9.44)
An equivalent way of saying this is that A = RAR, where R is the permutation matrix with ones on
the cross diagonal (from bottom left to top right) and zeros elsewhere, or
R = ( 0 0 . . . 0 0 1 ; 0 0 . . . 0 1 0 ; 0 0 . . . 1 0 0 ; . . . ; 0 1 . . . 0 0 0 ; 1 0 . . . 0 0 0 ).
Show that a centrosymmetric matrix is symmetric.
(c) Show that the Toeplitz matrix (R(ti − tj ))_{i=1,j=1}^{n,n} in (a) above is centrosymmetric. To get a picture
of this, take one of the autocovariance functions for a weakly stationary process in the text above,
and write down the corresponding covariance matrix for n = 4 and check what (9.44) means.
Therefore we may generalize the class of weakly stationary Gaussian processes by defining a class of
Gaussian processes with centrosymmetric covariance matrices.
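As a sketch of (b) and (c), one can build the exchange matrix $R$ in NumPy and check $A = RAR$ for a Toeplitz covariance matrix; the autocovariance $R(h) = e^{-|h|}$ and the values $n = 4$, $h = 0.5$ are illustrative assumptions, not prescribed by the text:

```python
import numpy as np

# Toeplitz covariance from the autocovariance exp(-|h|) at equidistant times,
# and the exchange (cross-diagonal) permutation matrix R.
n, h = 4, 0.5
times = h * np.arange(n)
A = np.exp(-np.abs(np.subtract.outer(times, times)))  # Toeplitz covariance
R = np.fliplr(np.eye(n))                              # ones on the cross diagonal
is_centrosymmetric = np.allclose(A, R @ A @ R)        # the condition A = R A R
```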
2. The Lognormal Process (From [8]) $\{X(t) \mid -\infty < t < \infty\}$ is a weakly stationary Gaussian stochastic process. The process $Y = \{Y(t) \mid -\infty < t < \infty\}$ is defined by
\[
Y(t) = e^{X(t)}.
\]
Find the mean function and the autocovariance function of the lognormal process $Y$.
Aid: Recall the moment generating function of a Gaussian random variable.
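A hedged sketch of the Aid: for a Gaussian random variable $X$ with mean $m$ and variance $v$, the moment generating function gives $E[e^X] = e^{m + v/2}$, which is the key to the mean function of $Y$; the numbers $m$, $v$ below are illustrative, not from the text:

```python
import numpy as np

# Monte Carlo check of E[exp(X)] = exp(m + v/2) for X ~ N(m, v).
rng = np.random.default_rng(0)
m, v = 0.3, 0.8                                  # illustrative parameters
x = rng.normal(m, np.sqrt(v), size=1_000_000)
mc_mean = np.exp(x).mean()                       # sample mean of exp(X)
exact_mean = np.exp(m + v / 2)                   # value from the m.g.f.
```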
3. The Suzuki Process Let $X_i = \{X_i(t) \mid -\infty < t < \infty\}$, be three ($i = 1, 2, 3$) independent weakly stationary Gaussian processes with mean function zero. $X_2$ and $X_3$ have the same autocovariance functions. Let
\[
Y(t) = e^{X_1(t)} \cdot \sqrt{X_2^2(t) + X_3^2(t)}.
\]
The stochastic process thus defined is known as the Suzuki process9 and is fundamental in wireless com-
munication (fading distribution for mobile radio) and widely used in dozens of other fields of engineering
and science.
Aid: A mnemonic for the Suzuki process is that it is a product of a lognormal process and a Rayleigh
process.
4. $X = \{X(t) \mid -\infty < t < \infty\}$ is a strictly stationary process. Let $g(x)$ be a Borel function. Define a new process $Y = \{Y(t) \mid -\infty < t < \infty\}$ via
\[
Y(t) = g(X(t)).
\]
(a) Let $f(t)$ be a function, which is periodic with period $T$, i.e., $f(t) = f(t + T)$ for all $t$. Set
Show that the process Y = {Y (t) | −∞ < t < ∞} is periodically correlated in the sense that
RY (t, s) = RY (t + T, s + T ).
Show that the time modulated process {Y (t) | −∞ < t < ∞} is periodically correlated. Show also
that the variance function is a constant function of time.
9 H. Suzuki: A statistical model for urban radio propagation, IEEE Transactions on Communications, 25, pp. 673–680, 1977.
is an autocorrelation function.
is minimized. Note that the situation is the same as in example 7.5.5 above. Thus check that the optimal
parameter a is given by (7.22), or
\[
a = \frac{\mathrm{Cov}_X(\tau)}{\mathrm{Cov}_X(0)}.
\]
What is the optimal value of $E\left[(X(t + \tau) - a \cdot X(t))^2\right]$?
2. [An Ergodic Property in Mean Square ] Ergodicity in general means that certain time averages are
asymptotically equal to certain statistical averages.
Let X = {X(t)| − ∞ < t < ∞} be weakly stationary with the mean function µX (t) = m. The process X
is mean square continuous. We are interested in the mean square convergence of
\[
\frac{1}{t}\int_0^t X(u)\,du
\]
as t → ∞.
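A simulation sketch of this ergodic property for one concrete weakly stationary process (an Ornstein-Uhlenbeck-type process with $\mathrm{Cov}_X(h) = e^{-|h|}$, simulated on a grid by its exact AR(1) discretization; the seed and all parameters are illustrative assumptions):

```python
import numpy as np

# Stationary AR(1) samples of a zero-mean process with Cov(h) = exp(-|h|),
# shifted by the mean m; the time average should be close to m for large t.
rng = np.random.default_rng(1)
m, dt, n = 2.0, 0.01, 500_000
rho = np.exp(-dt)                           # one-step autocorrelation
x = np.empty(n)
x[0] = rng.normal()
innovations = rng.normal(0.0, np.sqrt(1.0 - rho**2), size=n)
for k in range(1, n):
    x[k] = rho * x[k - 1] + innovations[k]
time_average = m + x.mean()                 # approximates (1/t) int_0^t X(u) du
```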
(b) Show that if the autocovariance function CovX (h) is such that
CovX (h) → 0, as h → ∞,
3. (From [57]) Let X = {X(t)| − ∞ < t < ∞} be weakly stationary with the mean function µX (t) = m and
autocovariance function CovX (h) such that
CovX (h) → 0, as h → ∞.
4. (From [57]) Let $X = \{X(t) \mid -\infty < t < \infty\}$ be a Gaussian random process such that
\[
(X(t), X(s))' \in N\left( \begin{pmatrix} \alpha + \beta t \\ \alpha + \beta s \end{pmatrix},\ \sigma^2 \begin{pmatrix} 1 & e^{-\lambda|t-s|} \\ e^{-\lambda|t-s|} & 1 \end{pmatrix} \right).
\]
This is obviously not a weakly stationary process, as there is a linear trend in the mean function µX (t).
Let us define a new process Y by differencing, by which we mean the operation
Comment: Differencing removes here a linear trend and produces a stationary process. This recipe, called
de-trending, is often used in time series analysis.
5. (From [42]) Let $s(f)$ be a real-valued function that satisfies
and
\[
\int_{-\infty}^{\infty} s(f)\,df = K < \infty. \tag{9.46}
\]
Let $X_1$, $X_2$, $Y_1$ and $Y_2$ be independent stochastic variables such that
\[
E[X_1] = E[X_2] = 0, \quad E\left[X_1^2\right] = E\left[X_2^2\right] = \frac{K}{2\pi},
\]
and $Y_1$ and $Y_2$ have the p.d.f. $f_Y(y) = \frac{s(y)}{K}$.
We have in this manner shown that if a function s(f ) satisfies (9.45) and (9.46), then there exists at least
one stochastic process that has s(f ) as spectral density.
6. (From [42]) X = {X(t)| − ∞ < t < ∞} is a weakly stationary process and Z ∈ U (0, 2π) is independent of
{X}. Set
\[
Y(t) = \sqrt{2}\,X(t)\cos(f_0 t + Z), \quad -\infty < t < \infty.
\]
Show that $Y = \{Y(t) \mid -\infty < t < \infty\}$ has mean function $\mu_Y = 0$, and
\[
R_Y(h) = \left(R_X(h) + \mu_X^2\right)\cos(f_0 h).
\]
Comment: This is a mathematical model for amplitude modulation, c.f. [71, chapter 7].
$\mu_X = m$,
Show that
(a)
\[
E[Y_n] = m,
\]
(b)
\[
\mathrm{Var}[Y_n] = \frac{2n - 1}{n^2},
\]
(c) for $\varepsilon > 0$
\[
P(|Y_n - m| \le \varepsilon) = \Phi\left(\frac{\varepsilon}{\sqrt{(2n-1)/n^2}}\right) - \Phi\left(-\frac{\varepsilon}{\sqrt{(2n-1)/n^2}}\right).
\]
Is it true that $Y_n \xrightarrow{p} m$, as $n \to \infty$?
3. [Bandlimited Gaussian White Noise] A weakly stationary Gaussian process $Z = \{Z(t) \mid -\infty < t < \infty\}$ with mean zero that has the power spectral density
\[
s_Z(f) = \begin{cases} \frac{N_0}{2} & -W \le f \le W, \\ 0 & \text{elsewhere,} \end{cases}
\]
as $n \to \infty$. This is a stochastic version, [85, pp. 332−336], of the celebrated sampling theorem$^{10}$, [100, p. 187]. It predicts that we can reconstruct completely the band-limited process $Z$ from its time samples $\{Z_k\}_{k=-\infty}^{\infty}$, also known as Nyquist samples.
Aid: (C.f. [103, p. 106]). The following result ('Shannon's sampling theorem') on covariance interpolation is true (and holds in fact for all bandlimited functions)
\[
R_Z(t) = \sum_{k=-\infty}^{\infty} R_Z\left(\frac{\pi k}{W}\right) \cdot \frac{\sin\left(W\left(t - \frac{\pi k}{W}\right)\right)}{W\left(t - \frac{\pi k}{W}\right)},
\]
\[
+ \sum_{k=-n}^{n} \sum_{j=-n}^{n} R_Z\left(\frac{(j - k)\pi}{W}\right) \cdot \frac{\sin\left(W\left(t - \frac{\pi k}{W}\right)\right)}{W\left(t - \frac{\pi k}{W}\right)} \cdot \frac{\sin\left(W\left(t - \frac{\pi j}{W}\right)\right)}{W\left(t - \frac{\pi j}{W}\right)}.
\]
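The interpolation formula can be sketched numerically; the code below reconstructs a function bandlimited to $W_0 < W$ from its samples at $\pi k/W$ (the values of $W$, $W_0$, the test points and the truncation $K$ are illustrative assumptions):

```python
import numpy as np

# Reconstruct r(t) = sin(W0 t)/(W0 t), bandlimited to W0 < W, from its
# samples at pi*k/W, using the kernel sin(W(t - pi k/W))/(W(t - pi k/W)).
W, W0, K = 1.0, 0.7, 4000
r = lambda t: np.sinc(W0 * np.asarray(t) / np.pi)     # np.sinc(x)=sin(pi x)/(pi x)
t_test = np.array([0.3, 1.7, 5.2])
k = np.arange(-K, K + 1)
samples = r(np.pi * k / W)                            # r(pi k / W)
kernel = np.sinc(W * (t_test[:, None] - np.pi * k / W) / np.pi)
reconstruction = (samples * kernel).sum(axis=1)       # truncated interpolation
max_error = np.abs(reconstruction - r(t_test)).max()
```

The truncation at $K$ terms leaves a small tail error, which shrinks as $K$ grows.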
Set
\[
Y = \int_0^1 X(t)\,dt.
\]
Check that
\[
E[Y] = 2, \qquad \mathrm{Var}(Y) = \frac{2}{e}.
\]
2. [Linear Time Invariant Filters] Let $X = \{X(t) \mid -\infty < t < \infty\}$ be a stochastic process with zero as mean function and with the autocorrelation function $R_X(t, s)$. Let
\[
Y(t) = \int_{-\infty}^{\infty} G(t - s)X(s)\,ds = \int_{-\infty}^{\infty} G(s)X(t - s)\,ds,
\]
assuming existence. One can in fact show that the two mean square integrals above are equal (almost
surely).
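A numerical sketch of the equality of the two integral forms for one fixed $t$ (the impulse response $G$ and the 'sample path' $X$ below are arbitrary smooth choices, not from the text):

```python
import numpy as np

# Riemann-sum approximation of the two integral forms of Y(t) at one t.
t, ds = 1.5, 0.001
s = np.arange(-20.0, 20.0, ds)                  # truncation of (-inf, inf)
G = lambda u: np.exp(-u**2)                     # illustrative impulse response
X = lambda u: np.cos(u) * np.exp(-0.1 * u**2)   # a fixed smooth 'sample path'
y1 = np.sum(G(t - s) * X(s)) * ds               # integral of G(t-s) X(s) ds
y2 = np.sum(G(s) * X(t - s)) * ds               # integral of G(s) X(t-s) ds
```

The two sums agree up to discretization and truncation error, mirroring the change of variables $s \mapsto t - s$ in the integrals.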
(b) Show that if $X$ is (weakly) stationary with zero as mean function and
\[
Y(t) = \int_{-\infty}^{\infty} G(t - s)X(s)\,ds,
\]
Show that {Y (t)| − ∞ < t < ∞} is a Gaussian process and find the distribution of Y (t) for any t.
Remark 9.7.1 The findings in this exercise provide a key, viz. the mathematical representations of analog filters, for understanding the pre-eminence of weakly stationary processes in [50, 56, 71, 80, 85, 97, 101]. One thinks of $G(t)$ as the impulse response of a linear time-invariant filter with the process $\{X(t) \mid -\infty < t < \infty\}$ as input and the process $\{Y(t) \mid -\infty < t < \infty\}$ as output. An instance of applications is described in IEEE standard specification format guide and test procedure for single-axis interferometric fiber optic gyros, IEEE Std 952-1997(R2008), c.f. Annexes B & C, 1998.
3. [The Superformula] (From [71, 101] and many other texts) Let $X = \{X(t) \mid -\infty < t < \infty\}$ be a weakly stationary stochastic process with zero as mean function and with the autocorrelation function $R_X(h)$. Let
\[
Y(t) = \int_{-\infty}^{\infty} G(t - s)X(s)\,ds = \int_{-\infty}^{\infty} G(s)X(t - s)\,ds,
\]
260 CHAPTER 9. STOCHASTIC PROCESSES: WEAKLY STATIONARY AND GAUSSIAN
assuming existence. Suppose that the spectral density of $X$ is $s_X(f)$. Show that the spectral density of $Y = \{Y(t) \mid -\infty < t < \infty\}$ (recall the preceding exercise showing that $Y$ is weakly stationary) is
\[
s_Y(f) = \left|\widehat{G}(f)\right|^2 s_X(f), \tag{9.49}
\]
where $\widehat{G}$ denotes the Fourier transform of $G$. Note the connection of (9.49) to (9.48). In certain quarters at KTH the formula in (9.49) used to be referred to in Swedish as the superformel.
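A discrete-time sketch of the superformula: on a circular (FFT) analogue of the filter, the spectral density of the output is $|\widehat{G}|^2$ times that of the input; the impulse response `g` and input spectral density `sx` below are illustrative assumptions, not from the text:

```python
import numpy as np

# Circular analogue: Y = g (*) X implies s_Y = |G_hat|^2 s_X. We compute
# R_Y directly in the time domain as
#   R_Y[n] = sum_{m,k} g[m] g[k] R_X[n + k - m]   (indices mod N)
# and compare its DFT with the superformula.
N = 64
g = np.exp(-np.arange(N) / 8.0)                   # illustrative impulse response
f = np.fft.fftfreq(N)
sx = 1.0 / (1.0 + (2 * np.pi * f) ** 2)           # illustrative spectral density
rx = np.fft.ifft(sx).real                         # input autocovariance sequence
ry = np.zeros(N)
for n in range(N):
    for mm in range(N):
        for k in range(N):
            ry[n] += g[mm] * g[k] * rx[(n + k - mm) % N]
sy_time = np.fft.fft(ry).real                     # spectrum via time domain
sy_formula = np.abs(np.fft.fft(g)) ** 2 * sx      # the superformula
agree = np.allclose(sy_time, sy_formula)
```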
4. (From [42]) $X = \{X(t) \mid -\infty < t < \infty\}$ is a weakly stationary process with mean function $\mu_X$ and with the autocorrelation function $R_X(h) = \sigma^2 e^{-|h|}$. Let
\[
G(t) = \begin{cases} e^{-2t} & 0 \le t, \\ 0 & t < 0. \end{cases}
\]
Set
\[
Y(t) = \int_{-\infty}^{\infty} G(t - s)X(s)\,ds.
\]
5. (From [71]) $X = \{X(t) \mid -\infty < t < \infty\}$ is a weakly stationary process with zero as mean function and with the autocorrelation function $R_X(h) = e^{-c|h|}$. Let
\[
G(t) = \begin{cases} \frac{1}{T} & 0 \le t \le T, \\ 0 & \text{otherwise.} \end{cases}
\]
Set
\[
Y(t) = \int_{-\infty}^{\infty} G(t - s)X(s)\,ds.
\]
Show that
\[
R_Y(h) = \begin{cases}
\dfrac{2}{c^2 T^2}\left( c(T - |h|) - e^{-c|h|} + e^{-cT}\cosh(ch) \right) & |h| < T, \\[2mm]
\dfrac{2}{c^2 T^2}\, e^{-c|h|}\left(\cosh(cT) - 1\right) & |h| \ge T.
\end{cases}
\]
6. (From [42]) $X = \{X(t) \mid -\infty < t < \infty\}$ is a Gaussian stationary process with $\mu_X = 1$ as mean function and with the autocorrelation function $R_X(h) = e^{-h^2/2}$. Show that
\[
\int_{-\infty}^{\infty} X(t)\,e^{-t^2/2}\,dt \in N\left(\sqrt{2\pi},\ \frac{2\pi}{\sqrt{3}}\right).
\]
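The variance in this result can be checked numerically: with autocovariance $e^{-(t-s)^2/2}$ and weight $e^{-t^2/2}$, the variance of the integral reduces to the double integral of $e^{-(t^2 - ts + s^2)}$, which equals $2\pi/\sqrt{3}$ (the grid and truncation below are illustrative choices):

```python
import numpy as np

# Midpoint-rule evaluation of the double integral of exp(-(t^2 - t*s + s^2))
# over the plane, truncated to [-8, 8]^2 where the integrand is negligible.
h = 0.02
grid = np.arange(-8.0 + h / 2, 8.0, h)          # midpoints of the cells
t, s = np.meshgrid(grid, grid)
integral = np.sum(np.exp(-(t**2 - t * s + s**2))) * h * h
target = 2 * np.pi / np.sqrt(3)                 # the claimed variance
```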
7. (From [105]) Let $W = \{W(t) \mid t \ge 0\}$ be a Gaussian process with mean function zero and the autocorrelation (i.e., autocovariance)
\[
R_W(t, s) = \min(t, s).
\]
2. A stochastic process $\{X(t) \mid t \in T\}$ is said to be continuous in probability, if for every $\varepsilon > 0$ and all $t \in T$ it holds that
\[
P(|X(t + h) - X(t)| > \varepsilon) \to 0,
\]
as $h \to 0$. Let $\{X(t) \mid t \in T\}$ be a weakly stationary process. Suppose that the autocovariance function is continuous at the origin. Show that then the process is continuous in probability.
Aid: Recall Markov’s inequality (1.38).
For this to be of interest, it is argued, c.f. [33, pp. 212−217], that a stationary Gaussian process can represent a broadband analog signal containing many channels of audio and video information (e.g., cable television signals over optical fiber).
Let us incidentally note that in the context of clipping (e.g., of laser) it is obviously important for the engineer to know the distribution of the sojourn time in (9.42) or
\[
L_{x_0} \stackrel{\text{def}}{=} \mathrm{Length}\left(\{t \mid X(t) \ge x_0\}\right). \tag{9.50}
\]
The approximation in (9.43) is well known to be the practical man’s tool for this analysis.
11 see, e.g., A.J. Rainal: Laser Intensity Modulation as a Clipped Gaussian Process. IEEE Transactions on Communications,
Vol. 43, 1995, pp. 490−494.
By a nonlinear transformation of a stochastic process X = {X(t)|t ∈ T } with memory we mean, for one
example, a stochastic process Y = {Y (t)|t ∈ T } defined by
\[
Y(t) = \int_0^t Q(X(s))\,ds, \quad [0, t] \subset T,
\]
where Q is an integrable Borel function. In this case the value of Y (t) depends on the process X between 0 and
t, i.e., has at time t a memory of the ’past’ of the process. One can also say that Y (t) is a nonlinear functional
of the process X over [0, t].
\[
Y(t) = X^2(t).
\]
Hint: The four product rule in (8.37) of section 8.5 can turn out to be useful.
2. $X = \{X(t) \mid -\infty < t < \infty\}$ is a weakly stationary Gaussian stochastic process with $\mu_X = 0$, variance $\sigma_X^2$ and autocorrelation $R_X(h)$ such that $R_X(0) = 1$. We observe a binarization of the process, c.f. (8.43),
\[
Y(t) = \begin{cases} 1 & X(t) \ge 0, \\ -1 & X(t) < 0. \end{cases} \tag{9.52}
\]
Show that
\[
R_X(h) = \sin\left(\frac{\pi}{2} R_Y(h)\right).
\]
Aid: (From [89]). You may use the fact that (c.f. (8.24))
\[
\frac{1}{2\pi\sqrt{1 - \rho^2}} \int_0^{\infty}\int_0^{\infty} e^{-\frac{1}{2(1 - \rho^2)}\left(x^2 - 2\rho x y + y^2\right)}\,dx\,dy = \frac{1}{4} + \frac{\arcsin(\rho)}{2\pi}.
\]
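The quadrant-probability identity in the Aid can be checked by numerical quadrature ($\rho$, the grid and the truncation below are illustrative choices):

```python
import numpy as np

# Midpoint-rule quadrature of the standardized bivariate normal density
# over the positive quadrant, compared with 1/4 + arcsin(rho)/(2 pi).
rho, h = 0.6, 0.01
x = np.arange(h / 2, 8.0, h)                    # midpoints on (0, 8)
X, Y = np.meshgrid(x, x)
dens = np.exp(-(X**2 - 2 * rho * X * Y + Y**2) / (2 * (1 - rho**2)))
dens /= 2 * np.pi * np.sqrt(1 - rho**2)
quadrant_prob = dens.sum() * h * h
target = 0.25 + np.arcsin(rho) / (2 * np.pi)
```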
3. Hermite Expansions of Nonlinearities We shall next present a general technique for dealing with
a large class of memoryless nonlinear transformations of Gaussian processes. This involves the first
properties of Hermite polynomials, as discussed in section 2.6.2. The next theorem forms the basis of
analysis of non-linearities in section 9.7.8.
Then
\[
h(x) = \sum_{n=0}^{\infty} \frac{c_n}{n!} H_n(x), \tag{9.54}
\]
where
\[
c_n = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} h(x)\,e^{-x^2/2}\,H_n(x)\,dx, \quad n = 0, 1, 2, \ldots, \tag{9.55}
\]
and the series converges with respect to the Hilbert space norm
\[
\| f \| = \sqrt{\int_{-\infty}^{\infty} f^2(x)\,e^{-x^2/2}\,dx}.
\]
Theorem 9.7.2 Let $X = \{X(t) \mid -\infty < t < \infty\}$ be a weakly stationary process with the mean value function $= 0$. Let $Q(x)$ satisfy (9.53) and define $Y(t) = Q(X(t))$. Then
\[
R_Y(h) = \sum_{n=0}^{\infty} \frac{C_n^2}{n!} \left(\frac{R_X(h)}{R_X(0)}\right)^n, \tag{9.57}
\]
where
\[
C_n = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} Q(x\sigma_X)\,e^{-x^2/2}\,H_n(x)\,dx.
\]
Proof is outlined. These assertions are derived by Mehler's Formula [24, p. 133], which says the following. Let
\[
(X_1, X_2)' \in N\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix} \right).
\]
Then the joint p.d.f. of $(X_1, X_2)'$ is by (8.24)
\[
f_{X_1,X_2}(x_1, x_2) = \frac{1}{2\pi\sqrt{1 - \rho^2}}\, e^{-\frac{1}{2(1 - \rho^2)}\left[x_1^2 - 2\rho\,x_1 x_2 + x_2^2\right]}
\]
Now we use in (9.59) the expansions (9.54), (9.58), (2.100) and (2.101) to obtain (9.57) in the special case
σX = 1.
(a) Verify by symbol manipulation (i.e., do not worry about the exchange of order between integration
and the infinite sums) that (9.59) leads to (9.57), as indicated in the last lines of the proof outlined
above.
2. Let W (t) be a random variable in a Gaussian process (i.e., Wiener process, see next chapter) with the
autocorrelation function RW (t, s) = min(t, s). Find a function τ (t) = u(t)/v(t) so that the process defined
by
B(t) = v(t)W (τ (t)) , (9.61)
10.1 Introduction
10.1.1 Background: The Brownian Movement, A. Einstein
The British botanist Robert Brown examined1 in the year 1827 pollen grains and the spores of mosses suspended
in water under a microscope, see figure ??, and observed minute particles within vacuoles in the pollen grains
executing a continuous jittery motion. Although he was not the first to make this observation, the phenomenon
or the movement became known as the Brownian movement.
A note on terminology is in order here. We shall refer to the physical phenomenon as the Brownian movement, and to the mathematical model, as derived below, as the Brownian motion/Wiener process, thus keeping in mind the accusations about 'mind projection fallacies'.
In one of his three great papers published in 1905 Albert Einstein carried out a probabilistic analysis of molecular motion and its effect on particles suspended in a liquid. Einstein admits to begin with [31, p. 1] that he does not know much about Brown's movement. His purpose was not, as pointed out by L. Cohen$^2$, to explain the Brownian movement but to prove that atoms existed. In 1905, many scientists did not believe in atomic theory. Einstein's approach was to derive a formula from the atomic theory, and to expect that someone would perform the experiments to verify the formula.
In the 1905 paper, see [31], Einstein derives the governing equation for the p.d.f. f (x, t) of the particles,
which are influenced by the invisible atoms. The equation of evolution of f (x, t) is found as
\[
\frac{\partial}{\partial t} f(x, t) = D \frac{\partial^2}{\partial x^2} f(x, t).
\]
Two brief, readable and non-overlapping recapitulations of Einstein's argument for this are [58, pp. 231−234] and [78, chapter 4.4]. Then Einstein goes on to show that the square root of the expected squared displacement of the particle is proportional to the square root of time as
\[
\sigma_X = \sqrt{E\left[X(t)^2\right]} = \sqrt{2Dt}. \tag{10.1}
\]
It is generally overlooked that Einstein’s ’coarse time’ approach to thermodynamics implies that his finding in
(10.1) is valid only for very large t. Then Einstein derives the formula for the diffusion coefficient D as
\[
D = \frac{RT}{N} \cdot \frac{1}{6\pi k P}, \tag{10.2}
\]
1 A clarification of the intents of Brown’s work and a demonstration that Brown’s microscope was powerful enough for observing
266 CHAPTER 10. THE WIENER PROCESS
where R is the gas constant, T is the temperature, k is the coefficient of viscosity (Einstein’s notation) and P is
the radius of the particle. The constant N had in 1905 no name, but was later named Avogadro’s number3 ,
see [58, p. 236]. Next Einstein explains how to estimate N from statistical measurements. We have
\[
\sigma_X = \sqrt{t}\,\sqrt{\frac{RT}{N} \cdot \frac{1}{3\pi k P}}
\]
so that
\[
N = \frac{1}{\sigma_X^2} \cdot \frac{RT}{3\pi k P},
\]
where $\sigma_X^2$ is measured 'per minute'.
Besides the formulas and ideas stated, Einstein invoked the Maxwell & Boltzmann statistics, see, e.g., [17,
p. 39, p. 211], [58, chapter 6], and saw that the heavy particle is just a big atom pushed around by smaller
atoms, and according to energy equipartition, c.f., [17, chapter 19], the statistical properties of the big particle
are the same as of the real invisible atoms. More precisely, the mean kinetic energy of the pollen is the same
as the mean kinetic energy of the atoms. Therefore we can use the heavy particle as a probe of the ones we cannot see. If we measure the statistical properties of the heavy particle, we know the statistical properties of the small particles. Hence the erratic movements of the heavy particle demonstrate that the atoms exist$^4$.
J.B. Perrin$^5$ was an experimentalist, who used (amongst other experimental techniques) direct measurements of the mean square displacement and Einstein's formula to determine Avogadro's number, and was awarded the Nobel Prize in Physics in 1926 in large part, it is said, due to this. Actually, Perrin proceeded to determine Boltzmann's constant and the electronic charge by his measurement of Avogadro's number, [58, p. 239].
\[
f_{W(t)}(x) = \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{x^2}{2t}}.
\]
We set $p(t, x) = \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{x^2}{2t}}$. Then $p(t, x)$ is the solution of the partial differential equation known as the diffusion (or the heat) equation [96, pp. 130−134]
\[
\frac{\partial}{\partial t} p(t, x) = \frac{1}{2}\frac{\partial^2}{\partial x^2} p(t, x), \tag{10.3}
\]
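A finite-difference sanity check (not a proof) that $p(t, x)$ solves the diffusion equation; the evaluation point and step size below are illustrative choices:

```python
import numpy as np

# Central finite differences: p_t should equal (1/2) p_xx at any (t, x).
p = lambda t, x: np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
t0, x0, eps = 1.3, 0.7, 1e-4
p_t = (p(t0 + eps, x0) - p(t0 - eps, x0)) / (2 * eps)
p_xx = (p(t0, x0 + eps) - 2 * p(t0, x0) + p(t0, x0 - eps)) / eps**2
residual = abs(p_t - 0.5 * p_xx)           # should vanish up to O(eps^2)
```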
3 The Avogadro constant expresses the number of elementary entities per mole of substance, c.f. [17, p.3].
4 More on this and the history of stochastic processes is found in L. Cohen: The History of Noise. IEEE Signal Processing
Magazine, vol. 1053, 2005.
5 Jean Baptiste Perrin 1870-1942, Perrin’s Nobel lecture with a discussion of Einstein’s work and Brownian movement is found
on
http://nobelprize.org/nobel prizes/physics/laureates/1926/perrin-lecture.html
6 is named after Norbert Wiener, 1894−1964, who constructed it as a stochastic process in mathematical terms, as given here,
and proved that the process has continuous sample paths that are nowhere differentiable.
http://www-groups.dcs.st-and.ac.uk/∼history/Biographies/Wiener Norbert.html
and can be interpreted as the density (in fact p.d.f.) at time t of a cloud issuing from a single point source at
time 0.
We shall study the one-dimensional Wiener process starting from the mathematical definition in 10.2.1
below and derive further properties from it. The Wiener process can be thought of as modelling the projection
of the position of the Brownian particle onto one of the axes of a coordinate system. A sample path of the
one-dimensional Wiener process is given in figure 10.3. In the literature, especially that emanating from British
universities, see, e.g., [26], this stochastic process is also known as the Brownian motion.
Apart from describing the motion of diffusing particles, the Wiener process is widely applied in mathematical
models involving various noisy systems, for example asset pricing at financial markets, c.f. [13, chapter 4].
Actually, Louis Bachelier (1870−1946)$^7$ is nowadays acknowledged as the first person to define the stochastic process called the Wiener process. This was included in his doctoral thesis with the title Théorie de la spéculation, 1900$^8$, reprinted, translated and commented in [27]. This thesis, which used the Wiener process to evaluate stock options, is historically the first contribution to use advanced mathematics in the study of finance. Hence, Bachelier is appreciated as a pioneer in the study of both financial mathematics and of stochastic processes.
7 http://www-groups.dcs.st-and.ac.uk/∼history/Biographies/Bachelier.html
8 R. Mazo, an expert in statistical mechanics, chooses to write in [78, p. 4]:
The subject of the thesis (by Bachelier) was a stochastic theory of speculation on the stock market, hardly a topic
likely to excite interest among physical scientists (or among mathematicians either).
\[
p(t, y, x) = \frac{1}{\sqrt{2\pi t}}\, e^{-\frac{(y - x)^2}{2t}}, \quad t > 0,\ -\infty < x < \infty,\ -\infty < y < \infty. \tag{10.4}
\]
Clearly $p(t, y, x)$ is, as a function of $y$, the p.d.f. of a random variable with the distribution $N(x, t)$. This $p(t, y, x)$ is in fact the transition p.d.f. of a Wiener process, as will be explained below.
Definition 10.2.1 The Wiener process a.k.a. Brownian motion is a stochastic process W = {W (t) |
t ≥ 0} such that
10.2. THE WIENER PROCESS: DEFINITION AND FIRST PROPERTIES 269
ii) for any $n$ and any finite sequence of times $0 < t_1 < t_2 < \ldots < t_n$ and any $x_1, x_2, \ldots, x_n$ the joint p.d.f. of $W(t_1), W(t_2), \ldots, W(t_n)$ is
1. We should perhaps first verify that (10.6) is in fact a joint p.d.f. It is clear that $f_{W(t_1),W(t_2),\ldots,W(t_n)} \ge 0$. Next from (10.6)
\[
\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} p(t_1, x_1, 0) \prod_{i=1}^{n-1} p(t_{i+1} - t_i, x_{i+1}, x_i)\,dx_1\cdots dx_n
= \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} p(t_1, x_1, 0)\cdots p(t_n - t_{n-1}, x_n, x_{n-1})\,dx_n\cdots dx_1.
\]
We integrate first with respect to $x_n$ and get
\[
\int_{-\infty}^{\infty} p(t_n - t_{n-1}, x_n, x_{n-1})\,dx_n = 1,
\]
since we have seen that p(tn − tn−1 , xn , xn−1 ) is the p.d.f. of N (xn−1 , tn − tn−1 ). An important thing is
that the integral is not a function of xn−1 . Hence we can next integrate the factor containing xn−1 w.r.t
xn−1 , whereby the second of the two factors containing xn−2 will disappear, and continue successively in
this way, and get that
\[
\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty} p(t_1, x_1, 0)\cdots p(t_n - t_{n-1}, x_n, x_{n-1})\,dx_n\cdots dx_1 = 1.
\]
The preceding computation indicates also how to prove that the Wiener process exists by the
Kolmogorov Consistency Theorem 9.1.1.
\[
= \frac{1}{\sqrt{2\pi s}}\, e^{-\frac{x^2}{2s}} \cdot \frac{1}{\sqrt{2\pi(t - s)}}\, e^{-\frac{(y - x)^2}{2(t - s)}}. \tag{10.8}
\]
In view of (10.7) $p(s, x, 0)$ is the marginal p.d.f. $f_{W(s)}(x)$ of $W(s)$. Hence it holds for all $x, y$ that
\[
\frac{f_{W(s),W(t)}(x, y)}{f_{W(s)}(x)} = \frac{1}{\sqrt{2\pi(t - s)}}\, e^{-\frac{(y - x)^2}{2(t - s)}},
\]
W (t) = Z + W (s),
where Z ∈ N (0, t − s) and Z is independent of W (s). We shall, however, in the sequel obtain this finding
as a by-product of a general result.
4. Hence we have the interpretation of p(t − s, x, y) as a transition p.d.f., since for any Borel set A and
t>s
\[
P(W(t) \in A \mid W(s) = x) = \int_A p(t - s, y, x)\,dy = \int_A \frac{1}{\sqrt{2\pi(t - s)}}\, e^{-\frac{(y - x)^2}{2(t - s)}}\,dy.
\]
This gives the probability of transition of the Wiener process from x at time s to the set A at time t.
The preceding should bring into mind the properties of a Gaussian process.
Proof: We make a change of variables in (10.6). We recall (2.71): if X has the p.d.f. fX (x), Y = AX + b,
and A is invertible, then Y has the p.d.f.
\[
f_Y(y) = \frac{1}{|\det A|}\, f_X\left(A^{-1}(y - b)\right). \tag{10.11}
\]
We are going to define a one-to-one linear transformation (with Jacobian $= 1$) between the $n$ variables of the Wiener process and its increments. We take any $n$ and any finite sequence of times $0 < t_1 < t_2 < \ldots < t_n$. We recall first
Z0 = W (t0 ) = W (0) = 0
so that
Z1 = W (t1 ) − W (t0 ) = W (t1 )
and then
\[
Z_i \stackrel{\text{def}}{=} W(t_i) - W(t_{i-1}), \quad i = 2, \ldots, n.
\]
The increments $\{Z_i\}_{i=1}^{n}$ are a linear transformation of $(W(t_i))_{i=1}^{n}$, or in matrix form
\[
\begin{pmatrix} Z_1 \\ Z_2 \\ \vdots \\ Z_{n-1} \\ Z_n \end{pmatrix}
=
\begin{pmatrix}
1 & 0 & \cdots & 0 & 0 & 0 \\
-1 & 1 & \cdots & 0 & 0 & 0 \\
0 & -1 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & -1 & 1 & 0 \\
0 & 0 & \cdots & 0 & -1 & 1
\end{pmatrix}
\begin{pmatrix} W(t_1) \\ W(t_2) \\ \vdots \\ W(t_{n-1}) \\ W(t_n) \end{pmatrix}. \tag{10.12}
\]
We write this as
\[
\begin{pmatrix} Z_1 \\ Z_2 \\ \vdots \\ Z_{n-1} \\ Z_n \end{pmatrix}
= A
\begin{pmatrix} W(t_1) \\ W(t_2) \\ \vdots \\ W(t_{n-1}) \\ W(t_n) \end{pmatrix}.
\]
The matrix A is lower triangular, and therefore its determinant is the product of the entries on the main
diagonal, see [92, p. 93]. Thus in the above $\det A = 1$, and the inverse $A^{-1}$ exists with $\det A^{-1} = \frac{1}{\det A} = 1$. Hence the Jacobian determinant $J$ is $= 1$.
It looks now, in view of (10.11), as if we are compelled to compute $A^{-1}$ and insert it in $f_{W(t_1),\ldots,W(t_n)}$. However, due to the special structure of $f_{W(t_1),\ldots,W(t_n)}$ in (10.6), we have a kind of stepwise procedure for this. By (10.6)
\[
f_{W(t_1),W(t_2),\ldots,W(t_n)}(x_1, x_2, \ldots, x_n) = p(t_1, x_1, 0)\prod_{i=2}^{n} p(t_i - t_{i-1}, x_i, x_{i-1}).
\]
Here, by (10.9),
\[
p(t_i - t_{i-1}, x_i, x_{i-1}) = f_{W(t_i)\mid W(t_{i-1}) = x_{i-1}}(x_i) = \frac{1}{\sqrt{2\pi(t_i - t_{i-1})}}\, e^{-\frac{(x_i - x_{i-1})^2}{2(t_i - t_{i-1})}}.
\]
Hence, if we know that W (ti−1 ) = xi−1 , then Zi = W (ti ) − xi−1 or W (ti ) = Zi + xi−1 and since we are
evaluating the p.d.f. at the point Zi = zi and W (ti ) = xi , we get
\[
p(t_i - t_{i-1}, x_i, x_{i-1}) = \frac{1}{\sqrt{2\pi(t_i - t_{i-1})}}\, e^{-\frac{z_i^2}{2(t_i - t_{i-1})}} = f_{Z_i}(z_i).
\]
Thus
\[
Z_i \in N(0, t_i - t_{i-1})
\]
and
\[
f_{Z_1,Z_2,\ldots,Z_n}(z_1, z_2, \ldots, z_n) = \prod_{i=1}^{n} f_{Z_i}(z_i).
\]
This shows that the increments are independent, and that
\[
f_{Z_1,Z_2,\ldots,Z_n}(z_1, z_2, \ldots, z_n) = \frac{1}{(2\pi)^{n/2}\sqrt{\det\Lambda}}\, e^{-z'\Lambda^{-1}z/2},
\]
where $\Lambda$ is the diagonal matrix
\[
\Lambda = \begin{pmatrix}
t_1 & 0 & \cdots & 0 & 0 \\
0 & t_2 - t_1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & t_{n-1} - t_{n-2} & 0 \\
0 & 0 & \cdots & 0 & t_n - t_{n-1}
\end{pmatrix}. \tag{10.13}
\]
Hence the matrix $\Lambda$ is a covariance matrix. In other words, $Z_1, Z_2, \ldots, Z_n$ has a joint Gaussian distribution $N(0, \Lambda)$ and since
\[
\begin{pmatrix} W(t_1) \\ W(t_2) \\ \vdots \\ W(t_{n-1}) \\ W(t_n) \end{pmatrix}
= A^{-1}
\begin{pmatrix} Z_1 \\ Z_2 \\ \vdots \\ Z_{n-1} \\ Z_n \end{pmatrix},
\]
then $(W(t_1), W(t_2), \ldots, W(t_{n-1}), W(t_n))$ has a joint Gaussian distribution
\[
N\left(0,\ A^{-1}\Lambda\left(A^{-1}\right)'\right). \tag{10.14}
\]
Since n and t1 , . . . , tn were arbitrary, we have now shown that the Wiener process is a Gaussian process.
Remark 10.2.2 The proof above is perhaps overly arduous, as the main idea is simple. The increments {Zi }ni=1
and W (t1 ), . . . , W (tn ), correspond to each other
which yields uniquely the Wiener process variables from the increments, remembering that W0 = W (0) = 0.
Thus, when we know that W (ti−1 ) = xi−1 , then Zi = W (ti ) − xi−1 and in the conditional p.d.f. p(ti −
ti−1 , xi , xi−1 ) the change of variable is simple.
A Gaussian process is uniquely determined by its mean function and its autocovariance function. We can
readily find the mean function µW (t) and the autocorrelation function RW (t, s). This will give us the matrices
$A^{-1}\Lambda(A^{-1})'$ in (10.14), too, but without any matrix operations. The mean function is from (10.7) and i) in the definition
\[
\mu_W(t) = E[W(t)] = 0, \quad t \ge 0. \tag{10.16}
\]
The autocorrelation function is
\[
R_W(t, s) = E[W(t)W(s)] = \min(t, s) \tag{10.17}
\]
for any $t, s \ge 0$.
The equation (10.17) implies that the covariance matrix $C_W$ of $(W(t_1), \ldots, W(t_n))'$, $0 < t_1 < t_2 < \ldots < t_n$, is
\[
C_W = \begin{pmatrix}
t_1 & t_1 & \cdots & t_1 & t_1 & t_1 \\
t_1 & t_2 & \cdots & t_2 & t_2 & t_2 \\
t_1 & t_2 & \cdots & t_3 & t_3 & t_3 \\
\vdots & \vdots & & \vdots & \vdots & \vdots \\
t_1 & t_2 & \cdots & t_{n-2} & t_{n-1} & t_{n-1} \\
t_1 & t_2 & \cdots & t_{n-2} & t_{n-1} & t_n
\end{pmatrix}. \tag{10.19}
\]
One could check that $C_W = A^{-1}\Lambda(A^{-1})'$, as it should be by (10.14). We have encountered the matrix $C_W$ in an exercise on autocovariance in section 9.7.1 of chapter 9 and shown without recourse to the Wiener process that $C_W$ is indeed a covariance matrix.
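The check that $C_W = A^{-1}\Lambda(A^{-1})'$ is easy to carry out numerically (the times below are illustrative choices):

```python
import numpy as np

# A is the increments matrix (1 on the diagonal, -1 on the subdiagonal),
# Lambda the diagonal matrix of t_1, t_2 - t_1, ..., t_n - t_{n-1}.
t = np.array([0.4, 1.1, 2.0, 3.7, 5.0])
n = len(t)
A = np.eye(n) - np.eye(n, k=-1)
Lam = np.diag(np.diff(np.concatenate(([0.0], t))))
CW = np.linalg.inv(A) @ Lam @ np.linalg.inv(A).T
matches = np.allclose(CW, np.minimum.outer(t, t))   # C_W[i,j] = min(t_i, t_j)
```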
Lemma 10.2.3
\[
E\left[(W(t) - W(s))^2\right] = |t - s| \tag{10.20}
\]
for any $t, s \ge 0$.
Proof
\[
E\left[(W(t) - W(s))^2\right] = E\left[W^2(t) - 2W(t)W(s) + W^2(s)\right]
= E\left[W^2(t)\right] - 2E[W(t)W(s)] + E\left[W^2(s)\right]
= t - 2\min(t, s) + s
\]
by (10.17) and (10.7). Then we have
\[
= \begin{cases} t - 2s + s = t - s & \text{if } t > s, \\ t - 2t + s = s - t & \text{if } s > t. \end{cases}
\]
By definition of absolute value,
\[
|t - s| = \begin{cases} t - s & t > s, \\ -(t - s) = s - t & s > t. \end{cases} \tag{10.21}
\]
Thus
\[
E\left[(W(t) - W(s))^2\right] = |t - s|.
\]
Proof Because the Wiener process is a Gaussian process, W (t) − W (s) is a Gaussian random variable. The
rest of the proof follows by (10.16) and (10.20).
The result in the following lemma is already found in the proof of theorem 10.2.1, but we state and prove it
anew for ease of reference and benefit of learning.
Lemma 10.2.5 For a Wiener process and for 0 ≤ u ≤ v ≤ s ≤ t
Theorem 10.2.6 For a Wiener process and any finite sequence of times $0 < t_1 < t_2 < \ldots < t_n$ the increments $W(t_1), W(t_2) - W(t_1), \ldots, W(t_n) - W(t_{n-1})$ are independent.
It follows also by the above that the increments of the Wiener process are strictly stationary, since for all $n$ and $h$
\[
\left(W(t_1) - W(t_0), W(t_2) - W(t_1), \ldots, W(t_n) - W(t_{n-1})\right)
\stackrel{d}{=} \left(W(t_1 + h) - W(t_0 + h), W(t_2 + h) - W(t_1 + h), \ldots, W(t_n + h) - W(t_{n-1} + h)\right),
\]
by (10.20).
Let us solve this with $R(t, s) = \min(t, s)$ in $[0, T]$; we follow [103, p. 87]. We insert to get
\[
\int_0^T \min(t, s)\,e_i(s)\,ds = \lambda_i e_i(t), \tag{10.25}
\]
or
\[
\int_0^t s\,e_i(s)\,ds + t\int_t^T e_i(s)\,ds = \lambda_i e_i(t). \tag{10.26}
\]
This is a case, where we can solve an integral equation by reducing it to an ordinary differential equation. We differentiate thus once w.r.t. $t$ in (10.26) and get
\[
\int_t^T e_i(s)\,ds = \lambda_i e_i'(t), \tag{10.27}
\]
which is an interesting expression for $\min(t, s)$ in $[0, T] \times [0, T]$ in its own right. In addition, by example 9.2.3 we can construct the Wiener process as
\[
W(t) = \sum_{i=0}^{\infty} \frac{\sqrt{2T}}{\pi\left(i + \frac{1}{2}\right)} \cdot X_i \cdot \sin\left(\left(i + \frac{1}{2}\right)\pi\frac{t}{T}\right), \quad t \in [0, T],
\]
where the $X_i$ are I.I.D. and $N(0, 1)$. We are omitting further details that would enable us to prove almost sure convergence of the series [7, pp. 7−9].
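A numerical sketch of this series construction at the level of covariances: the truncated sum of eigenvalues times eigenfunctions reproduces $\min(t, s)$ on $[0, T]$ ($T$, the evaluation points and the truncation level $M$ are illustrative choices):

```python
import numpy as np

# Truncated Karhunen-Loeve expansion of the covariance min(t, s) on [0, T]:
# eigenvalues (T/(pi(i+1/2)))^2, eigenfunctions sqrt(2/T) sin((i+1/2) pi t/T).
T, M = 1.0, 20_000
i = np.arange(M)
lam = (T / (np.pi * (i + 0.5))) ** 2
t = np.array([0.2, 0.5, 0.9])
phi = np.sqrt(2.0 / T) * np.sin((i + 0.5) * np.pi * t[:, None] / T)
cov = (lam * phi[0] * phi).sum(axis=1)      # approximates min(t[0], t)
err = np.abs(cov - np.minimum(t[0], t)).max()
```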
and hence the Wiener process is continuous in quadratic mean in the sense of the definition 9.3.2. As is known,
convergence in quadratic mean does not imply convergence almost surely. Hence the result in the following
section requires a full proof, which is of a higher degree of sophistication than (10.29). As we shall see below,
the actual proof does exploit (10.29), too.
10.4.1 The Sample Paths of the Wiener Process are Almost Surely Continuous
We need an additional elementary fact:
\[
Z \in N(0, \sigma^2) \Rightarrow E\left[Z^4\right] = 3\sigma^4. \tag{10.30}
\]
This can be found by the fourth derivative of either the moment generating or the characteristic function and
evaluation of this fourth derivative at zero9 .
We shall now start the proof of the statement in the title of this section following [103, pp. 57−58] and [7, chap. 1]. The next proof can be omitted at first reading.
The Markov inequality (1.38) gives for every $\varepsilon > 0$ and $h > 0$
\[
P(|W(t + h) - W(t)| \ge \varepsilon) \le \frac{E\left[|W(t + h) - W(t)|^4\right]}{\varepsilon^4}.
\]
$^9$ By (4.50) the general rule is given as follows. If $Z \in N(0, \sigma^2)$, then
\[
E[Z^n] = \begin{cases} 0 & n \text{ is odd,} \\ \frac{(2k)!}{2^k k!}\,\sigma^{2k} & n = 2k,\ k = 0, 1, 2, \ldots \end{cases}
\]
10.4. THE SAMPLE PATHS OF THE WIENER PROCESS 277
The reason for selecting above the exponent $= 4$ becomes clear eventually. By (10.29) and (10.30) we get
\[
E\left[(W(t + h) - W(t))^4\right] = 3h^2.
\]
Therefore
\[
P(|W(t + h) - W(t)| \ge h^{\gamma}) \le 3h^{2 - 4\gamma}.
\]
These are preparations for an application of the Borel-Cantelli lemma. With that strategy in
mind we consider for nonnegative integers v the random variables
\[
Z_v \stackrel{\text{def}}{=} \sup_{0 \le k \le 2^v - 1} \left|W\left((k + 1)/2^v\right) - W\left(k/2^v\right)\right|.
\]
Then
\[
P\left(Z_v \ge \left(\frac{1}{2^v}\right)^{\gamma}\right) \le P\left(\bigcup_{k=0}^{2^v - 1}\left\{\left|W\left((k + 1)/2^v\right) - W\left(k/2^v\right)\right| \ge \left(\frac{1}{2^v}\right)^{\gamma}\right\}\right),
\]
since if $Z_v \ge \left(\frac{1}{2^v}\right)^{\gamma}$, then there is at least one increment such that $|W((k + 1)/2^v) - W(k/2^v)| \ge \left(\frac{1}{2^v}\right)^{\gamma}$. Then by subadditivity, or if $A \subset \cup_i A_i$ then $P(A) \le \sum_i P(A_i)$, see chapter 1,
\[
\le \sum_{k=0}^{2^v - 1} P\left(\left|W\left((k + 1)/2^v\right) - W\left(k/2^v\right)\right| \ge \left(\frac{1}{2^v}\right)^{\gamma}\right) \le 3 \cdot 2^v \left(\frac{1}{2^v}\right)^{1 + \delta} = 3 \cdot 2^{-\delta v},
\]
where we used (10.31). Since $\sum_{v=0}^{\infty} 2^{-\delta v} < \infty$ we have
\[
\sum_{v=0}^{\infty} P\left(Z_v \ge \frac{1}{2^{v\gamma}}\right) < \infty.
\]
Hence, by the Borel-Cantelli lemma, almost surely for all sufficiently large $v$
\[
Z_v \le \frac{1}{2^{v\gamma}}
\]
and therefore
\[
\lim_{n \to \infty} \sum_{v=n+1}^{\infty} Z_v = 0, \quad \text{a.s.}
\]
as n → ∞, where T is any finite interval ⊂ [0, ∞). This assertion is intuitively plausible, but requires
a detailed analysis omitted here, see [103, p. 86] for details.
By the preceding we have in bits and pieces more or less established the following theorem, which is frequently invoked as the very definition of the Wiener process, see [13, chapter 2].
Theorem 10.4.1 A stochastic process $\{W(t) \mid t \ge 0\}$ is a Wiener process if and only if the following four conditions are true:
1) $W(0) = 0$.
2) The increments of $\{W(t)\}$ over non-overlapping intervals are independent.
3) For $0 \le s < t$, the increment $W(t) - W(s) \in N(0, t - s)$.
4) The sample paths of $\{W(t)\}$ are almost surely continuous.
10.4.2 The Sample Paths of the Wiener Process are Almost Surely Nowhere
Differentiable; Quadratic Variation of the Sample Paths
We are not going to prove the following theorem.
Theorem 10.4.2 The Wiener process {W (t) | t ≥ 0} is almost surely non-differentiable at any t ≥ 0.
We shall next present two results, namely lemma 10.4.3 and theorem 10.4.4, that contribute to the understanding
of the statement about differentiation of the Wiener process. Let for $i = 0, 1, 2, \ldots, n$
\[
t_i^{(n)} = \frac{iT}{n}.
\]
Clearly $0 = t_0^{(n)} < t_1^{(n)} < \ldots < t_n^{(n)} = T$ is a partition of $[0, T]$ into $n$ equal parts. We denote by
\[
\triangle_i^n W \stackrel{\text{def}}{=} W\left(t_{i+1}^{(n)}\right) - W\left(t_i^{(n)}\right) \tag{10.32}
\]
the corresponding increment of the Wiener process. For future reference we say that the random quadratic variation of the Wiener process is the random variable
\[
\sum_{i=0}^{n-1}\left(\triangle_i^n W\right)^2.
\]
Lemma 10.4.3 asserts that this random variable converges in quadratic mean to $T$ as $n \to \infty$.
Thus
\[
E\left[\left(\sum_{i=0}^{n-1}\left(\triangle_i^n W\right)^2 - T\right)^2\right] = E\left[\left(\sum_{i=0}^{n-1}\left(\left(\triangle_i^n W\right)^2 - \frac{T}{n}\right)\right)^2\right]
\]
\[
= \sum_{i=0}^{n-1} E\left[\left(\left(\triangle_i^n W\right)^2 - \frac{T}{n}\right)^2\right] + 2\sum_{i<j} E\left[\left(\left(\triangle_i^n W\right)^2 - \frac{T}{n}\right)\left(\left(\triangle_j^n W\right)^2 - \frac{T}{n}\right)\right]. \tag{10.34}
\]
By Theorem 10.2.6 the increments of the Wiener process are independent, when considered over non-overlapping intervals. Thus
\[
E\left[\left(\left(\triangle_i^n W\right)^2 - \frac{T}{n}\right)\left(\left(\triangle_j^n W\right)^2 - \frac{T}{n}\right)\right] = E\left[\left(\triangle_i^n W\right)^2 - \frac{T}{n}\right] E\left[\left(\triangle_j^n W\right)^2 - \frac{T}{n}\right].
\]
By (10.20) we get
\[
E\left[\left(\triangle_i^n W\right)^2\right] = E\left[\left(\triangle_j^n W\right)^2\right] = \frac{T}{n},
\]
and the cross products in (10.34) vanish. Thus we have obtained
\[
E\left[\left(\sum_{i=0}^{n-1}\left(\triangle_i^n W\right)^2 - T\right)^2\right] = \sum_{i=0}^{n-1} E\left[\left(\left(\triangle_i^n W\right)^2 - \frac{T}{n}\right)^2\right].
\]
In view of (10.20)
\[
E\left[\left(\triangle_i^n W\right)^2\right] = \frac{T}{n},
\]
and thus (10.30) entails
\[
E\left[\left(\triangle_i^n W\right)^4\right] = \frac{3T^2}{n^2}.
\]
Thus
\[
E\left[\left(\sum_{i=0}^{n-1}\left(\triangle_i^n W\right)^2 - T\right)^2\right] = \sum_{i=0}^{n-1}\left(\frac{3T^2}{n^2} - \frac{2T^2}{n^2} + \frac{T^2}{n^2}\right) = \sum_{i=0}^{n-1}\frac{2T^2}{n^2} = \frac{2T^2}{n}.
\]
Hence the assertion follows as claimed, when $n \to \infty$.
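A simulation sketch of the lemma: the random quadratic variation over a partition of $[0, T]$ into $n$ equal parts is close to $T$ for large $n$ ($T$, $n$ and the seed are illustrative choices):

```python
import numpy as np

# The increments over the equal partition are independent N(0, T/n), so the
# sum of their squares has mean T and variance 2 T^2 / n, as derived above.
rng = np.random.default_rng(7)
T, n = 2.0, 100_000
dW = rng.normal(0.0, np.sqrt(T / n), size=n)
quad_var = np.sum(dW**2)                     # close to T for large n
```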
In the theory of stochastic calculus, see e.g., [70, p. 62], one introduces the notation
\[
[W, W]([0, T]) \stackrel{\text{def}}{=} \lim_{n \to \infty} \sum_{i=0}^{n-1}\left(\triangle_i^n W\right)^2,
\]
or in [29, p. 86],
\[
\langle W \rangle_T \stackrel{\text{def}}{=} \lim_{n \to \infty} \sum_{i=0}^{n-1}\left(\triangle_i^n W\right)^2,
\]
and refers to $[W, W]([0, T])$ as quadratic variation, too, but for our purposes we need not load the presentation with these brackets.
We need to recall a definition from mathematical analysis [36, p. 54]: the mesh of a partition is
\[
\triangle = \max_{i=0,\ldots,n-1} |t_{i+1} - t_i|.
\]
The following theorem 10.4.4 implies that the length of a sample path of the Wiener process in any finite interval
is infinite. Hence we understand that a simulated sample path like the one depicted in figure 10.3 cannot be
but a computer approximation.
At this point we should pay attention to Brownian Scaling. If $\{W(t) \mid t \ge 0\}$ is the Wiener process, we define for $c > 0$ a new process by
\[
V(t) \stackrel{\text{def}}{=} \frac{1}{c} W(c^2 t).
\]
c
An exercise below shows that {V (t) | t ≥ 0} is a Wiener process. In words, if one magnifies the process
{W (t)|t ≥ 0}, i.e., chooses a small c, while at the same time looking at the process in a small neighborhood
of origin, then one sees again a process, which is statistically identical with the original Wiener process. In
another of the exercises we study Time Reversal
def 1
V (t) = tW ,
t
in which we, for small values of t, we look at the Wiener process at infinity, and scale it back to small amplitudes,
and again we are looking at the Wiener process.
These phenomenona are known as self-similarity and explain intuitively that the length of a sample path of
the Wiener process in any finite interval must be infinite.
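A minimal numerical check of Brownian scaling (again a sketch assuming NumPy; the values of c, t and the sample size are illustrative): the scaled variable V(t) = W(c²t)/c should have the N(0, t) marginal of a Wiener process.

```python
import numpy as np

rng = np.random.default_rng(1)
c, t, n_paths = 0.5, 1.0, 200_000

# W(c^2 t) ~ N(0, c^2 t), hence V(t) = W(c^2 t) / c should be N(0, t).
W_c2t = rng.normal(0.0, np.sqrt(c**2 * t), size=n_paths)
V_t = W_c2t / c

print(V_t.mean())   # close to 0
print(V_t.var())    # close to t = 1.0
```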
Theorem 10.4.4 The total variation of the sample paths of the Wiener process on any interval [0, T] is infinite.
Proof As in lemma 10.4.3 we consider the sequence of partitions t_0^{(n)}, t_1^{(n)}, \ldots, t_n^{(n)} of [0, T] into n equal parts.
Then with the notation of (10.32) we get
\[
\sum_{i=0}^{n-1} \mid \triangle_i^n W \mid^2 \;\le\; \max_{i=0,1,\ldots,n-1} \mid \triangle_i^n W \mid \; \sum_{i=0}^{n-1} \mid \triangle_i^n W \mid. \tag{10.35}
\]
Since by lemma 10.4.3
\[
\sum_{i=0}^{n-1} (\triangle_i^n W)^2 \xrightarrow{2} T > 0,
\]
as n → ∞, there is (this is a general fact about the relationship between almost sure and mean square
convergence) a subsequence n_k such that
\[
\sum_{i=0}^{n_k - 1} (\triangle_i^n W)^2 \xrightarrow{\mathrm{a.s.}} T, \tag{10.37}
\]
as k → ∞.
Next, from (10.35),
\[
\frac{\sum_{i=0}^{n-1} \mid \triangle_i^n W \mid^2}{\max_{i=0,1,\ldots,n-1} \mid \triangle_i^n W \mid} \le \sum_{i=0}^{n-1} \mid \triangle_i^n W \mid.
\]
Since the sample paths of the Wiener process are almost surely continuous, hence uniformly continuous, on
[0, T], we have \max_i \mid \triangle_i^n W \mid \to 0 almost surely as the subsequences of partitions of [0, T] become more
and more refined as k increases. Along the subsequence n_k the numerator on the left hand side tends almost
surely to T > 0, so the right hand side must tend to infinity.
We make a summary and an interpretation of the preceding. Take the partition of the time axis used in lemma
10.4.3 and set
\[
S_n = \sum_{i=0}^{n-1} (\triangle_i^n W)^2.
\]
The important fact that emerged above is that the variance of S_n is negligible compared to its expectation, or
\[
E[S_n] = T, \qquad \mathrm{Var}[S_n] = \frac{2T^2}{n}.
\]
Thus, the expectation of S_n is constant, whereas the variance of S_n converges to zero, as n grows to infinity.
Hence S_n must converge to a non-random quantity. We write this as
\[
\int_0^t [dW]^2 = t
\]
or
\[
[dW]^2 = dt. \tag{10.38}
\]
The formula (10.38) is a starting point for the intuitive handling of the differentials behind Itô's formula in
stochastic calculus, see [13, pp. 50−55], [62, pp. 32−36], [68, chapter 5] and [29, 70].
Theorem 10.4.5 For any t_1 < \ldots < t_{n-1} < t_n and any x_1, \ldots, x_{n-1}, x_n
\[
P(W(t_n) \le x_n \mid W(t_1) = x_1, \ldots, W(t_{n-1}) = x_{n-1}) = P(W(t_n) \le x_n \mid W(t_{n-1}) = x_{n-1}). \tag{10.39}
\]
Proof
\[
P(W(t_n) \le x_n \mid W(t_1) = x_1, \ldots, W(t_{n-1}) = x_{n-1})
= \int_{-\infty}^{x_n} f_{W(t_n) \mid W(t_1) = x_1, \ldots, W(t_{n-1}) = x_{n-1}}(v)\, dv
\]
\[
= \int_{-\infty}^{x_n} \frac{f_{W(t_1), \ldots, W(t_{n-1}), W(t_n)}(x_1, \ldots, x_{n-1}, v)}{f_{W(t_1), \ldots, W(t_{n-1})}(x_1, \ldots, x_{n-1})}\, dv,
\]
and we use (10.6) to get
\[
\int_{-\infty}^{x_n} \frac{p(t_1, x_1, 0)\, p(t_2 - t_1, x_2, x_1) \cdots p(t_n - t_{n-1}, v, x_{n-1})}{p(t_1, x_1, 0)\, p(t_2 - t_1, x_2, x_1) \cdots p(t_{n-1} - t_{n-2}, x_{n-1}, x_{n-2})}\, dv
= \int_{-\infty}^{x_n} p(t_n - t_{n-1}, v, x_{n-1})\, dv = P(W(t_n) \le x_n \mid W(t_{n-1}) = x_{n-1}).
\]
The Wiener process is a Gaussian Markov process and its autocorrelation function is RW (t, s) = min(t, s).
Then, if t0 < s < t we check (9.37) by
10.5.1 Definition
Definition 10.5.1 Let {W(t) | t ≥ 0} be a Wiener process and f(t) be a function such that \int_a^b f^2(t)\, dt < \infty,
where 0 ≤ a < b ≤ +∞. The mean square integral with respect to the Wiener process, or the Wiener integral,
\int_a^b f(t)\, dW(t), is defined as the mean square limit
\[
\sum_{i=1}^{n} f(t_{i-1}) \left( W(t_i) - W(t_{i-1}) \right) \xrightarrow{2} \int_a^b f(t)\, dW(t), \tag{10.40}
\]
where a = t_0 < t_1 < \ldots < t_{n-1} < t_n = b and \max_i \mid t_i - t_{i-1} \mid \to 0 as n → ∞.
In general, we know that the sample paths of the Wiener process have unbounded total variation, but have by
lemma 10.4.3 finite quadratic variation. Hence we must define \int_a^b f(t)\, dW(t) using mean square convergence,
which means that we are looking at all sample paths simultaneously.
The reader should note the similarities and differences between the left hand side of (10.40) and
the discrete stochastic integral in (3.56) above.
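The mean square limit in (10.40) can be illustrated by simulating the approximating sums on a fixed partition. The sketch below (assuming NumPy; the integrand f(t) = t on [0, 1] and the sample sizes are our illustrative choices) checks that the sums have mean 0 and variance close to \(\int_0^1 f^2(t)\,dt = 1/3\), anticipating properties (10.42) and (10.43).

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_paths = 500, 20_000
t = np.linspace(0.0, 1.0, n + 1)
f = t[:-1]   # f(t) = t evaluated at the left endpoints t_{i-1}, as in (10.40)

# Independent Wiener increments W(t_i) - W(t_{i-1}) ~ N(0, 1/n) for each path.
dW = rng.normal(0.0, np.sqrt(1.0 / n), size=(n_paths, n))
I = dW @ f   # approximating sums: sum_i f(t_{i-1}) (W(t_i) - W(t_{i-1}))

print(I.mean())   # close to 0
print(I.var())    # close to int_0^1 t^2 dt = 1/3
```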
In physics the Wiener integral is a name for a different mathematical concept, namely that of a path integral.
By this we refer to an integral of a functional of the Wiener process with respect to the Wiener measure, which
is a probability measure on the set of continuous functions over [0, T ], see [78, chapter 6].
Remark 10.5.1 As pointed out in [105, p. 88], Wiener himself introduced the integral later named after him
by the formula of integration by parts
\[
\int_a^b f(t)\, dW(t) = \left[ f(t) W(t) \right]_a^b - \int_a^b W(t)\, df(t), \tag{10.41}
\]
where the function f(t) is assumed to have bounded total variation in the sense of definition 10.4.1. As the sample
functions of a Wiener process are continuous, the right hand side is well-defined, inasmuch as the integral is a
Stieltjes integral [69, chapter 6.8].
That is, we have t_i = \frac{i}{n} in the definition, see eq. (10.40), and 0 = t_0 < t_1 < \ldots < t_{n-1} < t_n = 1. We expect this to
converge to
\[
X_n \xrightarrow{2} \int_0^1 e^{-\lambda u}\, dW(u),
\]
and then
\[
Y_i \in N\left( 0, \frac{1}{n} \right)
\]
for all 1 ≤ i ≤ n. Thus, as a linear combination of normal random variables,
\[
X_n = \sum_{i=1}^{n} e^{-\lambda \frac{i-1}{n}} Y_i
\]
is normal, and since the increments (i.e., here the Y_i) of a Wiener process are independent for non-overlapping
intervals,
\[
\mathrm{Var}(X_n) = \sum_{i=1}^{n} e^{-2\lambda \frac{i-1}{n}} \mathrm{Var}(Y_i) = \frac{1}{n} \sum_{i=1}^{n} e^{-2\lambda \frac{i-1}{n}}.
\]
We can check the convergence in distribution by means of this without invoking Riemann sums. In fact we have
\[
\frac{1}{n} \sum_{i=1}^{n} e^{-2\lambda \frac{i-1}{n}} = \frac{1}{n} \sum_{i=0}^{n-1} e^{-2\lambda \frac{i}{n}} = \frac{1}{n} \cdot \frac{1 - e^{-2\lambda}}{1 - e^{-2\lambda \frac{1}{n}}}.
\]
We write this as
\[
\frac{1}{n} \cdot \frac{1 - e^{-2\lambda}}{1 - e^{-2\lambda \frac{1}{n}}} = \frac{1 - e^{-2\lambda}}{\dfrac{1 - e^{-2\lambda \frac{1}{n}}}{\frac{1}{n}}}.
\]
With f(x) = e^{-2\lambda x} we have
\[
\frac{1 - e^{-2\lambda \frac{1}{n}}}{\frac{1}{n}} = - \frac{f\left(\frac{1}{n}\right) - f(0)}{\frac{1}{n}} \to -f'(0) = 2\lambda,
\]
as n → ∞. Hence
\[
\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} e^{-2\lambda \frac{i-1}{n}} = \frac{1 - e^{-2\lambda}}{2\lambda}.
\]
We note that \int_0^1 e^{-2\lambda u}\, du = \frac{1 - e^{-2\lambda}}{2\lambda}. Thus we have shown that
\[
X_n \xrightarrow{d} \int_0^1 e^{-\lambda u}\, dW(u) \in N\left( 0, \int_0^1 e^{-2\lambda u}\, du \right),
\]
as n → ∞.
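The limit of the variance sum is elementary to check numerically. A deterministic sketch (assuming NumPy; λ = 1.5 is an arbitrary illustrative choice) compares the finite-n variance of X_n with (1 − e^{−2λ})/(2λ).

```python
import numpy as np

lam = 1.5   # illustrative value of lambda

def var_xn(n: int) -> float:
    """Exact variance of X_n: (1/n) * sum_{i=1}^n exp(-2 lam (i-1)/n)."""
    i = np.arange(n)
    return float(np.exp(-2.0 * lam * i / n).sum() / n)

limit = (1.0 - np.exp(-2.0 * lam)) / (2.0 * lam)   # = int_0^1 e^{-2 lam u} du
print(var_xn(10), var_xn(1000), limit)
```

For n = 1000 the Riemann-sum-free geometric computation above and the integral agree to three decimals.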
10.5.2 Properties
Since (10.40) defines the Wiener integral in terms of convergence in mean square, we can easily adapt the
techniques in section 9.2 to this case and derive some of the basic properties of the Wiener integral defined in
(10.40).
2.
\[
E\left[ \int_a^b f(t)\, dW(t) \right] = 0. \tag{10.42}
\]
This follows in the same way as the proof of the analogous statement in theorem 9.2.1, since
\[
E\left[ \sum_{i=1}^{n} f(t_{i-1}) \left( W(t_i) - W(t_{i-1}) \right) \right] = \sum_{i=1}^{n} f(t_{i-1}) E\left[ W(t_i) - W(t_{i-1}) \right] = 0.
\]
3.
\[
\mathrm{Var}\left[ \int_a^b f(t)\, dW(t) \right] = \int_a^b f^2(t)\, dt. \tag{10.43}
\]
This follows again as in the proof of the analogous statement in theorem 9.2.1, since by theorem 10.2.6
the increments of the Wiener process over non-overlapping intervals are independent,
\[
\mathrm{Var}\left[ \sum_{i=1}^{n} f(t_{i-1}) \left( W(t_i) - W(t_{i-1}) \right) \right]
= \sum_{i=1}^{n} f^2(t_{i-1})\, \mathrm{Var}\left[ W(t_i) - W(t_{i-1}) \right]
= \sum_{i=1}^{n} f^2(t_{i-1}) (t_i - t_{i-1}),
\]
which is a Riemann sum converging to \int_a^b f^2(t)\, dt.
5. If \int_a^b f^2(t)\, dt < \infty and \int_a^b g^2(t)\, dt < \infty,
\[
E\left[ \int_a^b f(t)\, dW(t) \int_a^b g(t)\, dW(t) \right] = \int_a^b f(t) g(t)\, dt. \tag{10.45}
\]
Here we see a case of the heuristics in (10.38) in operation, too. To prove this, we fix a = t_0 < t_1 < \ldots <
t_{n-1} < t_n = b and start with the approximating sums, or,
\[
E\left[ \sum_{i=1}^{n} f(t_{i-1}) \left( W(t_i) - W(t_{i-1}) \right) \cdot \sum_{j=1}^{n} g(t_{j-1}) \left( W(t_j) - W(t_{j-1}) \right) \right]
\]
\[
= \sum_{i=1}^{n} \sum_{j=1}^{n} f(t_{i-1}) g(t_{j-1})\, E\left[ \left( W(t_i) - W(t_{i-1}) \right) \cdot \left( W(t_j) - W(t_{j-1}) \right) \right],
\]
and as by theorem 10.2.6 the increments of the Wiener process over non-overlapping intervals are independent,
\[
= \sum_{i=1}^{n} f(t_{i-1}) g(t_{i-1}) (t_i - t_{i-1}) \to \int_a^b f(t) \cdot g(t)\, dt,
\]
as the mesh of the partition tends to zero.
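Property (10.45) (with (10.42) as a by-product) can be probed by Monte Carlo on the approximating sums. A sketch assuming NumPy, with the illustrative integrands f(t) = sin t and g(t) = cos t on [0, 1], for which \(\int_0^1 f g\, dt = \sin^2(1)/2\):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_paths = 400, 40_000
t = np.linspace(0.0, 1.0, n + 1)[:-1]
f, g = np.sin(t), np.cos(t)    # two square integrable integrands on [0, 1]

dW = rng.normal(0.0, np.sqrt(1.0 / n), size=(n_paths, n))
If, Ig = dW @ f, dW @ g        # approximating sums for the two Wiener integrals

# (10.45): E[ int f dW * int g dW ] = int_0^1 sin(t) cos(t) dt = sin(1)^2 / 2.
print((If * Ig).mean())
print(np.sin(1.0) ** 2 / 2.0)
```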
To establish this claim, let I_{[0,t]}(u) = 1 if 0 ≤ u ≤ t and I_{[0,t]}(u) = 0 otherwise. This is natural, but
cannot be argued by differentiation. To establish (10.48) we write, using (10.47),
\[
Y(t) \stackrel{\mathrm{def}}{=} \int_0^t dW(s) = \int_0^\infty I_{[0,t]}(s)\, dW(s).
\]
Example 10.5.3 By (10.48) and the first property of the Wiener integral we can write for any τ > 0
\[
W(t + \tau) - W(t) \stackrel{d}{=} \int_t^{t+\tau} dW(s), \tag{10.50}
\]
so that W(t + τ) − W(t) ∈ N(0, τ), as it should, c.f., (10.20). The integral in (10.50) is sometimes called a
gliding window smoother, see [97].
We assume for the sake of simplicity that f(u) > 0 for all u. We let
\[
\tau(t) = \inf\left\{ s \,\middle|\, \int_0^s f^2(u)\, du = t \right\},
\]
or, τ(t) is the time when \int_0^s f^2(u)\, du, as an increasing function of s, first reaches the level t > 0. Evidently
t \mapsto \tau(t) is one-to-one, or, invertible, and the inverse is
\[
\tau^{-1}(s) = \int_0^s f^2(u)\, du.
\]
Hence, if we define
\[
V(t) = Y(\tau(t)),
\]
then {V(t) | t ≥ 0} is a Wiener process. Furthermore,
\[
Y(t) \stackrel{d}{=} V\left( \tau^{-1}(t) \right) = W\left( \tau^{-1}(t) \right), \tag{10.51}
\]
which shows that a Wiener integral is a Wiener process on a distorted or scrambled time scale.
where δ(t − s) is the Dirac delta, see [96, p. 354]. As stated in [96, loc.cit], δ(t − s) is not a function in
the ordinary sense, but has to be regarded as a distribution, not in the sense of probability theory, but in the
sense of the theory of generalized functions (which is a class of functionals on the set of infinitely differentiable
functions with support in a bounded set).
Let us, as a piece of formal treatment, c.f., (10.48), set
\[
W(t) = \int_0^t \overset{o}{W}(u)\, du. \tag{10.53}
\]
Then we get by a formal manipulation with the rules for integrals above that
\[
E[W(t) W(s)] = \int_0^t \int_0^s E\left[ \overset{o}{W}(u)\, \overset{o}{W}(v) \right] du\, dv
= \int_0^t \int_0^s \delta(u - v)\, du\, dv.
\]
The 'delta function' δ(u − v) is zero if u ≠ v and acts (inside an integral) according to
\[
\int_{-\infty}^{\infty} f(v) \delta(u - v)\, dv = f(u).
\]
Thus
\[
\int_0^t \int_0^s \delta(u - v)\, du\, dv = \int_0^{\min(t,s)} dv = \min(t, s).
\]
Hence, if we think of the process with variables \overset{o}{W}(u) as being Gaussian, then the process introduced by the
variables W(t) in (10.53) is like a Wiener process! Of course, by (10.53) one should then get
\[
\frac{d}{dt} W(t) = \overset{o}{W}(t),
\]
which is not possible, as the Wiener process has almost surely non-differentiable sample paths. Hence, the white
noise makes little, or perhaps should make no, sense. One can, nevertheless, introduce linear time invariant
filters, c.f. the exercises in section 9.7.6, with white noise as input, or
\[
Y(t) = \int_{-\infty}^{\infty} G(t - s)\, \overset{o}{W}(s)\, ds,
\]
and compute the autocovariances and spectral densities of the output process in a very convenient way. Thus,
despite the fact that the white noise does not exist as a stochastic process in our sense, it can be formally
manipulated to yield useful results, at least as long as one does not try to do any non-linear operations. A
consequence of (10.52) is that the spectral density of the white noise is a constant for all frequencies.
(To 'check' this, insert s_{\overset{o}{W}}(f) = 1 in the right hand side of (9.25).) In engineering, see, e.g., [77, 105], the white
noise is thought of as an approximation of a weakly stationary process that has a power spectral density which
is constant over a very wide band of frequencies and then equals, or decreases rapidly, to zero. An instance of
this argument will be demonstrated later in section 11.4 on thermal noise.
Definition 10.6.1 Let \mathcal{F} be a sigma field of subsets of Ω. Let T be an index set, so that for every t ∈ T, \mathcal{F}_t
is a sigma field ⊂ \mathcal{F} and
\[
\mathcal{F}_s \subset \mathcal{F}_t, \quad s < t. \tag{10.55}
\]
Then we call the family of sigma fields (\mathcal{F}_t)_{t \in T} a filtration.
Definition 10.6.2 Let X = {X(t) | t ∈ T} be a stochastic process on (Ω, \mathcal{F}, P). Then we call X a martingale
with respect to the filtration (\mathcal{F}_t)_{t \in T}, if
\[
\mathcal{F}_t^W \stackrel{\mathrm{def}}{=} \text{the sigma field generated by } W(s) \text{ for } 0 \le s \le t.
\]
We write this as
\[
\mathcal{F}_t^W = \sigma\left( W(s);\, 0 \le s \le t \right).
\]
We should read this according to the relevant definition 1.5.3 in chapter 1. We take any number of indices
t_1, \ldots, t_n, all t_i ≤ s. The sigma field \mathcal{F}_{t_1, \ldots, t_n, s}^W generated by the random variables W(t_i), i = 1, \ldots, n, is defined
to be the smallest σ field containing all events of the form {ω : W(t_i)(ω) ∈ A} ∈ \mathcal{F}, A ∈ \mathcal{B}, where \mathcal{B} is the Borel
σ field over R.
By independent increments, theorem 10.2.6, and lemma 10.2.4, eq. (10.22), we get that
\[
E\left[ W(t) - W(s) \mid \mathcal{F}_{t_1, \ldots, t_n, s}^W \right] = E\left[ W(t) - W(s) \right] = 0. \tag{10.57}
\]
\[
\Leftrightarrow \quad E\left[ W(t) \mid \mathcal{F}_s^W \right] = E\left[ W(s) \mid \mathcal{F}_s^W \right],
\]
but since W(s) is by construction \mathcal{F}_s^W-measurable, the rule of taking out what is known gives the martingale
property
\[
E\left[ W(t) \mid \mathcal{F}_s^W \right] = W(s). \tag{10.58}
\]
Theorem 10.6.1 Let W = {W(t) | t ≥ 0} be a Wiener process and let \mathcal{F}_t^W = \sigma(W(s);\, 0 \le s \le t).
Then W is a martingale with respect to the filtration \left( \mathcal{F}_t^W \right)_{t \ge 0}.
This has to be regarded as a very significant finding, because there is a host of inequalities and convergence
theorems etc. that hold for martingales in general, and thus for the Wiener process. In addition, the martingale
property is of crucial importance for stochastic calculus.
While we are at it, we may note the following re-statement of the Markov property (10.39) in theorem 10.4.5.
Theorem 10.6.2 Let W be a Wiener process and let the filtration be \left( \mathcal{F}_t^W \right)_{t \ge 0}, where \mathcal{F}_t^W = \sigma(W(s);\, 0 \le s \le t).
Then, if s < t and y ∈ R, it holds almost surely that
\[
P\left( W(t) \le y \mid \mathcal{F}_s^W \right) = P\left( W(t) \le y \mid W(s) \right). \tag{10.59}
\]
10.7 Exercises
10.7.1 Random Walks
A random walk is a mathematical model of a trajectory of an object that takes successive random steps.
The random walk is one of the most important and most studied topics in probability theory. The exercises on
random walks in this section are adapted from [10, 48] and [78, chapter 9.1]. We start with the first properties
of the (unrestricted) random walk, and then continue to find the connection to the Wiener process, whereby we
can interpret a random walk as the path traced by a molecule as it travels in a liquid or a gas and collides with
other particles [10, 78].
1. Let {X_i}_{i=1}^{\infty} be I.I.D. random variables with two values, so that X_i = +1 with probability p and X_i = −1
with probability q = 1 − p. We let
\[
S_n = X_1 + X_2 + \ldots + X_n, \quad S_0 = 0. \tag{10.60}
\]
The sequence of random variables {S_n}_{n=0}^{\infty} is called a random walk. We can visualize the random walk
as a particle jumping on a lattice of sites j = 0, ±1, ±2, . . ., starting at time zero in the site 0. At any n
the random walk currently at j jumps to the right to the site j + 1 with probability p or to the left to
the site j − 1 with probability q. A random walk thus also constitutes a time- and space-homogeneous
Markov chain, see [95, lecture 6] for a treatment from this point of view.
Aid: We hint at a combinatorial argument. Consider the random variable R(n) defined by
\[
R(n) \stackrel{\mathrm{def}}{=} \text{the number of steps to the right in } n \text{ steps}.
\]
Then
\[
S_n = R(n) - (n - R(n)),
\]
and hence, if S_n = j, then R(n) = \frac{n+j}{2}. Next, find the number of paths of the random walk such
that R(n) = \frac{n+j}{2} and S_n = j. Find the probability of each of these paths and sum them up.
(b) Reflection principle Let
\[
S_n = X_1 + X_2 + \ldots + X_n + a.
\]
Let
\[
N_n^0(a, b) = \text{the number of paths from } a \text{ to } b \text{ which pass } 0.
\]
2. (From [43, 78]) Determine the characteristic function φ_{S_n}(t) of the random walk S_n in (10.60), and use
φ_{S_n}(t) to find the probability expression (10.61).
3. Here we impose a parameter (= δ) on the random walk in the preceding exercises. Thus, for all i ≥ 1,
X_i = δ > 0 with probability \frac{1}{2}, X_i = −δ with probability \frac{1}{2}, and {X_i}_{i=1}^{\infty} are independent.
We let
\[
S_n = X_1 + X_2 + \ldots + X_n, \quad S_0 = 0.
\]
This sequence of random variables {S_n}_{n=0}^{\infty} is called a symmetric random walk. We can visualize
{S_n}_{n=0}^{\infty} as a particle jumping on sites jδ on a lattice {0, ±δ, ±2δ, . . .} with δ as lattice spacing or
step size. The random walk starts at time zero in the site 0.
\[
S_n \in N(0, n\delta^2). \tag{10.63}
\]
Remark 10.7.1 In view of (10.15) we can write the random variables W(t_1), \ldots, W(t_n) of a Wiener
process in the form
\[
W(t_i) = \sum_{k=1}^{i} Z_k,
\]
where the Z_k = W(t_k) − W(t_{k−1}) are I.I.D. for t_k − t_{k−1} = constant. Since we have (10.62) and (10.63),
the symmetric random walk has a certain similarity with the Wiener process sampled at equidistant
times, recall even (10.18).
4. The random walk is as in the preceding exercise except that for any integer n ≥ 0 and for all i ≥ 1,
X_i^{(n)} = δ_n > 0 with probability \frac{1}{2}, X_i^{(n)} = −δ_n with probability \frac{1}{2}, and {X_i^{(n)}}_{i=1}^{\infty} are independent. Here
δ_n is a sequence of positive numbers such that
\[
\delta_n \to 0,
\]
as n → ∞. In words, the spacing of the lattice becomes denser (taken as a subset of the real line) or, in
other words, the step size becomes smaller.
We impose a second sequence of parameters, τ_n > 0, for the purpose of imbedding the random walk in
continuous time. For a fixed n we can think of a particle undergoing a random walk moving to the right or
to the left on the lattice {0, ±δ_n, ±2δ_n, . . .} every τ_n seconds. We assume that
\[
\tau_n \to 0,
\]
Define for t ≥ 0
\[
W^{(n)}(t) \stackrel{\mathrm{def}}{=} \sum_{i=1}^{\lfloor t / \tau_n \rfloor} X_i^{(n)}, \tag{10.66}
\]
so that W^{(n)}(k\tau_n) = S_k^{(n)}. The process {W^{(n)}(t) | t ≥ 0} is a random walk in continuous time, or, a
stepwise constant interpolation of {S_k^{(n)}}_{k=0}^{\infty}.
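A natural coupling of the two parameter sequences is δ_n² = τ_n, for then Var W^{(n)}(t) = ⌊t/τ_n⌋ δ_n² → t, matching the Wiener process. A simulation sketch (assuming NumPy; τ, t, the seed and the number of paths are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
tau, t, n_paths = 0.01, 1.0, 20_000
delta = np.sqrt(tau)        # couple the sequences via delta_n^2 = tau_n
steps = int(t / tau)

# Each step is +delta or -delta with probability 1/2; W^(n)(t) sums them.
X = delta * rng.choice([-1.0, 1.0], size=(n_paths, steps))
W_n_t = X.sum(axis=1)

print(W_n_t.mean())   # close to 0
print(W_n_t.var())    # close to t = 1.0, as for the Wiener process at time t
```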
We shall rediscover this finding using the Langevin dynamics in chapter 11.
The random walks above are known as unrestricted random walks. Random walks with absorbing
and/or reflecting boundaries are concisely analyzed in [15]. The mathematical results on random
walks have been applied in computer science, physics, molecular biology, ecology, economics, psychology
and a number of other fields. For example, the price of a fluctuating stock and the financial status of a
gambler have been studied by random walks.
On the other hand, by our definition of the Wiener process, we have (10.8) or
\[
f_{W(s), W(t)}(x, y) = \frac{1}{\sqrt{2\pi s}}\, e^{-\frac{x^2}{2s}}\, \frac{1}{\sqrt{2\pi (t - s)}}\, e^{-\frac{(y - x)^2}{2(t - s)}}. \tag{10.70}
\]
Verify by explicit expansions that (10.69) and (10.70) are one and the same p.d.f..
\[
V(t) \stackrel{\mathrm{def}}{=} \frac{1}{c} W(c^2 t). \tag{10.72}
\]
Show that V = {V(t) | t ≥ 0} is a Wiener process.
6. Brownian Bridge We first give a general definition from [19, p.64]. Let x, y ∈ R and l > 0. A Gaussian
process X = {X(t) | 0 ≤ t ≤ l} with continuous sample paths and X(0) = x such that
\[
\mu_X(t) = x + (y - x) \frac{t}{l}, \qquad \mathrm{Cov}\left( X(t), X(s) \right) = \min(s, t) - \frac{st}{l}
\]
is called a Brownian Bridge from x to y of length l, or a tied-down Wiener process. Note that
μ_X(l) = y and Cov(X(s), X(t)) = 0 if s = l or t = l, and hence X(l) = y.
(a) (From [19, p.64]) Let {W(t) | t ≥ 0} be a Wiener process. Show that if X^{x,l} is a Brownian Bridge
from x to x of length l, then
\[
X^{x,l}(t) \stackrel{d}{=} x + W(t) - \frac{t}{l} W(l), \quad 0 \le t \le l. \tag{10.75}
\]
(b) (From [19, p.64]) Let {W(t) | t ≥ 0} be a Wiener process. Show that if X^{x,l} is a Brownian Bridge
from x to x of length l, then
\[
X^{x,l}(t) \stackrel{d}{=} x + \frac{l - t}{l} W\left( \frac{lt}{l - t} \right), \quad 0 \le t \le l. \tag{10.76}
\]
(c) (From [19, p.64]) Show that if X is a Brownian Bridge from x to y of length l, then
\[
X(t) \stackrel{d}{=} X^{x,l}(t) + (y - x) \frac{t}{l}, \quad 0 \le t \le l, \tag{10.77}
\]
where X^{x,l}(t) is a random variable of X^{x,l}, as in (a) and (b) above.
(d) Define the process
\[
B(t) \stackrel{\mathrm{def}}{=} W(t) - t W(1), \quad 0 \le t \le 1. \tag{10.78}
\]
This is a process tied down by the condition B(0) = B(1) = 0, a Brownian bridge from 0 to 0 of
length 1. Compare with (9.61) in the preceding.
(i) Show that the autocorrelation function of {B(t) | 0 ≤ t ≤ 1} is
\[
R_B(t, s) =
\begin{cases}
s(1 - t) & s \le t \\
(1 - s)t & s \ge t.
\end{cases} \tag{10.79}
\]
(ii) Show that the increments of the Brownian bridge are not independent.
(iii) Let B = {B(t) | 0 ≤ t ≤ 1} be a Brownian bridge. Show that the Wiener process in [0, T], in
the sense of (10.18), can be decomposed as
\[
W(t) = \sqrt{T}\, B\left( \frac{t}{T} \right) + \frac{t}{\sqrt{T}} \cdot Z,
\]
where Z ∈ N(0, 1) and is independent of B.
(iv) Show that for 0 ≤ t < ∞
\[
W(t) = (1 + t)\, B\left( \frac{t}{1 + t} \right).
\]
Compare with (9.62) in the preceding.
7. The Reflection Principle and The Maximum of a Wiener Process in a Finite Interval Let us
look at the collection of all sample paths t \mapsto W(t) of a Wiener process {W(t) | t ≥ 0} such that W(T) > a,
where a > 0 and T > 0; here T is a time point. Since W(0) = 0, there exists a time τ, a random variable
depending on the particular sample path, such that W(τ) = a for the first time.
For t > τ we reflect W(t) around the line x = a to obtain
\[
\widetilde{W}(t) \stackrel{\mathrm{def}}{=}
\begin{cases}
W(t) & \text{if } t < \tau \\
a - (W(t) - a) & \text{if } t > \tau,
\end{cases} \tag{10.80}
\]
and \widetilde{W}(t) is a Wiener process, but we have to admit that the proof is beyond the resources of these
lectures^{10}.
Conversely, by the nature of this correspondence, every sample function t \mapsto W(t) for which M_{0 \le u \le T} \ge a
results from either of the two sample functions t \mapsto W(t) and t \mapsto \widetilde{W}(t) with equal probability, one
of which is such that W(T) > a, unless W(T) = a, but P(W(T) = a) = 0. Show now that
\[
P\left( M_{0 \le u \le T} \ge a \right) = \frac{2}{\sqrt{2\pi T}} \int_a^\infty e^{-x^2 / 2T}\, dx. \tag{10.83}
\]
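The identity (10.83), i.e., P(M ≥ a) = 2 P(W(T) > a), can be checked by simulating paths on a grid. A sketch assuming NumPy (the seed and the values of T, a and the grid size are our illustrative choices; note that the grid maximum slightly underestimates the true running maximum, so the agreement is only approximate):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)
T, a, n, n_paths = 1.0, 1.0, 500, 20_000

# Simulate Wiener paths on a grid of n steps and record the grid maximum.
dW = rng.normal(0.0, np.sqrt(T / n), size=(n_paths, n))
M = np.cumsum(dW, axis=1).max(axis=1)

p_mc = (M >= a).mean()
# Right hand side of (10.83): 2 P(W(T) > a) = 2 (1 - Phi(a / sqrt(T))).
p_formula = 2.0 * (1.0 - 0.5 * (1.0 + erf(a / sqrt(2.0 * T))))
print(p_mc, p_formula)   # the discrete maximum biases p_mc slightly downwards
```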
8. The Ornstein-Uhlenbeck Process {W(t) | t ≥ 0} is a Wiener process. Let a > 0. Define for all t ≥ 0
\[
X(t) \stackrel{\mathrm{def}}{=} e^{-at} W\left( e^{2at} \right). \tag{10.84}
\]
Show that
\[
\mu_X(t) = 0.
\]
9. The Geometric Brownian Motion {W(t) | t ≥ 0} is a Wiener process. Let α, σ > 0 and x_0 be real
constants. Then
\[
X(t) \stackrel{\mathrm{def}}{=} x_0\, e^{\left( \alpha - \frac{1}{2} \sigma^2 \right) t + \sigma W(t)}.
\]
Show that
^{10} The statements to follow are true, but the complete analysis strictly speaking requires the so called strong Markov property
[70, p. 73]. For handling the strong Markov property one has a definite advantage of both the techniques and the 'jargon of modern
probability', but an intuitive physics text like [62, p.57−58] needs, unlike us, to pay no lip service to this difficulty in dealing with
the reflection principle.
(a)
\[
E[X(t)] = x_0 e^{\alpha t}.
\]
(b)
\[
\mathrm{Var}(X(t)) = x_0^2 e^{2\alpha t} \left( e^{\sigma^2 t} - 1 \right).
\]
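Both moments follow from the Gaussian moment generating function and are easy to verify by Monte Carlo. A sketch assuming NumPy, with illustrative parameter values of our choosing:

```python
import numpy as np

rng = np.random.default_rng(6)
x0, alpha, sigma, t, n_paths = 1.0, 0.1, 0.3, 1.0, 400_000

# W(t) ~ N(0, t); geometric Brownian motion evaluated at the single time t.
W_t = rng.normal(0.0, np.sqrt(t), size=n_paths)
X_t = x0 * np.exp((alpha - 0.5 * sigma**2) * t + sigma * W_t)

print(X_t.mean())   # close to x0 * exp(alpha t)
print(X_t.var())    # close to x0^2 e^{2 alpha t} (e^{sigma^2 t} - 1)
```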
10. {W_i(t) | t ≥ 0}, i = 1, 2, are two independent Wiener processes. Define a new stochastic process
{V(t) | −∞ < t < +∞} by
\[
V(t) =
\begin{cases}
W_1(t) & t \ge 0 \\
W_2(-t) & t < 0.
\end{cases}
\]
\[
\frac{\mid h \mid}{\mid W(t + h) - W(t) \mid} \xrightarrow{P} 0,
\]
as h → 0.
12. Partial differential equation for a functional of the Wiener process Let h(x) be a bounded and
continuous function defined on the whole real line. {W(t) | t ≥ 0} is a Wiener process. Set
13. Fractional Brownian Motion W_H = {W_H(t) | 0 ≤ t < ∞} is a Gaussian stochastic process. Its
expectation is = 0 and its autocorrelation function equals
\[
R_{W_H}(t, s) = E[W_H(t) W_H(s)] = \frac{1}{2} \left( t^{2H} + s^{2H} - |t - s|^{2H} \right), \tag{10.87}
\]
where 0 < H < 1 (H is known as the Hurst parameter).
(a) Show that W_H(t) \stackrel{d}{=} \frac{1}{a^H} W_H(at), where a > 0.
(b) Which process do we obtain for H = \frac{1}{2}?
(c) Define the random variable
\[
Y = W_H(t + h) - W_H(t),
\]
which means that W_H is the same in all time scales. This implies also that its sample paths are
fractals^{11}.
11 For this and other statements given here, see
B.B. Mandelbrot & J.W. van Ness: Fractional Brownian Motions, Fractional Noises and Applications. SIAM Review, vol. 10,
1968, pp. 422−437.
10.7. EXERCISES 297
\[
W_{H,\delta}(t) \stackrel{\mathrm{def}}{=} \frac{1}{\delta} \left( W_H(t + \delta) - W_H(t) \right),
\]
where W_H is the fractional Brownian motion with the autocorrelation function
\[
R_{W_H}(t, s) = E[W_H(t) W_H(s)] = \frac{v_H}{2} \left( t^{2H} + s^{2H} - |t - s|^{2H} \right), \tag{10.88}
\]
where \frac{1}{2} < H < 1 and
\[
v_H = - \frac{\Gamma(2 - 2H) \cos(\pi H)}{\pi H (2H - 1)}.
\]
Show that W_{H,\delta} is a stationary Gaussian process with zero mean and with autocorrelation function
\[
R_{W_{H,\delta}}(h) = \frac{v_H \delta^{2H-2}}{2} \left( \left( \frac{|h|}{\delta} + 1 \right)^{2H} - 2 \left( \frac{|h|}{\delta} \right)^{2H} + \left| \frac{|h|}{\delta} - 1 \right|^{2H} \right).
\]
\[
s_{W_{H,\delta}}(f) \approx f^{1 - 2H}.
\]
This implies that the increments of W_H are a good model for so called 1/f-type noise encountered,
e.g., in electric circuits [11]. The 1/f-type noise models physical processes with long range dependencies.
(b) Use the Fourier transform of RY (h) (use the table of transform pairs in section 9.3.1 above) to argue
that Y (t) approaches white noise as τ → 0.
2. Let
\[
Z(t) \stackrel{\mathrm{def}}{=} \int_0^t s^2\, dW(s)
\]
for t ∈ [0, T].
\[
W_H(t) \stackrel{\mathrm{def}}{=} \frac{1}{\Gamma(H + 1/2)} \left\{ I_1 + I_2 \right\},
\]
and
\[
I_2 = \int_0^t \mid t - u \mid^{H - 1/2} dW(u),
\]
where the Wiener process has been extended to the negative values as in exercise 10 above. When
t < 0 the notation \int_0^t should be read as -\int_t^0. Show that this is a zero mean Gaussian process with
autocorrelation function given by (10.88).
\[
\mathcal{F}_t^W = \sigma\left( W(s);\, 0 \le s \le t \right).
\]
Set for t ≥ 0
\[
Y(t) = \int_0^t f(s)\, dW(s),
\]
where \int_0^\infty f^2(t)\, dt < \infty. Show that the process
\[
\{ Y(t) \mid 0 \le t \}
\]
is a martingale w.r.t. \left( \mathcal{F}_t^W \right)_{t \ge 0}.
6. Discrete Stochastic Integrals w.r.t. the Wiener Process Let W be a Wiener process, and let
0 = t_0 < t_1 < \ldots < t_{i-1} < t_i < \ldots < t_n and
\[
\mathcal{F}_{t_i}^W = \sigma\left( W(t_0), W(t_1), \ldots, W(t_i) \right).
\]
Let \int_{-\infty}^{\infty} \mid f(x) \mid^2 dx < \infty and X(t_i) = f(W(t_{i-1})) and X = {X(t_i) | i = 0, \ldots, n}. Consider as in (3.56)
\[
(X \star W)_n \stackrel{\mathrm{def}}{=} \sum_{i=1}^{n} X_{t_i} (\triangle W)_{t_i}. \tag{10.91}
\]
Show that this is a well defined discrete stochastic integral and that it is a martingale w.r.t. \left( \mathcal{F}_{t_i}^W \right)_{i=0}^{n}.
\[
X(t) = x_0\, e^{W(t)}, \quad t \ge 0.
\]
(a) Check first, using the Taylor expansion of e^x, that for h > 0
\[
X(t + h) - X(t) = X(t) \left( (W(t + h) - W(t)) + \frac{1}{2!} (W(t + h) - W(t))^2 + \frac{1}{3!} (W(t + h) - W(t))^3 + \ldots \right).
\]
(b) Show that
\[
E\left[ X(t + h) - X(t) - X(t) (W(t + h) - W(t)) \right] = O(h),
\]
where O(h) is a function of h such that \frac{O(h)}{h} \to M (= a finite limit). You will need (4.50).
Thus, if one tried to express X(t) formally by the seemingly obvious differential equation,
the error would in the average be of order dt, which makes no sense, as truncation errors add linearly.
(c) Show that
\[
E\left[ \left( X(t + h) - X(t) - X(t) (W(t + h) - W(t)) \right)^2 \right] = O(h^2).
\]
(d) Show that
\[
E\left[ X(t + h) - X(t) - X(t) (W(t + h) - W(t)) - \frac{1}{2!} X(t) (W(t + h) - W(t))^2 \right] = o(h),
\]
where o(h) is a function of h such that \frac{o(h)}{h} \to 0 as h → 0. Show that
\[
E\left[ \left( X(t + h) - X(t) - X(t) (W(t + h) - W(t)) - \frac{1}{2!} X(t) (W(t + h) - W(t))^2 \right)^2 \right] = o(h^2).
\]
where U(s) is the velocity of the particle. The Newtonian second law of motion gives
\[
\frac{d}{dt} U(t) = -a U(t) + \sigma F(t), \tag{11.2}
\]
where a > 0 is a coefficient that reflects the drag force that opposes the particle's motion through the solution
and F(t) is a random force representing the random collisions of the particle and the surrounding molecules.
^1 http://en.wikipedia.org/wiki/Paul_Langevin
There are streets, high schools ('lycée'), town squares and residential areas in France named after Paul Langevin. He is buried in
the Parisian Panthéon.
302 CHAPTER 11. THE LANGEVIN EQUATIONS AND THE ORNSTEIN-UHLENBECK PROCESS
\[
\frac{d^2}{dt^2} X(t) = -a \frac{d}{dt} X(t) + \sigma F(t).
\]
The expression (11.2) is called the Langevin equation (for the velocity of the Brownian motion). In physical
terms the parameters are
\[
a = \frac{\gamma}{m}, \qquad \sigma = \frac{\sqrt{g}}{m},
\]
where γ is the friction coefficient, by Stokes' law given as γ = 6πkP, P = radius of the diffusing particle,
k = viscosity of the fluid, and m is the mass of the particle. g is a measure of the strength of the force F(t).
Additionally, τ_B = m/γ is known as the relaxation time of the particle. We obtain by (10.2) the Einstein relation
between the diffusion coefficient D and the friction coefficient γ as
\[
D = \frac{RT}{N} \frac{1}{6\pi k P} = \frac{k_B T}{\gamma},
\]
or, the random force is white noise as described in section 10.5.4 above. In view of (10.53) we write the Langevin
theory of Brownian movement as
\[
X(t) = X(0) + \int_0^t U(s)\, ds, \tag{11.3}
\]
and
\[
dU(t) = -a U(t)\, dt + \sigma\, dW(t). \tag{11.4}
\]
By virtue of lessons in the theory of ordinary differential equations we surmise that (11.4) can be solved by
\[
U(t) = e^{-at} U_0 + \sigma \int_0^t e^{-a(t-u)}\, dW(u). \tag{11.5}
\]
The stochastic interpretation of this requires the Wiener integral from section 10.5 above.
From now on the treatment of the Langevin theory of Brownian movement will differ from physics texts like
[17, 62, 73, 78]. The quoted references do not construct an expression like (11.5). By various computations the
physics texts do, however, obtain the desired results for E\left[ (X(t) - X(0))^2 \right] that will be found using (11.5).
We shall next study the random process in (11.5), known as the Ornstein-Uhlenbeck process, without paying
attention to physics, and eventually show, in the set of exercises for this chapter, that U(t) in (11.5) does satisfy
(11.4) in the sense that it solves
\[
U(t) - U_0 = -a \int_0^t U(s)\, ds + \sigma \int_0^t dW(s).
\]
\[
E\left[ \widetilde{U}(t) \cdot \widetilde{U}(s) \right] = \sigma^2 e^{-a(t+s)} \int_0^{\min(t,s)} e^{2au}\, du. \tag{11.7}
\]
If t > s this equals
\[
= \frac{\sigma^2}{2a} \left( e^{-a(t-s)} - e^{-a(t+s)} \right). \tag{11.8}
\]
If t < s,
\[
E\left[ \widetilde{U}(t) \cdot \widetilde{U}(s) \right] = \frac{\sigma^2}{2a} \left( e^{-a(s-t)} - e^{-a(t+s)} \right). \tag{11.9}
\]
Then we observe that
\[
\frac{\sigma^2}{2a} e^{-a|t-s|} =
\begin{cases}
\frac{\sigma^2}{2a} e^{-a(t-s)} & \text{if } t > s \\
\frac{\sigma^2}{2a} e^{-a(s-t)} & \text{if } s > t,
\end{cases}
\]
and thus
\[
E\left[ \widetilde{U}(t) \cdot \widetilde{U}(s) \right] = \frac{\sigma^2}{2a} \left( e^{-a|t-s|} - e^{-a(t+s)} \right). \tag{11.10}
\]
Let us next take U_0 ∈ N\left( 0, \frac{\sigma^2}{2a} \right), independent of the Wiener process, and define
\[
U(t) = e^{-at} U_0 + \widetilde{U}(t). \tag{11.11}
\]
Then
\[
E[U(t) \cdot U(s)] = \frac{\sigma^2}{2a} e^{-a(t+s)} + E\left[ \widetilde{U}(t) \cdot \widetilde{U}(s) \right],
\]
and by (11.10)
\[
E[U(t) \cdot U(s)] = \frac{\sigma^2}{2a} e^{-a|t-s|}. \tag{11.12}
\]
As a summary, with U_0 ∈ N\left( 0, \frac{\sigma^2}{2a} \right),
\[
U(t) = e^{-at} U_0 + \sigma \int_0^t e^{-a(t-u)}\, dW(u) \tag{11.13}
\]
defines a Gaussian weakly stationary process, known as the Ornstein-Uhlenbeck process. This implies,
in view of the derivation of (9.41), or (11.12), from the functional equation (9.40), that the Ornstein-Uhlenbeck
process given in (11.13) is the only weakly stationary and mean square continuous Gaussian Markov process.
Let us note that from (11.12) U(t) ∈ N\left( 0, \frac{\sigma^2}{2a} \right) and thus U(t) \stackrel{d}{=} U_0 for all t ≥ 0. In statistical physics this is
called equilibrium (of the dynamical system with the environment).
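In equilibrium the Ornstein-Uhlenbeck process can be simulated exactly on a time grid, since U(t + h) equals e^{−ah}U(t) plus an independent centered Gaussian variable of variance (σ²/2a)(1 − e^{−2ah}), c.f. (11.15) below. A sketch assuming NumPy, with illustrative values of a, σ and the step h chosen by us; the stationary variance σ²/(2a) and the autocorrelation (11.12) are reproduced:

```python
import numpy as np

rng = np.random.default_rng(7)
a, sigma, h, steps, n_paths = 2.0, 1.0, 0.1, 5, 100_000
s2 = sigma**2 / (2.0 * a)   # stationary variance sigma^2 / (2a)

# Start in equilibrium, U_0 ~ N(0, sigma^2/(2a)); then iterate the exact
# transition U(t+h) = e^{-a h} U(t) + N(0, (sigma^2/(2a)) (1 - e^{-2 a h})).
U = rng.normal(0.0, np.sqrt(s2), size=n_paths)
U0 = U.copy()
for _ in range(steps):
    U = np.exp(-a * h) * U + rng.normal(
        0.0, np.sqrt(s2 * (1.0 - np.exp(-2.0 * a * h))), size=n_paths
    )

lag = steps * h
print(U.var())            # close to sigma^2/(2a)
print(np.mean(U0 * U))    # close to (sigma^2/(2a)) e^{-a lag}, c.f. (11.12)
print(s2 * np.exp(-a * lag))
```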
The following result can be directly discerned from (8.16) by means of (11.12), but we give an alternative
narrative as a training exercise on computing with Wiener integrals.
Proof: We shall first find E[U(t + h) | U(t)] with h > 0. In order to do this we show the intermediate result
that for any h > 0,
\[
U(t + h) = e^{-ah} U(t) + \sigma e^{-ah} \int_t^{t+h} e^{-a(t-u)}\, dW(u). \tag{11.15}
\]
We write
\[
U(t + h) = e^{-a(t+h)} U_0 + \sigma \int_0^{t+h} e^{-a((t+h)-u)}\, dW(u)
\]
\[
= e^{-a(t+h)} U_0 + \sigma e^{-ah} \int_0^{t+h} e^{-a(t-u)}\, dW(u)
\]
\[
= e^{-a(t+h)} U_0 + \sigma e^{-ah} \int_0^{t} e^{-a(t-u)}\, dW(u) + \sigma e^{-ah} \int_t^{t+h} e^{-a(t-u)}\, dW(u)
\]
\[
= e^{-ah} \left( e^{-at} U_0 + \sigma \int_0^{t} e^{-a(t-u)}\, dW(u) \right) + \sigma e^{-ah} \int_t^{t+h} e^{-a(t-u)}\, dW(u),
\]
which is (11.15). Hence
\[
E[U(t + h) \mid U(t)] = E\left[ e^{-ah} U(t) \mid U(t) \right] + E\left[ \sigma e^{-ah} \int_t^{t+h} e^{-a(t-u)}\, dW(u) \,\middle|\, U(t) \right],
\]
and since we can take out what is known and since the increments of the Wiener process are independent of
the sigma field σ(U_0, W(s) | s ≤ t) generated by the Wiener process up to time t and by the initial value,
\[
= e^{-ah} U(t) + E\left[ \sigma e^{-ah} \int_t^{t+h} e^{-a(t-u)}\, dW(u) \right] = e^{-ah} U(t),
\]
and
\[
\mathrm{Var}\left[ \sigma e^{-ah} \int_t^{t+h} e^{-a(t-u)}\, dW(u) \right] = \sigma^2 e^{-2ah} \int_t^{t+h} e^{-2a(t-u)}\, du = \frac{\sigma^2}{2a} \left( 1 - e^{-2ah} \right).
\]
It follows that the conditional density is
\[
f_{U(t+h)|U(t)=u}(v) = \frac{1}{\sqrt{2\pi \frac{\sigma^2}{2a} \left( 1 - e^{-2ah} \right)}} \exp\left( -\frac{1}{2} \cdot \frac{\left( v - e^{-ah} u \right)^2}{\frac{\sigma^2}{2a} \left( 1 - e^{-2ah} \right)} \right).
\]
\[
U(t) = e^{-at} U^* + \widetilde{U}(t), \tag{11.18}
\]
As this is not a set of lecture notes in physics, we need to change the order of integration by proving (or referring
to the proof of) the following Fubini-type lemma [26, p.109] or [89, p.43]:
When we insert the fluctuation-dissipation formula in the constant in (11.24), we get
\[
\frac{\sigma^2}{a^2} = \frac{g}{\gamma^2} = \frac{2 k_B T}{\gamma}.
\]
Remark 11.3.1 Langevin is said to have claimed that his approach to Brownian movement is ’infinitely simpler’
than Einstein’s. Of course, for us the simplicity is due to an investment in Wiener integrals, Wiener processes,
convergence in mean square, multivariate Gaussianity and ultimately, sigma-additive probability measures and
the prerequisite knowledge in [16]. None of the mathematical machinery hereby summoned up was available to
Langevin himself, who anyhow figured out a way to deal correctly with the pertinent analysis.
\[
-R I(t) + V(t),
\]
^2 For the life and work of Harry T. Nyquist (born in Nilsby in Värmland), see the lecture by K.J. Åström in [106].
^3 A survey of thermal noise and its physics with another pedagogical method of deriving the above formula is D. Abbott,
B.R. Davis, N.J. Phillips and K. Eshragian: Simple Derivation of the Thermal Noise Formula Using Window-Limited Fourier
Transforms and Other Conundrums. IEEE Transactions on Education, 39, pp. 1−13, 1996.
where I(t) is the instantaneous electrical current in the loop and V(t) is the random force. RI(t) is called
the dissipative voltage, and V(t) is called the Johnson emf. By Faraday's law there will be an induced emf,
−L \frac{d}{dt} I(t), in the loop, and since the integral of the electric potential around the loop must vanish, we get the
circuit equation
\[
-R I(t) + V(t) - L \frac{d}{dt} I(t) = 0,
\]
or
\[
\frac{d}{dt} I(t) = -a I(t) + \frac{1}{L} V(t), \tag{11.26}
\]
where
\[
a = \frac{R}{L}.
\]
We argue (c.f., [44, pp. 392−394]) that
\[
V(t) = L \sqrt{c}\, \overset{o}{W}(t).
\]
What this means is that a simple resistor can produce white noise in any amplifier circuit. Thus we have arrived
at the Ornstein-Uhlenbeck process, c.f., (11.4), or
\[
dI(t) = -a I(t)\, dt + \sqrt{c}\, dW(t),
\]
Here sV (f ) is the spectral density of theorem 9.3.1 restricted to the positive axis and multiplied by 2 (recall
that sV (f ) = sV (−f )), so the presence of the factor 4 seems to have no deeper physical meaning. The standard
derivation of the Nyquist formula without the Langevin hypothesis is given, e.g., in [71, p. 131] or [11]. Today,
as pointed out in [106], the Nyquist formula is used daily by developers of micro and nano systems and for space
communication.
In view of (11.12) and the model above we get
\[
\mathrm{E}\left[I(t)\cdot I(s)\right] = \frac{\sigma^2}{2a}\, e^{-a|t-s|} = \frac{k_B T}{L}\, e^{-\frac{R}{L}|t-s|}. \tag{11.28}
\]
By the table of pairs of autocorrelation functions and spectral densities in section 9.3.1 we have that
\[
R_I(h) = \frac{k_B T}{L}\, e^{-\frac{R}{L}|h|} \quad \overset{\mathcal{F}}{\leftrightarrow} \quad s_I(f) = \frac{k_B T}{L}\cdot \frac{2\frac{R}{L}}{\left(\frac{R}{L}\right)^2 + f^2}. \tag{11.29}
\]
This form of spectral density is sometimes called the Lorentzian distribution [58, p. 257]. Then
\[
s_I(f) = \frac{2k_B T}{R}\cdot \frac{1}{1 + \left(\frac{L}{R} f\right)^2}.
\]
If L ≈ 0, then \(s_I(f)\) is almost constant up to very high frequencies and the autocorrelation function \(R_I(h)\) is almost zero except at h = 0. For all L,
\[
\int_{-\infty}^{\infty} R_I(h)\,dh = \frac{2k_B T}{R},
\]
and we can regard \(R_I(h)\) as behaving for small L almost like the scaled Dirac delta \(\frac{2k_B T}{R}\,\delta(h)\).
Or, for small L we get \(RI(t) \approx V(t)\), and the practical distinction between the dissipative voltage and the Johnson emf V (t) disappears. Thus, for L ≈ 0,
\[
Q(t) = \int_0^t I(s)\,ds \approx \sqrt{\frac{2k_B T}{R}}\, W(t),
\]
where Q(t) is something like the net charge around the loop and the factor \(\sqrt{\frac{2k_B T}{R}}\) reminds us of Einstein's diffusion formula (10.2).
11.5 Exercises
1. Consider the Ornstein-Uhlenbeck process in (11.13). Find the coefficient of correlation \(\rho_{U(t+h),U(t)}\) for h > 0. Use this to check (11.14) w.r.t. (8.16).
which is a linear stochastic differential equation. Because the sample paths of a Wiener process are nowhere differentiable, the expression in (11.32) is merely a (very good) notation for (11.31), c.f., (10.93) and (10.94) above. Since this is a linear differential equation, we do not need the full machinery of stochastic calculus [13, 29, 70].
Aid: Start with \(-a\int_0^t U(s)\,ds\), so that you can write from (11.30)
\[
-a\int_0^t U(s)\,ds = -a\int_0^t e^{-as}U_0\,ds - a\sigma \int_0^t\!\!\int_0^s e^{-a(s-u)}\,dW(u)\,ds,
\]
which is (11.31).
\[
\frac{d}{dt}\mu_U(t) = -a\,\mu_U(t), \quad t > 0, \qquad \mu_U(0) = m.
\]
(c) Let the variance function be \(\mathrm{Var}_U(t) = \mathrm{Var}\left[U(t)\right]\). Show that
\[
\frac{d}{dt}\mathrm{Var}_U(t) = -2a\,\mathrm{Var}_U(t) + \sigma^2, \quad t > 0, \qquad \mathrm{Var}_U(0) = \sigma_o^2.
\]
(d) Let the covariance function be \(\mathrm{Cov}_U(t,s)\). Show that
\[
\frac{\partial}{\partial s}\mathrm{Cov}_U(t,s) = -a\,\mathrm{Cov}_U(t,s), \quad s > t.
\]
(e) Show that the limiting distribution in \(U(t) \overset{d}{\to} U^{*}\), as \(t \to \infty\), is
\[
U^{*} \in N\!\left(0, \frac{\sigma^2}{2a}\right). \tag{11.33}
\]
Thus (11.33) shows that the velocities of the particles in the Langevin model of the Brownian movement eventually attain a Maxwell-Boltzmann distribution from any initial normal distribution.
What happens to the limiting distribution if we only assume in (11.30) that \(E[U_0] = m\) and \(\mathrm{Var}[U_0] = \sigma_0^2\), but no Gaussianity?
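The moment equations of parts (b)–(c) are ordinary differential equations and can be checked numerically. The sketch below integrates the variance equation of part (c) with a classical Runge–Kutta scheme and compares the result both with the closed-form solution and with the limit σ²/(2a) of (11.33); the parameter values are illustrative assumptions:

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta integration of y' = f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# Illustrative parameter values (not from the text).
a, sigma, var0 = 1.5, 0.8, 2.0
f = lambda t, v: -2 * a * v + sigma ** 2          # the ODE of part (c)
closed = lambda t: sigma**2/(2*a) + (var0 - sigma**2/(2*a)) * math.exp(-2*a*t)

t_end = 4.0
numeric = rk4(f, var0, 0.0, t_end, 4000)
print(numeric, closed(t_end), sigma**2 / (2*a))   # all three nearly equal at large t
```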
3. A Linear Stochastic Differential Equation with Time Variable Coefficients (From [62, p. 53])
Solve the stochastic differential equation
and then invoke this solution formula to suggest a representation using a Wiener integral (c.f., (11.31)).
Then proceed as in the exercise above.
4. Let X(t) be the position of the Brownian movement particle according to the Langevin theory.
(a) Show that the expected displacement, conditionally on \(U^{*}\) and X(0), has the mean function
\[
\mu_{X(t)\mid U^{*},X(0)}(t) = E\left[X(t) \mid U^{*}, X(0)\right] = X(0) + \frac{1}{a}\,U^{*}\left(1 - e^{-at}\right). \tag{11.35}
\]
5. Give the expression for the transition p.d.f. of X(t), the position of the Brownian movement particle
according to the Langevin theory.
6. The Brownian Projectile⁴ This exercise is adapted from [73, pp. 69–73]. Suppose that a Brownian particle with x–y coordinates X(t), Y (t) (= horizontal displacement, vertical height) is initialized as a projectile with
\[
X(0) = Y(0) = Z(0) = 0.
\]
Here \(U_{Y,0}\) is the initial vertical velocity. The equations of motion are
\[
X(t) = \int_0^t U_X(s)\,ds, \qquad Y(t) = \int_0^t U_Y(s)\,ds \tag{11.37}
\]
with
(a) Show that
\[
X(t) \in N\!\left(\frac{U_{X,0}}{a}\left(1 - e^{-at}\right),\ \sigma^2(t)\right)
\]
and
\[
Y(t) \in N\!\left(\frac{U_{Y,0}}{a}\left(1 - e^{-at}\right) - \frac{g}{a^2}\left(at + e^{-at} - 1\right),\ \sigma^2(t)\right).
\]
(b) Suppose \(at \ll 1\), or the time is close to the initial value. Show that you get (leading terms)
\[
E[X(t)] = U_{X,0}\, t,
\]
and
\[
E[Y(t)] = U_{Y,0}\, t - \frac{g t^2}{2},
\]
and
\[
\mathrm{Var}[Y(t)] = \mathrm{Var}[X(t)] = \frac{\beta^2 t^3}{3}.
\]
Observe that E[Y (t)] is the expected height of the Brownian projectile at time t. Thus, close to the start, the Brownian projectile preserves the effect of the initial conditions and reproduces the deterministic projectile motion familiar from introductory physics courses.
4 In Swedish this might be ’brownsk kastparabel’, the Swedish word ’projektilbana’ corresponds to a different physical setting.
(c) Suppose \(at \gg 1\), or that the time is late. Show that you get (leading terms)
\[
E[X(t)] = 0,
\]
and
\[
E[Y(t)] = -\frac{g t^2}{2},
\]
and
\[
\mathrm{Var}[Y(t)] = \mathrm{Var}[X(t)] = \frac{\beta^2 t}{a^2}.
\]
This recapitulates the statistical behaviour of two standard Wiener processes with a superimposed constant drift downwards in the height coordinate, or, for large t,
\[
dX(t) = \frac{\beta}{a}\,dW_X(t), \qquad dY(t) = -gt\,dt + \frac{\beta}{a}\,dW_Y(t).
\]
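Assuming, as in the Langevin theory, that the velocity components solve \(dU = -aU\,dt + \beta\,dW\) (the precise form of (11.38) is not reproduced in this excerpt, so this is an assumption), the exact variance of X(t) is a standard Wiener-integral computation, and both asymptotic regimes above can be checked numerically:

```python
import math

def var_x(t, a, beta):
    """Variance of X(t) = integral of the Ornstein-Uhlenbeck velocity with
    dU = -a*U*dt + beta*dW (this form of (11.38) is an assumption here)."""
    one_minus_e = -math.expm1(-a * t)        # 1 - e^{-a t}, computed stably
    one_minus_e2 = -math.expm1(-2 * a * t)   # 1 - e^{-2 a t}
    return (beta**2 / a**2) * (t - 2 * one_minus_e / a + one_minus_e2 / (2 * a))

a, beta = 1.0, 0.7
t_small, t_large = 1e-3, 1e3             # at << 1 and at >> 1 respectively
ratio_small = var_x(t_small, a, beta) / (beta**2 * t_small**3 / 3)
ratio_large = var_x(t_large, a, beta) / (beta**2 * t_large / a**2)
print(ratio_small, ratio_large)          # both ratios should be close to 1
```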
7. Stochastic Damped Harmonic Oscillator The harmonic oscillator is a multipurpose workhorse of theoretical physics. We shall now give a description of the damped harmonic oscillator using the Langevin dynamics. This discussion is adapted from [73, pp. 75–80], but is originally due to Subrahmanyan Chandrasekhar in 1943⁵.
A massive object of mass m is attached to a spring and submerged in a viscous fluid. If it is set into motion, the object will oscillate back and forth with an amplitude that decays in time. The collisions that cause the oscillations to decay will also cause the object to oscillate randomly. Here γ is the friction coefficient, not the Euler gamma. The Chandrasekhar equations for the random motion are
\[
X(t) = \int_0^t U(s)\,ds, \tag{11.39}
\]
where
\[
\frac{d}{dt}U(t) = -\frac{\gamma}{m}\,U(t) - \frac{\gamma}{m}\,X(t) + \frac{\sqrt{2 k_B T}}{m}\,\frac{dW(t)}{dt}. \tag{11.40}
\]
(a) Let \(\mu_U(t) = E[U(t)]\) and \(\mu_X(t) = E[X(t)]\). Show that
\[
\frac{d}{dt}\mu_U(t) = -\frac{\gamma}{m}\,\mu_U(t) - \frac{\gamma}{m}\,\mu_X(t),
\]
and
\[
\frac{d}{dt}\mu_X(t) = \mu_U(t).
\]
Solve these equations!
(b) Show that the autocorrelation function of X is
\[
R_X(h) = \frac{k_B T}{m\,\omega_o^2}\, e^{-\frac{\gamma}{2m}h}\left(\cos(\omega_1 h) + \frac{\gamma}{2m\,\omega_1}\sin(\omega_1 h)\right),
\]
where \(\omega_o = \sqrt{\frac{\gamma}{m}}\) and \(\omega_1 = \sqrt{\omega_o^2 - \frac{\gamma^2}{4m^2}}\). We have written this expression so as to emphasize the case \(2\omega_o \gg \frac{\gamma}{m}\) (lightly damped oscillator), so that ω₁ is defined.
Aid: This is tricky! Some help can be found in [17, p. 440, example 33.5]. Find first the suitable Fourier transforms.
What sort of formula is obtained by means of \(R_X(0)\)?
5 There are several well-known Indian-born scientists with the family name Chandrasekhar. Subrahmanyan C. was an applied mathematician who worked in astrophysics and became a laureate of the Nobel Prize in Physics in 1983:
http://nobelprize.org/nobel prizes/physics/laureates/1983/chandrasekhar.html
Chapter 12
The Poisson Process
12.1 Introduction
The Poisson process is an important example of a point process. In probability theory one talks about a point process when any sample path of the process consists of a set of separate points. For example, the sample paths are in continuous time and assume integer values. The Poisson process will be connected to the Poisson distribution in the same way as the Wiener process is connected to the normal distribution: namely, as the distribution of the independent increments. We shall start with the definition and basic properties of the Poisson process and then proceed with (engineering) models where the Poisson process has been incorporated and found useful. Poisson processes are applied in an impressive variety of topics. The applications range from queuing, telecommunications, and computer networks to insurance and astronomy.
(1) N (0) = 0.
(2) The increments \(N(t_k) - N(t_{k-1})\) are independent stochastic variables for non-overlapping intervals, i.e., for 1 ≤ k ≤ n, \(0 \le t_0 \le t_1 \le t_2 \le \ldots \le t_{n-1} \le t_n\), and all n.
There are alternative equivalent definitions, but this text will be restricted to the one stated above. Following our line of approach to stochastic processes, we shall first find the mean function, the autocorrelation function and the autocovariance function of N, i.e., some of the second order properties.
314 CHAPTER 12. THE POISSON PROCESS
since N (t) = N (t) − N (0) ∈ Po(λt) by (3). The autocorrelation function is by definition (9.4)
\[
R_N(t,s) = \lambda^2\, t \cdot s + \lambda\, t, \quad t \le s.
\]
Hence the autocovariance function of N is equal to the autocovariance function in (10.18). This makes it clear that the autocovariance function alone does not tell much about a stochastic process.
\[
\tau_k = T_k - T_{k-1},
\]
is the kth occupation/interarrival time. In words, τ_k is the random time the process occupies or visits the value N (t) = k − 1, or the random time between the kth and the (k−1)th arrival. In view of these definitions we can (after a moment's reflection) write
\[
N(t) = \max\{k \mid t \ge T_k\}. \tag{12.3}
\]
In the same way we observe that
\[
\{N(t) = 0\} = \{T_1 > t\}. \tag{12.4}
\]
The contents of the following theorem are important for understanding the Poisson processes.
12.2. DEFINITION AND FIRST PROPERTIES 315
Theorem 12.2.1 1. \(\tau_1, \tau_2, \ldots, \tau_k, \ldots\) are independent and identically distributed, \(\tau_i \in \mathrm{Exp}\!\left(\frac{1}{\lambda}\right)\).
2. \(T_k \in \Gamma\!\left(k, \frac{1}{\lambda}\right)\), k = 1, 2, ....
The full proof will not be given. We shall check the assertion for τ₁ and then for (τ₁, τ₂). We start by deriving the distribution of τ₁ = T₁. Clearly τ₁ ≥ 0. Then (12.4) yields \(P(T_1 > t) = P(N(t) = 0) = e^{-\lambda t}\), since N (t) ∈ Po(λt). Then for t > 0
\[
1 - F_{T_1}(t) = P(T_1 > t) = e^{-\lambda t},
\]
i.e.,
\[
F_{T_1}(t) = \begin{cases} 1 - e^{-\lambda t} & t > 0, \\ 0 & t \le 0. \end{cases}
\]
Hence \(\tau_1 = T_1 \in \mathrm{Exp}(1/\lambda)\).
Next we consider (τ₁, τ₂) and find first \(F_{T_1,T_2}(s,t)\). We assume t > s. It will turn out to be useful to consider the following expression.
Thus, by (12.5),
\[
F_{T_1,T_2}(s,t) = P(T_1 \le s, T_2 \le t) = P(T_1 \le s) - P(T_1 \le s, T_2 > t) = P(T_1 \le s) - s\lambda e^{-\lambda t}.
\]
Therefore
\[
\frac{\partial}{\partial s} F_{T_1,T_2}(s,t) = f_{T_1}(s) - \lambda e^{-\lambda t}.
\]
By a second partial differentiation we have established the joint p.d.f. of (T₁, T₂) as
\[
f_{T_1,T_2}(s,t) = \frac{\partial^2}{\partial t\,\partial s} F_{T_1,T_2}(s,t) = \lambda^2 e^{-\lambda t}. \tag{12.6}
\]
Now, we consider the change of variables \((T_1,T_2) \mapsto (\tau_1,\tau_2) = (u,v)\) by
\[
u = \tau_1 = T_1 = g_1(T_1, T_2), \qquad v = \tau_2 = T_2 - T_1 = g_2(T_1, T_2),
\]
with the inverse
\[
T_1 = u = h_1(u,v), \qquad T_2 = T_1 + \tau_2 = u + v = h_2(u,v).
\]
Then
\[
f_{\tau_1,\tau_2}(u,v) = \lambda^2 e^{-\lambda(v+u)} = \underbrace{\lambda e^{-\lambda u}}_{\text{p.d.f. of } \mathrm{Exp}(1/\lambda)} \cdot \underbrace{\lambda e^{-\lambda v}}_{\text{p.d.f. of } \mathrm{Exp}(1/\lambda)}.
\]
Thus \(f_{\tau_1,\tau_2}(u,v) = f_{T_1}(u)\cdot f_{\tau_2}(v)\) for all pairs (u, v), and this ascertains that τ₁ and τ₂ are independent. By example 4.4.9, the distribution of the sum of two I.I.D. r.v.'s ∈ Exp(1/λ) is \(T_2 \in \Gamma\!\left(2, \frac{1}{\lambda}\right)\). We should now continue by considering in an analogous manner T₁, T₂, T₃ to derive \(f_{\tau_1,\tau_2,\tau_3}(u,v,w)\), and so on, but we halt at this point. The Gamma distributions in this theorem are all Erlang, c.f. example 2.2.10.
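Theorem 12.2.1 can be illustrated without building exponential gaps into the simulation itself: approximate the Poisson process by independent Bernoulli trials on a fine grid (a standard construction, used here as an assumption rather than a result from the text) and measure the first two interarrival times. A Monte Carlo sketch with an illustrative λ:

```python
import random

def first_two_gaps(lam, dt, horizon, rng):
    """Approximate a Poisson process by Bernoulli trials on a grid of width dt
    and return (tau_1, tau_2), the first two interarrival times."""
    arrivals, t = [], 0.0
    while t < horizon and len(arrivals) < 2:
        t += dt
        if rng.random() < lam * dt:
            arrivals.append(t)
    if len(arrivals) < 2:
        return None
    return arrivals[0], arrivals[1] - arrivals[0]

lam, rng = 2.0, random.Random(1)
gaps = [g for g in (first_two_gaps(lam, 1e-3, 50.0, rng) for _ in range(5_000)) if g]
mean_tau1 = sum(g[0] for g in gaps) / len(gaps)
mean_tau2 = sum(g[1] for g in gaps) / len(gaps)
print(mean_tau1, mean_tau2)   # both should be near 1/lam = 0.5
```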
or
\[
\sum_{k=0}^{\infty} \frac{(t\lambda)^{2k}}{(2k)!} = \frac{1}{2}\left(e^{\lambda t} + e^{-\lambda t}\right).
\]
By the above
\[
P(N(t) = \text{even}) = e^{-\lambda t} \cdot \frac{1}{2}\left(e^{\lambda t} + e^{-\lambda t}\right) = e^{-\lambda t}\cdot \cosh(\lambda t),
\]
and
\[
P(N(t) = \text{odd}) = e^{-\lambda t} \cdot \frac{1}{2}\left(e^{\lambda t} - e^{-\lambda t}\right) = e^{-\lambda t}\cdot \sinh(\lambda t).
\]
Proof: The process N_s clearly has nonnegative integers as values and has nondecreasing sample paths, so we are talking about a counter process. We set
\[
N_s(t) \overset{\text{def}}{=} N(t+s) - N(s),
\]
and get N_s(0) = 0, so premise (1) in the definition holds. Next we show (3). For t > u > 0 we get
Theorem 12.3.2 Let N = {N (t) | t ≥ 0} be a Poisson process with parameter λ > 0, and let T_k be the kth occurrence/arrival time. Then the process N_k = {N (t + T_k) − N (T_k) | t ≥ 0} is a Poisson process.
Proof: The process \(N_{T_k}\) clearly has nonnegative integers as values and has nondecreasing sample paths, and therefore we are dealing with a counter process. We set for t ≥ 0
\[
N_k(t) \overset{\text{def}}{=} N(t+T_k) - N(T_k).
\]
For any l = 0, 1, . . . ,
\[
P(N_k(t) - N_k(s) = l) = P(N(t+T_k) - N(s+T_k) = l)
= \int_0^{\infty} P\left(N(t+T_k) - N(s+T_k) = l \mid T_k = u\right) f_{T_k}(u)\,du
\]
\[
= \int_0^{\infty} P\left(N(t+u) - N(s+u) = l \mid T_k = u\right) f_{T_k}(u)\,du.
\]
But the events {N (t + u) − N (s + u) = l} and {T_k = u} are independent for any u. This follows, since T_k = u is an event measurable w.r.t. the sigma field generated by N (v), 0 < v ≤ u, and N (t + u) − N (s + u) = l is an event measurable w.r.t. the sigma field generated by N (v), u + s < v ≤ t + u, and the Poisson process has independent increments. Hence
\[
= \int_0^{\infty} \underbrace{P\left(N(t+u) - N(s+u) = l\right)}_{\in\, \mathrm{Po}(\lambda(t-s))}\, f_{T_k}(u)\,du
\]
\[
= \int_0^{\infty} e^{-\lambda(t-s)}\,\frac{(\lambda(t-s))^l}{l!}\, f_{T_k}(u)\,du
= e^{-\lambda(t-s)}\,\frac{(\lambda(t-s))^l}{l!}\, \underbrace{\int_0^{\infty} f_{T_k}(u)\,du}_{=1}
= e^{-\lambda(t-s)}\,\frac{(\lambda(t-s))^l}{l!},
\]
as \(f_{T_k}(u)\) is a p.d.f.! Hence we have shown (12.9). Finally we should show that the increments \(N_k(t_i) - N_k(t_{i-1}) = N(t_i + T_k) - N(t_{i-1} + T_k)\) are independent over nonoverlapping intervals. But if we use a consideration analogous to the corresponding step of the proof of theorem 12.3.1, we are watching the increments over the nonoverlapping intervals with the endpoints \(T_k \le t_1 + T_k \le t_2 + T_k \le \ldots \le t_{n-1} + T_k \le t_n + T_k\). In the proof above we have found that
\[
N(t+T_k) - N(s+T_k) \mid T_k = u \ \in\ \mathrm{Po}(\lambda(t-s)). \tag{12.10}
\]
Hence, the probabilistic properties of the increments of \(N_{T_k}\) are independent of {T_k = u} for any u ≥ 0, and (2) follows for \(N_{T_k}\) from property (2) of the original Poisson process N.
is a model called a filtered shot noise. We shall once more find the mean function of the process thus defined. In addition we shall derive the m.g.f. of Z(t).
Borrowing from control engineering and signal processing, we should/could refer to h(t), with h(t) = 0 for t < 0 in (12.11), as a causal impulse response of a linear filter. With
\[
U(t) = \begin{cases} 1 & t \ge 0, \\ 0 & t < 0, \end{cases}
\]
an example is
\[
h(t) = e^{-t}\, U(t).
\]
1 Shot noise is hagelbrus in Swedish.
12.4. FILTERED SHOT NOISE 319
Since h(t) = 0 for t < 0, the sum in (12.11) contains for any t only a finite number of terms, since there is only a finite number of arrivals of a Poisson process in a finite interval. The word 'causal' thus simply means that Z(t) does not, for a given t, depend on the arrivals of events in the future beyond t, i.e., on T_j > t; recall (12.3).
We start with the mean function. Since the sum in (12.11) consists for any t of only a finite number of terms, there is no mathematical difficulty in computing as follows.
\[
E[Z(t)] = \sum_{k=1}^{\infty} E\left[h(t - T_k)\right]. \tag{12.12}
\]
The individual term in the sum is by the law of the unconscious statistician (2.4)
\[
E\left[h(t - T_k)\right] = \int_0^{\infty} h(t-x)\, f_{T_k}(x)\,dx,
\]
where we know by theorem 12.2.1 that \(T_k \in \Gamma\!\left(k, \frac{1}{\lambda}\right)\) (Erlang distribution, example 2.2.10). Thus
\[
\int_0^{\infty} h(t-x)\, f_{T_k}(x)\,dx = \int_0^{\infty} h(t-x)\, \frac{\lambda^k x^{k-1}}{(k-1)!}\, e^{-\lambda x}\,dx = \lambda^k \int_0^{\infty} h(t-x)\, \frac{x^{k-1}}{(k-1)!}\, e^{-\lambda x}\,dx.
\]
When we insert this in (12.12) we get
\[
E[Z(t)] = \sum_{k=1}^{\infty} \lambda^k \int_0^{\infty} h(t-x)\,\frac{x^{k-1}}{(k-1)!}\,e^{-\lambda x}\,dx
= \lambda \int_0^{\infty} h(t-x)\,\underbrace{\sum_{k=0}^{\infty} \frac{\lambda^{k} x^{k}}{k!}}_{=\,e^{\lambda x}}\, e^{-\lambda x}\,dx
= \lambda \int_0^{\infty} h(t-x)\,dx = \lambda \int_0^{t} h(v)\,dv.
\]
In the final step above we exploited the causality of the impulse response. Next we shall derive the m.g.f. of filtered shot noise. We attack the problem immediately by double expectation to get
\[
\psi_{Z(t)}(s) = E\left[e^{sZ(t)}\right] = E\left[E\left[e^{sZ(t)} \mid N(t)\right]\right],
\]
and continue the assault by the law of the unconscious statistician (2.4):
\[
= \sum_{l=0}^{\infty} E\left[e^{sZ(t)} \mid N(t) = l\right] P(N(t) = l)
= \sum_{l=0}^{\infty} E\left[e^{s\sum_{k=1}^{l} h(t-T_k)} \mid N(t) = l\right] P(N(t) = l).
\]
Here we have, in view of the result in (12.30) in one of the exercises, that
\[
E\left[e^{s\sum_{k=1}^{l} h(t-T_k)} \mid N(t) = l\right] = \prod_{k=1}^{l} \int_0^{t} e^{sh(t-x)}\,\frac{1}{t}\,dx = \left(\int_0^{t} e^{sh(t-x)}\,\frac{1}{t}\,dx\right)^{l}.
\]
Hence
\[
\psi_{Z(t)}(s) = \sum_{l=0}^{\infty} e^{-\lambda t}\, \frac{\left(\lambda \int_0^{t} e^{sh(t-x)}\,dx\right)^{l}}{l!}
= e^{-\lambda t}\, e^{\lambda \int_0^{t} e^{sh(t-x)}\,dx}
= e^{\lambda \int_0^{t} \left(e^{sh(t-x)} - 1\right)dx}.
\]
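The mean formula \(E[Z(t)] = \lambda\int_0^t h(v)\,dv\) derived above can be checked by simulation. The sketch below draws N(t) ∈ Po(λt) and, given N(t) = n, places the arrival times i.i.d. uniformly on (0, t), which is the property invoked via (12.30); the impulse response is the causal example \(h(t) = e^{-t}U(t)\), and the numerical values are illustrative:

```python
import math
import random

def sample_Z(t, lam, h, rng):
    """One draw of Z(t) = sum_k h(t - T_k), using the fact that, given
    N(t) = n, the arrival times are i.i.d. uniform on (0, t)."""
    # Draw N(t) ~ Po(lam*t) by inversion of the cdf.
    u, k, p, cdf = rng.random(), 0, math.exp(-lam * t), 0.0
    while True:
        cdf += p
        if u <= cdf:
            break
        k += 1
        p *= lam * t / k
    return sum(h(t - rng.uniform(0.0, t)) for _ in range(k))

h = lambda s: math.exp(-s) if s >= 0 else 0.0   # the causal example h(t) = e^{-t} U(t)
lam, t = 3.0, 2.0
rng = random.Random(7)
est = sum(sample_Z(t, lam, h, rng) for _ in range(40_000)) / 40_000
campbell = lam * (1 - math.exp(-t))             # lam * integral of h over (0, t)
print(est, campbell)   # Monte Carlo estimate vs Campbell's formula
```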
Then X = {X(t) | t ≥ 0} is a process in continuous time flipping between UP (+1) and DOWN (−1), with a random initial value and with the Poisson process generating the flips in time. Figure 12.1 shows in the upper part a sample path of UPs and DOWNs, and in the lower part the corresponding sample path of the Poisson process. In the figure we have Y = 0, since X(0) = 1.
12.5. RANDOM TELEGRAPH SIGNAL 321
[Figure 12.1: A sample path of the random telegraph signal X(t) (levels +1/−1, upper panel) and the corresponding sample path of a Poisson process N(t) (lower panel).]
Because N has the nonnegative integers as values, the definition 12.5.1 implies that:
\[
X(t) = \begin{cases} (-1)^{Y} \cdot 1 & N(t)\ \text{is even}, \\ (-1)^{Y} \cdot (-1) & N(t)\ \text{is odd}. \end{cases} \tag{12.17}
\]
12.5.2 The Marginal Distribution and the Mean Function of RTS Modelled by Poisson Flips
Lemma 12.5.1
\[
P(X(t) = +1) = P(X(t) = -1) = \frac{1}{2}, \quad t \ge 0. \tag{12.18}
\]
Proof The law of total probability (3.35) gives
We have that
\[
P(X(t) = +1 \mid Y = 0) = P\left((-1)^{0+N(t)} = +1 \mid Y = 0\right) = P\left((-1)^{N(t)} = +1 \mid Y = 0\right)
\]
\[
= P\left((-1)^{N(t)} = +1\right) = P(N(t) = \text{even}) = \frac{1}{2}\left(1 + e^{-2\lambda t}\right),
\]
since Y is independent of N and by (12.7). In the same way we get
\[
P(X(t) = +1 \mid Y = 1) = P\left((-1)\cdot(-1)^{N(t)} = +1\right) = P(N(t) = \text{odd}) = \frac{1}{2}\left(1 - e^{-2\lambda t}\right)
\]
from (12.8). Since Y ∈ Be(1/2) by construction,
\[
P(Y = 0) = P(Y = 1) = \frac{1}{2}.
\]
Insertion of the preceding results in (12.19) gives
\[
P(X(t) = +1) = \frac{1}{2}\cdot\frac{1}{2}\left(1 + e^{-2\lambda t}\right) + \frac{1}{2}\cdot\frac{1}{2}\left(1 - e^{-2\lambda t}\right) = \frac{1}{2},
\]
as was claimed.
The mean function is by definition
Lemma 12.5.2
\[
\mu_X(t) = 0, \quad t \ge 0. \tag{12.20}
\]
We shall compute
\[
R_X(t,s) = E\left[X(t)X(s)\right].
\]
We work out case by case the conditional expectations in the right hand side. By (12.16)
\[
E\left[X(t)X(s) \mid Y = 0\right] = E\left[(-1)^{0+N(t)}\,(-1)^{0+N(s)} \mid Y = 0\right] = E\left[(-1)^{N(t)+N(s)}\right],
\]
since N is independent of Y. Hereafter we must distinguish between two different cases, i) s > t and ii) s < t.
Then N (s) − N (t) is independent of 2N (t), and 2N (t) is an even integer, so that
\[
(-1)^{2N(t)} = 1.
\]
This implies
\[
E\left[(-1)^{N(t)+N(s)}\right] = E\left[(-1)^{N(s)-N(t)+2N(t)}\right]
= E\left[(-1)^{N(s)-N(t)}\right] \cdot E\left[(-1)^{2N(t)}\right] = E\left[(-1)^{N(s)-N(t)}\right].
\]
But N (s) − N (t) ∈ Po(λ(s − t)), and the very same reasoning that produced (12.7) and (12.8) entails also
\[
P(N(s) - N(t) = \text{even}) = \frac{1}{2}\left(1 + e^{-2\lambda(s-t)}\right),
\]
as well as
\[
P(N(s) - N(t) = \text{odd}) = \frac{1}{2}\left(1 - e^{-2\lambda(s-t)}\right).
\]
Therefore
\[
(+1)\cdot P(N(s)-N(t) = \text{even}) + (-1)\cdot P(N(s)-N(t) = \text{odd})
= \frac{1}{2}\left(1 + e^{-2\lambda(s-t)}\right) - \frac{1}{2}\left(1 - e^{-2\lambda(s-t)}\right) = e^{-2\lambda(s-t)}.
\]
For the second term in the right hand side of (12.21) it is ascertained in the same manner that
\[
E\left[X(t)X(s) \mid Y = 1\right] = (-1)^2\, E\left[(-1)^{N(t)+N(s)}\right] = e^{-2\lambda(s-t)}.
\]
The results in (12.22) and (12.23) are expressed by a single formula.
Lemma 12.5.3
\[
E\left[X(t)X(s)\right] = e^{-2\lambda|t-s|}. \tag{12.24}
\]
Indeed,
\[
|t-s| = \begin{cases} t-s & t > s, \\ -(t-s) & s \ge t, \end{cases}
\qquad\Leftrightarrow\qquad
-|t-s| = \begin{cases} -(t-s) & t > s \quad \text{(case ii) in eq. (12.23))}, \\ t-s = -(s-t) & s \ge t \quad \text{(case i) in eq. (12.22))}. \end{cases}
\]
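Lemma 12.5.3 lends itself to a direct Monte Carlo check: sample Y, N(t) and the independent increment N(t+h) − N(t), form the product X(t)X(t+h), and compare the average with \(e^{-2\lambda h}\). The parameter values below are illustrative:

```python
import math
import random

def rts_pair(t, h, lam, rng):
    """Sample (X(t), X(t+h)) for the RTS X = (-1)^(Y+N), using Poisson counts
    for N(t) and for the independent increment N(t+h) - N(t)."""
    def poisson(mu):
        # Knuth's method: multiply uniforms until the product drops below e^{-mu}.
        limit, k, p = math.exp(-mu), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    y = rng.randrange(2)                  # Y ~ Be(1/2)
    n_t = poisson(lam * t)
    incr = poisson(lam * h)               # increment over (t, t+h]
    x_t = (-1) ** (y + n_t)
    return x_t, x_t * (-1) ** incr

lam, t, h = 1.0, 2.0, 0.4
rng = random.Random(11)
n = 100_000
acc = sum(a * b for a, b in (rts_pair(t, h, lam, rng) for _ in range(n))) / n
print(acc, math.exp(-2 * lam * h))   # empirical mean of X(t)X(t+h) vs e^{-2*lam*h}
```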
Proof The mean function is a constant (= 0), as established in (12.20). The autocorrelation function is as given in (12.24).
In fact we are going to show that the RTS modelled by Poissonian flips is strictly stationary, which implies proposition 12.5.4. We shall, however, first establish mean square continuity and find the power spectral density.
We know, see theorem 9.3.2, that a weakly stationary process is mean square continuous if its autocovariance function is continuous at the origin. The autocovariance function is \(e^{-2\lambda|h|}\), which is continuous at the origin, and the conclusion follows. In other words,
\[
E\left[\left(X(t+h) - X(t)\right)^2\right] \to 0, \quad \text{as } h \to 0.
\]
Here we see very clearly that continuity in mean square does not tell us about continuity of sample paths. Every sample path of the weakly stationary RTS is discontinuous, or, more precisely, every sample path has a denumerable number of discontinuities of the first kind³. The discontinuities are the level changes at random times.
The astute reader recognizes in (12.24) the same expression as in (11.13), the autocorrelation function of an
Ornstein-Uhlenbeck process. This shows once more that identical second order properties can correspond to
processes with quite different sample path properties.
By the table of pairs of autocorrelation functions and spectral densities in section 9.3.1 we get the Lorentzian distribution
\[
R_X(h) = e^{-2\lambda|h|} \quad \overset{\mathcal{F}}{\leftrightarrow} \quad s_X(f) = \frac{4\lambda}{4\lambda^2 + f^2}. \tag{12.25}
\]
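The Fourier pair (12.25) can be verified by numerical quadrature of \(2\int_0^{\infty} e^{-2\lambda h}\cos(fh)\,dh\), truncated at a large H; the values of λ and f below are illustrative:

```python
import math

def spectral_density(lam, f, H=20.0, n=200_000):
    """Numerical Fourier transform of R(h) = e^{-2*lam*|h|}:
    s(f) = 2 * integral over (0, H) of e^{-2*lam*h} * cos(f*h) dh."""
    dh = H / n
    total = 0.0
    for k in range(n + 1):
        h = k * dh
        w = 0.5 if k in (0, n) else 1.0      # trapezoidal weights
        total += w * math.exp(-2 * lam * h) * math.cos(f * h)
    return 2 * total * dh

lam, f = 1.0, 3.0
numeric = spectral_density(lam, f)
lorentzian = 4 * lam / (4 * lam**2 + f**2)
print(numeric, lorentzian)   # quadrature vs the Lorentzian of (12.25)
```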
Figure 12.2 depicts the spectral density \(s_X(f)\) for λ = 1 and λ = 2. This demonstrates that the random telegraph signal moves to higher frequencies, i.e., the spectrum is less concentrated at frequencies f around zero, for higher values of the intensity λ, as seems natural.
for all n, h > 0 and all \(0 \le t_1 \le t_2 \le \ldots \le t_n\). This is nothing but a consequence of the fact that the Poisson process has independent increments over non-overlapping intervals, and that the increments have a distribution that depends only on the mutual differences of the times, and that lemma 12.5.1 above holds.
We observe by (12.16)
\[
X(t+h) = (-1)^{Y + N(t+h)}, \quad h > 0.
\]
3 We say that a function f (t), for t ∈ [0, T], has only discontinuities of the first kind, if the function is 1) bounded and 2) for every t ∈ [0, T], the limit from the left \(\lim_{s\uparrow t} f(s) = f(t-)\) and the limit from the right \(\lim_{s\downarrow t} f(s) = f(t+)\) exist [91, p. 94].
[Figure 12.2: The spectral densities s_X(f) of the RTS for λ = 1 and λ = 2, plotted for −5 ≤ f ≤ 5.]
Then
\[
X(t+h) = (-1)^{Y + N(t+h) - N(t) + N(t)} = (-1)^{N(t+h)-N(t)}\,(-1)^{Y+N(t)} = (-1)^{N(t+h)-N(t)}\, X(t).
\]
This expression for X(t + h) implies the following. If the UP-DOWN status of X(t) is known and given, the status of X(t + h) is determined by N (t + h) − N (t). But N (t + h) − N (t) is independent of X(t), because the increments of the Poisson process are independent. Hence we have shown that
with respective combinations of a = ±1, b = ±1 and of even/odd. But as the increments of the Poisson process are independent,
\[
P(X(t+h) \mid X(t_n), \ldots, X(t_1)) = P(X(t+h) \mid X(t_n))
\]
for \(t_1 \le \ldots \le t_n < t + h\), which is a Markov property. Then the chain rule in (3.34) implies that
every factor in the last product is one of the combinations of the same form as prior to the time shift, and is of the form in (12.27),
and
\[
N(t_i) - N(t_{i-1}) \in \mathrm{Po}\left(\lambda(t_i - t_{i-1})\right),
\]
\[
P(X(t_1 + h) = x_1) = P(X(t_1) = x_1),
\]
for all n, h > 0 and all \(0 \le t_1 \le t_2 \le \ldots \le t_n\). By this we have shown that the Poisson (and Markov) model of RTS is strictly stationary.
12.6 Exercises
12.6.1 Basic Poisson Process Probability
1. N = {N (t) | t ≥ 0} is a Poisson process with parameter λ > 0. Find
Answer: 3/8.
2. Show that
\[
\frac{N(t)}{t} \overset{P}{\to} \lambda,
\]
as t → ∞.
3. What is the probability that one of two independent Poisson processes reaches the level 2 before the other reaches the level 1? Answer: 1/2.
5. N₁ = {N₁(t) | t ≥ 0} is a Poisson process with intensity λ₁, and N₂ = {N₂(t) | t ≥ 0} is another Poisson process with intensity λ₂. N₁ and N₂ are independent of each other.
Consider the probability that the first event occurs for N₁, i.e., that N₁ jumps from zero to one before N₂ jumps for the first time. Show that
\[
P(\text{first jump for } N_1) = \frac{\lambda_1}{\lambda_1 + \lambda_2}.
\]
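The result follows from comparing two independent exponential first-arrival times; a short Monte Carlo check with illustrative intensities:

```python
import random

# First arrival times: T_i ~ Exp(1/lam_i), i.e. rate lam_i; compare which is smaller.
rng = random.Random(5)
lam1, lam2, n = 2.0, 3.0, 100_000
wins = sum(1 for _ in range(n) if rng.expovariate(lam1) < rng.expovariate(lam2))
frac = wins / n
print(frac, lam1 / (lam1 + lam2))   # empirical fraction vs lam1/(lam1+lam2) = 0.4
```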
6. (From [35]) Let N = {N (t) | t ≥ 0} be a Poisson process with intensity λ = 2. We form a new stochastic process by
\[
X(t) = \left\lfloor \frac{N(t)}{2} \right\rfloor, \quad t \ge 0,
\]
where ⌊x⌋ is the floor function, the integer part of the real number x, i.e., if k is an integer,
as s → t.
8. N = {N (t) | t ≥ 0} is a Poisson process with parameter λ > 0. T = the time of occurrence of the first event. Determine for all t ∈ (0, 1) the probability P (T ≤ t | N (1) = 1). Or, what is the distribution of T | N (1) = 1? Answer: U(0, 1).
9. N = {N (t) | t ≥ 0} is a Poisson process with parameter λ > 0. We take the conditioning event {N (t) = k}. Recall (12.3), i.e., N (t) = max{k | t ≥ T_k}. T_j, j = 1, . . . , k, are the times of occurrence of the jth event, respectively.
328 CHAPTER 12. THE POISSON PROCESS
10. The Distribution Function of the Erlang Distribution Let X ∈ Erlang (n, 1/λ). Show that
\[
F_X(t) = P(X \le t) = 1 - e^{-\lambda t} \sum_{j=0}^{n-1} \frac{(\lambda t)^j}{j!}.
\]
Aid: Let N = {N (t) | t ≥ 0} be a Poisson process with parameter λ > 0. If T_n is the nth arrival time of N, then convince yourself first of the fact that {T_n ≤ t} = {N (t) ≥ n}.
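The claimed identity can be checked numerically by comparing the sum with a direct quadrature of the Erlang density; the values of n, λ and t below are illustrative:

```python
import math

def erlang_cdf_sum(n, lam, t):
    """1 - e^{-lam*t} * sum_{j=0}^{n-1} (lam*t)^j / j!  (the claimed identity)."""
    return 1 - math.exp(-lam * t) * sum((lam * t) ** j / math.factorial(j)
                                        for j in range(n))

def erlang_cdf_quad(n, lam, t, steps=200_000):
    """Direct trapezoidal integration of the Erlang(n, 1/lam) density on (0, t)."""
    dx = t / steps
    total = 0.0
    for k in range(steps + 1):
        x = k * dx
        w = 0.5 if k in (0, steps) else 1.0
        total += w * lam**n * x**(n - 1) * math.exp(-lam * x) / math.factorial(n - 1)
    return total * dx

n, lam, t = 4, 1.5, 3.0
cdf_sum = erlang_cdf_sum(n, lam, t)
cdf_quad = erlang_cdf_quad(n, lam, t)
print(cdf_sum, cdf_quad)   # the two computations should agree closely
```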
11. N = {N (t) | t ≥ 0} is a Poisson process with intensity λ. T_k and T_{k+1} are the kth and (k+1)th occurrence/arrival times. Show that
\[
f_{T_k,\tau_{k+1}}(u,v) = \lambda^{k+1}\, \frac{u^{k-1}}{(k-1)!}\, e^{-\lambda(v+u)}.
\]
What is the conclusion?
2. Let X_i, i = 1, 2, . . . , be X_i ∈ Fs(p) and I.I.D. Let N = {N (t) | t ≥ 0} be a Poisson process with intensity λ > 0. The process N is independent of the X_i, i = 1, 2, . . . .
We define a new stochastic process X = {X(t) | t ≥ 0} with
\[
X(t) = \sum_{i=1}^{N(t)} X_i, \qquad X(0) = 0, \qquad X(t) = 0 \ \text{if } N(t) = 0.
\]
\[
E[X(t)] = \frac{\lambda}{p}\cdot t, \qquad \mathrm{Var}[X(t)] = \frac{\lambda\,(2-p)}{p^2}\cdot t.
\]
(c) It is said that a Pólya-Aeppli process is a generalisation of the Poisson process (or that the Poisson process is a special case of the Pólya-Aeppli process). Explain what this means. Aid: Consider a suitable value of p.
(d) The process
\[
Z(t) = ct - X(t), \quad t \ge 0, \ c > 0,
\]
where {X(t) | t ≥ 0} is a Pólya-Aeppli process, is often used as a model of an insurance business and is thereby called the risk process of Pólya and Aeppli. How should one interpret c, N and the X_i's with respect to the needs of an insurance company?
3. (From [99] and sf2940 2012-10-17) N = {N (t) | t ≥ 0} is a Poisson process with intensity λ > 0. We define the new process Y = {Y (t) | 0 ≤ t ≤ 1} by
\[
Y(t) \overset{\text{def}}{=} N(t) - tN(1), \quad 0 \le t \le 1.
\]
(a) Are the sample paths of Y nondecreasing? Justify your answer. Answer: No.
(b) Find E [Y (t)]. Answer: 0.
(c) Find Var [Y (t)]. Answer: λt(1 − t).
(d) Find the autocovariance of Y. Answer:
\[
\mathrm{Cov}_Y(t,s) = \begin{cases} \lambda s(1-t) & s < t, \\ \lambda t(1-s) & t \le s. \end{cases}
\]
(e) Compare the autocovariance function in (d) with the autocovariance function in (10.79). What is your explanation?
4. (From [99]) N₁ = {N₁(t) | t ≥ 0} is a Poisson process with intensity λ₁, and N₂ = {N₂(t) | t ≥ 0} is another Poisson process with intensity λ₂. N₁ and N₂ are independent of each other. Let T₁ and T₂ be the times of occurrence/arrival of the first two events in N₁. Let
\[
Y = N_2(T_2) - N_2(T_1).
\]
We can write
\[
Y = N_2(\tau_2 + T_1) - N_2(T_1),
\]
where τ₂ is independent of T₁ = τ₁ by theorem 12.2.1. By extending the argument in the proofs of the restarting theorems 12.3.1 and 12.3.2 we have that
5. M.g.f. of the Filtered Shot Noise and Campbell's Formula Use the m.g.f. in (12.15) to derive (12.13).
P∞
6. Find the mean and variance of Z(t) = k=1 h (t − Tk ) , t ≥ 0, when
h(t) = e−t U (t),
and (
1 t ≥ 0,
U (t) =
0 t < 0.
12.6.3 RTS
1. A Modified RTS If the start value Y is removed, the modified RTS is
\[
X_o(t) \overset{\text{def}}{=} (-1)^{N(t)}, \quad t \ge 0. \tag{12.31}
\]
Show that
\[
\mu_{X_o}(t) = E[X_o(t)] = e^{-2\lambda t}. \tag{12.32}
\]
2. Give a short proof of (12.20) without using (12.7), (12.8) and (12.18).
3. Give a short proof of (12.24) without using (12.7), (12.8) and (12.18).
Find E[X(t)] and Var[X(t)] by this m.g.f. and compare with the determination without a generating function.
5. RTS as a Markov Chain in Continuous Time In the midst of the proof of strict stationarity we observed that for any h > 0 and any \(t_1 < \ldots < t_n\)
\[
P(X(t+h) \mid X(t_n), \ldots, X(t_1)) = P(X(t+h) \mid X(t_n)). \tag{12.33}
\]
As already asserted, this says that the process X = {X(t) | t ≥ 0} is a Markov chain in continuous time. As for all Markov chains in continuous time, we can define the transition probabilities
\[
P_{ij}(t) \overset{\text{def}}{=} P\left(X(t) = j \mid X(0) = i\right), \quad i, j \in \{+1, -1\}.
\]
This is the conditional probability that the RTS will be in state j at time t, given that the RTS was in state i at time t = 0. These functions of t are arranged in the matrix valued function
Chapter 13
The Kalman-Bucy Filter
13.1 Background
The recursive algorithm known as the Kalman filter was invented by Rudolf E. Kalman¹. His original work was on random processes in discrete time; the extension to continuous time is known as the Kalman-Bucy filter. The Kalman-Bucy filter produces an optimal (in the sense of mean squared error) estimate of the sample path (or trajectory) of a stochastic process which is observed in additive noise. The estimate is given by a stochastic differential equation. We consider only the linear model of Kalman-Bucy filtering.
The Kalman filter was prominently applied to the problem of trajectory estimation for the Apollo space program of NASA (in the 1960s), and implemented in the Apollo space navigation computer. It was also used in the guidance and navigation systems of the NASA Space Shuttle and the attitude control and navigation systems of the International Space Station.
Robotics is a field of engineering where the Kalman filter (in discrete time) plays an important role [98]. The Kalman filter is part of the phase-locked loop found everywhere in communications equipment. New applications of the Kalman filter (and of its extensions like particle filters) continue to be discovered, including global positioning systems (GPS), hydrological modelling, and atmospheric observations.
It has been understood only relatively recently that the Danish mathematician and statistician Thorvald N. Thiele² discovered the principle (and a special case) of the Kalman filter in his book published in Copenhagen in 1889: Forelæsningar over Almindelig Iagttagelseslære: Sandsynlighedsregning og mindste Kvadraters Methode. A translation of the book and an exposition of Thiele's work is found in [72].
http://www-groups.dcs.st-ac.uk./∼history/Biographies/Thiele.html
332 CHAPTER 13. THE KALMAN-BUCY FILTER
where V = {V (s) | 0 ≤ s} is another Wiener process, which is independent of the Wiener process W = {W (s) | 0 ≤ s}. Let us set
\[
\widehat{U}(t) \overset{\text{def}}{=} E\left[U(t) \mid \mathcal{F}_t^Y\right] \tag{13.3}
\]
and S(t) satisfies the deterministic first order nonlinear differential equation known as the Riccati equation
\[
\frac{d}{dt}S(t) = 2aS(t) - \frac{c^2}{g^2}\,S(t)^2 + \sigma^2, \qquad S(0) = E\left[\left(U(0) - E[U(0)]\right)^2\right]. \tag{13.6}
\]
The Riccati equation can be solved as
\[
S(t) = \frac{\alpha_1 - K\alpha_2\, e^{(\alpha_2 - \alpha_1)c^2 t/g^2}}{1 - K\, e^{(\alpha_2 - \alpha_1)c^2 t/g^2}}, \tag{13.7}
\]
where
\[
\alpha_1 = c^{-2}\left(ag^2 - g\sqrt{a^2 g^2 + c^2\sigma^2}\right), \qquad \alpha_2 = c^{-2}\left(ag^2 + g\sqrt{a^2 g^2 + c^2\sigma^2}\right),
\]
and
\[
K = \frac{S(0) - \alpha_1}{S(0) - \alpha_2}.
\]
To derive these expressions we need the results about \(\widehat{U}(t)\) as a projection on the linear span of {Y (s) | 0 ≤ s ≤ t} (these hold by Gaussianity), c.f. section 7.5, and its representation as a Wiener integral in the mean square sense.
(Forthcoming, ≈ 10 pages)
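The closed-form solution (13.7) of the Riccati equation can be checked against a direct numerical integration of (13.6); the sketch below uses illustrative parameter values (with a < 0, so that the underlying state equation is stable, which is an assumption) and also verifies that S(t) tends to the positive root α₂:

```python
import math

def riccati_rk4(a, c, g, sigma, s0, t_end, n=20_000):
    """RK4 integration of dS/dt = 2aS - (c^2/g^2) S^2 + sigma^2, S(0) = s0."""
    f = lambda s: 2 * a * s - (c**2 / g**2) * s**2 + sigma**2
    h = t_end / n
    s = s0
    for _ in range(n):
        k1 = f(s); k2 = f(s + h*k1/2); k3 = f(s + h*k2/2); k4 = f(s + h*k3)
        s += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return s

def riccati_closed(a, c, g, sigma, s0, t):
    """The closed-form solution (13.7)."""
    root = g * math.sqrt(a**2 * g**2 + c**2 * sigma**2)
    a1 = (a * g**2 - root) / c**2
    a2 = (a * g**2 + root) / c**2
    K = (s0 - a1) / (s0 - a2)
    E = math.exp((a2 - a1) * c**2 * t / g**2)
    return (a1 - K * a2 * E) / (1 - K * E)

# Illustrative values, not taken from the text.
a, c, g, sigma, s0 = -1.0, 1.2, 0.9, 0.7, 2.0
t = 1.5
print(riccati_rk4(a, c, g, sigma, s0, t), riccati_closed(a, c, g, sigma, s0, t))
```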
Bibliography
[1] P. Albin: Stokastiska processer. (Stochastic Processes; in Swedish), Studentlitteratur, Lund 2003.
[2] D. Aldous: Probability Approximations via the Poisson Clumping Heuristic. Springer-Verlag, New York
1989.
[3] L.C. Andrews: Special Functions of Mathematics for Engineers. SPIE Optical Engineering Press, Belling-
ham; Washington, and Oxford University Press. Oxford, Tokyo, Melbourne, 1998.
[4] A. H. S. Ang & W. H. Tang. Probability Concepts in Engineering: Emphasis on Applications to Civil and
Environmental Engineering 2nd Edition. John Wiley & Sons, New York, 2007.
[5] H. Anton & C. Rorres: Elementary Linear Algebra with Supplemental Applications. John Wiley & Sons
(Asia) Pte Ltd, 2011.
[6] C. Ash: The Probability Tutoring Book. An Intuitive Course for Engineers and Scientists (and everyone
else!). IEEE Press, Piscataway, New Jersey, 1993.
[7] A.V. Balakrishnan: Stochastic Differential Systems I. Springer Verlag, Berlin 1973.
[8] A.V. Balakrishnan: Introduction to Random Processes in Engineering. John Wiley & Sons, Inc., New
York, 1995.
[9] A. Barbour, L. Holst & S. Jansson: Poisson Approximation. Clarendon Press, Oxford, 1992.
[10] H.C. Berg: Random Walks in Biology. Expanded Edition. Princeton University Press, Princeton, New
Jersey, 1993.
[11] A. Bernow, T. Bohlin, C. Davidson, R. Magnusson, G. Markesjö & S-O. Öhrvik : Kurs i elektroniskt brus.
(A Course in Electronic Noise; in Swedish) Svenska teknologföreningen, Stockholm, 1961.
[12] D.P. Bertsekas & J.N. Tsitsiklis: Introduction to Probability. Athena Scientific, Belmont, Massachusetts,
2002.
[13] T. Björk: Arbitrage Theory in Continuous Time. Third Edition. Oxford University Press, Oxford, 2009.
[14] G. Blom: Sannolikhetsteori för FEMV. (Lecture notes in Probability for Engineering Physics, Electrical,
Mechanical and Civil Engineering; in Swedish) Lund, 1968.
[15] G. Blom, L. Holst & D. Sandell: Problems and Snapshots from the World of Probability. Springer Verlag,
Berlin, New York, Heidelberg, 1994.
[16] G. Blom, J. Enger, G. Englund, J. Grandell & L. Holst: Sannolikhetsteori och statistikteori med
tillämpningar. (Probability Theory and Statistical Theory with Applications; in Swedish) Studentlit-
teratur, Lund 2005.
[17] S.J. Blundell & K.M. Blundell: Concepts in Thermal Physics (Second Edition). Oxford University Press,
Oxford, New York, Auckland, Cape Town, Dar es Salaam, Hong Kong, Karachi, Kuala Lumpur, Madrid,
Melbourne, Mexico City, Nairobi, New Delhi, Shanghai, Taipei, Toronto, 2010.
[18] L. Boltzmann: Entropie und Wahrscheinlichkeit. Oswalds Klassiker der Exakten Wissenschaften Band
286. Verlag Harri Deutsch, Frankfurt am Main, 2008.
[19] A.N. Borodin & P. Salminen: Handbook of Brownian Motion. Second Edition, Birkhäuser, Basel, Boston
& Berlin, 2002.
[20] Z. Brzeźniak & T. Zastawniak: Basic Stochastic Processes. Springer London Ltd, 2005.
[21] G. Chaitin: Meta Math. The Quest for Omega. Vintage Books, A Division of Random House Inc., New
York, 2005.
[22] C.V.L. Charlier: Vorlesungen über die Grundzüge der Mathematischen Statistik. Verlag Scientia, Lund,
1920.
[23] T.M. Cover & J.A. Thomas: Elements of Information Theory. J. Wiley & Sons, Inc., New York, 1991.
[24] H. Cramér: Mathematical Methods of Statistics. Princeton Series of Landmarks in Mathematics and
Physics. Nineteenth Printing & First Paperback Edition. Princeton University Press, Princeton, 1999.
[25] H. Cramér & M.R. Leadbetter: Stationary and Related Stochastic Processes: Sample Function Properties
and Their Applications, Dover Publications Inc., Mineola, New York 2004 (a republication of the work
originally published in 1967 by John Wiley & Sons Inc., New York).
[26] M.H.A. Davis: Linear Estimation and Stochastic Control. Chapman and Hall, London, 1977.
[27] M. Davis & A. Etheridge: Louis Bachelier's Theory of Speculation. The Origins of Modern Finance. Princeton University Press, Princeton and Oxford, 2006.
[28] J.L. Devore: Probability and Statistics for Engineering and the Sciences. Fourth Edition. Duxbury Press,
Pacific Grove, Albany, Belmont, Bonn, Boston, Cincinnati, Detroit, Johannesburg, London, Madrid,
Melbourne, Mexico City, New York, Paris, Singapore, Tokyo, Toronto, Washington, 1995.
[29] B. Djehiche: Stochastic Calculus. An Introduction with Applications. Lecture Notes, KTH, 2000.
[30] A.Y. Dorogovtsev, D.S. Silvestrov, A.V. Skorokhod & M.I. Yadrenko: Probability Theory: Collection of
Problems, Translations of Mathematical Monographs, vol. 163, American Mathematical Society, Provi-
dence, 1997.
[31] A. Einstein: Investigations on the Theory of the Brownian Movement. Dover Publications Inc., Mineola,
New York 1956 (a translation of the work in German originally published in 1926).
[32] I. Elishakoff: Probabilistic Theory of Structures. Second Edition. (Dover Civil and Mechanical Engineer-
ing), Dover Inc., Mineola, N.Y., 1999.
[33] G. Einarsson: Principles of Lightwave Communications. John Wiley & Sons, Chichester, New York,
Brisbane, Toronto, Singapore, 1996.
[34] J.D. Enderle, D.C. Enderle & D.J. Krause: Advanced Probability Theory for Biomedical Engineers. Morgan
& Claypool Publishers, 2006.
[36] A. Friedman: Foundations of Modern Analysis. Dover Publications Inc., New York 1982.
[37] A.G. Frodesen, O. Skjeggestad & H. Tøfte: Probability and statistics in particle physics. Universitetsforlaget, Bergen, 1979.
[38] W. Gardner: Introduction to Random Processes. With Applications to Signals and Systems. Second Edi-
tion. McGraw-Hill Publishing Company, New York, St.Louis, San Francisco, Auckland, Bogotá, Caracas,
Lisbon, London, Madrid, Mexico City, Milan, Montreal, New Delhi, Oklahoma City, Paris, San Juan,
Singapore, Sydney, Tokyo, Toronto, 1990.
[39] D.T. Gillespie: Fluctuation and dissipation in Brownian motion. American Journal of Physics. vol. 61,
pp. 1077−1083, 1993.
[40] D.T. Gillespie: The mathematics of Brownian motion and Johnson noise. American Journal of Physics.
vol. 64, pp. 225−240, 1996.
[41] R.L. Graham, D.E. Knuth & O. Patashnik: Concrete Mathematics. A Foundation for Computer Science.
Addison-Wesley Publishing Company, Reading Massachusetts, Menlo Park California, New York, Don
Mills, Ontario, Wokingham England, Amsterdam, Bonn, Sydney, Singapore, Tokyo, Madrid, San Juan,
1989.
[42] J. Grandell, B. Justusson & G. Karlsson: Matematisk statistik för E-linjen. Exempelsamling 3 (Math-
ematical Statistics for Electrical Engineers, A Collection of Examples; in Swedish), KTH/avdelning för
matematisk statistik, Stockholm 1984.
[43] R.M. Gray & L.D. Davisson: Random Processes. A Mathematical Introduction to Engineers. Prentice-Hall,
Inc., Englewood Cliffs, 1986.
[44] R.M. Gray & L.D. Davisson: An Introduction to Statistical Signal Processing. Cambridge University Press,
Cambridge, 2004.
[45] D.H. Greene & D.E. Knuth: Mathematics for the Analysis of Algorithms. Third Edition. Birkhäuser, Boston, Basel, Berlin, 1990.
[46] U. Grenander: Abstract Inference. John Wiley & Sons, New York, Chichester, Brisbane, Toronto, 1981.
[47] U. Grenander & G. Szegö: Toeplitz Forms and Their Applications. American Mathematical Society,
Chelsea, 2nd (textually unaltered) edition, 2001.
[48] G.R. Grimmett & D.R. Stirzaker: Probability and Random Processes. Second Edition. Oxford Science
Publications, Oxford, New York, Toronto, Delhi, Bombay, Calcutta, Madras, Karachi, Kuala Lumpur,
Singapore, Hong Kong, Tokyo, Nairobi, Dar es Salaam, Cape Town, Melbourne, Auckland, Madrid.
Reprinted Edition 1994.
[49] A. Gut: An Intermediate Course in Probability. 2nd Edition. Springer Verlag, Berlin 2009.
[50] B. Hajek: Random Processes for Engineers. Cambridge University Press, Cambridge, 2015.
[51] A. Hald: Statistical theory with engineering applications, John Wiley & Sons, New York, 1952.
[52] B. Hallert: Elementär felteori för mätningar. (Elementary Error Theory for Measurement; in Swedish)
P.A. Norstedt & Söners Förlag, Stockholm, 1967.
[53] R.W. Hamming: The art of probability for scientists and engineers. Addison-Wesley Publishing Company,
Reading Massachusetts, Menlo Park California, New York, Don Mills, Ontario, Wokingham England,
Amsterdam, Bonn, Sydney, Singapore, Tokyo, Madrid, San Juan, Paris, Seoul, Milan, Mexico City,
Taipei, 1991.
[54] J. Havil: Gamma: Exploring Euler's Constant. Princeton University Press, Princeton, Oxford, 2003.
[55] L.L. Helms: Introduction to Probability Theory with Contemporary Applications. W.H. Freeman and Company, New York, 1997.
[56] C.W. Helstrom: Probability and Stochastic Processes for Engineers. Second Edition. Prentice-Hall, Upper
Saddle River, 1991.
[57] U. Hjorth: Stokastiska Processer. Korrelations- och spektralteori. (Stochastic Processes. Correlation and
Spectral Theory; in Swedish) Studentlitteratur, Lund, 1987.
[58] K. Huang: Introduction to Statistical Physics. CRC Press, Boca Raton, London, New York, Washington
D.C., 2001.
[59] H. Hult, F. Lindskog, O. Hammarlid & C.J. Rehn: Risk and Portfolio Analysis: Principles and Methods.
Springer, New York, Heidelberg, Dordrecht, London, 2012.
[60] H.L. Hurd & A. Miamee: Periodically Correlated Random Sequences. Spectral Theory and Practice. John
Wiley & Sons, Inc., Hoboken, New Jersey, 2007.
[61] K. Itô: Introduction to probability theory. Cambridge University Press, Cambridge, New York, New Rochelle, Melbourne, Sydney, 1986.
[62] K. Jacobs: Stochastic Processes for Physicists: Understanding Noisy Systems. Cambridge University
Press, Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo, Delhi, Dubai,
Tokyo, 2010.
[63] J. Jacod & P. Protter: Probability Essentials. Springer Verlag, Berlin 2004.
[64] F. James: Statistical Methods in Experimental Physics. 2nd Edition. World Scientific, New Jersey, London,
Singapore, Beijing, Shanghai, Hong Kong, Taipei, Chennai, 2006.
[65] E. Jaynes: Probability Theory. The Logic of Science. Cambridge University Press, Cambridge, 2003.
[66] A.H. Jazwinski: Stochastic Processes and Filtering Theory. Dover Publications Inc., Mineola, New York,
1998.
[67] O. Kallenberg: Foundations of Modern Probability. Second Edition. Springer Verlag, Berlin, 2001.
[68] G. Kallianpur & P. Sundar: Stochastic Analysis and Diffusion Processes. Oxford Graduate Texts in
Mathematics, Oxford University Press, Oxford 2014.
[69] A. Khuri: Advanced Calculus with Applications to Statistics. John Wiley & Sons, Inc., New York, Chich-
ester, Brisbane, Toronto, Singapore, 1987.
[70] F.M. Klebaner: An Introduction to Stochastic Calculus with Applications. Imperial College Press, Singapore, 1998.
[71] L. Kristiansson & L.H. Zetterberg: Signalteori I −II (Signal Theory; in Swedish). Studentlitteratur, Lund
1970.
[72] S.L. Lauritzen: Thiele: pioneer in statistics. Oxford University Press, 2002.
[73] D.S. Lemons: An Introduction to Stochastic Processes in Physics (containing ’On the Theory of Brow-
nian Motion’ by Paul Langevin translated by Anthony Gythiel). The Johns Hopkins University Press.
Baltimore, London, 2002.
[74] A. Leon-Garcia: Probability and Random Processes for Electrical Engineers. Addison-Wesley Publishing
Company. Reading, 1989.
[75] J.W. Lindeberg: Todennäköisyyslasku ja sen käytäntö tilastotieteessä. Alkeellinen esitys. (Probability
calculus and its practice in statistics. An elementary presentation. in Finnish ) Otava, Helsinki, 1927.
[76] G. Lindgren: Stationary Stochastic Processes: Theory and Applications. Chapman & Hall CRC Texts in
Statistical Science Series, Boca Raton, 2013.
[77] H.O. Madsen, S. Krenk & N.C. Lind: Methods of Structural Safety. Dover Publications Inc., Mineola,
New York, 2006.
[78] R.M. Mazo: Brownian Motion. Fluctuations, Dynamics, and Applications. International Series of Mono-
graphs on Physics No. 112, Oxford Science Publications, Oxford University Press, Oxford 2002, (First
Paperback Edition 2009).
[79] M. Mitzenmacher, & E. Upfal: Probability and computing: Randomized algorithms and probabilistic anal-
ysis. Cambridge University Press, Cambridge, New York, Port Melbourne, Madrid, Cape Town, 2005.
[80] R.E. Mortensen: Random Signals and Systems. John Wiley & Sons, Inc., New York, 1987.
[81] J. Neveu: Bases Mathématiques du Calcul des Probabilites, Masson et Cie., Paris 1964.
[82] M. Neymark: Analysens Grunder Del 2 (Foundations of Mathematical Analysis; in Swedish). Studentlit-
teratur, Lund, 1970.
[85] A. Papoulis: Probability, Random Variables, and Stochastic Processes. Second Edition. McGraw-Hill
Book Company. New York, St.Louis, San Francisco, Auckland, Bogotá, Caracas, Lisbon, London, Madrid,
Mexico City, Milan, Montreal, New Delhi, San Juan, Singapore, Sydney, Tokyo, Toronto, 1984.
[86] J. Rissanen: Lectures on statistical modeling theory. Helsinki Institute of Information Technology, Helsinki,
2004.
http://www.lce.hut.fi/teaching/S-114.300/lectures.pdf
[87] B. Rosén: Massfördelningar och integration. 2:a upplagan (Mass distributions and integration; in Swedish)
Unpublished Lecture Notes, Department of Mathematics, KTH, 1978.
[88] M. Rudemo & L. Råde: Sannolikhetslära och statistik med tekniska tillämpningar. Del 1 (Probability and
Statistics with Engineering Applications Part 1; in Swedish) Biblioteksförlaget, Stockholm, 1970.
[89] M. Rudemo & L. Råde: Sannolikhetslära och statistik med tekniska tillämpningar. Del 2 (Probability and
Statistics with Engineering Applications Part 2; in Swedish) Biblioteksförlaget, Stockholm, 1967.
[90] M. Rudemo: Prediction and Filtering for Markov Processes. Lectures at the Department of Mathematics,
Royal Institute of Technology, TRITA-MAT-6 (Feb.), Stockholm, 1974.
[91] W. Rudin: Principles of Mathematical Analysis. Third Edition. McGraw-Hill Inc., New York, St.Louis,
San Francisco, Auckland, Bogotá, Caracas, Lisbon, London, Madrid, Mexico City, Milan, Montreal, New
Delhi, San Juan, Singapore, Sydney, Tokyo, Toronto, 1973.
[92] L. Råde & B. Westergren: Mathematics Handbook for Science and Engineering. Second Edition. Stu-
dentlitteratur, Lund & Beijing 2004.
[93] E.B. Saff & A.D. Snider: Fundamentals of Complex Analysis for Mathematics, Science, and Engineering.
Third Edition. Pearson Educational International. Upper Saddle River, New Jersey 2003.
[94] Z. Schuss: Theory and Applications of Stochastic Processes: An Analytical Approach. Springer Verlag, New
York, Dordrecht, Heidelberg, London, 2010.
[95] Y.G. Sinai: Probability Theory. An Introductory Course. Springer Textbook. Springer Verlag, Berlin,
Heidelberg, 1992.
[96] G. Sparr & A. Sparr: Kontinuerliga system. (Continuous Systems; in Swedish) Studentlitteratur, Lund
2010.
[97] H. Stark & J.W. Woods: Probability, Random Processes and Estimation Theory for Engineers. Prentice-Hall, 1986.
[98] S. Thrun, W. Burgard & D. Fox: Probabilistic Robotics. MIT Press, Cambridge, London, 2005.
[99] Y. Viniotis: Probability and Random Processes for Electrical Engineers. WCB McGraw-Hill, Boston,
Burr Ridge, IL, Dubuque, WI, New York, San Francisco, St.Louis, Bangkok, Bogotá, Caracas, Lisbon,
London, Madrid, Mexico City, Milan, New Delhi, Seoul, Singapore, Sydney, Taipei, Toronto, 1998.
[100] A. Vretblad: Fourier Analysis and Its Applications. Springer Verlag, New York, 2003.
[101] R.D. Yates & D.J. Goodman: Probability and Stochastic Processes. A Friendly Introduction for Electrical
and Computer Engineers. Second Edition. John Wiley & Sons, Inc., New York, 2005.
[102] D. Williams: Weighing the Odds. A Course in Probability and Statistics. Cambridge University Press,
Cambridge, 2004.
[103] E. Wong & B. Hajek: Stochastic Processes in Engineering Systems. McGraw-Hill Book Company, New
York, 1985.
[104] A. Zayezdny, D. Tabak & D. Wulich: Engineering Applications of Stochastic Processes. Theory, Problems
and Solutions. Research Studies Press Ltd., Taunton, Somerset, 1989.
[105] K.J. Åström: Introduction to Stochastic Control Theory. Academic Press, New York, San Francisco,
London, 1970.
[106] K.J. Åström: Harry Nyquist (1889 − 1976): A Tribute to the Memory of an Outstanding Scientist. Royal
Swedish Academy of Engineering Sciences (IVA), January 2003.
Index
Random Telegraph Signal, see Telegraph signal
Random variable, 26
Random walk, 290
    Reflection principle for, 290
Range of n I.I.D. r.v.'s, 86
Ratio of two r.v.'s, 67
Reflection principle
    for random walks, 290
    for Wiener process, 294
Rice method for mean of a transformed r.v., 139
Robotics, 331
Rosenblatt transformation, 108
    for bivariate Gaussian variables, 218
Rotation matrix, 220
Sampling theorem, 258
Scale-free probability distribution, 111
Small world, 111
Skewness, 77
Stable distributions, 136
Set theory
    decreasing sequence of sets, 16
    De Morgan's rules, 13
    field of sets, 14
    increasing sequence of sets, 15
    pairwise disjoint sets, 13
    power set, 14
    set difference, 13, 22
Sine wave with random phase, 224
Sigma fields
    filtration, 112
    filtration, continuous time, 284
    monotone class, 45
    predictable, 114
    σ-field, 14
    σ-algebra, 14
    σ-field generated by a random variable, 27
Signal-to-noise ratio, 85
Singular part of a measure, 73
Spectral density, 240
Steiner's formula, 48
Stein's lemma, 224
Stochastic differential equation, 299
Stochastic damped harmonic oscillator, 312
Stochastic processes
    Autocorrelation function, 231
    Band-limited Gaussian white, 257
    Definition, 227
    Gaussian, definition, 242
    Gauss-Markov, 246
    Kolmogorov consistency theorem, 228
    Lognormal, 254
    Mean function, 231
    Mean square continuous, 242
    Spectral density, 241
    Strictly stationary, 244
    Suzuki, 254
    Transition density, 247
    Weakly stationary, 240
Stochastic integral
    discrete, 114
    discrete w.r.t. a Wiener process, 298
    in mean square, 237
Strictly stationary process, 244
Sum of random number of independent r.v.'s, 149
Superformel, 260
Taking out what is known, 99
Telegraph signal, random
    Autocorrelation function, 322
    Chapman-Kolmogorov equations, 330
    Definition, 320
    Discontinuities of the first kind, 322
    Mean function, 322
    Spectral density, 324
    Strictly stationary, 324
    Weakly stationary, 324
Théorie de la spéculation, 267
Thinning, 150
Toeplitz matrix, 253
Total probability, law of, 104
Total variance, 94
Tower property, 99
Transfer function, 260
Unconscious statistician, law of, 34
Weakly stationary process, 240
White noise, 287
Wiener integral, 298
Wiener process
    Autocorrelation function, 273
    Bivariate p.d.f. for, 293
    Brownian bridge, 293