
Source: https://www.researchgate.net/publication/290738771

Exponential Distribution

Chapter · August 2010
DOI: 10.1007/978-0-8176-4987-6_2

Authors include Arjun Gupta (Bowling Green State University) and Yanhong Wu
(California State University, Stanislaus). Uploaded by Yanhong Wu on 21
January 2016.


Chapter 2

Exponential Distribution

2.1 Introduction
In establishing a probability model for a real-world phenomenon, it is always
necessary to make certain simplifying assumptions to render the mathematics
tractable. On the other hand, however, we cannot make too many simplifying
assumptions, for then our conclusions, obtained from the probability model,
would not be applicable to the real-world phenomenon. Thus, we must make
enough simplifying assumptions to enable us to handle the mathematics but
not so many that the model no longer resembles the real-world phenomenon.
The reliability of instruments and systems can be measured by the sur-
vival probabilities such as P [X > x]. During the early 1950’s, Epstein and
Sobel, and Davis analyzed statistical data of the operating time of an instru-
ment up to failure, and found that in many cases the lifetime has exponential
distribution. Consequently, the exponential distribution became the underlying
life distribution in research on reliability and life expectancy in the 1950s.
Although further research revealed that the exponential distribution is
inappropriate for modeling life expectancy in a number of reliability
problems, it can still serve as a useful first approximation (see Barlow and
Proschan (1975)). Exponential distributions are
also used in measuring the length of telephone calls and the time between
successive impulses in the spinal cords of various mammals.
This chapter is devoted to the study of the exponential distribution, its
properties and characterizations, and models which lead to it and illustrate
its applications.


2.2 Exponential Distribution


A continuous nonnegative random variable X (X ≥ 0) is said to have an
exponential distribution with parameter λ, λ > 0, if its probability density
function is given by

f(x) = λe^{−λx}, for x ≥ 0,

or equivalently, if its distribution function is given by

F(x) = ∫_{−∞}^{x} f(t) dt = 1 − e^{−λx}, for x ≥ 0.

It follows that the survival function F̄(x) is given by

F̄(x) = 1 − F(x) = e^{−λx}, for x ≥ 0.

We list several properties of the exponential distribution:

Property 1.

E(X) = 1/λ, Var(X) = 1/λ²,

and the moment generating function is given by

M(t) = E[e^{tX}] = λ/(λ − t), for t < λ.

In fact,

M(t) = E[e^{tX}] = ∫_0^∞ e^{tx} λe^{−λx} dx
     = (λ/(λ − t)) ∫_0^∞ (λ − t)e^{−(λ−t)x} dx
     = λ/(λ − t),

since the last integral is that of an exponential density with parameter λ − t
and hence equals one. The first two moments of X can be calculated as

E[X] = (d/dt) M(t)|_{t=0} = λ/(λ − t)²|_{t=0} = 1/λ,

E[X²] = (d²/dt²) M(t)|_{t=0} = 2λ/(λ − t)³|_{t=0} = 2/λ²,

and thus

Var(X) = E[X²] − (E[X])² = 2/λ² − 1/λ² = 1/λ².
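The moment formulas in Property 1 are easy to sanity-check numerically. The following sketch (illustrative values λ = 2 and t = 0.5, not from the text) compares simulated estimates against 1/λ, 1/λ², and λ/(λ − t):

```python
import math
import random
import statistics

# Monte Carlo check of Property 1 for Exp(lam):
# E[X] = 1/lam, Var(X) = 1/lam^2, M(t) = lam/(lam - t) for t < lam.
random.seed(0)
lam, t = 2.0, 0.5
samples = [random.expovariate(lam) for _ in range(200_000)]

mean_est = statistics.fmean(samples)
var_est = statistics.pvariance(samples)
mgf_est = statistics.fmean(math.exp(t * x) for x in samples)

print(mean_est)  # close to 1/lam = 0.5
print(var_est)   # close to 1/lam^2 = 0.25
print(mgf_est)   # close to lam/(lam - t) = 4/3
```

With 200,000 draws, all three estimates land well within a percent or two of the theoretical values.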
Property 2. The failure rate function is constant: r(t) = λ.

In fact,

r(t) = f(t)/F̄(t) = λe^{−λt}/e^{−λt} = λ.
Property 3. (Lack of Memory) The residual lifetime (X − t)|X > t has the
same distribution as X. That means, for all x, t ≥ 0,

P [X > x + t|X > t] = P [X > x].

This is clear by observing that

P[X > x + t | X > t] = F̄(x + t)/F̄(t) = e^{−λ(x+t)}/e^{−λt} = e^{−λx} = P[X > x].

In reliability terms, if we think of X as the lifetime of some instrument,
then the memoryless property implies that if the instrument is working at time
t, then the distribution of the residual lifetime is the same as the original
lifetime distribution.
In particular, the mean residual lifetime μ(t) = E[X − t | X > t] is the same
as the original mean μ = E[X]:

μ(t) = E[X − t | X > t] = E[X] = μ = 1/λ.
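The memoryless property lends itself to a direct simulation check. The sketch below (hypothetical parameters λ = 0.5, t = 3, x = 2) compares the conditional survival of the items still alive at time t with the unconditional survival P[X > x] = e^{−λx}:

```python
import math
import random

# Simulation check of the memoryless property:
# P[X > x + t | X > t] should equal P[X > x] = e^{-lam*x}.
random.seed(1)
lam, t, x = 0.5, 3.0, 2.0
samples = [random.expovariate(lam) for _ in range(300_000)]

survivors = [s for s in samples if s > t]
cond = sum(s > t + x for s in survivors) / len(survivors)
uncond = math.exp(-lam * x)
print(cond, uncond)  # the two values should be close
```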
Property 4. (Extreme Value) Suppose X_1, ..., X_n are independent, following
the same distribution as X. Then n min(X_1, ..., X_n) also has the same
exponential distribution.

In fact,

P[n min(X_1, ..., X_n) > x] = P[min(X_1, ..., X_n) > x/n]
  = P[X_1 > x/n] ··· P[X_n > x/n]
  = (P[X > x/n])^n
  = (e^{−λx/n})^n = e^{−λx}.
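Property 4 can likewise be checked by simulation; the sketch below (assumed λ = 1, n = 5) estimates a tail probability of n min(X_1, ..., X_n) and compares it with the Exp(λ) tail:

```python
import math
import random

# Property 4: n * min(X_1, ..., X_n) is again Exp(lam), so
# P[n * min > x] should match e^{-lam*x}.
random.seed(2)
lam, n, trials = 1.0, 5, 200_000
scaled_mins = [n * min(random.expovariate(lam) for _ in range(n))
               for _ in range(trials)]

tail_est = sum(v > 1.0 for v in scaled_mins) / trials
tail_exact = math.exp(-lam * 1.0)
print(tail_est, tail_exact)
```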

Example 2.1 Suppose the lifetime of certain items follows an exponential
distribution with parameter λ = 0.1.
(i) From Property 1, μ = 1/0.1 = 10 and Var(X) = 1/0.1² = 100.
(ii) The density function is

f(x) = 0.1e^{−0.1x},

and the survival function is

F̄(x) = e^{−0.1x}.

Therefore, as stated in Property 2, the failure rate is

r(t) = f(t)/F̄(t) = 0.1.

(iii) Suppose one item has survived 10 units of time. Then from Property 3,
the conditional survival function given X > 10 is

P[X > x + 10 | X > 10] = P[X > x] = e^{−0.1x}.

In particular, the mean residual life μ(t) = μ = 10.
(iv) (Serial system) Consider a serial system consisting of five identical
items which work independently. The system fails as soon as one item fails.
Then the lifetime of the system is X = min(X_1, X_2, ..., X_5), where X_i
denotes the lifetime of the i-th item. From Property 4, X follows an
exponential distribution with parameter 5 × 0.1 = 0.5, so the mean system
lifetime is 1/0.5 = 2.
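The numbers in Example 2.1 can be reproduced with a few lines of direct computation (no simulation needed):

```python
import math

# Direct computation of the quantities in Example 2.1 (lam = 0.1).
lam = 0.1
mu = 1 / lam                  # mean lifetime (Property 1)
var = mu * mu                 # variance 1/lam^2
surv10 = math.exp(-lam * 10)  # P[X > 10] = e^{-1}
serial_mean = 1 / (5 * lam)   # five-item serial system is Exp(0.5)
print(mu, var, round(surv10, 4), serial_mean)
```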

Example 2.2 (Location-transformed exponential distribution) Suppose the time
headway X between consecutive cars on a highway during a period of heavy
traffic follows the location-transformed exponential density function

f(x) = 0.15e^{−0.15(x−0.5)}, for x ≥ 0.5.

(i) X − 0.5 is a regular exponential random variable with parameter λ = 0.15.
(ii) E[X] = 0.5 + 1/0.15 and Var(X) = 1/0.15².
(iii) The survival function can be calculated as

F̄(t) = P[X > t] = P[X − 0.5 > t − 0.5] = e^{−0.15(t−0.5)}, for t ≥ 0.5.

(iv) The memoryless property still holds, with a minor modification. For
t ≥ 0.5 and x > 0,

P[X > t + x | X > t] = P[X − 0.5 > t − 0.5 + x | X − 0.5 > t − 0.5] = e^{−0.15x}.

(v) Likewise, the mean residual life is still constant: for t ≥ 0.5,

E[X − t | X > t] = 1/0.15.
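A quick simulation sketch of this shifted model (generating X as 0.5 plus an Exp(0.15) variable) confirms the mean in (ii) and the survival function in (iii); the evaluation point t = 2 is an arbitrary choice for illustration:

```python
import math
import random

# Shifted exponential of Example 2.2: X = 0.5 + E, with E ~ Exp(0.15).
random.seed(3)
lam, shift, t = 0.15, 0.5, 2.0
samples = [shift + random.expovariate(lam) for _ in range(200_000)]

mean_est = sum(samples) / len(samples)
mean_exact = shift + 1 / lam               # part (ii)
surv_est = sum(s > t for s in samples) / len(samples)
surv_exact = math.exp(-lam * (t - shift))  # part (iii)
print(mean_est, mean_exact)
print(surv_est, surv_exact)
```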

2.3 Characterization of Exponential Distribution

It turns out that Properties 2 to 4 can each be used to characterize the
exponential distribution, in the sense that if a distribution possesses one of
these properties, it must be exponential.

2.3.1 Memoryless Property

Theorem 2.1 Let X be a non-degenerate lifetime random variable whose
distribution has the memoryless property, i.e.,

P[X > x + t | X > t] = P[X > x], for x, t ≥ 0.

Then X has an exponential distribution.

Proof. The memoryless property implies that

F̄(x + t)/F̄(t) = F̄(x), for x, t ≥ 0,

or equivalently,

F̄(x + t) = F̄(x)F̄(t), for x, t ≥ 0.

Following the lines of the proof of Theorem 1.2, this implies that for any
rational number r = m/n,

F̄(r) = [F̄(1)]^r = e^{−λr},

where λ = −ln F̄(1). Since X is non-degenerate, F̄(1) > 0. (Otherwise,
F̄(x) = 0 for all x > 0, a contradiction.) For any irrational x, let {r_n} be
a sequence of rational numbers such that x < r_n and lim_{n→∞} r_n = x. Since
F̄(x) is right continuous, we have

F̄(x) = lim_{n→∞} F̄(r_n) = lim_{n→∞} e^{−λr_n} = e^{−λ lim r_n} = e^{−λx}. □

Remark. A more delicate analysis shows that we only have to check the
memoryless property at two points t_1 and t_2 such that t_1/t_2 is irrational:

P[X > x + t_i | X > t_i] = P[X > x], for x ≥ 0, i = 1, 2.

In fact, from

F̄(x + t_i) = F̄(x)F̄(t_i), for i = 1, 2,

we see that

F̄(x + mt_1 + nt_2) = F̄(x)F̄(mt_1 + nt_2), for all m, n ∈ Z.

Without loss of generality, we assume t_1 < t_2. Then we can find a sequence
{x_i}, i = 1, 2, ..., such that

t_2 = n_0 t_1 + x_1,
t_1 = n_1 x_1 + x_2,
x_i = n_{i+1} x_{i+1} + x_{i+2}, for i = 1, 2, ....

Note that each x_i is of the form mt_1 + nt_2 (by reading the equations
backward) and, more importantly,

x_1 > x_2 > ··· → 0.

Thus, for any given ε > 0, there exists a k such that 0 < x_k < ε. This
implies that for any x > 0, there exists a positive integer j such that

(j − 1)x_k < x ≤ jx_k.

Let z_j = jx_k; then z_j is of the form mt_1 + nt_2, and

0 ≤ z_j − x ≤ x_k < ε.

Hence, we can find a monotone decreasing sequence

z_1 > z_2 > ··· → x.

By the right continuity of F̄(x), we have

F̄(x + y) = F̄(x)F̄(y) for all x, y ≥ 0.

That means F̄(x) is exponential.

Corollary 2.1 If X is a non-degenerate lifetime random variable with constant
mean residual lifetime, i.e.,

μ(t) = E[X − t | X > t] = (1/F̄(t)) ∫_t^∞ F̄(x) dx = μ, for F̄(t) > 0,

then X is exponential.

Proof. The constant mean residual lifetime implies

1/μ = F̄(t) / ∫_t^∞ F̄(x) dx = −(d/dt) ln ∫_t^∞ F̄(x) dx, for F̄(t) > 0.

Thus,

ln ∫_t^∞ F̄(x) dx = C − t/μ.

By taking t = 0, we get C = ln μ. Thus,

∫_t^∞ F̄(x) dx = μe^{−t/μ},

and hence

F̄(t) = e^{−t/μ}. □

2.3.2 Constant Failure Rate Function

Theorem 2.2 Let X be a nonnegative random variable with probability density
function f(x). If X has a constant failure rate function r(t) = λ, then X is
exponentially distributed.

This follows simply from the fact that

F̄(x) = e^{−∫_0^x r(t) dt} = e^{−λx}.
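Theorem 2.2 also suggests an empirical check: estimate r(t) ≈ P[t < X ≤ t + dt | X > t]/dt from simulated Exp(λ) data at several values of t and observe that it is roughly constant. A sketch with an assumed λ = 0.5:

```python
import random

# Empirical failure rate r(t) ~= P[t < X <= t + dt | X > t] / dt
# for Exp(lam); it should be roughly constant and equal to lam.
random.seed(9)
lam, dt, trials = 0.5, 0.05, 400_000
samples = [random.expovariate(lam) for _ in range(trials)]

rates = []
for t in (0.5, 1.0, 2.0):
    alive = [s for s in samples if s > t]
    rates.append(sum(s <= t + dt for s in alive) / (len(alive) * dt))
print([round(r, 3) for r in rates])  # each value near lam = 0.5
```

The small downward bias relative to λ comes from using a finite window dt rather than a true derivative.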

2.3.3 Extreme Value Distribution

Let {X_1, ..., X_n} be independent random variables distributed as X, which
may represent the lifetimes of n similar and independent components. Denote by
X_{k,n} the k-th order statistic, which represents the k-th failure time among
the n components.

Theorem 2.3 Let X be a non-degenerate nonnegative random variable. If
nX_{1,n} has the same distribution as X for every positive integer n, then X
has an exponential distribution.

Proof.

P[nX_{1,n} > x] = P[X_{1,n} > x/n]
  = P[min(X_1, ..., X_n) > x/n]
  = P[X_1 > x/n, ..., X_n > x/n]
  = (P[X > x/n])^n.

If nX_{1,n} has the same distribution as X, then

F̄(x/n) = (F̄(x))^{1/n}, for n = 1, 2, ....

By a similar argument as in Theorem 2.1, F̄(x) = e^{−λx} for some λ > 0. □

Remark 1. Similar to the remark after Theorem 2.1, we can show that if

F̄(x/n) = (F̄(x))^{1/n}

holds for two values n_1 and n_2 such that ln n_1 / ln n_2 is irrational, then
X is exponential. That means we only need two sample sizes.
In fact, from

F̄(x/n_i) = (F̄(x))^{1/n_i}

for i = 1, 2, we have

F̄(x) = [F̄(n_1^j n_2^k x)]^{1/(n_1^j n_2^k)},

for all integers j, k ∈ Z. This implies that

F̄(n_1^j n_2^k) = [F̄(1)]^{n_1^j n_2^k}.

For any x > 0, write ln x = y ∈ R. Just as in the remark following Theorem
2.1, since ln n_1 / ln n_2 is irrational, we can find a sequence of real
numbers y_m > y which converge to y, with each y_m of the form
j ln n_1 + k ln n_2. By the right continuity of F̄(x), we have

F̄(x) = [F̄(1)]^x.

Remark 2. Under some extra condition on F(x) as x → 0, we only need one
sample size to characterize the exponential distribution. The following is a
typical result.

Theorem 2.4 Suppose that for some n, nX_{1,n} has the same distribution as X.
If

0 < lim_{x→0+} F(x)/x = λ < ∞,

then F(x) = 1 − e^{−λx}, for x ≥ 0.

Proof. For this sample size n, we have F̄(x) = (F̄(x/n))^n. Inductively, this
implies that for any integer k,

F̄(x) = (F̄(x/n^k))^{n^k}, for x ≥ 0.

That means

F(x) = 1 − F̄(x) = 1 − (F̄(x/n^k))^{n^k} = 1 − (1 − F(x/n^k))^{n^k}.

Since

lim_{x→0+} F(x)/x = λ > 0,

we have

F(x/n^k) = λx/n^k + o(1/n^k)

as k → ∞. Consequently,

F(x) = 1 − lim_{k→∞} (1 − λx/n^k)^{n^k} = 1 − e^{−λx}. □

Remark. The probabilistic interpretation of the above inductive proof is that
we can first take n independent and identically distributed samples of size n
(n² random variables in total), say {X_1^{(i)}, ..., X_n^{(i)}} for
i = 1, 2, ..., n, as copies of {X_1, ..., X_n}. This gives a sequence of
independent variables nX_{1,n}^{(i)}, i = 1, 2, ..., n, each with the same
distribution as nX_{1,n}. This process can be carried on repeatedly to total
sample sizes n³, n⁴, ....

2.4 Order Statistics and Exponential Distribution

In life testing, when all n items are tested starting from the same time, the
lifetimes X_1, ..., X_n are recorded as the order statistics
X_{1,n}, ..., X_{n,n}. We call

d_{r,n} = X_{r+1,n} − X_{r,n}

the spacing statistics, for r = 0, 1, ..., n − 1, and D_{r,n} = (n − r)d_{r,n}
the normalized spacing statistics. Can any of the spacing statistics
characterize the exponential distribution?

2.4.1 Some Properties of Order Statistics

To find the distribution function F_{k,n}(y) of X_{k,n}, we note that the
event {X_{k,n} ≤ y} is equivalent to the event that at least k of the X_i's
are ≤ y. Given that there are exactly j failures at y for j ≥ k (and thus
n − j survivals at y), there are C(n, j) ways of choosing j out of n. Thus

F_{k,n}(y) = P[X_{k,n} ≤ y] = Σ_{j=k}^{n} P[exactly j failures before y]
  = Σ_{j=k}^{n} C(n, j) F^j(y)[1 − F(y)]^{n−j}
  = k C(n, k) ∫_0^{F(y)} t^{k−1}(1 − t)^{n−k} dt,

where the last equality is obtained by integrating by parts consecutively for
j ≥ k.
If F(x) is absolutely continuous with density function f(x), then F_{k,n}(y)
has density

f_{k,n}(y) = k C(n, k) F^{k−1}(y)[1 − F(y)]^{n−k} f(y)
  = n! f(y) (F^{k−1}(y)/(k − 1)!) ([1 − F(y)]^{n−k}/(n − k)!).
This form can also be explained as follows. There are n! permutations of the n
failures in total. One fails at y, k − 1 fail before y, and n − k survive past
y. Thus, there are n!/(1!(k − 1)!(n − k)!) such arrangements of the
observations, and each occurs with the same probability
f(y)[F(y)]^{k−1}[1 − F(y)]^{n−k}.
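The binomial-sum formula for F_{k,n}(y) can be checked against a brute-force simulation of the k-th order statistic; the sketch below uses Exp(1) samples and illustrative values n = 5, k = 2, y = 0.4:

```python
import math
import random

# Check F_{k,n}(y) = sum_{j=k}^{n} C(n,j) F(y)^j (1 - F(y))^{n-j}
# against a simulation of the k-th order statistic from Exp(1).
random.seed(4)
n, k, y, trials = 5, 2, 0.4, 200_000
F = 1 - math.exp(-y)  # Exp(1) cdf at y

exact = sum(math.comb(n, j) * F**j * (1 - F)**(n - j)
            for j in range(k, n + 1))

hits = 0
for _ in range(trials):
    xs = sorted(random.expovariate(1.0) for _ in range(n))
    hits += xs[k - 1] <= y  # event {X_{k,n} <= y}
est = hits / trials
print(exact, est)
```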
More generally, by a similar argument, the joint density of

{X_{r_1,n}, X_{r_2,n}, ..., X_{r_k,n}}

for 1 ≤ r_1 < r_2 < ... < r_k ≤ n with 1 ≤ k ≤ n is given by

f_{(r_1,...,r_k)}(y_1, y_2, ..., y_k)
  = n! [Π_{i=1}^{k} f(y_i)] Π_{i=0}^{k} ([F(y_{i+1}) − F(y_i)]^{r_{i+1}−r_i−1} / (r_{i+1} − r_i − 1)!),

where

y_0 = 0, y_{k+1} = ∞, r_0 = 0, r_{k+1} = n + 1, and y_1 ≤ y_2 ≤ ... ≤ y_k.

In particular, the density function of the spacing statistic d_{r,n} is

f_{d_{r,n}}(x) = (n!/((r − 1)!(n − r − 1)!)) ∫_{−∞}^{∞} F^{r−1}(y)[1 − F(x + y)]^{n−r−1} f(y) f(x + y) dy.
If F(x) is exponential, we have the following explicit result:

Theorem 2.5 Let d_{r,n} be the r-th spacing statistic from the exponential
distribution F(x) = 1 − e^{−λx}. Then

(1) F_{d_{r,n}}(x) = P[d_{r,n} ≤ x] = 1 − e^{−(n−r)λx};

(2) E(d_{r,n}) = 1/((n − r)λ), Var(d_{r,n}) = 1/((n − r)²λ²), for
r = 1, 2, ..., n − 1;

(3) d_{1,n}, d_{2,n}, ..., d_{n−1,n} are mutually independent.

Proof. (1) The joint density function of X_{r,n} and X_{r+1,n} is

f_{r,r+1}(s, t) = n! f(s) f(t) ((F(s))^{r−1}/(r − 1)!) ((1 − F(t))^{n−r−1}/(n − r − 1)!)
  = n! λe^{−λs} λe^{−λt} ((1 − e^{−λs})^{r−1}/(r − 1)!) (e^{−(n−r−1)λt}/(n − r − 1)!),

for 0 < s < t. From the transformation

(X_{r,n}, X_{r+1,n}) → (X_{r,n}, d_{r,n}),

with s = x, t = x + y and Jacobian |J| = 1, we obtain the joint density
function of (X_{r,n}, d_{r,n}) as

f_{X_{r,n}, d_{r,n}}(x, y) = n! λe^{−λx} λe^{−λ(x+y)} ((1 − e^{−λx})^{r−1}/(r − 1)!) (e^{−(n−r−1)λ(x+y)}/(n − r − 1)!),

for 0 < x, y < ∞. Since this form is separable in x and y, the marginal
density function of d_{r,n} satisfies

f_{d_{r,n}}(y) ∝ e^{−(n−r)λy},

where ∝ means equality up to a constant. Hence

F_{d_{r,n}}(x) = 1 − e^{−(n−r)λx}.

(2) follows from (1).
For (3), we note that the joint density function of {X_{1,n}, ..., X_{n,n}} is

f_{(X_{1,n},...,X_{n,n})}(x_1, ..., x_n) = n! Π_{i=1}^{n} f(x_i) = n! Π_{i=1}^{n} λe^{−λx_i}.

Consider the transformation

(X_{1,n}, ..., X_{n,n}) → (X_{1,n}, d_{1,n}, ..., d_{n−1,n})

with Jacobian |J| = 1. The joint density function of
(X_{1,n}, d_{1,n}, ..., d_{n−1,n}) is

f_{(X_{1,n}, d_{1,n},...,d_{n−1,n})}(y_1, y_2, ..., y_n)
  = n! λ^n exp[−λ(y_1 + (y_1 + y_2) + ... + (y_1 + ... + y_n))]
  = n! λ^n exp[−λ(ny_1 + (n − 1)y_2 + ... + y_n)]
  = n! Π_{i=1}^{n} λe^{−(n−i+1)λy_i}.

It follows that X_{1,n} and d_{1,n}, ..., d_{n−1,n} are mutually independent. □

Corollary 2.2 For the exponential distribution, the normalized spacings
D_{r,n} = (n − r)d_{r,n}, for r = 1, 2, ..., n − 1, are identically and
independently distributed with common distribution function F(x) = 1 − e^{−λx}.

Proof. The independence of the D_{r,n} follows from the last theorem. Since
the density function of d_{r,n} is (n − r)λe^{−(n−r)λx}, the density function
of D_{r,n} = (n − r)d_{r,n} is thus λe^{−λx}. □
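Corollary 2.2 can be illustrated by simulation: each normalized spacing D_{r,n} = (n − r)d_{r,n} should be Exp(λ) and hence have mean 1/λ. A sketch with assumed λ = 1 and n = 6:

```python
import random

# Corollary 2.2: each normalized spacing D_{r,n} = (n - r) d_{r,n}
# from an Exp(lam) sample is Exp(lam), hence has mean 1/lam.
random.seed(5)
lam, n, trials = 1.0, 6, 100_000
sums = [0.0] * (n - 1)
for _ in range(trials):
    xs = sorted(random.expovariate(lam) for _ in range(n))
    for r in range(1, n):
        sums[r - 1] += (n - r) * (xs[r] - xs[r - 1])

means = [s / trials for s in sums]
print([round(m, 3) for m in means])  # each value near 1/lam = 1
```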

Example 2.3 Consider life testing for n identical components which follow the
exponential distribution with parameter λ.
(i) X_{1,n} is exponential with mean μ/n = 1/(nλ).
(ii) In general, we can write

X_{k,n} = X_{1,n} + (X_{2,n} − X_{1,n}) + ... + (X_{k,n} − X_{k−1,n})
  = (1/n) nX_{1,n} + (1/(n−1)) (n − 1)(X_{2,n} − X_{1,n}) + ...
    + (1/(n−k+1)) (n − k + 1)(X_{k,n} − X_{k−1,n}),

which is a weighted sum of independent exponential random variables. In
particular,

E[X_{k,n}] = (1/n + 1/(n−1) + ... + 1/(n−k+1)) (1/λ),

and

Var[X_{k,n}] = (1/n² + 1/(n−1)² + ... + 1/(n−k+1)²) (1/λ²).

(iii) The total time on test (TTT) up to the k-th failure X_{k,n} can be
written as

TTT(X_{k,n}) = X_{1,n} + X_{2,n} + ... + X_{k,n} + (n − k)X_{k,n}
  = nX_{1,n} + (n − 1)(X_{2,n} − X_{1,n}) + ... + (n − k + 1)(X_{k,n} − X_{k−1,n}),

which is a sum of k independent exponential random variables with parameter λ.
(iv) In general, denote by R the total number of failures before time t; then
the total time on test up to time t can be written as

TTT(t) = X_{1,n} + X_{2,n} + ... + X_{R,n} + (n − R)t
  = nX_{1,n} + (n − 1)(X_{2,n} − X_{1,n}) + ...
    + (n − R + 1)(X_{R,n} − X_{R−1,n}) + (n − R)(t − X_{R,n}).
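The formula for E[X_{k,n}] in (ii) is a partial harmonic sum scaled by 1/λ; the sketch below (illustrative n = 5, k = 3, λ = 0.1) checks it against a direct simulation of the k-th order statistic:

```python
import random
import statistics

# Check E[X_{k,n}] = (1/n + ... + 1/(n-k+1)) / lam by simulating
# the k-th order statistic of n iid Exp(lam) lifetimes.
random.seed(6)
lam, n, k, trials = 0.1, 5, 3, 100_000
exact = sum(1 / (n - i) for i in range(k)) / lam  # (1/5 + 1/4 + 1/3) / 0.1

draws = [sorted(random.expovariate(lam) for _ in range(n))[k - 1]
         for _ in range(trials)]
est = statistics.fmean(draws)
print(round(exact, 4), round(est, 4))
```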



2.4.2 Characterization Based on Order Statistics

We first give a characterization of the exponential distribution based on the
first two spacing statistics.

Theorem 2.6 Let X_1, X_2 be independently distributed random variables with
common continuous distribution F(x). If X_{1,2} and X_{2,2} − X_{1,2} are
independent, then F(x) = 1 − e^{−λx}.

Proof. The independence of X_{1,2} and d_{1,2} implies

P[X_{2,2} − X_{1,2} > x | X_{1,2} = y] = P[X_{2,2} − X_{1,2} > x],

for all y ≥ 0; that is, the left side is free of y. On the other hand, the
distribution of X_{2,2} given X_{1,2} = y is equivalent to the survival
distribution of X given X > y. Thus, for y ≥ 0,

P[X_{2,2} − X_{1,2} > x | X_{1,2} = y] = P[X_{2,2} > x + y | X_{1,2} = y]
  = P[X > x + y | X > y] = F̄(x + y)/F̄(y).

This implies

F̄(x + y)/F̄(y) = F̄(x), for all x, y ≥ 0.

That means F(x) is exponential, by the memoryless property. □

Remark. If we know that the normalized spacings are all exponential with
F(x) = 1 − e^{−λx}, does it imply that the population distribution is also
exponential, without additional assumptions? The answer is no.

Example 2.4 Let X_1, X_2 be independent and identically distributed with
common distribution function

F(x) = 1 − e^{−x}[1 + 4b^{−2}(1 − cos(bx))], for x ≥ 0, b ≥ 2√2.

Then it can be shown that

d_{1,2} = X_{2,2} − X_{1,2} = |X_1 − X_2|

has an exponential distribution.

We further give a result without proof.

Theorem 2.7 Let the population distribution F(x) be such that F(0+) = 0 and
F(x) > 0 for all x > 0. Assume that

u_r(s) = ∫_0^∞ e^{−sx} dF^r(x) ≠ 0

for all s such that Re(s) ≥ 0. If for some r ≥ 1,

P[d_{r,n} ≤ x] = 1 − e^{−(n−r)x}, for x ≥ 0,

then F(x) = 1 − e^{−x} for x ≥ 0.

2.4.3 Record Values

Closely related to the order statistics are the following record values. Let
{X_1, ..., X_n, ...} be a sequence of nonnegative random variables which are
observed one by one in a time-ordered way (longitudinal observations). Let
F(x) be their distribution function with density f(x).

Definition 2.1 X_j is called a record value if X_j > max(X_1, ..., X_{j−1}).

Denote by R_1 < R_2 < ... the successive record values. The following simple
result is due to Tata (1969).

Theorem 2.8 Let {X_1, X_2, ...} be a sequence of identically and
independently distributed nonnegative random variables. The distribution
function F(x) is exponential if, and only if, R_1 = X_1 and R_2 − R_1 are
independent.

Proof. Notice that given R_1 = X_1 = x, the difference R_2 − R_1 = R_2 − x is
equivalent to the residual life X − x | X > x. Thus, the conditional
distribution of R_2 − R_1 given R_1 = x is just

P[R_2 − R_1 > y | R_1 = x] = P[X − x > y | X > x] = F̄(x + y)/F̄(x),

for x, y ≥ 0. Thus, R_1 and R_2 − R_1 are independent if, and only if,
F̄(x + y)/F̄(x) is free of x. That means, by letting x = 0,

F̄(x + y)/F̄(x) = F̄(y),

which is the memoryless property. □

2.5 More Applications

Example 2.5 (Time of Accident)
Insurance companies collect accident records (histories) of drivers. A driver
is considered to be in the "good" category if his probability of having an
accident (in the statistical sense) remains the same small number regardless
of the passage of time. Therefore, if X denotes the random time up to the
first accident of the driver in question, then

P[X > x + y | X > x] = P[X > y].

This is exactly the memoryless property. In this case, the insurance company
does not need to guess the risk of such a driver, but can go ahead with a
well-defined model for setting the amount of premium based on the driver's
record.

Example 2.6 Suppose a store serves a community of n persons. These persons
visit the store independently of each other, and their actual times of
entering the store have the same distribution. Therefore, the n individuals
can be associated with n identically and independently distributed random
variables, say {X_1, ..., X_n}. The store owner observes the order statistics
{X_{1,n}, ..., X_{n,n}} as the successive arrival times at the store. Assume
the store owner observes that the statistics {X_{1,n}, d_{1,n}, ..., d_{n−1,n}}
are also independent. Then, from Theorem 2.9, X_j has an exponential
distribution. This also provides a well-defined model and a basis for
decisions on the number of employees, availability of items, etc.

Example 2.7 (Geometric Sums) Consider a system with exponential lifetime with
parameter λ. Upon a failure, a repair can restore the system to like-new
condition with probability p, while the failure is irreparable with
probability 1 − p.
Denote by N the total number of working periods, which is a geometric random
variable (Problem 2.8) with success probability 1 − p. That means

P[N = k] = (1 − p)p^{k−1}, for k = 1, 2, ....

Denote by X_1, ..., X_N the corresponding working period lengths, which are
independent exponential random variables. Then the total lifetime of the
system is

Y = Σ_{i=1}^{N} X_i.

(i) By conditioning on the value of N, we can calculate the mean of Y as

E[Y] = Σ_{k=1}^{∞} E[Σ_{i=1}^{k} X_i] P[N = k]
  = Σ_{k=1}^{∞} (k/λ) P[N = k]
  = E[N]/λ = 1/((1 − p)λ).

(ii) More generally, we can calculate the moment generating function of Y as

E[e^{tY}] = Σ_{k=1}^{∞} E[e^{t Σ_{i=1}^{k} X_i}] P[N = k]
  = Σ_{k=1}^{∞} (λ/(λ − t))^k (1 − p)p^{k−1}
  = ((1 − p)λ/(λ − t)) Σ_{k=1}^{∞} (pλ/(λ − t))^{k−1}
  = (1 − p)λ/((1 − p)λ − t), for t < (1 − p)λ.

(iii) By matching the moment generating function, we see that Y is also an
exponential random variable, with parameter (1 − p)λ. In other words, a
geometric sum of identically and independently distributed exponential random
variables is still exponential.
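The conclusion of Example 2.7 can be checked by simulating the geometric sum directly; the sketch below (assumed λ = 1, p = 0.4) compares the sample mean and a tail probability of Y with those of Exp((1 − p)λ):

```python
import math
import random
import statistics

# Example 2.7: a geometric sum of Exp(lam) periods is Exp((1-p)*lam).
random.seed(7)
lam, p, trials = 1.0, 0.4, 100_000

def geometric_sum():
    total = random.expovariate(lam)       # first working period
    while random.random() < p:            # repair succeeds w.p. p
        total += random.expovariate(lam)  # another working period
    return total

ys = [geometric_sum() for _ in range(trials)]
mean_est = statistics.fmean(ys)
tail_est = sum(y > 2.0 for y in ys) / trials
print(mean_est, 1 / ((1 - p) * lam))             # means should agree
print(tail_est, math.exp(-(1 - p) * lam * 2.0))  # tails should agree
```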

Example 2.8 (Valuation of Defaultable Zero-Coupon Bonds)
(i) Consider a unit of bond with face value $1.00 and maturity time T. Suppose
the interest (yield) rate is a constant r. Then at time t, the value of the
bond is e^{−(T−t)r} (discounted to time t).
(ii) Suppose the bond's default time X follows an exponential distribution
with parameter λ and there is no recovery at default. Then the value of this
defaultable bond at time t, given that it has not defaulted, is

e^{−(T−t)r} P[X > T | X > t] = e^{−(T−t)(r+λ)}.

(iii) Further, we consider the partial-recovery case by assuming that a
proportion w of the market value is recovered at default. Thus, the
(discounted) value at time t, given no default up to time t, can be evaluated
as

e^{−(r+λ)(T−t)} + E[e^{−r(X−t)} we^{−r(T−X)} I_{[X<T]} | X > t]
  = e^{−(r+λ)(T−t)} + we^{−r(T−t)} P[X < T | X > t]
  = e^{−(r+λ)(T−t)} + we^{−r(T−t)}(1 − e^{−λ(T−t)})
  = (1 − w)e^{−(r+λ)(T−t)} + we^{−r(T−t)}.

Thus, the value at time t is a mixture of the values under no recovery and
under full recovery (no default).
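The three bond values in Example 2.8 are straightforward to compute; the parameters below (r, λ, w, T, t) are hypothetical choices for illustration:

```python
import math

# Bond values from Example 2.8 with hypothetical parameters.
r, lam, w = 0.05, 0.02, 0.4  # yield, default rate, recovery fraction
T, t = 5.0, 1.0
tau = T - t

risk_free = math.exp(-r * tau)                   # part (i)
no_recovery = math.exp(-(r + lam) * tau)         # part (ii)
partial = (1 - w) * no_recovery + w * risk_free  # part (iii)
print(round(risk_free, 4), round(no_recovery, 4), round(partial, 4))
```

As expected, the partial-recovery value sits between the no-recovery value and the default-free value.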

Example 2.9 (Two-Unit Reliability Systems)
(i) (Serial System) Suppose the two units are connected in series and work
independently; that is, as soon as one unit fails, the system fails. Suppose
the lifetimes X_1 and X_2 follow exponential distributions with parameters λ_1
and λ_2 respectively. Then the system has an exponential lifetime
min(X_1, X_2) with parameter λ_1 + λ_2.
(ii) (Parallel System or Warm Standby System) Suppose the system works as long
as at least one unit is working. Then the system's lifetime max(X_1, X_2)
follows the distribution

F(x) = P[max(X_1, X_2) ≤ x] = (1 − e^{−λ_1 x})(1 − e^{−λ_2 x}).

(iii) (Cold-Redundant System) Suppose the second unit is in cold standby and
is put to work as soon as the first unit fails. Therefore, the system lifetime
is X_1 + X_2 and its distribution function is the convolution of the two
exponential distributions. For λ_1 ≠ λ_2,

P[X_1 + X_2 ≤ y] = ∫_0^y P[X_2 ≤ y − x] dP[X_1 ≤ x]
  = ∫_0^y λ_1 e^{−λ_1 x}(1 − e^{−λ_2 (y−x)}) dx
  = 1 − e^{−λ_1 y} − (λ_1/(λ_1 − λ_2)) e^{−λ_2 y}(1 − e^{−(λ_1−λ_2)y}).

In particular, when λ_1 = λ_2 = λ,

P[X_1 + X_2 ≤ y] = 1 − (1 + λy)e^{−λy}.
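The convolution formula in (iii) for λ_1 ≠ λ_2 can be verified by simulation; the sketch below uses illustrative values λ_1 = 2, λ_2 = 1, y = 1.5:

```python
import math
import random

# Cold-standby system: check the convolution formula for
# P[X1 + X2 <= y] with X1 ~ Exp(lam1), X2 ~ Exp(lam2), lam1 != lam2.
random.seed(8)
lam1, lam2, y, trials = 2.0, 1.0, 1.5, 200_000

exact = (1 - math.exp(-lam1 * y)
         - lam1 / (lam1 - lam2) * math.exp(-lam2 * y)
         * (1 - math.exp(-(lam1 - lam2) * y)))

est = sum(random.expovariate(lam1) + random.expovariate(lam2) <= y
          for _ in range(trials)) / trials
print(round(exact, 4), round(est, 4))
```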

Problems
1. Suppose the lifetime of a certain model of car battery follows an exponential
distribution with the mean lifetime of 5 years.
a) Write down the survival function.
b) What is the probability that the lifetime will be over 2 years?
c) What is the probability that the battery will work more than 4 years given
that it worked at least two years?
2. The time to repair a machine is an exponentially distributed random variable
with parameter λ = 1/2. What is
a) the probability that the repair time exceeds 2 hours;
b) the conditional probability that a repair takes at least 10 hours, given
that its duration exceeds 9 hours?
3. Let X and Y be independent Exp(λ). Prove that the density function of
Z = X/Y is given by

h(z) = (1 + z)^{−2}, z > 0.

4. Let X be a nonnegative random variable such that P[X > 0] > 0. Then X is
exponential if and only if

E[X | X > a] = a + E[X], for all a ≥ 0.

5. Let X be an exponential random variable with rate λ.


a) Use the definition of conditional expectation to determine E[X|X ≤ c].
b) Now determine E[X|X ≤ c] by using the following identity:
E[X] = E[X|X ≤ c]P [X ≤ c] + E[X|X > c]P [X > c].

6. Let X_1 and X_2 be independent exponential random variables with the same
rate λ. Let

X_{(1)} = min(X_1, X_2) and X_{(2)} = max(X_1, X_2)
be the order statistics. Find
a) the mean and variance of X(1) ;
b) the mean and variance of X(2) .
7. Let X_1, ..., X_n be n identically and independently distributed
exponential variables with rate λ, and X_{1,n}, ..., X_{n,n} be the order
statistics.
a) What are the mean and variance of X_{1,n}?
b) Use the property of spacing statistics to find E[X_{k,n}] and Var(X_{k,n}).

8. Suppose that independent trials, each having a success probability p, (0 < p <
1), are performed until a success occurs. Denote by X the number of trials
needed. Then
P [X = k] = (1 − p)k−1 p, k = 1, 2, ...,

is called the Geometric probability distribution.


(a) Calculate the mean and variance of X.
(b) Calculate P [X > k].
(c) Show that X has a similar memoryless property, in the sense that

P[X > k + m | X > k] = P[X > m], k, m ≥ 1.

(d) Calculate the mean and variance of X.

9. Suppose a certain kind of insurance claim follows an exponential
distribution with rate λ = 0.02. Assume a deductible D = 400. Refer to
Problem 11 of
Chapter 1.
(a) Find the expected payment E[XD ] from the insurer.
(b) Find the expected payment E[XDF ] under the franchise deductible.

10. By conditioning on the value of the first record value R1 , find the distribution
function of the second record value R2 .

11. If X is an exponential random variable with parameter λ = 1, compute the


probability density function of the random variable Y defined by Y = log X.

12. If X is uniformly distributed over (0, 1), find the density function of Y = eX .

13. Magnitude M of an earthquake, as measured on the Richter scale, is a random


variable. Suppose the excess M − 3.25 for large magnitudes bigger than 3.25
is roughly exponential with mean 0.59 (or λ = 1/0.59).
(i) Find the probability that an earthquake has magnitude larger than 5.
(ii) Given that an earthquake has magnitude larger than 5, what is the
conditional probability that its magnitude is larger than 7?

14. The duration of pauses (and the duration of vocalizations) that occur in a
monologue follows an exponential distribution with mean 0.70 seconds.
(i) What is the variance of the duration of pauses?
(ii) Given that the duration of a pause is longer than 1.0 seconds, what is
the expected total duration of the pause?

15. Suppose the time X between two successive arrivals at the drive-up window
of a local bank is an exponential random variable with λ = 0.2.
(i) What is the expected time between two successive arrivals?
(ii) Find the probability P [X > 4].
(iii) Find the probability P [2 < X < 6].

16. Consider a parallel system of two similar units. That means the two units
follow the same exponential distribution with parameter λ.
(i) What is the distribution function of system’s lifetime?
(ii) What is the density function?
(iii) Calculate the failure rate.

17. Consider a cold standby system of two similar units. That means the two
units have the same exponential distribution with parameter λ.
(i) What is the distribution function of system’s lifetime?
(ii) What is the density function of system’s lifetime?
(iii) Calculate the failure rate and verify that it is monotone increasing.

18. Suppose X and Y are two independent exponential random variables with
parameters λ and δ respectively.
(i) Show that min(X, Y) is exponential with parameter λ + δ.
(ii) Show that P[X > Y] = δ/(λ + δ).
(iii) Show by the memoryless property that given X > Y, the variables Y and
X − Y are independent. Thus,

E[Y | X > Y] = E[min(X, Y)].

(iv) By extending (iii), show that min(X, Y) and |X − Y| are independent.

19. Suppose a bank branch has two tellers, and their service times for each
customer are exponential with parameters 0.2 and 0.25 respectively. When a
customer arrives at the branch, he finds both tellers busy and no customers
waiting.
(i) What is the distribution of his waiting time?
(ii) What is the probability that he will be served by the first teller?

20. In deciding upon the appropriate premium to charge, insurance companies
sometimes use the exponential principle, defined as follows. If X denotes the
amount of claims which the company has to pay, then the premium charged by
the insurance company is

P = (1/a) ln E[e^{aX}],

where a is some specified positive constant. Suppose X is exponential with
parameter λ and let a = αλ for some 0 < α < 1. Calculate the value of P.

21. Refer to Problem 1.16. Suppose F(x) is exponential with parameter λ.
(i) Find the Value at Risk at level 99%;
(ii) Find the corresponding expected total loss

T_l = E[L | L ≥ l].

22. Let Y be an exponential random variable and X_1 and X_2 be two independent
positive random variables. Show, by the memoryless property of Y, that

P[Y > X_1 + X_2] = P[Y > X_1] P[Y > X_2].

23. Let X_1, ..., X_n, ... be a sequence of nonnegative, identically and
independently distributed random variables. Define N as the first time the
sequence stops decreasing, i.e.,

N = inf{n ≥ 2 : X_1 ≥ X_2 ≥ ··· ≥ X_{n−1} ≤ X_n}.

(i) Show that for n ≥ 2,

P[N ≥ n] = P[X_1 ≥ X_2 ≥ ··· ≥ X_{n−1}] = 1/(n − 1)!.

(ii) Find E[N].


