0% found this document useful (0 votes)
64 views22 pages

Dynamic Indeterminism in Stochastic Processes

Chapter 10 discusses stochastic processes, focusing on their definition and examples such as queueing systems and random walks. It introduces key concepts like dynamic indeterminism and the classification of stochastic processes based on their state spaces and time parameters. The chapter also covers the gambler's ruin problem, providing insights into probability calculations related to gambling scenarios.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
64 views22 pages

Dynamic Indeterminism in Stochastic Processes

Chapter 10 discusses stochastic processes, focusing on their definition and examples such as queueing systems and random walks. It introduces key concepts like dynamic indeterminism and the classification of stochastic processes based on their state spaces and time parameters. The chapter also covers the gambler's ruin problem, providing insights into probability calculations related to gambling scenarios.
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd

ÇHAPTER- 10

STOCHASTIC PROCESSES
10.1INTRODUCTION
AND MARKOV CHAINS
aspect of
In general, in the previous chapters, we have been concerned with the static
statistics of
fatistical theory. Now we shall deal with the dynamic aspect of statistics or
change. In some situations in science and technology we will be interested in studying
processes", that is, phenomena that take place with change in time. The theory of probability
died did not have either general procedures or elaborate schemes for solving problems that
mse in the study of such phenomena. Hence it is necessary to develop a general theory of
andom processes to study random variables dependent on one or several discretely or
ontinuously varying parameters. This leads to a new concept of indeterminism in dynamic
oudies. This is referred to as "dynamic indeterminism". Many phenomena occuring in physical
nd life sciences are studied now not only as random phenomena but also as those changing
with time or space.
10.2 DEFINITION

Mathematically, astochastic process is aset of random variables (x,} or (x()} depending


on some real parameter like time t. These are also known as random processes or random
functions.

We will now discuss some examples.


e.g. 1. Aqueueing system that we have studied, is a random process or stochastic process.
The number of people joining in a queue during a time interval, the number of people served
from the queue in a particular interval are examples of random variables or stochastic variables.
e.g. 2. Consider the experiment of throwing an unbiased die. Suppose that X, is the
outcome of n throw, n>1. Then (X, /n > } is a family of randomvariables, such that
lor adistinct value of n (= 1,2,3,...).,we get a distinct random variable X,. Thus (X, /n > }
onstitutes a stochastic process. This process is known as "Bernoulli Process".
e-g.3. In the same experiment, suppose that X, is the number of sixes in the tirst n
ows. For adistinct value of n =1,2,...,. we get a distinct binomial variate X, (X, /n > |}
ch gives a family of random variables. Thus it is a stochastic process and X, is a stochastic
variable.
-g.4. Again in thesame experiment, consider X, as the maximum number shown in the
first nthrows. We can see that (X, /n 1} constitutes astochastic process.

487
488
d Statistical Methods AStiC
489

thesethrec cquations are included in


occuring in timc, likc, number rof telephone 4l
e-g.5 Consider a random cvent calls received t q4, ,1szs a-|
at a suitch board Consider an interval (0,) of duration of units. Suppose (5)
of incoming calls in that interval. X, is the
random variable which represents the number The Riased case) Suppose p#q (ie)p; Then (5 )is alinear
of calls within a fixed interval of
specific duration is a random variable X, and
the nurfanmiberly Werewrite the
difference equation.
I,,te T).Te(0,a) constitutes a stochastic
process. cquation
as 4: F P9:+1 +(-Pl4,
pq. - qz +(1- pq. 0
the velocity
Fluid Flow: In the turblent fluid flow
Turblentvar1ables
6. random
C-g.I are components u,Vwof The characteristic equation is pp -p+(l-p)=0
the fluid depending on the space coordinates X,Vz and the timet
cquation are
The roots of this
liquid: In a gas or liquid| at
e.g. 7. Movement of molecules of a gas or randomi ItI-4p(l- p) Ityl-4p +4p
the molccule collidcs with other molccules. Thus its velocity and position are altered instants ltJ-2pIt(1-2p)
the state of the molecule is subjected to random changes
at every instant of time Thus 2p 2p 2p 2p
in a straight line in steps of
8. Arandom walk model: Aparticle moves
[Link]
At cach
unit length
it can move one step to the right with the probability p orrone step toothe left
after nmos
with
|+(1-2p)_-P and p:
2p
1-(1-2p)p1
2p 2p
the probability gwhere p +q =1. lfthe particlestarts from ongin, its position
random variable, which depends on discrete parameter n.
IS a 1-P or I
eg. 9. Communication Process :The amplitude of the signals to be transmitted
arnplitude of the noIse produced in the channcl depending on time are both random vanak
or p=l
e.e. 10 Gamblers Ruin Problem (Classical Ruin Problem) : Suppose a Gamkl or p=

wins or looses a unit (Rupce or dollor or pound) with probabil1ties pand q respectivcly Lek
initial capital be z and his opponents initial capital be (a - 2) so that the combined capital lis general solution is, 4, = A+ B(q/p)' ... (6)
increased to a h
The gamc continues until the gambler's capital is cither reduced to zero or Using boundary conditions from (4)
Is, until one of the two plavers is ruined. We are interested in the probability of the thegamblels
notice that cnitt =|-B
ruin and the probability distribution of the duration of the game. We can qo = A+B=lA
of the gambler after n stages is a random stochastic variable.
10.3 RESULT: TO FIND THE PROBABILITY OF GAMBLER RUIN
44
Let g. denote the probability of ultimate ruin of the gambler. After the first trial, the
gambier's capital is (2 ) if he wins (with probability p) or it is (:-1) if he loses (with B
probabil1ty q).
The probabilties of his ruin, after one trial, will be 4,., and q,-, respeetively
so that 4, = PA,., *99, I<2<a-1 (|) p

for : 1l, the first trial may lead to ruin and we may have From this, we get A=|-B

(2)

Smilarly for z =d-1, the first trial may result in victory, we may have From (6), 4, =A+ B
(3)
lo unity our cqualions we define (4lp) -(q/p)' (q/p)'llq/p)-) (7)
(q/p -I (q/p) -1
9o =I,q, 0 (4)
Computer Oriented Aastc
processesano Mar Chains
490
Statistical 491

can be obtaincd from this


by
Methen
The probabihty P, of the opponent's ruin writing a-z for -I
zand interchanging p and q so that
wefind9, and P, 1
(glp)' -I
(plq) lp/q)' -]_ 1-(p/q)
1-(q/p) (q/p) -I (8)
(p/g-1
has to terminate skillful player with even a slight capital can haves
From (8) &9.p. *q. =l . (9). so that the game (and there is no Thus"a
less chance of ruin than a
draw) withlarger capital, but less skillful"
aver
Case Il : (UnBiased case) Thisproblem can also be discussed with diverse variations like
(10) (a)Stakes are doubled with no change in initial capilat.
If p=q= 1/2. (5) becomes 24, =9,.1+4.1
(b) p*9 (biased)
give that p= l as a
Its characteriste equation is p'-2p+1=0 which double ro0t. with draw
i) Gambler's ruin
General solution is g. - Az(|)' +B(U)' =Az+B Gambler's ruin with correlation.
()
+B= Aa +l are above the comprehension of anormal student
Using conditions in (4), we gct g =l= B.q, =0 = Aa Allthese at this level. Those who are
tolearn are
advised to look into advanced texts on statistics.
n
MODELs
4 RANDOM WALK
A
a
We have introduced carlier arandom walk model in eg. 8 on page 488.
Substituting we get q, +1
a
Suppose a particle moves in a straight line in steps of unit length. At each stage it can
pOVeonesteptothee right with probability por one step to the left with the probability 4. where
is a solution for (10) ... (11)
7"q=1. It is natural that if the particle starts from origin its position after n movements in a

In this case writing (a -2) for z, we get p, zla .. (12) sndomvariable which depends on the discrete parameter n.
The Random walk models described above is called symmetric if p=q Ifp >
so that p. *q, -listrue here.
ben we say that
there is adrift towards the right, if q > we say that there is a drift
We can reformulate our results as follows: "Let a gambler with an initial capital : nlay 2
aganst an infinitely nich adversary who is always will1ng to play, although the gambler has the ards the left. The walk is said to be unrestricted if the particle can go to anypount on the
prvilege of stopping at his plcasure. The gambler adopts the strategy of playing until he eithet
loses his capitalor incrcases it to a(with anet gain (a -2) ). Then g, is the probability of hs In two dimensional random walk the particle moves in unit steps in one of the four
los1ng, and (|-g,) the probability of his winning" irections parallel to x axis and y- axis.
Note 1. When Pq,wc have q,= For aparticle starting at the origin the possible positions are all points of the plane with
and P. =. Suppose the capital of the Megral valued coordinates. Each position has four neighbours. Similarly in three dimensions
opponent is comparably greater than the capital of theplayer ie. a-z is infinitely greater han ah position has six neighbours. The random walk is defined by spec1fying the corresponding
: which implies that a is infinitely greater than , then p. ’[Link], we can say that run
bu or six probabilities. Here for simplicity we willconsider only the symmetric case where
of the opponent is practically impossible, when the players are of equal skill and capital of
ddirections have the same probability.
opponent is comparit1vely grcater than the capital of theplayer. 10.5 SPECIFICATION OF STOCHASTIC PROCESSES
2. Whern p and p>4 and (a- z) ie. a set oe
State space : The values assumed by the random variable are called states. The
),

possible values of an individual random variable X, of astochastie process (X,,n 2|) is


is said to be discrete if it
Cwn as its state space. It is denoted by I. The state space
NLains a finite or countable intinity of points, otherwise it is called continuous.
Computer Oriented processesand Markov Chains
492

denotethe total number of


Stalsica Method
gchaslc
MULTI-DIMENSIONAL STOCHASTIC PROCESS
493

is rolled. Let X,
e-g. I1. Suppose afair die SiXeS we have assumed that the

in the first n throws of a die.


Then the set of possible values
Here the state X, is discrete.
of X. is the
finite mppcaing Untilnow
menstona But the process
values assumed by the random
(X,n21) may be
variable X,
multi-dimensional
arc one-

hegative integers 0,1, 2,....


Consider
+...+ Z where Z is a continuous
Cg9. Xn=)20) where X, represents maximum and X,
represents
e.g. 12. Consider X, =Z, +Z,
assuming values in [o0, o). Then the set
ofpossible values of random atiable
X, is in the interval (0
v minimumntemparature at a place in the interval of time
(0,). This is atwo
ochasticprocssin continuous time having continuous state space. Similarly dimensional

the state space of X, is continuous. (0,).Then malti-dimensionalI process.


CLASSIFICATION OF
STOCHASTIC PROCESSES
we can have

have considered thc parameter n of X, as 08


In the above two examples, we
integer (n20). We considered
the state of system at distinct
wider sense.
"time"
non-n1eg,2at,ive
points n=0,
Fromthe above discussion it is clear that a Stochastic process is a function of sample
ntsandtime. The sample points may have discrete or continuous values. Similarly the
be
only. Here the word "ime" is used in a aperiments may defined as discrete of continuous time intervals, Thus a Stochastic proccss
We can again think of a family of random variables (X,,te T} such that the state of beclassifiedinto 4 types.
nay
system is known at every instant over a finite or infinite interval. Then the system is defined x-and t are continuousthe Stochastic process is called as "Continuous Stochastic
1. Ifboth
We say that we have afamily of random variablesiin
Process"
of times.
for a continuous
time. range
AStochastic process in continuous time can have either a disrete or a continuous continuous
slale
[Link] continuous and is discrete, the stochastic process is caled as a Discrete stochastlc
process.
space.
e.g. 13. Suppose that X, gives the number of outgoing calls at a switch board in an
, Wris discrete and r is continuous, the stochastic process is called as aDlscrete Stochostic
interval (0,). Here the state space of X, is discrete, though X, is defined for acontinuous Process.

time having discrete state space f hothxand f are discrete, then the Stochastie process is called aDiscrete Stochastie
range of time. Thus this is a process in continuous Process.
particular place in (a..
e.g. 14. Suppose X, represents the minimum temparature at a We put these in tabular
form
a system in continuous time kes:
then the set of possible values of X, is continuous. This is x()
ContinuIous Discrete
a continuous state space. Continuous Stochastic Discrete Stochastic
Continuous
10.6 RELATIONSHIP
Process Sequence
The relationship among the numbers of a family (X,} is of importance. The nature of Discrete Stochastic Discrete Stochastic
Discrete
the dependence varies. We will describe some Stochastic Process according to the nature of Process Sequence
dependence relationship existing among the members of the family. It can be classified as
Definition : Dependent Stochastic Processes We can classify Stochastic process in another way also.
and non-determininstic Stochastic process.
deterministic Stochastic Process
In some cases the members of the family (X,) are mutually dependent. They are calld Stochastic Process if all the future
Def. A random process is called a Deterministic
Dependent Stochastic Processes. The Bernoulli's process discribed earlier is an example of salues can be predicted from past observations.
Dependent Stochastic Processes. Non-Deterministie Stochastic Process ie future values
Astochastic process is called past observations.
Definition : Stochastic Process with Independent Increments of any sample function cannot be predicted tfrom
everywhere if their respective sample
If for all ,...44 <(, <.. <I,,the random variables X(,)- X(4 ),X() - X(,). Two random variables X(/) and Y() are equal
spaces are identical for every .
., X(,) - X(I,-) are independent, then (X(/), teT} is said to be a Stochastle Process tinte
symbols (X,,te T} or (X (), te T} where T is
with Independent Increments. In case of continuous time both the it may
rintinite interval is used. The
parameter is usually interpretted as time, though
Iepresent distance, length, thickness and so on.
chasticprocesses and Markov Chains
Computer Oriented Slatistical
495
494

10.9 PROBABILISTIC STRUCTURE


As ( : )isindependcnt of ,,the mean of each random variable is sarne Hence the
ways. One, as a Is a constant
process in two diflerent collection of the process
We defined a random
as a collection of
different random variables
defined at of dllferlitme. Thus, E[x)] = X- constant
timefunctions andanotherprobabilistic structure of arandom variableiusing
instants. We willstudythe
definition a random process bccomes a
random variable, d iferen t
thesetwhenwtiomedifisnitrm Def. [Link] process is said to be stationary of order two if for all ,.2 and ×its
orderdensity functions
By second function and
characterisedby a probability density moments. it is
satisfy the cond1tion, f,(K,., h.)- f,(y.x
random variable is possible to fixed aond
order stationary: h6./2-8)
A
different statistical properties
like mean, variance and
DISTRIBUTION AND DENSITY
other
FUNCTION calcuak. Aprocessis stationary to order n, if for nrandom variables of the process considered
10.10 PROBABILITY function as omes ./2,....t, their nthorder joint density function is invariant with time origin shift.
can define the Probability
To cach stochastic variable we
real number x :h.2,,) =f(M.X..htö,l, +.,.I, +ö)
I,(x :4)= P[xt;)Sx ]for any Definition : Evolutionary process
This is called the first order distribution function of the random variable x\t).
(x:) is defincd as the Aprocess which is not stationary is said to be evolutionary.
function
The first order probability density
o12 TIME AVERAGES
first order probability distribution function.
derivat1ve of Previously we defined statistical averages of the stochostic process, by viewing that as
iollection of random variables indexed in time. Now, we shall consider the stochastic process
f(q:4)=(:4)
collectionof time(sample) functions and define the time averages for the random process.
Similarly. we can define second - order joint probability distribution function as sa
nef. The time average of a stochastic process is defined as
I,(. :4.,) =P{xth) sx,xt,) Sxz} A[x()] = Lt
probability density function as
and the second order joint
We can define time mean, variance for a sample function also. If x(0) represents a
amnle function, then mean of the sample function (time function) is
variables.
We can extend this idea to n random
Thus we can define nth order joint probability
distribution function as i= A [x(1)] = Lt 2T
I
Lx) d
-T
I,(.)....,h...)=P}y)S ,x)SX),.., ,) <x}
The time auto correlation of a sample function is defined as
order probability function is defined as
R(t) =A [x(1).x(t+ )]= TLt 2T x) x +) d
-E,(,2, ,ih2t,)
Similarly, the mean of other
10.11 STATIONARITY The time averages like + ofa sample function are constant.
constitute a random variable.
ample functions will each be constant. These constants again
Stationarity of arandom process explains the time invariance of certain propertes Fu variables.
Weecan define the statistical average of these random
a stationary process, the distribution function or certain expected values do not change wih
E[7] =X,
time. For stationary process, which is not time dependent.
Contradictarily, for a non-stationary random process, any of its density functions E[R()] =R(t)
probabilty functions or any of its movements depends on the precise value of time.
If the process has zero variance then X and R() =Ryx(t)
For stationary process, ,(x,h)= (x, +ð) should be true for any 1, and atime shin
of ò.
Computer
Oriented Statistical echasicprocesses and Markov Chains
496

10.13 MARKOV PROCESS


Method at the initial (or zero-th) trial and fixed
497

for E,
occured at the preceding trial. The conditional probabilities
"Markov Process" if given the P, of
A Stochastic
proccss is saidto be a value of that E, has
not dependon the values of X(u)
for u<t. The
future Xit), the possible states of the system. P, is
possible outcomes E, are usually
the of X().v>t
valueMarkov docs
process depends only on the present value and not on the past values

process X) is said to be
Markovian if
behavlour kred toas calledthe probability of a
transition from E,
Def. A Stochastic Definition: The
Another stochastic
(x,,n=0,1,2,. proccss is Called a Markov chain
ij44aEN, Px, =j/x=iX1=h.=
ShSh...sI, <I,1
= P[XU,-)sX(,)= %, where ko P{x,=/X,=}=P,
the process.
are called the states of first number is defined.
Xo, XË, X,....x, NNeverthe
10.14 MARKOVCHAIN outcomes E, are called states of the Markov chain. If X,
if cach X, is a random
The has the outcome E, ie.
[X,,] is a Markov chain variable and if
Asequence of states -/.the process is said to be at the state E, or simply at the state j at the n trial. 1o

thereis nolonger a fixed


probability P(X, = ))but to apair of states (i, )) at the two
=P[X, =n/x, =x] aCessivetrials (say h and (n+1)" trials) there is
conditional probability P,. This is the
10.15 MARKOV CHAINS
of transition from the state at the n" trial to the state i at (n +1y trial.
one of a finite number of states E,,E,,.,E.
Consider a system which can be in any We habilly
also assume that the probability of the system being in a given state at the next trial depends
only on its present state and not upon the states it may have been in earlier times. If al any
The transition probability may or may not be independent of n. Ifthe transition probability
ndependentof n. then Markov chain is said to be "homogeneous" (or to have stationary
it beinginthe state E, at the next trail is
This the
time, system is in astate E, , the probability of
process is known as Markov chain and this property of the process is called Markov pP,itionuprobabilities). Ifit is dependent on n, then thechain is said to be non-homogeneous.
ansition probability P referes to the states (i.j) at two successive trails. The transition
property. Lo step and P, iscalled one-step or unit step transition probability.
associated with a fixed probability hu
In this process the out come E, is no longer In more general case, we are concerned with the pair of states (i, j) at two non-successive
E, has ocurgd
every pair (E,,E,) there corresponds a conditional probability Py i given that Ll say state i at n trial and state jat (n+m)" trial. The corresponding transition probability
addition to the P, we must he
at some trail, the probability of E, at the next trial is Py. In
to have the meanine hen called m-step transition probability and is denoted by pm)
given the probability a, of the outcome E, at the initial trial. For P,
attributed to them, the probabilities of sample sequences corresponding to two, three or four ie p =P(X,y+m =k/X, =0) (and this is beyond the scope of the present book)
trials must be defined by P{(E,,E, )} =4,p,, P{(E,E,E)}=4P,Pk, 17 TRANSITION PROBABILITY MATRIX
PIE,.E,E,E,) =4 P, PA Po, and generally The transition probabilities P, will be arranged in a matrix of transition probab1lities
PI(E0.E..E)} =40PoiPi2-P PnsPa . (1)
We now formally give the definition of a Markov chain. Pi P2 P .:.
P21 P22 P23 .
10.16 DEFINITION (JNTU(H) Sup. 2011 (Set No. 3)]) P=|P P32 P3 .*
A sequence of trials with possible outcomes E,,E,... is called a Markov chain if the
probabilities of sample sequences are defined by (1) in terms of a probability distribution
SIOchasticprocesses and Markov Chains
Computer Oriented
498
column,.
Statistical Method Theorem: If P andQ are
499

subscript stands for row,


the second for Clearly P 10.19 stochastic
matrices then
matrix for all positive integer product
PO is also a stochastic
where the first unit row sums. This matrix is
called Pis asqute Thus pn is a stochastic
non-negative elements and matrix. values of n.
matrix with
probability matrix (tpm).
symbol p, will be used for the
probability of transition from state i translt n 10.20
p W ep r
Definition: A
are positive.
Theorem: A
stochastic
matrix Pis said to be regular if all the entries of some

The 10.21 stochastic matrix Pis not regular if 1 occures in the principle main
one generation. Jagonal.
Matrix
Properties of transition probability
probability matrix has several
features. SOLVED EXAMPLES
A transition used both as Esample I:) Which of the following matrices
possible states must be rows and are stochastic
matrix, since all

entries recproluemsnesnt.
I. It is asquare
this is because all the
between0 and 1, inclusive; [/2 1/2
Z. All entries are
V2 1/2
probabilities.
3. The sum of the entries in any row must be 1, since the nunbers in the row give the
0 2
the right.
state at the left to one of (v) 1/4 1/4| |JNTU(H) May 2011 (Set No. 4)]
probability of changing from the
If P² represents the matrix product P.P, then P gives probabilities of atransition from
Solution:() is not a square matrix.
one state to another intwo repetitions of an experiment. In general pk gives the probabil1tuts it is not stochastic.

repetitions of an experiment. s2 The matrix is a square matrix with non-negative entries and sum of the elements in
of a transition from one state to another in k
Definition : AStochastic Matrix is asquare matrix with non-negativeeelements aand unit ach row is equal to 1.
. The matrix is stochastic.
row sums.
MARKOVCHAIN (i) The matrix 1s a square matrix but sum in each row is not equal to 1. So it is not
10.18 ORDER OF A for all tochastic.
said to be of order s (=1,2,..). if
Definition: A Markov chain (X. is (iv) It is a stochastic matrix.
=0} (v) The matrix is not stochastic, because it contains negative elements.
P(X, = k/X,- = j, X,-2J. Xn-s (v) The matrix is square matrix but sum in each row is not equal to 1. It is not a
=P{X, =k/X,- =j.. . sochastic matrix.
chain) if Esample 2: Test the following matrices are stochastic or not.
AMarkov chain X. is said to be
of order one (or simply a Markov
1 2 4 0
P{(X, =k/X,| =j,X4-2 =y 3 16 16
(a) (b) (c) |JNTU (H) June 2012 (Set No.4))
= P{X, =k/X,-1 =J}=PA 2 4
2
3
Markov chain, A chain of order one.
unless stated otherwise, we mean by
This implies the independence of Solution : (a) Given matrix is not a square matrix.
A chain is said to be of order zero if P, =P, Vi. It is not a stochastic matrix.
X, and X, (b) The matrix is a square matrix with non-negative entries.
is said to be a Finite Markor
Markov chain (X./n20} with k states, when kis finite
A
But sum of elements in each row is not equal to 1.
and columns.
transition matrix in this case is a square matrix with k rows
The matrix is not stochastic.
Chain. The
entries and sum of the elements
2,-1,0,1,2,..then the Markov chain is said lo be (C) The given matrix is square matrix with non-negative
If the possible values of X, are... in each row is equal to 1. Thus it is a stochastic matrix.
denumerably infinite.
Computer Oriented Statistical SOchastic processes and Markov Chains 501
500
the
Sinceall entries of some power of C are
1/3 positive, Cis regular stochastic matrix.
T0.75 0.25 0 10.75 0.25
0 is a 01 0.5625 0.3125 0.1251
Imple3: Find the value
of x,y,z if
1/3 1/4 transition 0 0.5 0.5 0

equal to 1.
|JNTU(H) Dec. proba lity 0.6 0.4 0 0.6
0.5
0.4
0.5
0
0.3
0.45
0.45
0.35
0.25
0.2
matrix. elements in each row is Since all the entries in A are positive,
Solution : Sum ofthe Ais regular.
1 [1/2 1/2 0
BB= 1/2 1/2 0
[1/2 1/2
(n) B' B =B-B= 1/2 1/2 0
3/8 3/8 1/4|
and 0+ y=]’y=-1 |7/16 7/16 1/8
1 1 12-4-3 5 1/2 1/2 0
!.!=1z=l-;
3 4
12 12
B =BB= 1/2 1/2 0
sequence of sample means from arandom 1S/32 7/32 1/16
Evample4: Let M, denote the
+X, +t... X,
process X, Since entries are zero, for all powers of B, Bis not
regular.
M, -¤
=
[0.25 0 0.75
B=
Is M, a Markov process ? (v) We can caluculate 0 1 0
0 0

Solution : M, =X =-X, +(n-)M,-]


n
fo.125 0 0.875 0.0625 0 0.9375
B= 0 1 0 B= 0 1 0
1 0 0 1
on X, and is independentof Ma.M,
Clearly if M- is given then M, depends only
Further powers of Bwill still give the same zero entries, so no power of matrix Bcontains
Therefore, M, is a Markov process. dl possible entries. Then B is not regular,
regular.
Exanple 5: Which of the Stochastic matrices are (or) B is not regular since Ioccurs in principal diagonal.
00 1 | 0.75 0.25 0 Note : If a transition matrix P has some zero entries and P also contains zero entries,
1/2 1/4 1/4]
() A = 0 1 0 (ii) C=|1/2 0 1/2 (ii) A= 0.5 0.5 e may wonder how far shall we compute P* to be certain that the matrix is not regular. The
1/2 0 1/2| 0 1 0 0.6 0.4 0
answer is that if zeros occur in the identical places in both P and P* for any k, they will
uDpear in those places tor higher powers of P, so P is not regular.
[1/2 1/2 0 [0.5 0 0.5]
tnple 6: Which of the following matrices are regular
(iv) B=| 1/2 1/2 0 (v) B=0 1 0 [JNTU(H) Dec. 2011 (Set No. 1j
1/4 1/4 1/2 0 0
3
o
Solution:() Not regular since 1 lies on the main diagonal (a (b)
0 [/2 0 1/2]
(i) C =C.C=0 1/2 1/2, c=C.C' =1/4 1/2 1/4
1
/2 0 1/2 0 1/2 /2|
f0 1/2 /2 [1/4 1/2 1/4] (c) (JNTU(H) May 2012 (Set No.2))
c =C'-C=|/4 1|/4 1/2. c=1/8 1/2 3/8 2

|0 0
/4 1/2 1/4| |1/4 1/4 1/2|
Computer Oriented Statistical Svchaslcprocesses
p and Markov
Chains
502 503
diagonal contains1. Thus they are
matrices the principal
Solution : In allthe not X,:
PX, =2,X, =1, - 2} =P{X, = 2,X, =1/ X, =2;
matrix of t regular
E,
two possible states E, and the 31 1
Evample 7: When there are only P{X, =2; 163

probabilities is necessarily of the


fom P=P |-a
transtion P(X,=1,X, =2, X, =1,X, =2)
- P(X,=1/X,
16

experiment =2,X =1,X, =2}. P(X, = 2, X, =1,X, =21


the following conceptual
Such achain could be realised byabsolute speed remains constant but the A particle Moof ves = P{X, =l/X, =2). 31 3
along the x-axis in such a way
motion can be reversed.
that its
to be in state E, if
The system is said
the left. Then
the direction
particle the
pis the moves in the
16
Evample10 Discuss about system S that is
416 64

in the
E, if the motion is to calelostate occurs in accordance with the scheme of a states A..A,A,.Markov
reversal probabilitty of a
positive direction and in state probability of a homogeneous transition
chain,Irom
the
reversal when the particle moves to the right and a the when moves rAISIIonprobabilities are given bythe matrix
to the left. [1/2 1/6 1/3]
with probability pfor a headi is tossed
Fvample 8: Suppose that a coin
X,, the outcome of the rh trail
be k, where k(= 0,1,2,.., n)
denotes that Le
is arun of k indefinitely.
there
7, =1/2 0 1/2
1/3 1/3 /3
uninterupted block of heads is k. {X,} constitutes sa
successes (i.e.)the length ofthe Solution: We see that if the system was in the state A,,then
chain, with unit step transition probabilities.
Markoy after a change of the state
bonestepit willremain in the same state with a probability of 1/2, and it will pass to state A,
p,k=j+1
PA =P{X, =k/X,- =j}= whaprobabilityof11/6, and to state A, with a probability of 1/3. But if the system was in the
=q,k= 0 tate A,, hen after the transition it can (with equal probability) find itself only in states A,
=0, otherwise A,;it cannot pass from state A, into A,. The last row of the matrix shows us that

Then the transition matrix is given by the state A, the system can pass to any one of the possible states with one and the same
babilityI/3.
0 1 2 kk+l..
Eample 11:The three state Markov chain is given by the transition probability matrix
1g p 0 . 0 0 ..
2q 0 p 00 .... ... 1 2
States of 0
q 0 0 . 0 2
3
1
Prove that the chain is ireducible. |JNTU (H) June 2013 (Set No.3)|
P=l= 2
1
I\mple 9 Let (X..n >0; be a Markov chain with three states {0,1,2; and uh
0 1 2
Solution : In this Markov chain, all the states communicate with cach other.
of3/4 1/4 Suppose we consider the states as 0, 1, 2.
transition matrix 1 1/4 1/2 1/4 and the initial distribution P{X, =i} =1/3,i = 0,1.2 then It is possible to go from state 0 to state 1with a probabil1ty of 12.
2 0 3/4 1/4 Again it is possible to go from state lto state 2 withaprobability of 1/3.
Thus it is possible to go from state 0 to state 2.
we have P{X, =/X, =2} =3/4; P{X, =2/X, =}=1/4
So, the chain is irreducible and all the states are recurrent.
13 3
P{X, - 2,X, =1,X, =2} =P{X, =2/ X,=I}.P{X, =l /X, =2} =4 4 16
eochasticprocesses and Markov Chains
Computer Oriented
504 Statistical Meth HIGHER TRANSITION PROBABILITIES
505

probability matrix, find its graph.


the transition O28CHAPMAN- KOLMOGROV EQUATION
|Eanple 12:Consider
E; E, Es Wehave considered unit step or one-step transition
E, E,
0 thatof We have
probabilities, the probab1lity of X,
0 (0
ven considered the probability of the outcome at the n" step or trial
0
E,| 1/3 0 2/3 0 [Link] at the previous step, P, gives the
1/3 0 2/3 Nen
the statej at the next trial.
probabil1ty of unit step transition from th
E; 0 i ata trialto
0 2/3 stale
1/3
E, 0 0 Also mstep transition
probability given by
is
0
E_ 0 0

below.
P{X,mn=0/X, =i} =pm) . (1)
Solution :: Its graph is shown
Pgivesthe probability that from the state i at the h trial, the state j is reached at
1/3
+n)trial in msteps.

2/3 The one step transition probabilities p are denoted simply by P,


1/3
Consider P=P{Xp=j/X, =i} ..(2)
The statejcan be reached from the state i in two steps through some intermediate stage r.
1/3
2/3
PX2=)X =r/X,=
-P{X2=/X, =r.X, =i.P{X, =r/X, =i}
2/3
=P P=P, P,
Eanple 13: Three universities A, B, Care admitting students. It is given thne Sincethese intermediate states r,r =1,2,...are mutually exclusive, we have
percent of the childeo
percent of the children of Awent to A and the rest went to B. 40
went to B and rest split evenly between A and C of the children of C seventy percent t
m
Cand 20 percent went to Aand 10 percent to B. Form the Markov chain and transition
Solution -E p, P, ...(3) Summing over all intermediate states.
A B C
By induction, we have
0 A0.8 0.2
Transition matrix is P= B0.3 0.4 0.3 Pm+) =P{X:ml =J/X, =i}
C|0.2 0.1 0.7 - SP{X,.m. =j/X,.m =r).P( .,=r/X, =1}
Markov Chain is

0.8 0.4 0.7


0.2 0.3 Similarly we get
B
0.1

0.2
506 Computer Oriented
In general we can have
Slatislical Method,
Svchaslicprocesses

RESULTSIN
and
Markov Chains
TERMS OF 507

... (4)
o29
TRANSITION
D=(D,) denote the transition matrix of the MATRICES
unit sten transitions and Pm)
This equation is a special case of Chapman - Kolmogrov thetransition
matr1x ={p, )
by the transition probabilities of a Markov chain. (This can be seen in
equation, which is Anote

matrix p
of m- step
transitions. We can see that elements of P
dvanced stealxtis+siedon
ementsofthe are the
statistics) Thus p2)= P.P = p2
Ex. 1. Verify Chapman-Kolmgrov equation with the following
example, write the Similarly plm+l) =
plm).P-P. Pm and
probability matrix given by

0 2 3
transit on p(mn) =pm), p = pln) plm)

00 SOLVED EXAMPLES
10.2 0 0.8 Fample 1: An urn initially contains five black balls
P= andfive white balls.
2 0 0.2 0.8 perimentis repeated indefinitely. Aball is
drawn from the urn:, if the ball is The following
0 ackin
the urn, otherwise it is left out. Let
X, be the number of black balls
white it is put
3 0 0
Unafter
nddraws from the urm. remaining in the
2
s X, a Markov process ? If so.
0 1

of0.2 0.8 0 (a) Find the apropriate transition probabilities.


0.36 (0 0.64 A Find the one - step transiton probability matrix P for X..
Solution : We have p²
2 0.4 0 0.96 i Find the two-step transition probability matrix Pby
matrix multiplication.
30 0.2 0 0.8 À What happens to X, asn approaches infinity ? Use
your answer to guess the limit
of P as n’0.
In the matrix P,consider the value of p =0.36 we are considering the
starting from state l and reaching the state 1. It is possible through any of the probability of Solution: The number X of black balls in the urn completely specifies the probability of
stages k(k =0,1,2,3). intermedate Ls outcomes of a trial; therefore X, is independent of its past values and X, is a
Markov
process.
State I to l can be reached is 5
PosPo- here the intermediate state is 0 PX,=4|X,=S]==l-P[X,
10 =5| X, =5]
4
PiP’ here the intermediate state is 1 MX, =3| X =4]==l-P[X, =4| X,- =4]
3
Pi2P1 ’ here the intermediate state is 2 Px, =2| X,=3]==l- P[X, =3| X, =3)
P3P: ’ here the intermediate state is 3 2
PX, =|| Xn=2]==l-HX,
7
=2|X =2)
Pi=Po Po, Pu P1tP2P21 tP2 P1
Hx, =0|X=lJ==1-PMX,
6 =1| X,4=||
= (0.2 x l) + (0x0) + (0.8x 0.2) + (0x0) = 0.36
PX, =0| X-=0]=1
Thus Chapman - Kolmogrov cquations are satisfied.
Allthe transition probability are independent of time.
508
Computer Oriented Statistica
Sochasticprocesses and
Markov Chains
0 0 0 0 0 509
0 0 0 0
6 6
2
0
one-step
transition probability matrixP2 2
7 7 3
(b) P= 3 5
0 0
4
0 0
will
0 0
4 P{Peter attend on Thursday | he went on
9 9
5 two- step transition matrix Friday} Pi = where 2)
P is taken from
the
0 0
10 10 5 3
8
9
(c) The two-step transition probability matrix p°by matrix
multiplication is L16
0
-=|8-|a 0 0 Iample 3 |The alumni office of a college
fund one year will also finds, on review, that 80% of its alumni who
25
0 0 0
contributeto the annual
contribute next. Writecontribute
36 36
1 65 25
contribute one year will next year and 30% of those who do
0 0 not the transition
21 144 49 Solution: Consider the state matrix.
p² = 0
3 225
448
25
0 li corresponding to alumnus giving a donation in any one year
as state and state corresponding alumnus not giving a
28 64
95 25
0
donation in that year as state 2.
0 0
6 192 81 2
2 19
0
9 36 4 The transition probab1lity matriX is given by P= 1f0.8 02]
2\0.3 0.7
Thus
(d) Asn’o eventually all black balls are removed. Lample 4: A raining process is considered as a two state
considered to be in state 0and it does Markoy chain. If it rains, it
is not rain, the chain is in the sate of 1.
1 0 0 0 0 0 The transition
1 0 0 0 0 probability ofthe Markov chain is 0.6 0.4
defined by P= 0.2
10 0 0 0 0
Find the probability that it will
0.8
P = in for 3days from today assuming that it is raining today. Assume that the
1 0 0 0 0
mutual probabilities
1 0 0 0 0 of state Oor state l as 0.4 and 0.6 respectively. [JNTU(H) Apr., June 2012 (Set No. 1)]
|1 0 0 0 0 0
Solution : The l step transition probability of the matrix is given by P= 0.6 0.4]
Ixample Peter takes the course Basic Stochastic Processes this quarter on Tuesday 0.2 0.8|
Thursday, and Friday. The classes start at 10:00 am. Peter is used to work until late in the P(2) = p² = |0.44 0.56
night and consequently, he sometimes misses the class. His attendance behaviour is such thal 0.28 0.72|
he attends class depending only on whether or not he went to the latest class. If he attended
0.376 0.624 |
class one day, then he will go to class next time it meets with probability 1/2. If he did not p0 P(3) = p³
0.312 0.688
to one class, then he will go to the next class with probability 3/4. Describe the Markov chaun
that models Peter's attendance. What is the probability that he will attend class on Thursday. The probability that it willrain on third day, given that it will rain today is 0.376.
if he went to class on Friday ? Evample 5: Consider MarkOv chain
Solution: Let X, =0 if Peter goes to the n-th class meeting and X, =1 ifhe skips it.
0 1 2
The process (X,n >1} is a Markov Chain with state space I= {0,1} and o[3/4 1/4 0
P=l||/4 1/2 |/4|. Find P and P(X, =1,X =0)
2 0 3/4 1/4|
Computer Oriented Statistical ahastic,processes and Markov Chains
511
510
by
transition matrix is given [01 05 040 05 04
Solutien The two step 06 02 02||06 02
3/4 1/4 0 02
3/4 14 0 T/4 03 04 0303 04 03
P4 2 1 4|/4 /2 0 3/4 1/4
0 3/4 4 2 3

2
I0.43 031 026
0
-2 0.24 042 034
o 5/8 5/16 1/16
-l|5/16 1/2 3/16 30.36 035 029
23/16 9/16 1/4

for n20 ()P{X, =3)


= SPIX, - 3/X, i) xPIX, )
Hence P- P(X,., - /X, =0) =16
Thus P(X, =1/X, =0) =5/6 and
-pPOX, - )+ p P(X, =2)+p PX, -3))
-0.26 x 0.7 +034 +02 +0.29 x0.1
P(X, L, =0)
- PX, =1LX =0} =P(N, =
1/X, = 0}. P(X, =0) -0,182+0.068 +0029 -0 279
ú) P{X, =3/X, =2) =P,, =02 ()
16 3 48
PIX, = 3, X, =2) =P[X, 3/X, 2} xP{X, -2)
Iampk t Suppose that the probability of a dry day (state 0) follows a rain day
(state 1) is 13, and probability ofarain day follows arain day is 1/2. Then we have atwo =0.2x02 -004 using (1) (2)
state Markov chain suchthat pe l/3 are po =1/2 and transition probability matrix (tpm) P(X, = 3, X, =3,X, =2) =P{X, -3/X, - 3, X, 2) xP{X, -3,X,-2)
given by -P{X, =3/X, 3) <P{X, 3,X, 2) by Markov property
0 1 -03x0 04 by cquation (2)
Pa
of1/2 1/2] -0,012
1/3 2/3
P(X, =2, X, =3, X, =3,X, =2)
S/12 7/12 173/432 259/432
and p
Solution : From this P 7/18 11/18 259/432 389/432 -P{X, =2/X, =3, X, =- 3, X, =2} xP{X, -3, X, -3, X, =2)
From this we can conclude that if March Ist is a dry day then the probability that March -P{X, =2/x, =3)x P{X, =3,X, =3, X, =2) by Markov property
ird is a dry day is 5/12 are March 5 is a dry day is 173/432. -04x0.012 by equation (3)
Ianpk The transition probability matrix of a Markov chain
=0.0048
0.1 0.5 0.41
Ivample 8 Afair die is tossed repeatedly. If X, denotes the maximum of the numbers
3 is P= 0.6 0.2 0.2 and the intal
(X,1.n 1,2,3,...having thrce states 1,2 and ccuring in the first n tosses, ind the transition probability matrix P of the Markov chain
|0.3 0.4 0.3 JNTU (H) Sup May 2011(Set No. ), Dec. 2019|
X,. Find also P and P(X, =6).
distnbution is P"= (0.7,02,0 1) |JNTU(H) Dec. 2013,(A) Nov. 2015, Nov. 2019| Solution : State space =(1,2,3,4,5,6)
Find i) P(X, = 3), () PÊX, =2, X, = 3, X, =3,X, =2} Let X, =the maximum of the numbers oçcuring in the first n trails = 3, say
Salution i We have Px, =1) =07, P(X, =2) =0.2, P(X, =3) =0.1
Computer Oriented Statistical hastic,[Link] MarkOv Chains
512 513

(n+1) trail resultsis 1, 2 or 3 Iample9; Consider a communication system which


Then X.,=3, ifthe transmits the two digits 0and I
trail results is 4 stages.
several
Lct Xn21} be the digit leaving the nth stage of the system and X,
-4, if the (n +1)
ngh
digitentering the first stage (leavVing the
thattthe digit which enters will be Oth stage) At each stage there is a consta
trail results is 5
the
ttlityg transmitted
-5. ifthe (n+1) whenit leaves),
and probability potherwise (i unchanged (ie. the ddigit will remain
Ahanged e. the digit changes when it leaves).
trail results is 6
=6, if thc (n+1)" pg=l.

I,,1 3 Solution: Here (X,,n20} is a homogeneous two -


state Markov chain with unit - step
. P(X. =3/X, =3)=66 asitionnmatrix.

when i= 4,5.6 P=
P{X,., =i/X, =3)=-:
6

the chain is
The transition probabilitymatrix of Wecan prove by M
Mathematical Induction that

1/6 1/6 1/6 1/6 1/6 1/6| 1

0 2/6 1/6 1/6 1/6 1/6 P" = L+4-p


2 2;9-p"
1
0 3/6 1/6 1/6 1/6
P= 4/6 1/6 1/6
0
0 0 0 S/6 1/6
0 0 0 0 0 1 Here Po=p=5;(9-p" and
Pt=Po and
2 2
13 S 7 9 |||
0 4 5 79 11 as m ’ 0 Li poo = Lt po =LL p =Lu p ’
10 0 97 9 11
p 360 0 0 16 9 11
Suppose that the initial distribution is given by
000 0 25 11 P{X, =0} =a and P{X, =I} =b=| -a
00 0 0 0 36
then we have P{X, =0,X, =0} =P{X, =0/X, =0} P{X, =0} =a pe
Intial statc probabil1ty d1stribution is and P(Xm =0, X, = l} = bp
The probability that the digit entering the first stage is 0given that the d1g1t leaving the
p=
66'6'6'6'6 (Since all the values 1,2, ....6cqually likely) nh stage is 0 can be evaluated by applying Baye's rule.
We have

P(X, =0/X, =0}


P(X, = 0/X, = 0} P(X, = 0)
P{X, =0/X, = 0}P(X, =0} +PX, =0/X, = l)P{X, =1}

a Poo
91
Pom)+b p
6 36-x(1+11+|| +||+1I+36) 216
Computer Oriented Statistical sechastic,processes and
514 Markov Chains
515
e
nrobability that the chain starting from state i returns to i.
n4 step is state for the first time
atthe denoted by
f,n=
time probability or the 1,2,3,... and
recurrence time probability.
is called as first time return

C-S =1 the return to state i is


certain, and the state i is said to be
persistent
recurret. If E, <1.(the return to the stateiis
1+(a-b) (q - p)"
f state
.
i communicates with uncertain) the state i is said to be transient
state j and if statei is recurrent. then
AND CHAINS state iis also recurrent
10.30 CLASSIFICATION OF STATES
(JNTU(H) June 20 eter H =2"Ji is called the mean
can be classifiedin adistinctive 2012 (Set No3| recurrence time of the state i. If the mean
The states ofa Markovchain (X,.n>0)
to some fundamental properties of the system.
manner ac ording Rcurrence time u. is finite, the state iis said to be
non-null persistent or positive persistent
some n>1, then we say that the ndif H, =0, is null-perisistent.
it
1. Ifthe probability P"is non-zero for
rcached from the state i. We say that state J is accessible from
statej can be |JNTU (H) May 2013|
pOSsitive recurrent (or positive persistent) and aperiodic state is called ergodic. A
A
communicate. It is clear that Markov chain all of whose states are ergodic is set to be a ergodic chain
2. Two accessible states and jare said to icommunicates
communicates with JNTU (H) Nov. 2015)
with itself for all 20. If the state state and )
j
communicates with state k, then the state i communicates
with
state k. state 0 Atpm, P is said to be a regular matrix if all entries of p"
(m =2,3,.) are non-Zero
that communicate are in the same class. A
state is called an essential Two state positive values. Ahomogenous Markov chain is said to be regular chain if its tpm is
communicates with every state it leads to. state if n a regular matrix.
of Atpm P is said to be a stochastic matrix if the elements of each of the rows are non
3. If every state can be reached from any state (in any number transitions)
chain is said to be irreducible. Then the transition matrix 0s irreducible. then the negative and the sum of elements in each rows is equal to l.
the chain is said to be reducable or non-reducible. Otherwise,
|JNTU (H) May 2013 1. Ahomogencous Markov chain willhave atpm that is independent of initial state iand
4. Astate is said to be an absorbing state if and only if p, =1. A Markov
chain is steps n as n’0 and is called steady state probability, ie. 4, =Lt P" and
absorbing ifit has at least one absorbing state and it is possible to go
in one or more steps.
from every non-
absorbing state to atleast one absorbing state X4, =lwhere 4, is called limiting state probability and is interpreted as the long
|JNTU (H) Nov. 2015)
run proportion of time the Markov chains spends in statej.
S. A state i of aMarkov chain is called a return state if p>0 for some nbl 10.31 MARKOV ANALYSIS
6. Astate is said to be periodic with period I(>) if the retum to the state is possihe The Markov analysis is a method used to forecast the value of a variable whose predicted
only in .2,31,....steps, where t is the greatest integer with this property. In thy value is influenced only by its current state, not by any prior activity. In essence, it predicts a
random variable based solely upon the current circumstances surrounding the variable.
case p =0, unless n is an integral multiple of r The technique is named after Russian mathematician Andrei Andreyevich Markov, who
The state i is said to be aperiodic (or non-periodic) of no such 1(>I) exists. pioneered the study of stochastic processes, which are processes that involve the operation of
chance. He first used this method to predict the movements of gas particles trapped in a
container. Markov analysis is often used for predicting behaviors and decisions within large
Alternative Definition

7. Thepenod of areturn state is defincd as the grealest common divisor (GCD) of allm groups of people.
detin1ng the
Understanding Markov Analysis : The Markov analys1is process involves
such that p>0 ie., d, =GCD (m: pl>0; likelihood of a future action given the current state of a variable. Once the probabilities of
tree can be drawn. Then, then the
Astate /is said to be periodic with period d, ,if d >l and aperiodic if d =1. future actions at each state are determined, a decision
variable. Markov analysis
likelihood of a result can be calculated, given the current state of a defective
predict the number of
has several applications in the business world. It is often used to
states.
Dotolikely
the cconometrics
often events, known inare and linc 516
P Now all A. ball relatively pices
=1/4 TheP>0, States often out
1/w4
42 (X,} Solution Ivamplel:
the Show to tells
Disadvantages
chain P and Advantages of that
of states one casy it better
P>0, =|1 is that and B sample wil
1/4 1/2|1/2 is 0 a X, little cannot
to
irreducible.
1/2 0 /2 Markov depend The : are the
B at come
States
P transition
ergodic? throws the
alwaysThree estimate
about making forecasting of
/2 P>0, 1/2 0 process beof
only of
Markov off
|P chain. boys the
why Markov an
1/2
=|1/4 P 0P'= on X,B is conditional truepredictions
1/41/42 We probability A,something
EXAMPLES
SOLVED accuracy.
Analysisassembly
1/23/T/8 can >0,
states Markovian. B Analysis
model
c|1/21/2 0 A0
States
A of X, ball
and
1/2 see P |1/41/2|1/4 1/2 of 0 :
that >0 matrix(JNTU2011(H) probabilities
toare Chappened. of Sithanmple The linc,
X 1 B Find he :
1/4;P° and 1/2 1/2 0 C; tMarkov
more primary given
but C of throwing
the but modcls,
0 underlying
all not the Nov. the
0 transition
other isC basedanalysis
complicated benefits
1/83/8=|1/4 on process 2010, just a such operating
1/I/24 /4 states ball on Computer
P>0 as situation
the is as
{X,} to not those of
3/8 of
[Link]
cach current
state. models Markov status
very Markoy
usedfor
X,n-2,X-3s... to
other. cases.
most in Statistical
MethodOriented
1/2 is
given (Set andthrow useful result of
Thisanalysis the
[Link] classify throws A
as No. always
the explaining for machines
1), However, are
Dec. ball the Yes,
isit
or analysis,
simplicity
wel is
earlie, 2019states. to on
Bas that the

sates uenery
are Since
periodic
[Link],
Also, In Similarly
p=P'.P= p? P=|1/2
Solution : I Since We
.pC.
. aC.
not Thus general, The The I te ohastic
the we amplMore eover note p
ergodic. with the periodic.
the state 1 states D. processes
Markov p> observe 2 all chain th at of that
period Markov p p° p'
pn = = P=1| the 2,3
P.P=P.= p*.P' = o state
Find(ie. 1 [Link] is
2,3,5,6,.
p².P= 201|/2
finite states (ie. and
chain p*) 2. 0,chainP>0. P>0, P 0 0the
and . Markov
is p.p². = |1/2 [1/2 1/2||1/2 0 10 0 1 1nature are and B
P*! A) and -1.
finite =0 is P 0 ergodic.
P> =P 0|0 1 01/2 irreducible, is etc.. C)
irreducible. Plo p² of periodic Chains
and for = 0 I 0
states are are etc.,
>0. p". =p² 1/2|| 1/21[
each 0 0 periodic
>
irreducible, p>0,p>0,p 0, P=p are
1/2|=|
10]0) of all period
with and 0
[Link] 01/2 the >0
Therefore, its
p>0 =P states Gwith for
Markov
p>0,p*>0,
all 1 0 1 period i
>0, 1/2=||/2 0 1/21/2 0 1/2 1, D. 2,3
its are
al p>0 chain ie of
states non-null3,5,6,......
the 0 a ie.
0
are 0
1/2 with aperiodic.
states
non-null p>0 0 tpm. persistant.
-
of 1/2P 1
the
persistent. Markov and
so
on 517
chain
All for
518
from n
is
.. We p²p' (p²i =) state It : d
oSolution
[Link]
0.37 is P=
() Iamplechain
0 0.0.541 The For
Pisa can pn)=
All possible
2 P(n)
prove are the to chain any
0.8|0.2 regular is : 1 2
4: matrices |1/2 1/2 1/2 3/21/2 states state 0 0 3:
to value the checkWe
0
The that 0 go not is 0 2 0 0 0
Check
1/4 1/2 1/2 are or of 1 original
transition matrix. from 0 2 0 1 0
Is P"
without ergodic. state regular. n, 1 2
this (m 1/2 1/2 state we 0
regularity of 1 0 whetherthe
= 1/4 1. matrix. 2
matrix probability 2,3,.) zero 0 get (i)
3/4 1/2 1/2 1/2| tomatrices
elements. state Ifn 1/2 0 1/2 1/|1
2
(H)irreducible has P(n) 0 following
p'= is
matrix al orl with evenmatrices 1/2 0
the state
Dec. ?
3/4 zero 1/2
of entries 1 2 2 1/2 0 1/2 Markov
2011 a or only. for 0
Markov 1/2 3/4
3/8 3/4
3/8 trom
different 1/2| 1/2 0 Computer
(Set with
1/2 state
No. chain possible JNTU is
1 n.
We regular
3), to Oriented
Nov. is S/8 3/4| (H)
state
given non-zero
observe Dec. and
2015, .2011
tstatistical
0
by or crgodic.
Des. state that, (Set
values Methodn
2019) 3) when No3

dlso
equilibrium that, condition.
10.33 ststate won't
eady sable called aporoach 10.32 has number ergodic. r
a
(b)BothSolution:
nrmatobabirilcites.y ate
called for Ifa Steady the beStableprobab1ility |00.2 0.03
7 [0400.6
STEADY
anyMarkov (OR)
probability, changing STEADY
a of
Evamplethe The We Solution Evample Thus w7ith2 In
The it this
stable limiting transitions. observe
state
=lth. at 0.0.41
saolution pSreochcasetics esand
ongvector
lprobability State Probability the i irreducible
chain isis :
EQUILIBRIUM (a) periodic
and 6: 5: probabilitychain
range to
states 3 All 0 possible Consider
the
chain STATE probabilitites.
Condition much value STATE return Here Find is not the Consider
or anfrom 0.3 we
probability So state 3 states 0 [Link]
go
e
trend thvector with from as :
are starting
(b) absorbing gothe
to states
fixed transition VECTOR time In aperi1odic
CONDITION the do the from
probabil1ty0.5. from three Markov
of : one many 4 || 3 Is
V If goes same are in not this Markov and state
thvector
e and VECTOR l transition a state other
no
state. communicate states Chains
wisystem Markov
to wi matrix all state
markov for matrix persist state asth aperiodic the to0
or infinity. period chain to) as
period (or 0 states state
is in state irreduciablc. 0,
steady large Pis OF fo r such to chains, one l); state
A the In with I 1,2
al of [Link] states is are with
Chain. valuesregular, MARKOV other transition. cach accessible with transition
state the that next the return recurrent.
each
of tim e. each words,
probability state in
vector then transition. |JNTU
back each of other.
n, CHAIN
VP state is and probabilty
there Then in 1. to
of the The (H)
the =V. Starting state p of
is the has These far for chain Apr.
probability
system 0 following and 70
MarkovVector unique a future, a (or 2012 matr1x
lim1tingparticular in is
state l) (Set from
chain. vector V is the after
is said equal probabil1ty
values state 0{or transition irreducible No. state
called to an
This V to l), even 4)| 519
such be are wil I
to
is the inits it
transition of
n. 520
vector negative
»|0.65y Let Let Solution This From Thus Since Multiplying Sol. V
V P=0.15 matrix
Example t.
VP= V ResulProbability
-" vector (1) we By such entries,
the |00.52|
.12
0.36 0.07|
0.6258
VP" the that Ifa
probability 1: and have
+0.151, All : |0.0.135262| 0.2087
0.0.161578 0.65 V =VVP"P=VP
=VPVP" VP"=V
with definition
0.67 the Markov with Vector
0.36
0.12
0.52
y]0.650.0278 Find gives (2), V.P"+l
for P the
entries we
+0.12vy 0.15 the on chain sum :
vector 0.18 the get =V large
both of
0.67 in long long-range of A
the EXAMPLES
SOLVED VP
=V (2). . volume (1.). sides, equilibrium withprobability
the
028y range
0.18-( v, matrix transitionentries
y. trend of
+0.67v, are trend n, vector,
We positive. or it
cqualvector
vy want steady of is
matrix
+ y] the also we to is
0.36v, to state Markov have isP 1. a
find So, true regular, matrix Computer
0.07y this vector that
V
such is chain. then of
a for V.p"+l only
+0.18v, that regular there Oriented
the
onc
VP Markov cexists
+0.5213] matrix for V row,Statistical
=V
large ahaving
chain probabilty
values Methods
with noN-

Steady Thus 0 0 Solving Writing Cince wviSo


we
l We
I0 0.3515
1.33 -0.0.12| [-0.35
0.12|
0.15| 0.07v 0.28v 007v
we 0.28 V observe -0.35v 0.284
State 0 1 in is
Fquating chastic
0.651, processes
363 104 get l| 0| 0.12 using
1 +0.18V>+0.6/V) + the
probability
the considera not +0.18v) -0.33v,
Vector VË V3V2 -0.33
0.36 multipliedthby
at +0.15v, 0.15v,
corresponding
1089 532 GausS-Jordan 1 matrix at
= 0 (3) and
104 1089
(or) 245 1089 532 363 104 -0.48v,
1089 363'2
245
form is +0.36v, +0.12v +0.52v+0.36vy +0.12v, Markov
V vector this the
Equilibrium sum
I089 532 Method equation. =0 = entries Chains
0 |0| we 0 =0 =V, =1,
-|0.2865
0.0.42885250 of
have (1)
Vector, 1089 and
245 v
tvy (2)
+

=
lI

-1 (2.). (1) .
(4) (3)

521
522

R02RR,- gIves Using


+R;Gauss-Jordan
R method, Writing (3)
0.2 Also So,
V=y Let Solution: P-01 04
05
05R, 05 We is
Fample
we will the this 02
06
02 05
02
03
02||
-0.6 V 0.1 inhave
sum is
gives 0 the not ; a 2Find
0.3y
0.2v -0.5y 0.3 0.2v 0.5y
matrix y
consider of y01 regular We
0 0.2| + |0.206|0.2 0.03]50.2 observe
02|=0
0.20.05
I -060.2 y+
()& +[Link], -0.6v, + +
+[Link], 04, 0.1v, the
07||06 08 y form,
Vy (3) (2) l, matrix. that
+0.2v; +0.2v; 0.6v, ++0.21y +0.21; 04 We cquilibrium
0 wc Imultiplied -0.4v
-02 o get 05-[ want Let thael
= =
=0 =0 =0 Vy = V
; y to be
by find tentries
he vector
0.5 -1 V in
probability
such the stc(or)
stadyate
y]
that matrix
(4).. (3). . Computer
(2.). VP-V vector.
(1) are
positive vector
Statistical
forVaen Oriented
the
transition

TTatrix

GivenP= 0|0.25 0.7 R -0.6R2


givesR gi-RvR,
,es
V(P-) Solution
=0 I\anpte 3 Thus y -R, R,
0 0 [I0 gives0 -08 R; proacvtaestcsesand
=0.25,
V 1 gives
: =(0.25 0
gives 0
0.5 0.72515 We 0I010
0.5 0.7755 Vy 0.70.|6 0
take, Find V;=|0.25 1|0 0.67 0 0
=0.25, |0 0 I0
0.5 the 0.25 0.5 o 0.7|035 0 01
0.5 VP=V 105 0 I0 V Markov
steady vy
0.5| =0.5 I|0751
.()
. is Chains
state 0.5 0.25 0.757 V;=0.25
the 0.25 0.25
vector Steady 0.5 025
|0.25 [o.75 0.5 -
for State
P
= Vector
0.2
0.5 75s
or
Equilibrium

Vector.

523
written oftencereals. B
buyscereals 524
3 3/4|o
|| Using Let does Since
4
as Solution : Iample4: or in V=(0.0.275s] From(1)[" V=[)Let ]From
1 V=v), Solving. -(0.5), =0
1 P= C,
successive
she weeks. (0.7V5)}-(0.75)y
VP=V, the (1)
is
3/41/4
c|3/41/4 B3/4
0 AtransitionThe buy next
=0.75y=0.25 Vy a
0 y, 0 A each A
(0.75)(-0.75)y equilibrium + must V
we V} week it housewife (0.5)v,
have be 0 1 B of be
0 + -0.750.75
the the 0.5 a
1/4 1/4 C timesis3 asshe If =0 2
0
(0.75), (0.5)1, +
(2). . (1). . 01 steady three
probabilitycereals. vector rowed
buys -0.5|
buys =
state kinds 3 0.75 =0 we vector.
likelyCereal must
probability matrix of
to A, of x(0.75)
...4) (4) have (3) (2)
buy Athecereals
the
distribution next
purchasing as A,
totheher week B Computer
and
she
of C.
the of Oriented
the buys She
Markov In never
house t B.
However
long buys Statistical
Chain wife
run, the
can same
if e Mehods
be how she

s ny A
TPM Intial P= What
Solution arobabilities cdayloudy fair probabilityDrobability
Rainy0.35
0.420
5 Fair amnleThe 5: In Substituting From (2
for probabilities Cloudy portion isday day the
day : followed is 15 4
Cloudy
Rainy
Fair We followed is long
2 0.250.6
0.4 0.151 are followed of of in hOsticproces es
write of buying buying run,
= (4), Markov
Chainsand
p are 0.35
the 0.3, by
weather
-[0.44
0.0.232]4 =0.3 TPM days 0.3 a byby cereal cerealprobability 35 16 5
given cloudy a 1
0.25 as isand cloudy fair a in
0.3 by expected 0.4. certain
day C= B= of
Find day day 4 35 16
0.4|| =|0.3 40% 60% buying 35 4
spot (4) .. (3). .
|00..30.45025 0.4 [0.6
0.0.2155 tothe 35%
0.3 probability
be of ofof cereal
fair, the the the is
classified
0.35 0.4] time 35 4
cloudy time time
(say) and and 35
0.25 that and
or by byby as
rainy there acloudy afair,
(JNTU rainyrainy a
inwill cloudy
(H) the beday day day
Dec. long rainy 25% 25% (without 25% rain)or
2019 run. day of of of
the the the
(R15)] after time. [Link].
525
3
Cis 526
using
(a)ToThe Solution
probabilities
limiting
(b)The (a) 0.30.0.5245| 0.8 0.4 To On 7y 7 RoThat the After
B0.8 P= The Fxample6: solving Afer TPM
initial =0.33,
Marketfinthde
=0.35n0 =0.4r =0.670 ,,i=limiting long
C[[Link] A : market 0.1 0.0.3| day for
market 0.4 A The 7 (1), 0,1,2 time, 3= day
0.1 =0.33,T, +0.407,+0.357, +0.251, probabilities. 3
TPM share If(2), Day
0.1 0.0.33| B
transitionthe =p
shares and are proportion 4
of in (3) +
shares 0.1 the second the =0.33 and +0.25r, 0.25r,
+0.15n)obtained =|0.40.85123126023] = -[0.40.772306]17
are p' =|0.44
market initial of
in
n=[0.50 (4) =[0.477
second and market
probability we by the 0.34
shares third solving days
get (4). . (3). . (1). . (.2).
period 0.25
is periods share expected 0.317 0.22||
given matrix the
are set 0.40
0.25
0.35 0.15
0.20.56
0.25| to 0.206| 0.4
by 50%, market of be
(say) equations 0.35 Statistlical
Computer
Oriented
fair, [Link]|305 0.4 0.6
25% shares
three
[JNTU cloudy 0.25
and 0.25
0.35 0.151
0.25
of
25%. or
(H) rainy
Dec. find brands
obtainedbyisi
2019
(RIS)) A,
and B
hsorbing he e
Thus,
probability Here. transition
volve tot 48.13%,
P=2| 0 eg: ABSOMARKOV
RBICHAING NS
all
perioSrdd
state. once we30.60.2 1f0.30.60.1 1
Suppose Markov market e
0.6 observe 23.87%,
state of
The 1 -[0.4813
0.0.22800]
387 =[0.4875 P=|0.50
0.1 0.6 diagram is2 that
staying
0 Markov
3
matrices
a chains shares
28%
-[0.4875
sohasticproces es and
entered, in =
that are in S,(P) Markov
state P2 2nd 0.2375
0.2 shows the chain regular.
are 0.2375 0.25
2 regular. period Chains
probability
it is
is has
that
impossible 1. 0.2750|| 0.25||
transition We 0.2750]
it are 0.25|
0.0.2535 04
is is
Itnow 0:8
48.7%, 0.0.325
not of discuss 080.4
to going absorbing called 0.1 0.30.3
possibleleave. matrix. S,=
23.75 0.1 0.3
from one (say) 0.1
toFor state type %, 0.25 0.1 0.3
leave this 27.50%
of
l Markov
Markov
statereason to
state and
2.
state Chain.
2, chain in
2 is the
is 0.6 that 3rd
called and
do period
an P2 not 527
(b) 528
absorbing.
possibleitis The 2
probability Decide absorbing Absorbing
Markov
Chain
bsorbing possible
3. State only
Solution: (a) In 1. A We
General
Neither whether Ex.1. anpossible Notthate in [Link]
Ifis The Absorbi
now ng 1zing
2 0.3, non-absorbing
The of0.3. 3 2 absorbing state more
toand to 0 0.3 ldentify possible chain define
transition go
We the second
tothe than chain
n-absorbing go 4
fromThere 0 0.5 2 any has State: this
toare have l one to an
only 0 3
Markov alMarkov
absorbing itleast is idea,
absorbing. diagram the isstate 1 0.2 absorbing absorbing State
to p step) fromgo an
we
states non-absorbing
probability
state a is =l chain chain, condition absorbi
one ng i
leads 0.5 of
define
is
[Link] (bP=
) is state, any
absrobing
It states Markov a
to States
orI is
P=1. absorbing anon-absorbing
does chain Markov the
an 3, possible
of0.2 instate but
absorbing from 1 0.2 04 0.9 30 21|00.6 the which itismeannot following.
and state. the if chain. Chain
state tofor toThus Markov possible
3 going go
an 0 1 0 234 is
state. 3are not state following
that is
tostate 01 0.1 0 0 0.4 an
non-absorbing. it absorbing from chain s to to
is absorbing absorbing
called is it absorbing Computer
possible
Then 0 go is conditionsan two ,
state and I toabsorbing
state. state having
this some
2
tostate state Oriented
to
Markov go obsorbing
This 3 e
following thabsorbing
From 1 are to state if
to from a go satisfiedare
only Markov absorbing transient. Pi Statistical
chain state state from (not =1
states state [Link]
any
is 1, chain matices Methods
non or l it [Link]. n¡n-
is is

Lroperties 5sible ut
2. TPM kracohersner
The harhe comer eg
result. stepsthe
finalthe The [Link] 1. Thus toHere
l.
= transition stays
longpowers ofthis gostate P=2| Drunkard'
1,2 s
Absorbing an is to O there. or processes
term chain state and 4 3 1|1/2 of
1 4 3,
of of
absorbing 0 0 0 0 diagram which then
trend the will the state and
trom 0 1/2 he
Walk I
0 0 0 1 is Markov
depends transition enteroriginalChains. 4 is walks
state are :
Markov 0
1/2 0 1/2 0 1/2 cornerbar ora
absorbing. 2 to A
an
state 1. the man Chains
onabsorbing
matrix t 0 0 3
the of Chain. is leftwalks
an possible 4 or
initial get State 0,
rightalong
absorbing which
closes state
state. to 1, with a
and state is four
and go equal
Changing hen
tMarkov his
closes
from 2, block
state home.
probability.
stay state
Chain, stretch
the to in 3 Ifhe
some that 4are
initial from of
in non-absorbing. reaches He park
particular state. a state continuous
state finite avenue.
3. either
can number
matrix.
change home until If 529
It he
of is

You might also like