Dynamic Indeterminism in Stochastic Processes

STOCHASTIC PROCESSES AND MARKOV CHAINS

10.1 INTRODUCTION
In the previous chapters we have been concerned with the static aspect of statistical theory. Now we shall deal with the dynamic aspect of statistics, or the statistics of change. In some situations in science and technology we are interested in studying "processes", that is, phenomena that take place with change in time. The theory of probability, as developed so far, did not have either general procedures or elaborate schemes for solving problems that arise in the study of such phenomena. Hence it is necessary to develop a general theory of random processes to study random variables dependent on one or several discretely or continuously varying parameters. This leads to a new concept of indeterminism in dynamic studies, referred to as "dynamic indeterminism". Many phenomena occurring in the physical and life sciences are now studied not only as random phenomena but also as phenomena changing with time or space.
10.2 DEFINITION
Computer Oriented Statistical Methods
Consider a gambler who wins or loses a unit (rupee, dollar or pound) with probabilities p and q respectively. Let his initial capital be z and his opponent's initial capital be (a - z), so that the combined capital is a. The game continues until the gambler's capital is either reduced to zero or increased to a, that is, until one of the two players is ruined. We are interested in the probability of the gambler's ruin and in the probability distribution of the duration of the game. We notice that the capital of the gambler after n stages is a random (stochastic) variable.

10.3 RESULT: TO FIND THE PROBABILITY OF GAMBLER'S RUIN

Let q_z denote the probability of ultimate ruin of the gambler. After the first trial the gambler's capital is (z + 1) if he wins (with probability p), or (z - 1) if he loses (with probability q). The probabilities of his ruin after one trial will therefore be q_{z+1} and q_{z-1} respectively, so that

    q_z = p q_{z+1} + q q_{z-1},   1 < z < a - 1        ... (1)

For z = 1 the first trial may lead to ruin, and we have

    q_1 = p q_2 + q                                     ... (2)

Similarly, for z = a - 1 the first trial may result in victory, and we have

    q_{a-1} = q q_{a-2}                                 ... (3)

To unify our equations we define

    q_0 = 1,   q_a = 0                                  ... (4)

The general solution of the difference equation (1) is

    q_z = A + B(q/p)^z                                  ... (6)

Using the boundary conditions (4) in (6),

    q_0 = A + B = 1, so that A = 1 - B, and q_a = A + B(q/p)^a = 0.

Solving for A and B and substituting in (6), we obtain

    q_z = [(q/p)^a - (q/p)^z] / [(q/p)^a - 1]           ... (7)
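Formula (7) is easy to check against a direct simulation of the game. The sketch below assumes nothing beyond the rules described above; the particular values z = 3, a = 10, p = 0.45 are arbitrary choices for illustration.

```python
import random

def ruin_prob_formula(z, a, p):
    """q_z = ((q/p)^a - (q/p)^z) / ((q/p)^a - 1), equation (7), for p != q."""
    q = 1 - p
    r = q / p
    return (r**a - r**z) / (r**a - 1)

def ruin_prob_simulated(z, a, p, trials=100_000, seed=1):
    """Play the game repeatedly; count how often the gambler is ruined."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        capital = z
        while 0 < capital < a:
            capital += 1 if rng.random() < p else -1
        ruined += capital == 0
    return ruined / trials

# Gambler starts with z = 3 out of a combined capital a = 10, with p = 0.45.
print(ruin_prob_formula(3, 10, 0.45))    # exact value from (7)
print(ruin_prob_simulated(3, 10, 0.45))  # should agree closely
```

The simulated frequency settles near the exact value of (7) as the number of trials grows.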
In the symmetric case p = q, writing (a - z) for z we get for the probability of the gambler's gain

    p_z = z/a                                           ... (12)

so that p_z + q_z = 1 holds here.

We can reformulate our results as follows: "Let a gambler with an initial capital z play against an infinitely rich adversary who is always willing to play, although the gambler has the privilege of stopping at his pleasure. The gambler adopts the strategy of playing until he either loses his capital or increases it to a (with a net gain a - z). Then q_z is the probability of his losing and (1 - q_z) the probability of his winning."

Note 1. When p = q we have q_z = (a - z)/a and p_z = z/a. Suppose the capital of the opponent is comparably greater than the capital of the player, i.e. (a - z) is infinitely greater than z, which implies that a is infinitely greater than z; then p_z tends to 0. We can say that ruin of the opponent is practically impossible when the players are of equal skill and the capital of the opponent is comparatively greater than the capital of the player.

Note 2. When p is not equal to q and (a - z) grows without bound, the ruin probability tends to (q/p)^z if p > q, and to 1 if p < q.

The random walk model described above is called symmetric if p = q. If p > q we say that there is a drift towards the right; if q > p we say that there is a drift towards the left. The walk is said to be unrestricted if the particle can go to any point on the line.

In a two-dimensional random walk the particle moves in unit steps in one of the four directions parallel to the x-axis and y-axis. For a particle starting at the origin the possible positions are all points of the plane with integral-valued coordinates. Each position has four neighbours. Similarly, in three dimensions each position has six neighbours. The random walk is defined by specifying the corresponding four or six probabilities. Here, for simplicity, we will consider only the symmetric case where all directions have the same probability.

10.5 SPECIFICATION OF STOCHASTIC PROCESSES
State space: The values assumed by the random variable X(t) are called states, and the set of all possible values is called the state space of the process. The time parameter t may likewise be discrete or continuous.

e.g. 11. Suppose a fair die is rolled repeatedly and X_n denotes the number of sixes in the first n throws. Here both the state space and the time parameter are discrete.

e.g. 14. Suppose X(t) represents the minimum temperature at a particular place in the interval (0, t]. Then the set of possible values of X(t) is continuous, and the process runs over a continuous range of time: this is a system in continuous time with continuous state space.

We can put these possibilities in tabular form:

    X(t)          t continuous                     t discrete
    Continuous    Continuous Stochastic Process    Continuous Stochastic Sequence
    Discrete      Discrete Stochastic Process      Discrete Stochastic Sequence
10.6 RELATIONSHIP
The relationship among the members of a family {X_t} is of importance; the nature of the dependence varies. We describe some stochastic processes according to the nature of the dependence relationship existing among the members of the family. They can be classified as deterministic and non-deterministic stochastic processes.

Def. A random process is called a Deterministic Stochastic Process if all future values can be predicted from past observations.

A stochastic process is called a Non-Deterministic Stochastic Process if future values of any sample function cannot be predicted from past observations.

Definition: Dependent Stochastic Processes. In some cases the members of the family {X_t} are mutually dependent; such processes are called Dependent Stochastic Processes. The Bernoulli process described earlier is an example.

Two random variables X(t) and Y(t) are said to be equal everywhere if their respective sample spaces are identical for every t.

Definition: Stochastic Process with Independent Increments. If for all t_1 < t_2 < ... < t_n the random variables X(t_2) - X(t_1), X(t_3) - X(t_2), ..., X(t_n) - X(t_{n-1}) are independent, then {X(t), t in T} is said to be a Stochastic Process with Independent Increments.

We use the symbols {X_t, t in T} or {X(t), t in T}, where T is a finite or infinite interval. In the case of continuous time the parameter is usually interpreted as time, though it may represent distance, length, thickness and so on.
A stochastic process is said to be a "Markov Process" if, given the value of X(t), the behaviour of X(v) for v > t does not depend on the values of X(u) for u < t: the future behaviour of the process depends only on the present value and not on the past values.

Def. A stochastic process {X(t)} is said to be Markovian if

    P[X(t_{n+1}) <= x_{n+1} | X(t_n) = x_n, ..., X(t_1) = x_1] = P[X(t_{n+1}) <= x_{n+1} | X(t_n) = x_n]

whenever t_1 <= t_2 <= ... <= t_n <= t_{n+1}.

Definition: A stochastic process {X_n, n = 0, 1, 2, ...} is called a Markov chain if, for all states i, j and all earlier states,

    P{X_n = j | X_{n-1} = i, X_{n-2} = i_{n-2}, ..., X_0 = i_0} = P{X_n = j | X_{n-1} = i} = p_ij.

The possible outcomes E_1, E_2, ... are called the states of the process, and X_0, X_1, X_2, ..., X_n, ... are its successive values. The conditional probabilities p_ij, which do not depend on the states occupied before the preceding trial, are called the transition probabilities of the chain.
10.14 MARKOV CHAIN

A sequence of states X_0, X_1, X_2, ..., X_n, written {X_n}, is a Markov chain if each X_n is a random variable satisfying the Markov property above. The outcomes E_j are called the states of the Markov chain. If X_n has the outcome E_j, i.e. X_n = j, the process is said to be at the state E_j, or simply at the state j, at the nth trial. The transition probability p_ij is then the probability of a transition from E_i to E_j in one generation.

Note (from 10.21): A stochastic matrix P is not regular if a 1 occurs in the principal (main) diagonal.
Properties of the Transition Probability Matrix

A transition probability matrix has several features:
1. It is a square matrix, since all possible states must be used both as rows and as columns.
2. All entries are between 0 and 1 inclusive; this is because all the entries represent probabilities.
3. The sum of the entries in any row must be 1, since the numbers in the row give the probability of changing from the state at the left to one of the states at the right.

If P^2 represents the matrix product P.P, then P^2 gives the probabilities of a transition from one state to another in two repetitions of an experiment. In general, P^k gives the probabilities of a transition from one state to another in k repetitions of an experiment.

Definition: A Stochastic Matrix is a square matrix with non-negative elements and unit row sums.

10.18 ORDER OF A MARKOV CHAIN

Definition: A Markov chain {X_n} is said to be of order s (s = 1, 2, ...) if, for all n,

    P{X_n = k | X_{n-1} = j, X_{n-2} = i, ..., X_{n-s} = m, ...} = P{X_n = k | X_{n-1} = j, ..., X_{n-s} = m}.

A Markov chain {X_n} is said to be of order one (or simply a Markov chain) if

    P{X_n = k | X_{n-1} = j, X_{n-2} = i, ...} = P{X_n = k | X_{n-1} = j} = p_jk.

Unless stated otherwise, by a Markov chain we mean a chain of order one. A chain is said to be of order zero if p_jk = p_k for all j; this implies the independence of X_n and X_{n-1}.

A Markov chain {X_n, n >= 0} with k states, where k is finite, is said to be a Finite Markov Chain. The transition matrix in this case is a square matrix with k rows and columns. If the possible values of X_n are ..., -2, -1, 0, 1, 2, ..., then the Markov chain is said to be denumerably infinite.

SOLVED EXAMPLES

Example 1: Which of the following matrices are stochastic? [JNTU(H) May 2011 (Set No. 4)]

Solution: (i) It is not a square matrix, so it is not stochastic.
(ii) The matrix is a square matrix with non-negative entries and the sum of the elements in each row equal to 1, so the matrix is stochastic.
(iii) The matrix is a square matrix, but the sum in each row is not equal to 1, so it is not stochastic.
(iv) It is a stochastic matrix.
(v) The matrix is not stochastic, because it contains negative elements.
(vi) The matrix is a square matrix, but the sum in each row is not equal to 1; it is not a stochastic matrix.

Example 2: Test whether the following matrices are stochastic or not. [JNTU (H) June 2012 (Set No. 4)]

Solution: (a) The given matrix is not a square matrix, so it is not a stochastic matrix.
(b) The matrix is a square matrix with non-negative entries, but the sum of the elements in each row is not equal to 1; the matrix is not stochastic.
(c) The given matrix is a square matrix with non-negative entries, and the sum of the elements in each row is equal to 1; thus it is a stochastic matrix.
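The checks applied in Examples 1 and 2 are mechanical, so they are easy to sketch in code. The small matrices below are illustrative stand-ins, not the printed ones:

```python
def is_stochastic(P, tol=1e-9):
    """A stochastic matrix is square, has non-negative entries,
    and every row sums to 1 (the three properties listed above)."""
    n = len(P)
    if any(len(row) != n for row in P):
        return False                       # not square
    for row in P:
        if any(x < 0 for x in row):
            return False                   # negative entry
        if abs(sum(row) - 1) > tol:
            return False                   # row sum not equal to 1
    return True

def mat_mul(A, B):
    """Plain matrix product, so P^k can be built by repeated multiplication."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(P, k):
    """P^k gives the k-step transition probabilities."""
    R = P
    for _ in range(k - 1):
        R = mat_mul(R, P)
    return R

P = [[0.5, 0.5], [0.25, 0.75]]
print(is_stochastic(P))                          # True
print(is_stochastic([[1.2, -0.2], [0.5, 0.5]]))  # False: negative entry
```

Note that `mat_pow(P, k)` of a stochastic matrix is again stochastic, which matches property 3: each row of P^k is still a probability distribution.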
To decide whether a stochastic matrix is regular we examine its powers: the matrix is regular if some power of it has all its entries positive. Since all the entries of some power of C are positive, C is a regular stochastic matrix. For the matrix A, all the entries of A^2 are positive, so A is also regular. For the matrix B, however, zero entries occur in B^2 = B.B, in B^3 = B^2.B, and indeed in every power of B; since some entries are zero for all powers of B, B is not regular.

Example 3: Find the values of x, y and z so that the given matrix is a transition probability matrix. [JNTU (H) Dec. 2011]

Solution: In a transition probability matrix the sum of the elements in each row must be equal to 1. Equating each row sum to 1 gives the unknowns: for instance 0 + y = 1 gives y = 1, and 1/3 + 1/4 + z = 1 gives z = 1 - 7/12 = 5/12; x is found from its own row in the same way.

Example 4: Let M_n denote the sequence of sample means from a random process X_n:

    M_n = (X_1 + X_2 + ... + X_n)/n.

Is M_n a Markov process?

Solution: Since M_{n+1} = (n M_n + X_{n+1})/(n + 1), the distribution of M_{n+1} depends on the past only through the present value M_n (the X_i being independent); hence {M_n} is a Markov process.
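The regularity test used above (some power of P has all entries positive) can be sketched as follows. The cut-off (n - 1)^2 + 1 is the standard bound for primitive matrices, and the matrices A and B below are illustrative stand-ins echoing the ones discussed:

```python
def is_regular(P, max_power=None):
    """A stochastic matrix P is regular if some power P^k has all
    entries strictly positive. For an n-state chain it suffices to
    check powers up to (n - 1)^2 + 1, so the search stops there."""
    n = len(P)
    limit = max_power or (n - 1) ** 2 + 1
    R = P
    for _ in range(limit):
        if all(x > 0 for row in R for x in row):
            return True
        R = [[sum(R[i][k] * P[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return False

A = [[0.75, 0.25], [0.5, 0.5]]                   # all entries already positive
B = [[0.5, 0.5, 0], [0.5, 0.5, 0], [0, 0, 1]]    # zeros persist in every power
print(is_regular(A), is_regular(B))
```

In B the third state can never be reached from the first two, so the corresponding zero entries survive in every power, exactly as in the matrix B discussed above.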
Solution: In all these matrices the principal diagonal contains a 1; thus they are not regular.

Example 7: Suppose there are only two possible states E_1 and E_2: the particle moves in the positive direction when in state E_1 and in the negative direction when in state E_2, and a reversal of state occurs with probability a when the particle moves to the right and with probability b when it moves to the left. The transition matrix is then

            E_1      E_2
    E_1 [ 1 - a      a   ]
    E_2 [   b      1 - b ]

Example 8: Suppose that a coin with probability p for a head is tossed indefinitely. Let X_n, the outcome of the nth trial, be k, where k (= 0, 1, 2, ..., n) denotes that there is a run of k uninterrupted successes, i.e. the length of the uninterrupted block of heads is k. Then {X_n} constitutes a Markov chain with unit-step transition probabilities

    p_jk = P{X_n = k | X_{n-1} = j} = p,  if k = j + 1
                                    = q,  if k = 0
                                    = 0,  otherwise.

The transition matrix is given by

           0   1   2   3  ...
    0  [   q   p   0   0  ... ]
    1  [   q   0   p   0  ... ]
    2  [   q   0   0   p  ... ]
    ...

Example: The transition probabilities of a homogeneous Markov chain with states A_1, A_2, A_3 are given by the matrix

        [ 1/2  1/6  1/3 ]
    P = [ 1/2   0   1/2 ]
        [ 1/3  1/3  1/3 ]

Solution: We see that if the system was in the state A_1, then after one step it will remain in the same state with a probability of 1/2, pass to state A_2 with a probability of 1/6, and to state A_3 with a probability of 1/3. But if the system was in the state A_2, then after the transition it can (with equal probability) find itself only in states A_1 and A_3; it cannot pass from state A_2 into A_2. The last row of the matrix shows that from the state A_3 the system can pass to any one of the possible states with one and the same probability 1/3.

Example 9: Let {X_n, n >= 0} be a Markov chain with three states {0, 1, 2}, transition matrix

        [ 3/4  1/4   0  ]
        [ 1/4  1/2  1/4 ]
        [  0   3/4  1/4 ]

and initial distribution P{X_0 = i} = 1/3, i = 0, 1, 2. Then we have

    P{X_1 = 1 | X_0 = 2} = 3/4,   P{X_2 = 2 | X_1 = 1} = 1/4,
    P{X_2 = 2, X_1 = 1 | X_0 = 2} = P{X_2 = 2 | X_1 = 1} . P{X_1 = 1 | X_0 = 2} = (1/4)(3/4) = 3/16,
    P{X_2 = 2, X_1 = 1, X_0 = 2} = P{X_2 = 2, X_1 = 1 | X_0 = 2} . P{X_0 = 2} = (3/16)(1/3) = 1/16.

Example 11: A three-state Markov chain is given by a transition probability matrix on the states 0, 1, 2. Prove that the chain is irreducible. [JNTU (H) June 2013 (Set No. 3)]

Solution: In this Markov chain all the states communicate with each other. It is possible to go from state 0 to state 1 with a probability of 1/2, and again from state 1 to state 2 with a probability of 1/3; thus it is possible to go from state 0 to state 2. So the chain is irreducible and all the states are recurrent.
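The chain-rule computation of Example 9 can be checked mechanically with exact fractions:

```python
from fractions import Fraction as F

# Transition matrix of Example 9 (rows and columns indexed by states 0, 1, 2).
P = [[F(3, 4), F(1, 4), F(0)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(0),    F(3, 4), F(1, 4)]]

initial = [F(1, 3)] * 3          # P{X0 = i} = 1/3 for each state

# Markov (chain-rule) factorisation used in the example:
cond = P[2][1] * P[1][2]         # P{X2 = 2, X1 = 1 | X0 = 2}
joint = cond * initial[2]        # multiply by P{X0 = 2}

print(cond, joint)               # 3/16 and 1/16
```

The two printed values reproduce the hand computation exactly.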
HIGHER TRANSITION PROBABILITIES
The m-step transition probability is defined by

    p_ij^(m) = P{X_{n+m} = j | X_n = i}                 ... (1)

p_ij^(m) gives the probability that, from the state i at the nth trial, the state j is reached at the (n + m)th trial, i.e. in m steps.
In general we have

    p_ij^(m+n) = sum over k of p_ik^(m) p_kj^(n)        ... (4)

This equation is a special case of the Chapman-Kolmogorov equation, which is satisfied by the transition probabilities of a Markov chain. (This can be seen in advanced texts on statistics.)

RESULTS IN TERMS OF TRANSITION MATRICES

Let P = (p_ij) denote the transition matrix of the unit-step transitions and P^(m) = (p_ij^(m)) the transition matrix of m-step transitions. We can see that the elements of P^(2) are the elements of the matrix P.P; thus

    P^(2) = P.P = P^2.

Similarly, P^(m+1) = P^(m).P = P.P^(m), and

    P^(m+n) = P^(m).P^(n) = P^(n).P^(m).

Ex. 1. Verify the Chapman-Kolmogorov equation for a Markov chain, taking the transition probability matrix given on the states 0, 1, 2, 3.
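In the same spirit as Ex. 1, the matrix identity P^(m+n) = P^(m) P^(n) can be verified numerically. The 4-state stochastic matrix below is only an assumed example:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, k):
    R = P
    for _ in range(k - 1):
        R = mat_mul(R, P)
    return R

# An assumed 4-state stochastic matrix (each row sums to 1).
P = [[0.0, 0.2, 0.8, 0.0],
     [0.0, 0.0, 0.2, 0.8],
     [0.8, 0.2, 0.0, 0.0],
     [0.2, 0.0, 0.0, 0.8]]

# Chapman-Kolmogorov in matrix form: P^(2+3) = P^(2) . P^(3)
lhs = mat_pow(P, 5)
rhs = mat_mul(mat_pow(P, 2), mat_pow(P, 3))
ok = all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
         for i in range(4) for j in range(4))
print(ok)   # the (i, j) entries agree term by term
```

The same check works for any m and n, since both sides are just the matrix P raised to the power m + n.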
SOLVED EXAMPLES

Example 1: An urn initially contains five black balls and five white balls. The following experiment is repeated indefinitely: a ball is drawn from the urn; if the ball is white it is put back in the urn, otherwise it is left out. Let X_n be the number of black balls remaining in the urn after n draws. Is X_n a Markov process?

Solution: Yes. The composition of the urn after the (n + 1)th draw depends only on its composition after the nth draw, so the transition probabilities depend only on the present state: {X_n} is a Markov process. If X_n = i, the urn contains i black and 5 white balls, so

    P{X_{n+1} = i - 1 | X_n = i} = i/(i + 5),   P{X_{n+1} = i | X_n = i} = 5/(i + 5).

Higher-step transition matrices of a given chain are obtained by computing the corresponding powers of P, and closed forms for P^(n) can often be established by mathematical induction.
The probability that the chain, starting from state i, returns to i for the first time at the nth step is denoted by f_i^(n), n = 1, 2, 3, ..., and is called the first return time probability or the recurrence time probability.

The period of a return state is defined as the greatest common divisor (GCD) of all m such that p_ii^(m) > 0, i.e.

    d_i = GCD {m : p_ii^(m) > 0}.

A state i is said to be periodic with period d_i if d_i > 1, and aperiodic if d_i = 1.

Understanding Markov Analysis: The Markov analysis process involves defining the likelihood of a future action given the current state of a variable. Once the probabilities of future actions at each state are determined, a decision tree can be drawn, and the likelihood of a result can be calculated given the current state of the variable. Markov analysis has several applications in the business world: it is often used to predict the number of defective pieces likely to come off an assembly line, given the operating status of the machines on the line, and similar models are applied in econometrics and to movements between groups of people.
Advantages and Disadvantages of Markov Analysis

The primary benefits of Markov analysis are simplicity and out-of-sample forecasting accuracy. Simple models, such as those used for Markov analysis, are often better at making predictions than more complicated models. However, Markov analysis tells little about why something happened: it estimates conditional probabilities of future events based only on the current state, not on the underlying causes, so it is useful for explaining what is likely to come next but not for explaining how the situation came about.

SOLVED EXAMPLES

Example 1: Three boys A, B and C are throwing a ball to each other. A always throws the ball to B, and B always throws the ball to C; but C is just as likely to throw the ball to B as to A. Show that the process is Markovian, find the transition matrix, and classify the states. [JNTU (H) Nov. 2010, Dec. 2019 (Set No. 1)]

Solution: The state of the process at trial n is the boy holding the ball, and it depends only on who held the ball at trial n - 1, not on the earlier states X_{n-2}, X_{n-3}, ...; hence {X_n} is Markovian. The transition matrix is

              A    B    C
    P =  A [  0    1    0  ]
         B [  0    0    1  ]
         C [ 1/2  1/2   0  ]

Computing powers of P, we find that a sufficiently high power has all entries positive, i.e. p_ij^(n) > 0 for every pair of states, so every state can be reached from every other state: the chain is irreducible. Returns to a state are possible at step counts whose GCD is 1, so all the states are aperiodic; being finite and irreducible, all the states are non-null persistent, and the Markov chain is ergodic.
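For the ball-throwing chain just derived (A always throws to B, B always to C, C equally likely to throw to A or B), the claim that some power of P has all entries positive can be confirmed directly; a sketch:

```python
from fractions import Fraction as F

# Ball-throwing chain: rows and columns in the order A, B, C.
P = [[F(0), F(1), F(0)],
     [F(0), F(0), F(1)],
     [F(1, 2), F(1, 2), F(0)]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

R = P
power = 1
while not all(x > 0 for row in R for x in row):
    R = mat_mul(R, P)
    power += 1

print(power)   # first power of P with all entries positive
```

The first all-positive power turns out to be P^5, which already establishes regularity (and hence ergodicity for this finite chain).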
Also, In Similarly
p=P'.P= p? P=|1/2
Solution : I Since We
.pC.
. aC.
not Thus general, The The I te ohastic
the we amplMore eover note p
ergodic. with the periodic.
the state 1 states D. processes
Markov p> observe 2 all chain th at of that
period Markov p p° p'
pn = = P=1| the 2,3
P.P=P.= p*.P' = o state
Find(ie. 1 [Link] is
2,3,5,6,.
p².P= 201|/2
finite states (ie. and
chain p*) 2. 0,chainP>0. P>0, P 0 0the
and . Markov
is p.p². = |1/2 [1/2 1/2||1/2 0 10 0 1 1nature are and B
P*! A) and -1.
finite =0 is P 0 ergodic.
P> =P 0|0 1 01/2 irreducible, is etc.. C)
irreducible. Plo p² of periodic Chains
and for = 0 I 0
states are are etc.,
>0. p". =p² 1/2|| 1/21[
each 0 0 periodic
>
irreducible, p>0,p>0,p 0, P=p are
1/2|=|
10]0) of all period
with and 0
[Link] 01/2 the >0
Therefore, its
p>0 =P states Gwith for
Markov
p>0,p*>0,
all 1 0 1 period i
>0, 1/2=||/2 0 1/21/2 0 1/2 1, D. 2,3
its are
al p>0 chain ie of
states non-null3,5,6,......
the 0 a ie.
0
are 0
1/2 with aperiodic.
states
non-null p>0 0 tpm. persistant.
-
of 1/2P 1
the
persistent. Markov and
so
on 517
chain
All for
518
from n
is
.. We p²p' (p²i =) state It : d
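The period d_i = GCD {m : p_ii^(m) > 0} can be computed directly from powers of P; here is a sketch applied to the chain of Example 2:

```python
from fractions import Fraction as F
from math import gcd

# Chain of Example 2: states 0, 1, 2.
P = [[F(0), F(1), F(0)],
     [F(1, 2), F(0), F(1, 2)],
     [F(0), F(1), F(0)]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(P, i, max_m=12):
    """GCD of all step counts m (up to max_m) with p_ii^(m) > 0."""
    d = 0
    R = P
    for m in range(1, max_m + 1):
        if R[i][i] > 0:
            d = gcd(d, m)
        R = mat_mul(R, P)
    return d

print([period(P, i) for i in range(3)])   # [2, 2, 2]
```

All three states come out with period 2, matching the hand computation (returns are possible only at even step counts).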
Example 3: Check whether the following Markov chains are regular and ergodic. [JNTU (H) Dec. 2011 (Set No. 3), Nov./Dec. 2015, Dec. 2019]

Solution: (i) We check the powers of the first matrix. For any value of n the power P^(n) contains zero entries: if n is even, the zeros of P^(n) fall in the positions of the zeros of P^2, and if n is odd, in the positions of the zeros of P. Since no power of P has all its entries positive, the matrix is not regular, and the chain, being periodic, is not ergodic.

(ii) For the second matrix we observe that P^2 has all non-zero entries, i.e. p_ij^(2) > 0 for every i and j. Hence P is a regular matrix, and the chain is regular and ergodic.

Example 4: Check the regularity of the given transition probability matrix of a Markov chain. [JNTU (H) Dec. 2011]

Solution: Computing successive powers of P, we reach a power all of whose entries are different from zero; hence P is a regular matrix and the chain is ergodic.
Example 5: Consider the Markov chain with the given three-state transition probability matrix on the states 0, 1, 2. Is the chain irreducible? Are all the states aperiodic and recurrent? [JNTU (H) Apr. 2012 (Set No. 4)]

Solution: All the states communicate with each other: each state is accessible from every other state in a finite number of transitions, so the chain is irreducible. Return to each state is possible in numbers of steps whose greatest common divisor is 1, so the period of each state is 1 and all the states are aperiodic. Since the chain is finite and irreducible, all the states are recurrent.

Example 6: Decide whether limiting state probabilities exist for (a) a periodic chain, (b) a chain with an absorbing state.

Solution: (a) The state probabilities of a periodic chain keep oscillating and won't approach fixed limiting values. (b) From an absorbing state no other state can be reached, so the chain is not irreducible and the long-run behaviour depends on the starting state.

10.33 STEADY STATE (OR) EQUILIBRIUM CONDITION

In many Markov chains we observe that, as the number of transitions grows, the state probabilities change less and less: the system settles down. A Markov chain is said to be in steady state (or statistical equilibrium) if its probability vector does not change with further transitions, that is, if V P = V; the steady state probabilities do not depend on the state in which the system started.

Probability Vector: A probability vector is a row vector with non-negative entries whose sum is equal to 1.

Steady State Probability Vector of a Markov Chain: The long-range trend, or steady state probability vector, of a regular Markov chain with transition matrix P is the unique probability vector V such that V P = V. Starting in any state, after a large number n of transitions the probability of being in a particular state is given by the corresponding entry of V, and this limiting value is the same for every initial state.
Result: If a Markov chain with transition matrix P is regular, then there exists a unique probability vector V, with non-negative entries summing to 1, such that

    V P = V                                             ... (1)

V is the equilibrium (steady state) vector of the chain, and it satisfies V P^n = V for every n.

Proof: By definition V P = V. Multiplying both sides of (1) on the right by P,

    (V P) P = V P, i.e. V P^2 = V,

and in general, if V P^n = V then V P^(n+1) = (V P^n) P = V P = V. Thus V P^n = V for every n, which is why V gives the long-range trend of the chain.

SOLVED EXAMPLES

Example 1: Find the long-range trend (steady state vector) of the Markov chain with transition matrix

        [ 0.65  0.28  0.07 ]
    P = [ 0.15  0.67  0.18 ]
        [ 0.12  0.36  0.52 ]

Sol.: We want a probability vector V = [v_1  v_2  v_3], with v_1 + v_2 + v_3 = 1, such that V P = V. Writing out the components of V P = V:

    0.65 v_1 + 0.15 v_2 + 0.12 v_3 = v_1                ... (1)
    0.28 v_1 + 0.67 v_2 + 0.36 v_3 = v_2                ... (2)
    0.07 v_1 + 0.18 v_2 + 0.52 v_3 = v_3                ... (3)

Solving (1)-(3) together with v_1 + v_2 + v_3 = 1 gives

    V = [312/1089  532/1089  245/1089] ≈ [0.2865  0.4885  0.2250].

Example 4: A housewife buys three kinds of cereals, A, B and C. She never buys the same cereal in successive weeks. If she buys cereal A, the next week she buys cereal B. However, if she buys either B or C, then the next week she is three times as likely to buy A as the other cereal. In the long run, how often does she buy each of the three cereals?

Solution: The purchasing behaviour of the housewife is described by the transition matrix

              A     B     C
    P =  A [  0     1     0  ]
         B [ 3/4    0    1/4 ]
         C [ 3/4   1/4    0  ]

Let V = [v_1  v_2  v_3] be the steady state probability vector, so that v_1 + v_2 + v_3 = 1 and V P = V. Writing out the components:

    (0.75) v_2 + (0.75) v_3 = v_1                       ... (1)
    v_1 + (0.25) v_3 = v_2                              ... (2)
    (0.25) v_2 = v_3                                    ... (3)

From (3), v_3 = v_2/4; substituting in (2) gives v_1 = (15/16) v_2, and (1) is then satisfied automatically. Using v_1 + v_2 + v_3 = 1,

    (15/16 + 1 + 1/4) v_2 = 1, i.e. (35/16) v_2 = 1, so v_2 = 16/35,

and hence v_1 = 15/35 and v_3 = 4/35. In the long run the probability that she buys cereal A is 15/35, cereal B is 16/35, and cereal C is 4/35.
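The long-run cereal probabilities can be cross-checked by power iteration, repeatedly applying S to S P; a sketch, assuming only the transition matrix derived in Example 4:

```python
# Cereal chain of Example 4 (states A, B, C).
P = [[0.0, 1.0, 0.0],
     [0.75, 0.0, 0.25],
     [0.75, 0.25, 0.0]]

v = [1 / 3, 1 / 3, 1 / 3]          # any starting probability vector works
for _ in range(300):
    v = [sum(v[i] * P[i][j] for i in range(3)) for j in range(3)]

# Hand-derived answer from the example: [15/35, 16/35, 4/35]
print([round(x, 4) for x in v])
```

Because this chain is regular, the iterates converge to the same V = [15/35, 16/35, 4/35] from any starting distribution, illustrating that the steady state does not depend on the initial state.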
Example 5: The weather in a certain spot is classified as fair, cloudy (without rain), or rainy. A fair day is followed by a fair day 60% of the time and by a cloudy day 25% of the time; a cloudy day is followed by a fair day 30% of the time and by a rainy day 25% of the time; a rainy day is followed by a fair day 40% of the time and by a cloudy day 35% of the time. Write the TPM and find the probabilities of fair, cloudy and rainy weather in the long run. [JNTU (H) Dec. 2019 (R15)]

Solution: Taking the states in the order Fair, Cloudy, Rainy, the TPM is

                Fair  Cloudy  Rainy
    Fair    [   0.60   0.25   0.15 ]
    Cloudy  [   0.30   0.45   0.25 ]
    Rainy   [   0.40   0.35   0.25 ]

Multiplying an initial probability vector repeatedly by P gives the probabilities of fair, cloudy and rainy weather on successive days. For the long run we solve V P = V with v_1 + v_2 + v_3 = 1, which gives

    V = [65/142  48/142  29/142] ≈ [0.458  0.338  0.204],

so in the long run about 45.8% of the days are expected to be fair, 33.8% cloudy and 20.4% rainy.

Example 6: Three brands A, B and C compete for the market of a product. The initial market shares are 50%, 25% and 25%, and the switching of customers between the brands from one period to the next is described by a transition probability matrix P. Find the expected market shares in the second and third periods, and the limiting market shares. [JNTU (H) Dec. 2019 (R15)]

Solution: Let the initial share vector be S_0 = [0.50  0.25  0.25]. The market shares in the second period are obtained by multiplying S_0 by P:

    S_1 = S_0 P = [0.4875  0.2375  0.2750],

i.e. 48.75% for A, 23.75% for B and 27.50% for C. The shares in the third period are

    S_2 = S_1 P,

giving about 48.13% for A, 23.87% for B and 28.00% for C. The limiting market shares are the components of the steady state vector V obtained by solving V P = V with the components of V summing to 1.
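The period-by-period mechanics of Example 6 can be sketched as follows. The switching matrix P below is only an assumed illustration; S_0 = [0.50, 0.25, 0.25] is the one value taken from the example.

```python
def evolve(shares, P, periods):
    """Multiply the share vector by P once per period: S_{k+1} = S_k P."""
    for _ in range(periods):
        shares = [sum(shares[i] * P[i][j] for i in range(len(shares)))
                  for j in range(len(P[0]))]
    return shares

# Assumed (illustrative) switching matrix: row i says where brand i's
# customers go in the next period. Not the matrix printed in the example.
P = [[0.8, 0.1, 0.1],
     [0.2, 0.7, 0.1],
     [0.1, 0.2, 0.7]]

S0 = [0.50, 0.25, 0.25]        # initial shares from the example
S1 = evolve(S0, P, 1)          # second-period shares
S2 = evolve(S0, P, 2)          # third-period shares
print([round(x, 4) for x in S1])
print([round(x, 4) for x in S2])
```

Each period's shares remain a probability vector (they sum to 1), and for a regular P they drift towards the limiting shares given by V P = V.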
ABSORBING MARKOV CHAINS

So far, most of the transition matrices we have observed were regular. We now discuss one type of Markov chain in which it is not possible to go from every state to every other state.

Suppose a Markov chain has a transition matrix in which the diagonal entry for some state, say state 2, is 1. Once state 2 is entered it is impossible to leave: the probability of staying in state 2 is 1, and the probability of going from state 2 to any other state is 0. For this reason such a state is called an absorbing state.

Definition: A state i of a Markov chain is an absorbing state if, once entered, it is impossible to leave, i.e. if p_ii = 1.
To make this idea precise, we now define the absorbing Markov chain.

Definition: Absorbing Markov Chain. A Markov chain is an absorbing chain if the following two conditions are satisfied:
1. The chain has at least one absorbing state, i.e. a state i with p_ii = 1.
2. From any non-absorbing state it is possible to go to some absorbing state, in one or more steps.

The non-absorbing states of an absorbing Markov chain are called transient states.

Ex. 1. Identify the absorbing states in the following Markov chains, and decide whether each chain is an absorbing Markov chain.

Solution: (a) State 1 is an absorbing state, since its diagonal entry is 1. From each of the non-absorbing states it is possible to go, in one or more steps, to state 1. Thus both conditions are satisfied, and this is an absorbing Markov chain.

(b) The chain has an absorbing state, but from some non-absorbing state it is not possible to reach any absorbing state; the second condition fails, so the chain is not an absorbing Markov chain.
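Conditions 1 and 2 can be checked mechanically: find the states with p_ii = 1, then test whether every non-absorbing state can reach one of them. The matrix used below is an assumed illustration:

```python
def absorbing_states(P):
    """Condition 1: states with p_ii = 1."""
    return {i for i in range(len(P)) if P[i][i] == 1}

def is_absorbing_chain(P):
    """Conditions 1 and 2 of the definition above."""
    absorbing = absorbing_states(P)
    if not absorbing:
        return False
    # Condition 2: from every non-absorbing state, some absorbing
    # state is reachable (search along positive entries).
    n = len(P)
    for start in range(n):
        if start in absorbing:
            continue
        seen, frontier = {start}, [start]
        while frontier:
            i = frontier.pop()
            for j in range(n):
                if P[i][j] > 0 and j not in seen:
                    seen.add(j)
                    frontier.append(j)
        if not (seen & absorbing):
            return False
    return True

# Assumed example: state 0 absorbing, reachable from states 1 and 2.
P = [[1.0, 0.0, 0.0],
     [0.3, 0.5, 0.2],
     [0.0, 0.5, 0.5]]
print(absorbing_states(P), is_absorbing_chain(P))
```

Swapping in a matrix whose non-absorbing states only lead to each other makes the second condition fail, exactly as in part (b) above.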
e.g. Drunkard's Walk: A man walks along a four-block stretch of Park Avenue. If he is at corner 1, 2 or 3, then he walks to the left or to the right with equal probability. He continues until he reaches corner 4, which is a bar, or corner 0, which is his home; if he reaches either the bar or home, he stays there.

The states are 0, 1, 2, 3 and 4. States 0 and 4 are absorbing, and states 1, 2 and 3 are non-absorbing. The transition matrix is

             0    1    2    3    4
    0  [     1    0    0    0    0 ]
    1  [    1/2   0   1/2   0    0 ]
    2  [     0   1/2   0   1/2   0 ]
    3  [     0    0   1/2   0   1/2 ]
    4  [     0    0    0    0    1 ]

From each non-absorbing state it is possible to reach an absorbing state in a finite number of steps, so this is an absorbing Markov chain.

Properties of Absorbing Markov Chains:
1. Regardless of the original state, it is possible to reach an absorbing state, and the chain enters (and then stays in) an absorbing state after a finite number of steps.
2. The long-term trend depends on the initial state: changing the initial state can change the final result.
3. The powers of the transition matrix get closer and closer to some particular matrix, which gives the long-term probabilities of ending in each absorbing state.
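For the drunkard's walk, the long-run matrix of property 3 can be approximated by raising P to a high power, and its first column reproduces the symmetric gambler's-ruin probabilities 1 - z/4 for reaching home from corner z; a sketch:

```python
from fractions import Fraction as F

# Drunkard's walk transition matrix, states 0..4 (0 = home, 4 = bar).
P = [[F(1), F(0), F(0), F(0), F(0)],
     [F(1, 2), F(0), F(1, 2), F(0), F(0)],
     [F(0), F(1, 2), F(0), F(1, 2), F(0)],
     [F(0), F(0), F(1, 2), F(0), F(1, 2)],
     [F(0), F(0), F(0), F(0), F(1)]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# High powers of P approach the long-run absorption matrix (property 3).
R = P
for _ in range(100):
    R = mat_mul(R, P)

# Starting from corner z, the symmetric ruin formula gives
# P(end at home) = 1 - z/4; compare with the computed high power.
for z in (1, 2, 3):
    print(z, float(R[z][0]), 1 - z / 4)
```

The computed entries match 3/4, 1/2 and 1/4, and they differ between starting corners, illustrating property 2: the final result depends on the initial state.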