Markov chain
In probability theory and statistics, a Markov chain or
Markov process is a stochastic process describing a sequence
of possible events in which the probability of each event
depends only on the state attained in the previous event.
Informally, this may be thought of as, "What happens next
depends only on the state of affairs now." A countably infinite
sequence, in which the chain moves state at discrete time steps,
gives a discrete-time Markov chain (DTMC). A continuous-time
process is called a continuous-time Markov chain (CTMC).
Markov processes are named in honor of the Russian
mathematician Andrey Markov.
The adjectives Markovian and Markov are used to describe something that is related to a Markov
process.[4]
Principles
Definition
A Markov process is a stochastic process that satisfies the Markov
property (sometimes characterized as "memorylessness"). In simpler
terms, it is a process for which predictions can be made regarding
future outcomes based solely on its present state and—most
importantly—such predictions are just as good as the ones that could
be made knowing the process's full history.[5] In other words,
conditional on the present state of the system, its future and past
states are independent.
The following cases give an overview of the different instances of Markov processes for different levels of state space generality and for discrete versus continuous time:

Discrete-time, countable state space: (discrete-time) Markov chain on a countable or finite state space.
Discrete-time, continuous or general state space: Markov chain on a measurable state space (for example, a Harris chain).
Continuous-time, countable state space: continuous-time Markov process or Markov jump process.
Continuous-time, continuous or general state space: any continuous stochastic process with the Markov property (for example, the Wiener process).
Note that there is no definitive agreement in the literature on the use of some of the terms that signify
special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a
discrete set of times, that is, a discrete-time Markov chain (DTMC),[11] but a few authors use the
term "Markov process" to refer to a continuous-time Markov chain (CTMC) without explicit
mention.[12][13][14] In addition, there are other extensions of Markov processes that are referred to as
such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the
time index need not necessarily be real-valued; like with the state space, there are conceivable processes
that move through index sets with other mathematical constructs. Note that the general state space continuous-time Markov chain is so general that it has no designated term.
While the time parameter is usually discrete, the state space of a Markov chain does not have any
generally agreed-on restrictions: the term may refer to a process on an arbitrary state space.[15]
However, many applications of Markov chains employ finite or countably infinite state spaces, which
have a more straightforward statistical analysis. Besides time-index and state-space parameters, there
are many other variations, extensions and generalizations (see Variations). For simplicity, most of this
article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.
Transitions
The changes of state of the system are called transitions. The probabilities associated with various state
changes are called transition probabilities. The process is characterized by a state space, a transition
matrix describing the probabilities of particular transitions, and an initial state (or initial distribution)
across the state space. By convention, we assume all possible states and transitions have been included
in the definition of the process, so there is always a next state, and the process does not terminate.
A discrete-time random process involves a system which is in a certain state at each step, with the state
changing randomly between steps. The steps are often thought of as moments in time, but they can
equally well refer to physical distance or any other discrete measurement. Formally, the steps are the
integers or natural numbers, and the random process is a mapping of these to states. The Markov
property states that the conditional probability distribution for the system at the next step (and in fact at
all future steps) depends only on the current state of the system, and not additionally on the state of the
system at previous steps.
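To make this concrete, here is a minimal sketch of a discrete-time Markov chain in Python; the two-state "weather" chain, its probabilities, and the function name are illustrative assumptions, not taken from the article:

```python
import random

states = ["sunny", "rainy"]
# transition[i][j] = probability of moving from state i to state j;
# each row sums to 1, so this is a (right) stochastic matrix.
transition = [
    [0.9, 0.1],  # sunny -> sunny, sunny -> rainy
    [0.5, 0.5],  # rainy -> sunny, rainy -> rainy
]

def simulate(start, n_steps, seed=0):
    """Run the chain: each next state is drawn using only the
    current state, which is exactly the Markov property."""
    rng = random.Random(seed)
    state, path = start, [states[start]]
    for _ in range(n_steps):
        state = rng.choices(range(len(states)), weights=transition[state])[0]
        path.append(states[state])
    return path

print(simulate(start=0, n_steps=10))
```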
Since the system changes randomly, it is generally impossible to predict with certainty the state of a
Markov chain at a given point in the future. However, the statistical properties of the system's future can
be predicted. In many applications, it is these statistical properties that are important.
History
Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the
topic in 1906.[16][17][18] Markov processes in continuous time were discovered long before his work in the early 20th century in the form of the Poisson process.[19][20][21] Markov was interested in studying an
extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov who
claimed independence was necessary for the weak law of large numbers to hold.[22] In his first paper on
Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes
of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers
without the independence assumption,[16][17][18] which had been commonly regarded as a requirement
for such mathematical laws to hold.[18] Markov later used Markov chains to study the distribution of
vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such
chains.[16]
In 1912 Henri Poincaré studied Markov chains on finite groups with an aim to study card shuffling.
Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest
in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873,
preceding the work of Markov.[16][17] After the work of Galton and Watson, it was later revealed that
their branching process had been independently discovered and studied around three decades earlier by
Irénée-Jules Bienaymé.[23] Starting in 1928, Maurice Fréchet became interested in Markov chains,
eventually resulting in him publishing in 1938 a detailed study on Markov chains.[16][24]
Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time
Markov processes.[25][26] Kolmogorov was partly inspired by Louis Bachelier's 1900 work on
fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian
movement.[25][27] He introduced and studied a particular set of Markov processes known as diffusion
processes, where he derived a set of differential equations describing the processes.[25][28] Independent
of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–
Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian
movement.[29] The differential equations are now called the Kolmogorov equations[30] or the
Kolmogorov–Chapman equations.[31] Other mathematicians who contributed significantly to the
foundations of Markov processes include William Feller, starting in 1930s, and then later Eugene
Dynkin, starting in the 1950s.[26]
Examples
Mark V. Shaney is a third-order Markov chain program, and a Markov text generator. It ingests the
sample text (the Tao Te Ching, or the posts of a Usenet group) and creates a massive list of every
sequence of three successive words (triplet) which occurs in the text. It then chooses two words at
random, and looks for a word which follows those two in one of the triplets in its massive list. If there
is more than one, it picks at random (identical triplets count separately, so a sequence which occurs
twice is twice as likely to be picked as one which only occurs once). It then adds that word to the
generated text. Then, in the same way, it picks a triplet that starts with the second and third words in
the generated text, and that gives a fourth word. It adds the fourth word, then repeats with the third
and fourth words, and so on.[32]
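The scheme just described fits in a few lines of Python. The following is an illustrative sketch, not Shaney's actual code: the pair of preceding words is the state, and identical triplets are kept as duplicate list entries so that a continuation occurring twice is twice as likely to be picked, as the description requires:

```python
import random
from collections import defaultdict

def build_triplets(words):
    # Map each pair of successive words to every word that follows it.
    # Duplicates are kept on purpose: they carry the frequency weights.
    follows = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        follows[(a, b)].append(c)
    return follows

def generate(words, length=30, seed=0):
    rng = random.Random(seed)
    follows = build_triplets(words)
    pair = rng.choice(list(follows))       # two words chosen at random
    out = list(pair)
    for _ in range(length - 2):
        nxt = follows.get(tuple(out[-2:]))
        if not nxt:                        # no triplet continues this pair
            break
        out.append(rng.choice(nxt))        # picks at random, weighted by count
    return " ".join(out)

sample = "the quick brown fox jumps over the lazy dog and the quick red fox".split()
print(generate(sample))
```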
Random walks based on integers and the gambler's ruin problem are examples of Markov
processes.[33][34] Some variations of these processes were studied hundreds of years earlier in the
context of independent variables.[35][36] Two important examples of Markov processes are the
Wiener process, also known as the Brownian motion process, and the Poisson process,[19] which
are considered the most important and central stochastic processes in the theory of stochastic
processes.[37][38][39] These two processes are Markov processes in continuous time, while random
walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete
time.[33][34]
A famous Markov chain is the so-called "drunkard's walk", a random walk on the number line where,
at each step, the position may change by +1 or −1 with equal probability. From any position there are
two possible transitions, to the next or previous integer. The transition probabilities depend only on
the current position, not on the manner in which the position was reached. For example, the
transition probabilities from 5 to 4 and 5 to 6 are both 0.5, and all other transition probabilities from 5
are 0. These probabilities are independent of whether the system was previously in 4 or 6.
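A sketch of the drunkard's walk in Python (the function name and number of steps are illustrative):

```python
import random

def drunkards_walk(n_steps, seed=0):
    """From any position, move +1 or -1 with probability 0.5 each.
    The next move uses only the current position, never the path taken."""
    rng = random.Random(seed)
    position, path = 0, [0]
    for _ in range(n_steps):
        position += rng.choice([-1, 1])
        path.append(position)
    return path

print(drunkards_walk(20))
```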
A series of independent states (for example, a series of coin flips) satisfies the formal definition of a
Markov chain. However, the theory is usually applied only when the probability distribution of the next
state depends on the current one.
A non-Markov example
Suppose that there is a coin purse containing five quarters (each worth 25¢), five dimes (each worth 10¢), and five nickels (each worth 5¢), and one by one, coins are randomly drawn from the purse and are set on a table. If $X_n$ represents the total value of the coins set on the table after n draws, with $X_0 = 0$, then the sequence $\{X_n : n \in \mathbb{N}\}$ is not a Markov process.

To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. Thus $X_6 = \$0.50$. If we know not just $X_6$, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that $X_7 \geq \$0.60$ with probability 1. But if we do not know the earlier values, then based only on the value $X_6$ we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about $X_7$ are impacted by our knowledge of values prior to $X_6$.

However, it is possible to model this scenario as a Markov process. Instead of defining $X_n$ to represent the total value of the coins on the table, we could define $X_n$ to represent the count of the various coin types on the table. For instance, $X_6 = 1,0,5$ could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by $6 \times 6 \times 6 = 216$ possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in state $X_1 = 0,1,0$. The probability of achieving $X_2$ now depends on $X_1$; for example, the state $X_2 = 1,0,1$ is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of the $X_n = i,j,k$ state depends exclusively on the outcome of the $X_{n-1} = \ell,m,p$ state.
Formal definition
if both
conditional probabilities are well defined, that is, if
The possible values of Xi form a countable set S called the state space of the chain.
Variations

Time-homogeneous Markov chains are processes where $\Pr(X_{n+1} = x \mid X_n = y) = \Pr(X_n = x \mid X_{n-1} = y)$ for all n. The probability of the transition is independent of n.

Stationary Markov chains are processes where $\Pr(X_0 = x_0, X_1 = x_1, \ldots, X_k = x_k) = \Pr(X_n = x_0, X_{n+1} = x_1, \ldots, X_{n+k} = x_k)$ for all n and k. Every stationary chain can be proved to be time-homogeneous by Bayes' rule.

A necessary and sufficient condition for a time-homogeneous Markov chain to be stationary is that the distribution of $X_0$ is a stationary distribution of the Markov chain.

A Markov chain with memory (or a Markov chain of order m), where m is finite, is a process satisfying

$\Pr(X_n = x_n \mid X_{n-1} = x_{n-1}, \ldots, X_1 = x_1) = \Pr(X_n = x_n \mid X_{n-1} = x_{n-1}, \ldots, X_{n-m} = x_{n-m})$ for $n > m$.

In other words, the future state depends on the past m states. It is possible to construct a chain $(Y_n)$ from $(X_n)$ which has the 'classical' Markov property by taking as state space the ordered m-tuples of X values, i.e., $Y_n = (X_n, X_{n-1}, \ldots, X_{n-m+1})$.
Continuous-time Markov chain

Infinitesimal definition

Let $X_t$ be the random variable describing the state of the process at time t, and assume the process is in a state i at time t. Then, knowing $X_t = i$, $X_{t+h} = j$ is independent of previous values $(X_s : s < t)$, and as h → 0 for all j and for all t,

$\Pr(X_{t+h} = j \mid X_t = i) = \delta_{ij} + q_{ij} h + o(h),$

where $\delta_{ij}$ is the Kronecker delta, using the little-o notation. The $q_{ij}$ can be seen as measuring how quickly the transition from i to j happens.

Jump chain/holding time definition

Define a discrete-time Markov chain $Y_n$ to describe the nth jump of the process and variables $S_1, S_2, S_3, \ldots$ to describe holding times in each of the states, where $S_i$ follows the exponential distribution with rate parameter $-q_{Y_i Y_i}$.

Transition probability definition

For any n = 0, 1, 2, 3, ..., times $t_0 \le t_1 \le \cdots \le t_{n+1}$, and states $i_0, i_1, \ldots, i_{n+1}$ recorded at these times, it holds that

$\Pr(X_{t_{n+1}} = i_{n+1} \mid X_{t_0} = i_0, \ldots, X_{t_n} = i_n) = p_{i_n i_{n+1}}(t_{n+1} - t_n),$

where $p_{ij}$ is the solution of the forward equation (a first-order differential equation)

$P'(t) = P(t) Q$

with initial condition $P(0) = I$, the identity matrix.
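Since the forward equation with initial condition $P(0) = I$ is solved by the matrix exponential $P(t) = e^{tQ}$, transition probabilities over any horizon can be computed numerically. A sketch, using an illustrative two-state rate matrix rather than one from the article:

```python
import numpy as np
from scipy.linalg import expm

# Transition rate matrix Q: off-diagonal entries are the rates q_ij,
# and each row sums to zero.
Q = np.array([[-1.0,  1.0],   # leave state 0 at total rate 1
              [ 2.0, -2.0]])  # leave state 1 at total rate 2

t = 0.5
P_t = expm(t * Q)   # P(t) = exp(tQ) solves P'(t) = P(t) Q, P(0) = I
print(P_t)          # each row sums to 1: a stochastic matrix
```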
Stationary distribution relation to eigenvectors and simplices

A stationary distribution π is a (row) vector whose entries are non-negative and sum to 1, and that is unchanged by the operation of the transition matrix P on it, so it is defined by

$\pi P = \pi.$

Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix. By comparing this definition with that of an eigenvector we see that the two concepts are related and that $\pi = e / \sum_i e_i$ is a normalized multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. The values of a stationary distribution are associated with the state space of P and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as $\sum_i 1 \cdot \pi_i = 1$, we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex.
If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π.[41] Additionally, in this case $P^k$ converges to a rank-one matrix in which each row is the stationary distribution π:

$\lim_{k \to \infty} P^k = \mathbf{1} \pi,$

where $\mathbf{1}$ is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, $\lim_{k \to \infty} P^k$ is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.

For some stochastic matrices P, the limit $\lim_{k \to \infty} P^k$ does not exist while the stationary distribution does, as shown by this example of a two-state chain that alternates deterministically between its states:

$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad P^{2k} = I, \qquad P^{2k+1} = P,$

yet the stationary distribution $\pi = (\tfrac{1}{2}, \tfrac{1}{2})$ satisfies $\pi P = \pi$.
Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define

$Q = \lim_{k \to \infty} P^k.$

It is always true that $QP = Q$. Subtracting Q from both sides and factoring then yields $Q(P - I_n) = 0_{n,n}$, where $I_n$ is the identity matrix of size n, and $0_{n,n}$ is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each row in P is 1, there are n+1 equations for determining n unknowns, so it is computationally easier if on the one hand one selects one row in Q and substitutes each of its elements by one, and on the other hand one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of the transformed former matrix to find Q.
Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If $[f(P - I_n)]^{-1}$ exists then[42][41]

$Q = f(0_{n,n}) \, [f(P - I_n)]^{-1}.$

Explanation: The original matrix equation is equivalent to a system of n×n linear equations in n×n variables. And there are n more linear equations from the fact that Q is a right stochastic matrix whose each row sums to 1. So it needs any n×n independent linear equations of the (n×n+n) equations to solve for the n×n variables. In this example, the n equations from "Q multiplied by the right-most column of (P − In)" have been replaced by the n stochastic ones.
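A sketch of this method in NumPy; the 3-state stochastic matrix is an illustrative assumption, not an example from the article:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
n = P.shape[0]

A = P - np.eye(n)
A[:, -1] = 1.0   # f(P - I_n): right-most column replaced with all 1's
# Each row pi of Q satisfies pi @ f(P - I_n) = (0, ..., 0, 1): the first
# n-1 columns enforce pi (P - I_n) = 0, the last enforces sum(pi) = 1.
e = np.zeros(n)
e[-1] = 1.0
pi = e @ np.linalg.inv(A)

print(pi)        # the stationary distribution
print(pi @ P)    # equals pi up to rounding
```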
One thing to notice is that if P has an element Pi,i on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers Pk. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P.
Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector of P, and let Σ be the diagonal matrix of left eigenvalues of P, that is, Σ = diag(λ1, λ2, λ3, ..., λn). Then by eigendecomposition

$U^{\mathsf T} P = \Sigma U^{\mathsf T}, \qquad \text{that is,} \qquad P = (U^{\mathsf T})^{-1} \Sigma U^{\mathsf T}.$

Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no other π which solves the stationary distribution equation above). Let $u_i$ be the i-th column of the U matrix, that is, $u_i$ is the left eigenvector of P corresponding to $\lambda_i$. Also let x be a length-n row vector that represents a valid probability distribution; since the eigenvectors $u_i$ span $\mathbb{R}^n$, we can write

$x = \sum_{i=1}^{n} a_i u_i^{\mathsf T}$

for some coefficients $a_i$. If we multiply x with P from the right and continue this operation with the results, in the end we get the stationary distribution π. In other words, $\pi = a_1 u_1^{\mathsf T} = \lim_{k \to \infty} x P^k$. That means

$\pi^{(k)} = x P^k = \sum_{i=1}^{n} a_i \lambda_i^k u_i^{\mathsf T}.$

Since π is parallel to $u_1$ (normalized by L2 norm) and $\pi^{(k)}$ is a probability vector, $\pi^{(k)}$ approaches $a_1 u_1^{\mathsf T} = \pi$ as k → ∞ exponentially fast, at a rate on the order of $|\lambda_2| / \lambda_1$. This follows because $\lambda_1 = 1 > |\lambda_2| \geq \cdots \geq |\lambda_n|$, hence $\lambda_2 / \lambda_1$ is the dominant term in the error. The smaller the ratio is, the faster the convergence is.[44] Random noise in the state distribution π can also speed up this convergence to the stationary distribution.[45]
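The convergence just described is what power iteration exploits in practice. A sketch, reusing the illustrative matrix from the previous example:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

x = np.array([1.0, 0.0, 0.0])   # start in state 0 with certainty
for _ in range(50):
    x = x @ P                    # one step of the chain per iteration
print(x)                         # approximately the stationary pi

# The second-largest eigenvalue modulus governs the convergence rate.
moduli = sorted(abs(np.linalg.eigvals(P)), reverse=True)
print(moduli[1])                 # |lambda_2|; smaller means faster
```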
Harris chains
Many results for Markov chains with finite state space can be generalized to chains with uncountable
state space through Harris chains.
The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process has a continuous state space.
Properties
Two states are said to communicate with each other if both are reachable from one another by a
sequence of transitions that have positive probability. This is an equivalence relation which yields a set of
communicating classes. A class is closed if the probability of leaving the class is zero. A Markov chain is
irreducible if there is one communicating class, the state space.
A state i has period k if k is the greatest common divisor of the numbers of transitions by which i can be reached, starting from i. That is:

$k = \gcd\{ n > 0 : \Pr(X_n = i \mid X_0 = i) > 0 \}.$

The state is aperiodic if k = 1.
A state i is said to be transient if, starting from i, there is a non-zero probability that the chain will never return to i. It is called recurrent (or persistent) otherwise.[48] For a recurrent state i, the mean hitting time is defined as:

$M_i = E[T_i] = \sum_{n=1}^{\infty} n \cdot f_{ii}^{(n)},$

where $T_i$ is the first return time to i and $f_{ii}^{(n)}$ is the probability that the chain, started at i, first returns to i after exactly n steps.

State i is positive recurrent if $M_i$ is finite and null recurrent otherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties; that is, if one state has the property then all states in its communicating class have the property.[49]

A state i is called absorbing if there are no outgoing transitions from the state other than the self-transition, that is, if $p_{ii} = 1$.
Irreducibility
Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same
period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic.[50]
If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given by $\pi_i = 1 / M_i$.
Ergodicity
A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if
it is recurrent, has a period of 1, and has finite mean recurrence time.
If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integer k such that all entries of $P^k$ are positive.
It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in exactly N steps (that is, all entries of $P^N$ are positive). In the case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1.
A Markov chain with more than one state and just one out-going transition per state is either not
irreducible or not aperiodic, hence cannot be ergodic.
Terminology
Some authors call any irreducible, positive recurrent Markov chains ergodic, even periodic ones.[51] In
fact, merely irreducible Markov chains correspond to ergodic processes, defined according to ergodic
theory.[52]
Some authors call a matrix primitive iff there exists some integer k such that all entries of $M^k$ are positive.[53] Some authors call it regular.[54]
Index of primitivity
The index of primitivity, or exponent, of a regular matrix M is the smallest k such that all entries of $M^k$ are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry of M is zero or positive, and therefore can be found on a directed graph with the sign pattern of M as its adjacency matrix.

There are several combinatorial results about the exponent when there are finitely many states. Let n be the number of states, then[55]

The exponent is at most $(n-1)^2 + 1$. The only case of equality is when the graph of M is a single cycle $1 \to 2 \to \cdots \to n \to 1$ with one extra edge $n \to 2$ (the Wielandt graph).
If M has $k \geq 1$ positive diagonal entries, then its exponent is at most $2n - k - 1$.
If M is symmetric, then $M^2$ has positive diagonal entries, which by the previous proposition means its exponent is at most $2n - 2$.
(Dulmage–Mendelsohn theorem) The exponent is at most $n + s(n - 2)$, where s is the girth (the length of the shortest cycle) of the graph. This can be sharpened further in terms of the diameter of the graph.[56]
An irreducible Markov chain with a stationary distribution determines a measure-preserving dynamical system on its space of trajectories. Since irreducible Markov chains with finite state spaces have a unique stationary distribution, this construction is unambiguous for irreducible Markov chains.
In ergodic theory, a measure-preserving dynamical system with transformation T and measure μ is called "ergodic" iff, for any measurable subset A, $T^{-1}(A) = A$ implies $\mu(A) = 0$ or $\mu(A) = 1$ (up to a null set).
The terminology is inconsistent. Given a Markov chain with a stationary distribution that is strictly
positive on all states, the Markov chain is irreducible iff its corresponding measure-preserving
dynamical system is ergodic.[52]
Markovian representations
In some cases, apparently non-Markovian processes may still have Markovian representations,
constructed by expanding the concept of the "current" and "future" states. For example, let X be a non-
Markovian process. Then define a process Y, such that each state of Y represents a time-interval of states
of X. Mathematically, this takes the form:

$Y(t) = \big\{ X(s) : s \in [a(t), b(t)] \big\}.$

If Y has the Markov property, then it is a Markovian representation of X.
Hitting times
The hitting time is the time, starting in a given set of states, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase-type distribution. The simplest such distribution is that of a single exponentially distributed transition.
Time reversal
For a CTMC $X_t$, the time-reversed process is defined to be $\hat{X}_t = X_{T-t}$, for a fixed time horizon T. By Kelly's lemma this process has the same stationary distribution as the forward process.
A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's
criterion states that the necessary and sufficient condition for a process to be reversible is that the
product of transition rates around a closed loop must be the same in both directions.
One method of finding the stationary probability distribution π of an ergodic continuous-time Markov chain with transition rate matrix Q is by first finding its embedded Markov chain (EMC), whose one-step transition matrix is

$S = I - \left(\operatorname{diag}(Q)\right)^{-1} Q,$

where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero.

To find the stationary probability distribution vector, we must next find $\varphi$ such that $\varphi S = \varphi$, with $\varphi$ being a row vector, such that all elements in $\varphi$ are greater than 0 and $\|\varphi\|_1 = 1$. From this, π may be found as

$\pi = \frac{-\varphi \left(\operatorname{diag}(Q)\right)^{-1}}{\left\| \varphi \left(\operatorname{diag}(Q)\right)^{-1} \right\|_1}.$

(S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.)
Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton
—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random
variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
Markov model

Markov models are used to model changing systems. There are 4 main types of models, that generalize Markov chains depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made:

System state fully observable, system autonomous: Markov chain.
System state partially observable, system autonomous: hidden Markov model.
System state fully observable, system controlled: Markov decision process.
System state partially observable, system controlled: partially observable Markov decision process.
Bernoulli scheme
A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has
identical rows, which means that the next state is independent of even the current state (in addition to
being independent of the past states). A Bernoulli scheme with only two possible states is known as a
Bernoulli process.
Note, however, by the Ornstein isomorphism theorem, that every aperiodic and irreducible Markov
chain is isomorphic to a Bernoulli scheme;[60] thus, one might equally claim that Markov chains are a
"special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The
isomorphism theorem is even a bit stronger: it states that any stationary stochastic process is
isomorphic to a Bernoulli scheme; the Markov chain is just one such example.
Applications
Markov chains have been employed in a wide range of topics across the natural and social sciences, and
in technological applications.
Physics
Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever
probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed
that the dynamics are time-invariant, and that no relevant history need be considered which is not
already included in the state description.[61][62] For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, the Markov chain Monte Carlo method can be used to draw samples randomly from a black box to approximate the probability distribution of attributes over a range of objects.[62]
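As an illustrative sketch of the idea (not a method from the article), the Metropolis algorithm below samples from a density known only up to a normalizing constant; each proposal depends only on the current point, so the samples form a Markov chain whose stationary distribution is the target:

```python
import math
import random

def target(x):
    return math.exp(-0.5 * x * x)   # unnormalized standard normal density

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.uniform(-step, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        if rng.random() < target(proposal) / target(x):
            x = proposal
        samples.append(x)
    return samples

s = metropolis(10_000)
print(sum(s) / len(s))   # close to 0, the mean of the target density
```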
Chemistry
[Figure: Michaelis–Menten kinetics. The enzyme (E) binds a substrate (S) and produces a product (P). Each reaction is a state transition in a Markov chain.]

A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain.[64] Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability a given molecule is in that state.
The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain,
where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly
straightforward, far more complicated reaction networks can also be modeled with Markov chains.[65]
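The two-state example above can be sketched as a continuous-time Markov chain simulated with the Gillespie algorithm, a standard exact method for such models; the rate constants and function name here are illustrative assumptions:

```python
import random

def gillespie_ab(n=100, k_ab=1.0, k_ba=0.5, t_end=10.0, seed=0):
    """n independent molecules switching A -> B at rate k_ab and
    B -> A at rate k_ba; the state is the number of A molecules."""
    rng = random.Random(seed)
    t, a = 0.0, n                    # start with every molecule in state A
    while t < t_end:
        rate_ab = k_ab * a           # propensity of an A -> B event
        rate_ba = k_ba * (n - a)     # propensity of a B -> A event
        total = rate_ab + rate_ba
        if total == 0:
            break
        t += rng.expovariate(total)  # exponential holding time
        if rng.random() < rate_ab / total:
            a -= 1                   # an A -> B reaction fired
        else:
            a += 1                   # a B -> A reaction fired
    return a

print(gillespie_ab())   # fluctuates around n * k_ba / (k_ab + k_ba) ≈ 33
```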
An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in
silico towards a desired class of compounds such as drugs or natural products.[66] As a molecule is
grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past
(that is, it is not aware of what is already bonded to it). It then transitions to the next state when a
fragment is attached to it. The transition probabilities are trained on databases of authentic classes of
compounds.[67]
Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the
reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may
be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the
same monomer). Due to steric effects, second-order Markov effects may also play a role in the growth of
some polymer chains.
Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide
materials can be accurately described by Markov chains.[68]
Biology
Markov chains are used in various areas of biology. Notable examples include:
Phylogenetics and bioinformatics, where most models of DNA evolution use continuous-time Markov
chains to describe the nucleotide present at a given site in the genome.
Population dynamics, where Markov chains are in particular a central tool in the theoretical study of
matrix population models.
Neurobiology, where Markov chains have been used, e.g., to simulate the mammalian neocortex.[69]
Systems biology, for instance with the modeling of viral infection of single cells.[70]
Compartmental models for disease outbreak and epidemic modeling.
Testing
Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of
conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers
("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing.
Speech recognition
Hidden Markov models have been used in automatic speech recognition systems.[77]
Information theory
Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper A
Mathematical Theory of Communication, which in a single step created the field of information theory,
opens by introducing the concept of entropy by modeling texts in a natural language (such as English) as
generated by an ergodic Markov process, where each letter may depend statistically on previous
letters.[78] Such idealized models can capture many of the statistical regularities of systems. Even
without describing the full structure of the system perfectly, such signal models can make possible very
effective data compression through entropy encoding techniques such as arithmetic coding. They also
allow effective state estimation and pattern recognition. Markov chains also play an important role in
reinforcement learning.
Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse
fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition
and bioinformatics (such as in rearrangements detection[79]).
The LZMA lossless data compression algorithm combines Markov chains with Lempel-Ziv compression
to achieve very high compression ratios.
Queueing theory
Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup
Erlang initiated the subject in 1917.[80] This makes them critical for optimizing the performance of
telecommunications networks, where messages must often compete for limited resources (such as
bandwidth).[81]
Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i − 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue.
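When ρ = λ/μ < 1, the M/M/1 queue has the closed-form stationary distribution $\pi_i = (1 - \rho)\rho^i$. The sketch below, with illustrative rates, checks that this distribution satisfies the global balance equations:

```python
lam, mu = 2.0, 5.0                 # illustrative arrival and service rates
rho = lam / mu                     # utilization, must be < 1 for stability

pi = [(1 - rho) * rho ** i for i in range(50)]

# Global balance at state i > 0:
#   (lam + mu) * pi[i] == lam * pi[i-1] + mu * pi[i+1]
for i in range(1, 10):
    lhs = (lam + mu) * pi[i]
    rhs = lam * pi[i - 1] + mu * pi[i + 1]
    assert abs(lhs - rhs) < 1e-12

print("mean number in system:", rho / (1 - rho))   # standard M/M/1 result
```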
Internet applications
The PageRank of a webpage as used by Google is defined by a Markov chain.[82][83][84] It is the probability to be at page i in the stationary distribution on the following Markov chain on all (known) webpages. If N is the number of known webpages, and a page i has $k_i$ outgoing links, then it has transition probability $\frac{1-q}{k_i} + \frac{q}{N}$ for all pages that are linked to and $\frac{q}{N}$ for all pages that are not linked to. The parameter q is taken to be about 0.15.[85]

[Figure: A state diagram that represents the PageRank algorithm.]

Markov models have also been used to analyze the web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.
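A power-iteration sketch of this chain on a tiny hypothetical link graph; the graph, damping value, and variable names are illustrative assumptions:

```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # page -> pages it links to
N = len(links)
q = 0.15                                       # damping parameter

# Row i: probability q/N for every page, plus (1 - q)/k_i for each of
# the k_i pages that page i links to. Every row sums to 1.
P = np.full((N, N), q / N)
for page, outs in links.items():
    for dest in outs:
        P[page, dest] += (1 - q) / len(outs)

rank = np.full(N, 1.0 / N)
for _ in range(100):
    rank = rank @ P        # converges to the stationary distribution
print(rank)                # the PageRank of each page
```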
Statistics
Markov chain methods have also become very important for generating sequences of random numbers
to accurately reflect very complicated desired probability distributions, via a process called Markov chain
Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference
methods, allowing a wide range of posterior distributions to be simulated and their parameters found
numerically.
Economics and finance

Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. D. G.
Champernowne built a Markov chain model of the distribution of income in 1953.[86] Herbert A. Simon
and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm
sizes.[87] Louis Bachelier was the first to observe that stock prices followed a random walk.[88] The
random walk was later seen as evidence in favor of the efficient-market hypothesis and random walk
models were popular in the literature of the 1960s.[89] Regime-switching models of business cycles were
popularized by James D. Hamilton (1989), who used a Markov chain to model switches between periods
of high and low GDP growth (or, alternatively, economic expansions and recessions).[90] A more recent
example is the Markov switching multifractal model of Laurent E. Calvet and Adlai J. Fisher, which
builds upon the convenience of earlier regime-switching models.[91][92] It uses an arbitrarily large
Markov chain to drive the level of volatility of asset returns.
Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to
exogenously model prices of equity (stock) in a general equilibrium setting.[93]
Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit
ratings.[94]
Social sciences
Markov chains are generally used in describing path-dependent arguments, where current structural
configurations condition future outcomes. An example is the reformulation of the idea, originally due to
Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is
common to use a Markov chain to model how once a country reaches a specific level of economic
development, the configuration of structural factors, such as size of the middle class, the ratio of urban to
rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning
from authoritarian to democratic regime.[95]
Games
Markov chains can be used to model many games of chance. The children's games Snakes and Ladders
and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player
starts in a given state (on a given square) and from there has fixed odds of moving to certain other states
(squares).
Music
Markov chains are employed in algorithmic music composition, particularly in software such as Csound,
Max, and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and
a probability vector for each note is constructed, completing a transition probability matrix (see below).
An algorithm is constructed to produce output note values based on the transition matrix weightings,
which could be MIDI note values, frequency (Hz), or any other desirable metric.[96]
1st-order matrix
Note A C♯ E♭
A 0.1 0.6 0.3
C♯ 0.25 0.05 0.7
E♭ 0.7 0.3 0
2nd-order matrix
Notes A D G
AA 0.18 0.6 0.22
AD 0.5 0.5 0
DA 0.25 0 0.75
DG 0.9 0.1 0
GG 0.4 0.4 0.2
GD 1 0 0
A second-order Markov chain can be introduced by considering the current state and also the previous
state, as indicated in the second table. Higher, nth-order chains tend to "group" particular notes
together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains
tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced
by a first-order system.[97]
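A sketch that draws notes from the second-order table above; since the table does not list every possible pair of notes, the illustrative generator below simply stops if it wanders onto an uncovered pair:

```python
import random

# The second-order transition table from above: the next note's
# probabilities depend on the previous two notes.
table = {
    ("A", "A"): {"A": 0.18, "D": 0.6, "G": 0.22},
    ("A", "D"): {"A": 0.5, "D": 0.5, "G": 0.0},
    ("D", "A"): {"A": 0.25, "D": 0.0, "G": 0.75},
    ("D", "G"): {"A": 0.9, "D": 0.1, "G": 0.0},
    ("G", "G"): {"A": 0.4, "D": 0.4, "G": 0.2},
    ("G", "D"): {"A": 1.0, "D": 0.0, "G": 0.0},
}

def compose(start=("A", "A"), length=16, seed=0):
    rng = random.Random(seed)
    notes = list(start)
    for _ in range(length - 2):
        row = table.get(tuple(notes[-2:]))
        if row is None:        # pair not covered by the table: stop
            break
        notes.append(rng.choices(list(row), weights=list(row.values()))[0])
    return notes

print(" ".join(compose()))
```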
Markov chains can be used structurally, as in Xenakis's Analogique A and B.[98] Markov chains are also
used in systems which use a Markov model to react interactively to music input.[99]
Usually musical systems need to enforce specific control constraints on the finite-length sequences they
generate, but control constraints are not compatible with Markov models, since they induce long-range
dependencies that violate the Markov hypothesis of limited memory. In order to overcome this
limitation, a new approach has been proposed.[100]
Baseball
Markov chain models have been used in advanced baseball analysis since 1960, although their use is still
rare. Each half-inning of a baseball game fits the Markov chain model when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and
position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs
created for both individual players as well as a team.[101] He also discusses various kinds of strategies
and play conditions: how Markov chain models have been used to analyze statistics for game situations
such as bunting and base stealing and differences when playing on grass vs. AstroTurf.[102]
Probabilistic forecasting
Markov chains have been used for forecasting in several areas: for example, price trends,[106] wind
power,[107] stochastic terrorism,[108][109] and solar irradiance.[110] The Markov chain forecasting models
utilize a variety of settings, from discretizing the time series,[107] to hidden Markov models combined
with wavelets,[106] and the Markov chain mixture distribution model (MCM).[110]
See also
Dynamics of Markovian particles
Gauss–Markov process
Markov chain approximation method
Markov chain geostatistics
Markov chain mixing time
Markov chain tree theorem
Markov decision process
Markov information source
Markov odometer
Markov operator
Markov random field
Master equation
Quantum Markov chain
Semi-Markov process
Stochastic cellular automaton
Telescoping Markov chain
Variable-order Markov model
Notes
1. Sean Meyn; Richard L. Tweedie (2 April 2009). Markov Chains and Stochastic Stability ([Link]
[Link]/books?id=Md7RnYEPkJwC). Cambridge University Press. p. 3. ISBN 978-0-521-
73182-9.
2. Reuven Y. Rubinstein; Dirk P. Kroese (20 September 2011). Simulation and the Monte Carlo Method
([Link] John Wiley & Sons. p. 225. ISBN 978-1-118-
21052-9.
3. Dani Gamerman; Hedibert F. Lopes (10 May 2006). Markov Chain Monte Carlo: Stochastic
Simulation for Bayesian Inference, Second Edition ([Link]
wC). CRC Press. ISBN 978-1-58488-587-0.
4. "Markovian" ([Link] Oxford English Dictionary
(Online ed.). Oxford University Press. (Subscription or participating institution membership ([Link]
[Link]/public/login/loggingin#withyourlibrary) required.)
5. Øksendal, B. K. (Bernt Karsten) (2003). Stochastic differential equations : an introduction with
applications (6th ed.). Berlin: Springer. ISBN 3540047581. OCLC 52203046 ([Link]
org/oclc/52203046).
6. Søren Asmussen (15 May 2003). Applied Probability and Queues ([Link]
d=BeYaTxesKy0C). Springer Science & Business Media. p. 7. ISBN 978-0-387-00211-8.
7. Emanuel Parzen (17 June 2015). Stochastic Processes ([Link]
QAAQBAJ). Courier Dover Publications. p. 188. ISBN 978-0-486-79688-8.
8. Samuel Karlin; Howard E. Taylor (2 December 2012). A First Course in Stochastic Processes (http
s://[Link]/books?id=dSDxjX9nmmMC). Academic Press. pp. 29 and 30. ISBN 978-0-08-
057041-9.
9. John Lamperti (1977). Stochastic processes: a survey of the mathematical theory ([Link]
[Link]/books?id=Pd4cvgAACAAJ). Springer-Verlag. pp. 106–121. ISBN 978-3-540-90275-1.
10. Sheldon M. Ross (1996). Stochastic processes ([Link]
AJ). Wiley. pp. 174 and 231. ISBN 978-0-471-12062-9.
11. Everitt, B.S. (2002) The Cambridge Dictionary of Statistics. CUP. ISBN 0-521-81099-X
12. Parzen, E. (1962) Stochastic Processes, Holden-Day. ISBN 0-8162-6664-6 (Table 6.1)
13. Dodge, Y. (2003) The Oxford Dictionary of Statistical Terms, OUP. ISBN 0-19-920613-9 (entry for
"Markov chain")
14. Dodge, Y. The Oxford Dictionary of Statistical Terms, OUP. ISBN 0-19-920613-9
15. Meyn, S. Sean P., and Richard L. Tweedie. (2009) Markov chains and stochastic stability. Cambridge
University Press. (Preface, p. iii)
16. Charles Miller Grinstead; James Laurie Snell (1997). Introduction to Probability ([Link]
etails/flooved3489). American Mathematical Soc. pp. 464 ([Link]
e/n473)–466. ISBN 978-0-8218-0749-1.
17. Pierre Bremaud (9 March 2013). Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues
([Link] Springer Science & Business Media. p. ix.
ISBN 978-1-4757-3124-8.
18. Hayes, Brian (2013). "First links in the Markov chain". American Scientist. 101 (2): 92–96.
doi:10.1511/2013.101.92 ([Link]
19. Sheldon M. Ross (1996). Stochastic processes ([Link]
AJ). Wiley. pp. 235 and 358. ISBN 978-0-471-12062-9.
20. Jarrow, Robert; Protter, Philip (2004). "A short history of stochastic integration and mathematical
finance: The early years, 1880–1970". A Festschrift for Herman Rubin. pp. 75–91.
CiteSeerX [Link].632 ([Link]
doi:10.1214/lnms/1196285381 ([Link] ISBN 978-0-
940600-61-4.
21. Guttorp, Peter; Thorarinsdottir, Thordis L. (2012). "What Happened to Discrete Chaos, the
Quenouille Process, and the Sharp Markov Property? Some History of Stochastic Point Processes".
International Statistical Review. 80 (2): 253–268. doi:10.1111/j.1751-5823.2012.00181.x ([Link]
rg/10.1111%2Fj.1751-5823.2012.00181.x).
22. Seneta, E. (1996). "Markov and the Birth of Chain Dependence Theory". International Statistical
Review. 64 (3): 255–257. doi:10.2307/1403785 ([Link]
JSTOR 1403785 ([Link]
23. Seneta, E. (1998). "I.J. Bienaymé [1796–1878]: Criticality, Inequality, and Internationalization".
International Statistical Review. 66 (3): 291–292. doi:10.2307/1403518 ([Link]
403518). JSTOR 1403518 ([Link]
24. Bru B, Hertz S (2001). "Maurice Fréchet". In Heyde CC, Seneta E, Crépel P, Fienberg SE, Gani J
(eds.). Statisticians of the Centuries. New York, NY: Springer. pp. 331–334. doi:10.1007/978-1-4613-
0179-0_71 ([Link] ISBN 978-0-387-95283-3.
25. Kendall, D. G.; Batchelor, G. K.; Bingham, N. H.; Hayman, W. K.; Hyland, J. M. E.; Lorentz, G. G.;
Moffatt, H. K.; Parry, W.; Razborov, A. A.; Robinson, C. A.; Whittle, P. (1990). "Andrei Nikolaevich
Kolmogorov (1903–1987)". Bulletin of the London Mathematical Society. 22 (1): 33.
doi:10.1112/blms/22.1.31 ([Link]
26. Cramér, Harald (1976). "Half a Century with Probability Theory: Some Personal Recollections" (http
s://[Link]/10.1214%2Faop%2F1176996025). The Annals of Probability. 4 (4): 509–546.
doi:10.1214/aop/1176996025 ([Link]
27. Marc Barbut; Bernard Locker; Laurent Mazliak (23 August 2016). Paul Lévy and Maurice Fréchet: 50
Years of Correspondence in 107 Letters ([Link]
Springer London. p. 5. ISBN 978-1-4471-7262-8.
28. Valeriy Skorokhod (5 December 2005). Basic Principles and Applications of Probability Theory (http
s://[Link]/books?id=dQkYMjRK3fYC). Springer Science & Business Media. p. 146.
ISBN 978-3-540-26312-8.
29. Bernstein, Jeremy (2005). "Bachelier". American Journal of Physics. 73 (5): 395–398.
Bibcode:2005AmJPh..73..395B ([Link]
doi:10.1119/1.1848117 ([Link]
30. William J. Anderson (6 December 2012). Continuous-Time Markov Chains: An Applications-Oriented
Approach ([Link] Springer Science &
Business Media. p. vii. ISBN 978-1-4612-3038-0.
31. Kendall, D. G.; Batchelor, G. K.; Bingham, N. H.; Hayman, W. K.; Hyland, J. M. E.; Lorentz, G. G.;
Moffatt, H. K.; Parry, W.; Razborov, A. A.; Robinson, C. A.; Whittle, P. (1990). "Andrei Nikolaevich
Kolmogorov (1903–1987)". Bulletin of the London Mathematical Society. 22 (1): 57.
doi:10.1112/blms/22.1.31 ([Link]
32. Subramanian, Devika (Fall 2008). "The curious case of Mark V. Shaney" ([Link]
evika/comp140/[Link]) (PDF). Computer Science. Comp 140 course notes, Fall 2008. William
Marsh Rice University. Retrieved 30 November 2024.
33. Ionut Florescu (7 November 2014). Probability and Stochastic Processes ([Link]
books?id=Z5xEBQAAQBAJ&pg=PR22). John Wiley & Sons. pp. 373 and 374. ISBN 978-1-118-
59320-2.
34. Samuel Karlin; Howard E. Taylor (2 December 2012). A First Course in Stochastic Processes (http
s://[Link]/books?id=dSDxjX9nmmMC). Academic Press. p. 49. ISBN 978-0-08-057041-
9.
35. Weiss, George H. (2006). "Random Walks". Encyclopedia of Statistical Sciences. p. 1.
doi:10.1002/0471667196.ess2180.pub2 ([Link]
ISBN 978-0471667193.
36. Michael F. Shlesinger (1985). The Wonderful world of stochastics: a tribute to Elliott W. Montroll (http
s://[Link]/books?id=p6fvAAAAMAAJ). North-Holland. pp. 8–10. ISBN 978-0-444-86937-
1.
37. Emanuel Parzen (17 June 2015). Stochastic Processes ([Link]
QAAQBAJ). Courier Dover Publications. p. 7, 8. ISBN 978-0-486-79688-8.
38. Joseph L. Doob (1990). Stochastic processes ([Link]). Wiley. pp. 46, 47.
39. Donald L. Snyder; Michael I. Miller (6 December 2012). Random Point Processes in Time and Space
([Link] Springer Science & Business Media. p. 32.
ISBN 978-1-4612-3166-0.
40. Norris, J. R. (1997). "Continuous-time Markov chains I". Markov Chains. pp. 60–107.
doi:10.1017/CBO9780511810633.004 ([Link]
ISBN 9780511810633.
41. Serfozo, Richard (2009). Basics of Applied Stochastic Processes. Probability and Its Applications.
Berlin: Springer. doi:10.1007/978-3-540-89332-5 ([Link]
ISBN 978-3-540-89331-8.
42. "Chapter 11 "Markov Chains" " ([Link]
obability_book/[Link]) (PDF). Retrieved 2017-06-02.
43. Schmitt, Florian; Rothlauf, Franz (2001). "On the Importance of the Second Largest Eigenvalue on
the Convergence Rate of Genetic Algorithms". Proceedings of the 14th Symposium on Reliable
Distributed Systems. CiteSeerX [Link].6191 ([Link]
[Link].6191).
44. Rosenthal, Jeffrey S. (1995). "Convergence Rates for Markov Chains" ([Link]
132659). SIAM Review. 37 (3): 387–405. doi:10.1137/1037083 ([Link]
3). JSTOR 2132659 ([Link] Retrieved 2021-05-31.
45. Franzke, Brandon; Kosko, Bart (1 October 2011). "Noise can speed convergence in Markov chains".
Physical Review E. 84 (4): 041112. Bibcode:2011PhRvE..84d1112F ([Link]
bs/2011PhRvE..84d1112F). doi:10.1103/PhysRevE.84.041112 ([Link]
E.84.041112). PMID 22181092 ([Link]
46. Spitzer, Frank (1970). "Interaction of Markov Processes" ([Link]
70%2990034-4). Advances in Mathematics. 5 (2): 246–290. doi:10.1016/0001-8708(70)90034-4 (htt
ps://[Link]/10.1016%2F0001-8708%2870%2990034-4).
47. Dobrushin, R. L.; Kryukov, V.I.; Toom, A. L. (1978). Stochastic Cellular Systems: Ergodicity, Memory,
Morphogenesis ([Link]
+chains+toom+Dobrushin&pg=PA181). Manchester University Press. ISBN 9780719022067.
Retrieved 2016-03-04.
48. Heyman, Daniel P.; Sobel, Mathew J. (1982). Stochastic Models in Operations Research, Volume 1.
New York: McGraw-Hill. p. 230. ISBN 0-07-028631-0.
49. Peres, Yuval. "Show that positive recurrence is a class property" ([Link]
uestions/4572155/show-that-positive-recurrence-is-a-class-property). Mathematics Stack Exchange.
Retrieved 2024-02-01.
50. Lalley, Steve (2016). "Markov Chains: Basic Theory" ([Link]
[Link]) (PDF). Retrieved 22 June 2024.
51. Parzen, Emanuel (1962). Stochastic Processes. San Francisco: Holden-Day. p. 145. ISBN 0-8162-
6664-6.
52. Shalizi, Cosma (1 Dec 2023). "Ergodic Theory" ([Link]
[Link]. Retrieved 2024-02-01.
53. Seneta, E. (Eugene) (1973). Non-negative matrices; an introduction to theory and applications (htt
p://[Link]/details/nonnegativematri00esen_0). Internet Archive. New York, Wiley. ISBN 978-0-
470-77605-6.
69. George, Dileep; Hawkins, Jeff (2009). Friston, Karl J. (ed.). "Towards a Mathematical Theory of
Cortical Micro-circuits" ([Link] PLOS Comput
Biol. 5 (10): e1000532. Bibcode:2009PLSCB...5E0532G ([Link]
CB...5E0532G). doi:10.1371/[Link].1000532 ([Link]
PMC 2749218 ([Link] PMID 19816557 ([Link]
[Link]/19816557).
70. Gupta, Ankur; Rawlings, James B. (April 2014). "Comparison of Parameter Estimation Methods in
Stochastic Chemical Kinetic Models: Examples in Systems Biology" ([Link]
mc/articles/PMC4946376). AIChE Journal. 60 (4): 1253–1268. Bibcode:2014AIChE..60.1253G (http
s://[Link]/abs/2014AIChE..60.1253G). doi:10.1002/aic.14409 ([Link]
2%2Faic.14409). PMC 4946376 ([Link]
PMID 27429455 ([Link]
71. Aguiar, R. J.; Collares-Pereira, M.; Conde, J. P. (1988). "Simple procedure for generating sequences
of daily radiation values using a library of Markov transition matrices". Solar Energy. 40 (3): 269–279.
Bibcode:1988SoEn...40..269A ([Link]
doi:10.1016/0038-092X(88)90049-7 ([Link]
72. Ngoko, B. O.; Sugihara, H.; Funaki, T. (2014). "Synthetic generation of high temporal resolution solar
radiation data using Markov models". Solar Energy. 103: 160–170. Bibcode:2014SoEn..103..160N (h
ttps://[Link]/abs/2014SoEn..103..160N). doi:10.1016/[Link].2014.02.026 ([Link]
[Link]/10.1016%[Link].2014.02.026).
73. Bright, J. M.; Smith, C. I.; Taylor, P. G.; Crook, R. (2015). "Stochastic generation of synthetic minutely
irradiance time series derived from mean hourly weather observation data" ([Link]
[Link].2015.02.032). Solar Energy. 115: 229–242. Bibcode:2015SoEn..115..229B ([Link]
[Link]/abs/2015SoEn..115..229B). doi:10.1016/[Link].2015.02.032 ([Link]
16%[Link].2015.02.032).
74. Munkhammar, J.; Widén, J. (2018). "An N-state Markov-chain mixture distribution model of the clear-
sky index". Solar Energy. 173: 487–495. Bibcode:2018SoEn..173..487M ([Link]
du/abs/2018SoEn..173..487M). doi:10.1016/[Link].2018.07.056 ([Link]
ner.2018.07.056).
75. Morf, H. (1998). "The stochastic two-state solar irradiance model (STSIM)". Solar Energy. 62 (2):
101–112. Bibcode:1998SoEn...62..101M ([Link]
doi:10.1016/S0038-092X(98)00004-8 ([Link]
76. Munkhammar, J.; Widén, J. (2018). "A Markov-chain probability distribution mixture approach to the
clear-sky index". Solar Energy. 170: 174–183. Bibcode:2018SoEn..170..174M ([Link]
[Link]/abs/2018SoEn..170..174M). doi:10.1016/[Link].2018.05.055 ([Link]
solener.2018.05.055).
77. Mor, Bhavya; Garhwal, Sunita; Kumar, Ajay (May 2021). "A Systematic Review of Hidden Markov
Models and Their Applications" ([Link] Archives of
Computational Methods in Engineering. 28 (3): 1429–1448. doi:10.1007/s11831-020-09422-4 (http
s://[Link]/10.1007%2Fs11831-020-09422-4). ISSN 1134-3060 ([Link]
-3060).
78. Thomsen, Samuel W. (2009), "Some evidence concerning the genesis of Shannon's information
theory", Studies in History and Philosophy of Science, 40 (1): 81–91, Bibcode:2009SHPSA..40...81T
([Link] doi:10.1016/[Link].2008.12.011 (https://
[Link]/10.1016%[Link].2008.12.011)
79. Pratas, D; Silva, R; Pinho, A; Ferreira, P (May 18, 2015). "An alignment-free method to find and
visualise rearrangements between pairs of DNA sequences" ([Link]
es/PMC4434998). Scientific Reports. 5 (10203): 10203. Bibcode:2015NatSR...510203P ([Link]
[Link]/abs/2015NatSR...510203P). doi:10.1038/srep10203 ([Link]
ep10203). PMC 4434998 ([Link] PMID 25984837
([Link]
80. O'Connor, John J.; Robertson, Edmund F., "Markov chain" ([Link]
graphies/[Link]), MacTutor History of Mathematics Archive, University of St Andrews
81. S. P. Meyn, 2007. Control Techniques for Complex Networks ([Link]
pm_files/CTCN/[Link]) Archived ([Link]
p://[Link]/archive/spm_files/CTCN/[Link]) 2015-05-13 at the
Wayback Machine, Cambridge University Press, 2007.
82. U.S. patent 6,285,999 ([Link]
83. Gupta, Brij; Agrawal, Dharma P.; Yamaguchi, Shingo (16 May 2016). Handbook of Research on
Modern Cryptographic Solutions for Computer and Cyber Security ([Link]
d=Ctk6DAAAQBAJ&pg=PA448). IGI Global. pp. 448–. ISBN 978-1-5225-0106-0.
84. Langville, Amy N.; Meyer, Carl D. (2006). "A Reordering for the PageRank Problem" ([Link]
[Link]/Meyer/PS_Files/[Link]) (PDF). SIAM Journal on Scientific
Computing. 27 (6): 2112–2113. Bibcode:2006SJSC...27.2112L ([Link]
06SJSC...27.2112L). CiteSeerX [Link].8652 ([Link]
[Link].8652). doi:10.1137/040607551 ([Link]
85. Page, Lawrence; Brin, Sergey; Motwani, Rajeev; Winograd, Terry (1999). The PageRank Citation
Ranking: Bringing Order to the Web (Technical report). CiteSeerX [Link].1768 ([Link]
[Link]/viewdoc/summary?doi=[Link].1768).
86. Champernowne, D (1953). "A model of income distribution". The Economic Journal. 63 (250): 318–
51. doi:10.2307/2227127 ([Link] JSTOR 2227127 ([Link]
rg/stable/2227127).
87. Simon, Herbert; C Bonini (1958). "The size distribution of business firms". Am. Econ. Rev. 42: 425–
40.
88. Bachelier, Louis (1900). "Théorie de la spéculation". Annales Scientifiques de l'École Normale
Supérieure. 3: 21–86. doi:10.24033/asens.476 ([Link]
hdl:2027/coo.31924001082803 ([Link]
89. Fama, E. (1965). "The behavior of stock market prices". Journal of Business. 38.
90. Hamilton, James (1989). "A new approach to the economic analysis of nonstationary time series and
the business cycle". Econometrica. 57 (2): 357–84. CiteSeerX [Link].3582 ([Link]
[Link]/viewdoc/summary?doi=[Link].3582). doi:10.2307/1912559 ([Link]
912559). JSTOR 1912559 ([Link]
91. Calvet, Laurent E.; Fisher, Adlai J. (2001). "Forecasting Multifractal Volatility" ([Link]
handle/2451/26894). Journal of Econometrics. 105 (1): 27–58. doi:10.1016/S0304-4076(01)00069-0
([Link]
92. Calvet, Laurent; Adlai Fisher (2004). "How to Forecast long-run volatility: regime-switching and the
estimation of multifractal processes". Journal of Financial Econometrics. 2: 49–83.
CiteSeerX [Link].8334 ([Link]
doi:10.1093/jjfinec/nbh003 ([Link]
93. Brennan, Michael; Xiab, Yihong. "Stock Price Volatility and the Equity Premium" ([Link]
org/web/20081228200849/[Link] (PDF).
Department of Finance, the Anderson School of Management, UCLA. Archived from the original (htt
p://[Link]/uploadImages/[Link]) (PDF) on 2008-12-28.
94. "A Markov Chain Example in Credit Risk Modelling" ([Link]
tp://[Link]/~ww2040/4106S11/MC_BondRating.pdf) (PDF). Columbia University.
Archived from the original ([Link] (PDF)
on March 24, 2016.
95. Acemoglu, Daron; Georgy Egorov; Konstantin Sonin (2011). "Political model of social evolution" (http
s://[Link]/pmc/articles/PMC3271566). Proceedings of the National Academy of
Sciences. 108 (Suppl 4): 21292–21296. Bibcode:2011PNAS..10821292A ([Link]
du/abs/2011PNAS..10821292A). CiteSeerX [Link].6090 ([Link]
ummary?doi=[Link].6090). doi:10.1073/pnas.1019454108 ([Link]
9454108). PMC 3271566 ([Link] PMID 22198760
([Link]
96. K McAlpine; E Miranda; S Hoggar (1999). "Making Music with Algorithms: A Case-Study System".
Computer Music Journal. 23 (2): 19–30. doi:10.1162/014892699559733 ([Link]
014892699559733).
97. Curtis Roads, ed. (1996). The Computer Music Tutorial. MIT Press. ISBN 978-0-262-18158-7.
98. Xenakis, Iannis; Kanach, Sharon (1992) Formalized Music: Mathematics and Thought in
Composition, Pendragon Press. ISBN 1576470792
99. "Continuator" ([Link]
Archived from the original ([Link] on July 13, 2012.
100. Pachet, F.; Roy, P.; Barbieri, G. (2011) "Finite-Length Markov Processes with Constraints" ([Link]
[Link]/downloads/papers/2011/[Link]) Archived ([Link]
4183247/[Link] 2012-04-14 at the Wayback
Machine, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, IJCAI,
pages 635–642, Barcelona, Spain, July 2011
101. Pankin, Mark D. "MARKOV CHAIN MODELS: THEORETICAL BACKGROUND" ([Link]
org/web/20071209122054/[Link] Archived from the original (htt
p://[Link]/markov/[Link]) on 2007-12-09. Retrieved 2007-11-26.
102. Pankin, Mark D. "BASEBALL AS A MARKOV CHAIN" ([Link]
Retrieved 2009-04-24.
103. "Poet's Corner – Fieralingue" ([Link]
[Link]?name=Content&pa=list_pages_categories&cid=111). Archived from the original (http://
[Link]/[Link]?name=Content&pa=list_pages_categories&cid=111) on December
6, 2010.
104. Kenner, Hugh; O'Rourke, Joseph (November 1984). "A Travesty Generator for Micros". BYTE. 9
(12): 129–131, 449–469.
105. Hartman, Charles (1996). Virtual Muse: Experiments in Computer Poetry ([Link]
irtualmuseexper00hart). Hanover, NH: Wesleyan University Press. ISBN 978-0-8195-2239-9.
106. de Souza e Silva, E.G.; Legey, L.F.L.; de Souza e Silva, E.A. (2010). "Forecasting oil price trends
using wavelets and hidden Markov models" ([Link]
988310001271). Energy Economics. 32 (6): 1507. Bibcode:2010EneEc..32.1507D ([Link]
[Link]/abs/2010EneEc..32.1507D). doi:10.1016/[Link].2010.08.006 ([Link]
[Link].2010.08.006).
107. Carpinone, A; Giorgio, M; Langella, R.; Testa, A. (2015). "Markov chain modeling for very-short-term
wind power forecasting" ([Link] Electric Power Systems
Research. 122: 152–158. Bibcode:2015EPSR..122..152C ([Link]
PSR..122..152C). doi:10.1016/[Link].2014.12.025 ([Link]
108. Woo, Gordon (2002-04-01). "Quantitative Terrorism Risk Assessment" ([Link]
ght/content/doi/10.1108/eb022949/full/html). The Journal of Risk Finance. 4 (1): 7–14.
doi:10.1108/eb022949 ([Link] Retrieved 5 October 2023.
109. Woo, Gordon (December 2003). "Insuring Against Al-Quaeda" ([Link]
03/insurance03/[Link]) (PDF). Cambridge: National Bureau of Economic Research. Retrieved
26 March 2024.
110. Munkhammar, J.; van der Meer, D.W.; Widén, J. (2019). "Probabilistic forecasting of high-resolution
clear-sky index time-series using a Markov-chain mixture distribution model". Solar Energy. 184:
688–695. Bibcode:2019SoEn..184..688M ([Link]
doi:10.1016/[Link].2019.04.014 ([Link]
References
A. A. Markov (1906) "Rasprostranenie zakona bol'shih chisel na velichiny, zavisyaschie drug ot
druga". Izvestiya Fiziko-matematicheskogo obschestva pri Kazanskom universitete, 2-ya seriya, tom
15, pp. 135–156.
A. A. Markov (1971). "Extension of the limit theorems of probability theory to a sum of variables
connected in a chain". reprinted in Appendix B of: R. Howard. Dynamic Probabilistic Systems,
volume 1: Markov Chains. John Wiley and Sons.
Classical Text in Translation: Markov, A. A. (2006). "An Example of Statistical Investigation of the Text
Eugene Onegin Concerning the Connection of Samples in Chains". Science in Context. 19 (4).
Translated by Link, David: 591–600. doi:10.1017/s0269889706001074 ([Link]
0269889706001074).
Leo Breiman (1992) [1968] Probability. Original edition published by Addison-Wesley; reprinted by
Society for Industrial and Applied Mathematics ISBN 0-89871-296-3. (See Chapter 7)
J. L. Doob (1953) Stochastic Processes. New York: John Wiley and Sons ISBN 0-471-52369-0.
S. P. Meyn and R. L. Tweedie (1993) Markov Chains and Stochastic Stability. London: Springer-
Verlag ISBN 0-387-19832-6. online: MCSS ([Link]
[Link]/meyn/www/spm_files/[Link]) . Second edition to appear, Cambridge University
Press, 2009.
Dynkin, Eugene Borisovich (1965). Markov Processes ([Link]
001dynk). Grundlehren der mathematischen Wissenschaften. Vol. I (121). Translated by Fabius,
Jaap; Greenberg, Vida Lazarus; Maitra, Ashok Prasad; Majone, Giandomenico. Berlin: Springer-
Verlag. doi:10.1007/978-3-662-00031-1 ([Link] ISBN 978-
3-662-00033-5. Title-No. 5104.; Markov Processes ([Link]
dynk). Grundlehren der mathematischen Wissenschaften. Vol. II (122). 1965. doi:10.1007/978-3-662-
25360-1 ([Link] ISBN 978-3-662-23320-7. Title-No. 5105.
(NB. This was originally published in Russian as Марковские процессы (Markovskiye protsessy) by
Fizmatgiz in 1963 and translated to English with the assistance of the author.)
S. P. Meyn. Control Techniques for Complex Networks. Cambridge University Press, 2007.
ISBN 978-0-521-88441-9. Appendix contains abridged Meyn & Tweedie. online: CTCN ([Link]
[Link]/web/20100619011046/[Link]
Booth, Taylor L. (1967). Sequential Machines and Automata Theory (1st ed.). New York, NY: John
Wiley and Sons, Inc. Library of Congress Card Catalog Number 67-25924. Extensive, wide-ranging
book meant for specialists, written for both theoretical computer scientists as well as electrical
engineers. With detailed explanations of state minimization techniques, FSMs, Turing machines,
Markov processes, and undecidability. Excellent treatment of Markov processes pp. 449ff. Discusses
Z-transforms, D transforms in their context.
Kemeny, John G.; Hazleton Mirkil; J. Laurie Snell; Gerald L. Thompson (1959). Finite Mathematical
Structures ([Link] (1st ed.). Englewood Cliffs,
NJ: Prentice-Hall, Inc. Library of Congress Card Catalog Number 59-12841. Classical text. cf
Chapter 6 Finite Markov Chains pp. 384ff.
John G. Kemeny & J. Laurie Snell (1960) Finite Markov Chains, D. van Nostrand Company ISBN 0-
442-04328-7
E. Nummelin. "General irreducible Markov chains and non-negative operators". Cambridge
University Press, 1984, 2004. ISBN 0-521-60494-X
Seneta, E. Non-negative matrices and Markov chains. 2nd rev. ed., 1981, XVI, 288 p., Softcover
Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973) ISBN 978-0-
387-29765-1
Kishor S. Trivedi, Probability and Statistics with Reliability, Queueing, and Computer Science
Applications, John Wiley & Sons, Inc. New York, 2002. ISBN 0-471-33341-7.
K. S. Trivedi and R. Sahner, SHARPE at the age of twenty-two, vol. 36, no. 4, pp. 52–57, ACM
SIGMETRICS Performance Evaluation Review, 2009.
R. A. Sahner, K. S. Trivedi and A. Puliafito, Performance and reliability analysis of computer systems:
an example-based approach using the SHARPE software package, Kluwer Academic Publishers,
1996. ISBN 0-7923-9650-2.
G. Bolch, S. Greiner, H. de Meer and K. S. Trivedi, Queueing Networks and Markov Chains, John
Wiley, 2nd edition, 2006. ISBN 978-0-7923-9650-5.
External links
"Markov chain" ([Link] Encyclopedia
of Mathematics, EMS Press, 2001 [1994]
Markov Chains chapter in American Mathematical Society's introductory probability book ([Link]
[Link]/~chance/teaching_aids/books_articles/probability_book/[Link]) Archived (http
s://[Link]/web/20080522131917/[Link]
articles/probability_book/[Link]) 2008-05-22 at the Wayback Machine
Introduction to Markov Chains ([Link] on YouTube
A visual explanation of Markov Chains ([Link]
Original paper by A.A Markov (1913): An Example of Statistical Investigation of the Text Eugene
Onegin Concerning the Connection of Samples in Chains (translated from Russian) ([Link]
[Link]/research/markov/DavidLink_AnExampleOfStatistical_MarkovTrans_2007.pdf)