Continuous-time Markov process

In probability theory, a continuous-time Markov process is a stochastic process { X(t) : t ≥ 0 } that satisfies the Markov property and takes values from a set called the state space; it is the continuous-time version of a Markov chain. The Markov property states that at any times s > t > 0, the conditional probability distribution of the process at time s, given the whole history of the process up to and including time t, depends only on the state of the process at time t. In effect, the state of the process at time s is conditionally independent of the history of the process before time t, given the state of the process at time t.
Mathematical definitions
A Markov process, like a Markov chain, can be thought of as a directed graph of states of the system. The difference is that, rather than transitioning to a new (possibly the same) state at each time step, the system will remain in the current state for some random (in particular, exponentially distributed) amount of time and then transition to a different state. The process is characterized by "transition rates" q_{ij} between states i and j. Let X(t) be the random variable describing the state of the process at time t, and assume that the process is in a state i at time t. q_{ij} (for j ≠ i) measures how quickly that transition happens. Precisely, after a tiny amount of time h, the probability the state is now j is given by^{[1]}

Pr(X(t + h) = j | X(t) = i) = q_{ij} h + o(h),
where o(h) represents a quantity that goes to zero faster than h goes to zero (see the article on order notation). Hence, over a sufficiently small interval of time, the probability of a particular transition (between different states) is roughly proportional to the duration of that interval. The q_{ij} are called transition rates because if we have a large ensemble of n systems in state i, they will switch over to state j at an average rate of nq_{ij} until n decreases appreciably.
The transition rates q_{ij} are typically given as the ij-th elements of the transition rate matrix Q (also known as an intensity matrix^{[2]}). As the transition rate matrix contains rates, the rate of departing from one state to arrive at another should be nonnegative, and the rate at which the system remains in a state should be negative. The rates for a given state should sum to zero, yielding the diagonal elements

q_{ii} = −∑_{j ≠ i} q_{ij}.
With this notation, and letting p(t) be the row vector with entries p_{j}(t) = Pr(X(t) = j), the evolution of a continuous-time Markov process is given by the first-order differential equation

p′(t) = p(t) Q.
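As a concrete sketch of these definitions, the snippet below (pure Python; the three-state rate matrix is an invented example, and `evolve` is a hypothetical helper) builds a Q whose rows sum to zero and Euler-integrates the forward equation p′(t) = p(t) Q:

```python
# A hypothetical three-state transition rate matrix: off-diagonal entries
# are the rates q_ij, and each diagonal entry is minus the sum of the other
# entries in its row, so every row sums to zero.
Q = [
    [-3.0,  2.0,  1.0],
    [ 1.0, -1.5,  0.5],
    [ 0.5,  0.5, -1.0],
]

def evolve(p, Q, t, dt=1e-4):
    """Euler-integrate the forward equation p'(t) = p(t) Q up to time t."""
    n = len(p)
    for _ in range(int(t / dt)):
        p = [p[j] + dt * sum(p[i] * Q[i][j] for i in range(n))
             for j in range(n)]
    return p

p = evolve([1.0, 0.0, 0.0], Q, t=2.0)   # start in state 0
# because each row of Q sums to zero, sum(p) stays 1 at every step
```

Note that the row-sum-zero property of Q is exactly what keeps p a probability distribution under the Euler update.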
The probability that no transition happens in some time r, given that the process is in state i, is

Pr(X(s) = i for all s ∈ [t, t + r] | X(t) = i) = e^{q_{ii} r}.
That is, the probability distribution of the waiting time until the first transition is an exponential distribution with rate parameter −q_{ii}, and continuous-time Markov processes are thus memoryless processes.
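The two facts above suggest a direct way to simulate such a process: draw an exponential holding time with rate −q_{ii}, then jump to state j with probability q_{ij}/(−q_{ii}). A minimal sketch (the rate matrix and the `simulate` helper are invented for illustration):

```python
import random

# Invented three-state rate matrix (rows sum to zero).
Q = [
    [-3.0,  2.0,  1.0],
    [ 1.0, -1.5,  0.5],
    [ 0.5,  0.5, -1.0],
]

def simulate(Q, state, t_end, rng=random.Random(0)):
    """Return the list of (jump time, new state) pairs up to time t_end."""
    t, path = 0.0, [(0.0, state)]
    while True:
        rate_out = -Q[state][state]            # total rate of leaving `state`
        t += rng.expovariate(rate_out)         # exponential holding time
        if t >= t_end:
            return path
        # jump to some j != state with probability q_ij / (-q_ii)
        targets = [(j, q) for j, q in enumerate(Q[state]) if j != state]
        r = rng.random() * rate_out
        acc = 0.0
        for j, q in targets:
            acc += q
            if r < acc:
                break
        state = j
        path.append((t, state))

path = simulate(Q, state=0, t_end=10.0)
```

Each entry of `path` records a transition to a different state, and the gaps between successive times are the exponentially distributed holding times.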
A time-dependent (time-heterogeneous) Markov process is a Markov process as above, but with the rates functions of time, denoted q_{ij}(t).
Embedded Markov chain
One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov process, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by s_{ij}, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by

s_{ij} = q_{ij} / ∑_{k ≠ i} q_{ik}  for i ≠ j,  and  s_{ii} = 0.
From this, S may be written as

S = I − D_Q^{−1} Q,

where I is the identity matrix and D_Q = diag(Q) is the diagonal matrix of Q.
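The matrix identity above can be sketched directly in pure Python (the three-state rate matrix is an invented example, and `embedded_chain` is a hypothetical helper name):

```python
def embedded_chain(Q):
    """Jump-chain matrix S = I - D_Q^{-1} Q: zero diagonal, stochastic rows."""
    n = len(Q)
    return [[(1.0 if i == j else 0.0) - Q[i][j] / Q[i][i] for j in range(n)]
            for i in range(n)]

# Invented three-state rate matrix (rows sum to zero).
Q = [[-3.0,  2.0,  1.0],
     [ 1.0, -1.5,  0.5],
     [ 0.5,  0.5, -1.0]]
S = embedded_chain(Q)
# each row of S sums to one and its diagonal is zero
```

Off the diagonal, the entries come out as q_{ij}/(−q_{ii}), which is exactly the s_{ij} formula above, since −q_{ii} = ∑_{k ≠ i} q_{ik}.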
To find the stationary probability distribution vector, we must next find φ such that

φ (S − I) = 0,

with φ being a row vector, such that all elements in φ are greater than 0 and ‖φ‖₁ = 1 (its elements sum to one), and the 0 on the right side also being a row vector of 0's. From this, π may be found as

π = −φ D_Q^{−1}.
Note that S may be periodic, even though the underlying continuous-time process is not. Once π is found, it must be normalized so that its elements sum to one (a unit vector in the 1-norm).
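The whole procedure can be sketched in a few lines of pure Python. The rate matrix is invented, `stationary` is a hypothetical helper, and the power iteration used to solve φS = φ assumes the embedded chain is aperiodic, which holds for this example but, per the caveat above, not for every S:

```python
def stationary(Q, iters=500):
    """pi for an ergodic Q via its EMC: iterate phi S = phi, reweight, normalize."""
    n = len(Q)
    # embedded chain S = I - D_Q^{-1} Q
    S = [[(1.0 if i == j else 0.0) - Q[i][j] / Q[i][i] for j in range(n)]
         for i in range(n)]
    # power iteration toward the stationary row vector phi of S
    phi = [1.0 / n] * n
    for _ in range(iters):
        phi = [sum(phi[i] * S[i][j] for i in range(n)) for j in range(n)]
    # pi_i is proportional to phi_i times the mean holding time 1/(-q_ii)
    pi = [phi[i] / -Q[i][i] for i in range(n)]
    z = sum(pi)
    return [x / z for x in pi]

Q = [[-3.0,  2.0,  1.0],       # invented rate matrix (rows sum to zero)
     [ 1.0, -1.5,  0.5],
     [ 0.5,  0.5, -1.0]]
pi = stationary(Q)             # approximately [0.2, 0.4, 0.4]
```

For this Q the result can be checked by hand against π Q = 0: the solution with elements summing to one is (0.2, 0.4, 0.4).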
Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
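The one-step transition matrix of the δ-skeleton is the matrix exponential P = e^{δQ}. A rough pure-Python sketch using a truncated Taylor series (adequate only while δ‖Q‖ is small; the rate matrix and the `skeleton` helper are invented for illustration):

```python
def skeleton(Q, delta, terms=30):
    """One-step matrix of the delta-skeleton, P = exp(delta * Q), via a
    truncated Taylor series sum_k (delta*Q)^k / k!."""
    n = len(Q)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]               # current term (delta*Q)^k / k!
    for k in range(1, terms):
        term = [[delta / k * sum(term[i][m] * Q[m][j] for m in range(n))
                 for j in range(n)] for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

Q = [[-3.0,  2.0,  1.0],       # invented rate matrix (rows sum to zero)
     [ 1.0, -1.5,  0.5],
     [ 0.5,  0.5, -1.0]]
P = skeleton(Q, delta=0.1)     # a genuine stochastic matrix: rows sum to one
```

Because each row of Q sums to zero, every Taylor term beyond the identity has row sum zero, so the rows of P sum to one as a transition matrix's must.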
Applications
Markov processes are used to describe physical systems that evolve continuously in time. Sometimes, rather than a single system, they are applied to an ensemble of identical, independent systems, and the probabilities are used to find how many members of the ensemble are in a given state.
Chemical reactions
Imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov process, and since the molecules are essentially independent of each other, the expected number of molecules in state A or B at time t is n times the probability that a given molecule is in that state.
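For the simplest case of a reversible two-state reaction A ⇌ B with all molecules starting in A, this expected count has a closed form. The rate constants below are made up for illustration, and `expected_in_B` is a hypothetical helper:

```python
import math

def expected_in_B(n, k_ab, k_ba, t):
    """Expected number of molecules in state B at time t, all starting in A.

    For a two-state reaction A <-> B the master equation solves to
    p_B(t) = k_ab / (k_ab + k_ba) * (1 - exp(-(k_ab + k_ba) * t)),
    and independence of the molecules gives an expected count of n * p_B(t)."""
    k = k_ab + k_ba
    return n * (k_ab / k) * (1.0 - math.exp(-k * t))

# hypothetical forward/backward rates, in 1/s
counts = expected_in_B(n=300, k_ab=2.0, k_ba=1.0, t=0.5)
# as t grows this approaches the equilibrium value n * k_ab / (k_ab + k_ba)
```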
Queueing theory
As a simple example, imagine a firstcomefirstserved queue where the jobs have exponential size distribution with average size μ and a Poisson arrival rate λ. Then the number of jobs in the queue is a continuous time markov process on the nonnegative integers. The transition rate from one number to the next (meaning a new job arrives within the next instant) is λ, and the transition rate to the next lowest number (meaning a job completes in the next instant) is 1 / μ.
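This birth-death structure makes the queue easy to simulate with the holding-time recipe from earlier: from state n > 0 the total transition rate is λ + 1/μ, and the jump is an arrival with probability λ/(λ + 1/μ). A minimal sketch (`mm1_queue` is a hypothetical helper; no exact output is claimed for the stochastic estimate):

```python
import random

def mm1_queue(lam, mu, t_end, rng=random.Random(1)):
    """Simulate the queue-length process of the queue described above.

    Arrivals occur at rate lam; while the queue is non-empty, departures
    occur at rate 1/mu (mu being the mean job size).  Returns the
    time-average queue length over [0, t_end]."""
    t, n, area = 0.0, 0, 0.0
    while t < t_end:
        rate = lam + (1.0 / mu if n > 0 else 0.0)  # total transition rate
        dt = min(rng.expovariate(rate), t_end - t)
        area += n * dt                             # accumulate queue length
        t += dt
        if t >= t_end:
            break
        if rng.random() < lam / rate:
            n += 1                                 # arrival
        else:
            n -= 1                                 # departure
    return area / t_end

avg = mm1_queue(lam=0.5, mu=1.0, t_end=50_000.0)
# for utilization rho = lam * mu = 0.5, queueing theory predicts a
# long-run average queue length of rho / (1 - rho) = 1
```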
See also
 Markov chain
 Master equation (physics)
 Queueing theory
 Phase-type distribution
 Semi-Markov process
 Variable-order Markov model
References
 ^ Parzen, E. (1962). Stochastic Processes. Holden-Day. ISBN 0816266646.
 ^ R. Syski (1992). Passage times for Markov chains. IOS Press. ISBN 905199060X.
 William J. Stewart (1994). Introduction to the Numerical Solution of Markov Chains. Princeton University Press. pp. 17–23. ISBN 0691036993.

Wikimedia Foundation. 2010.