
Continuous-time Markov chain

A continuous-time Markov chain (CTMC) is a stochastic process \{X(t): t \geq 0\} taking values in a discrete state space S, typically finite or countable, that satisfies the Markov property: the conditional distribution of the process at any future time depends only on the current state and is independent of the past history. Formally, for states x_0, x_1, \dots, x_{n+1} \in S and times 0 \leq t_0 < t_1 < \dots < t_{n+1}, P(X(t_{n+1}) = x_{n+1} \mid X(t_i) = x_i \text{ for } i = 0, \dots, n) = P(X(t_{n+1}) = x_{n+1} \mid X(t_n) = x_n). This memoryless property extends the discrete-time Markov chain framework to continuous time, where state changes occur after random, exponentially distributed holding times rather than at fixed intervals. The behavior of a CTMC is fully specified by its infinitesimal generator matrix Q = (q_{ij})_{i,j \in S}, also known as the rate matrix, where q_{ij} for i \neq j represents the instantaneous transition rate from state i to state j, and the diagonal entries satisfy q_{ii} = -\sum_{j \neq i} q_{ij} so that the rows sum to zero. The holding time in state i is exponentially distributed with parameter -q_{ii}, reflecting the memoryless nature of the exponential distribution, and upon departure, the next state is chosen according to the embedded discrete-time Markov chain with transition probabilities proportional to the off-diagonal rates. The transition probability matrix P(t) = (p_{ij}(t)), where p_{ij}(t) = P(X(t+s) = j \mid X(s) = i), evolves according to the Kolmogorov forward and backward equations, \frac{d}{dt} P(t) = P(t) Q = Q P(t), with P(0) = I. CTMCs also satisfy the Chapman-Kolmogorov equations P(t+s) = P(t) P(s) for t, s > 0, which give the transition probabilities a semigroup structure and facilitate long-term analysis. For irreducible, positive recurrent chains, a unique stationary distribution \pi exists satisfying \pi Q = 0 and \sum_i \pi_i = 1, to which the distribution of the process converges as t \to \infty regardless of the initial state.
Special cases include the Poisson process, a CTMC on the non-negative integers with constant upward transition rate, modeling event arrivals. CTMCs are foundational in modeling systems with continuous-time dynamics and discrete states, such as birth-death processes, where transitions are restricted to increments or decrements of one unit, characterized by birth rates \lambda_i and death rates \mu_i. These processes underpin queueing theory, including the M/M/1 queue, where customer arrivals follow a Poisson process (births) and service times are exponentially distributed (deaths), allowing computation of steady-state queue lengths and waiting times via balance equations. Beyond queueing, CTMCs apply to reliability analysis of repairable systems, reaction kinetics in chemistry, and communication networks, where states represent system configurations and rates capture event frequencies.
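
The generator-based description above can be checked numerically. The sketch below (Python, assuming NumPy and SciPy are available; the three-state generator Q is hypothetical) computes P(t) = e^{tQ} and verifies that its rows are probability distributions that converge to the stationary distribution:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical three-state generator: off-diagonals are rates, rows sum to zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  2.0, -4.0]])

P = expm(50.0 * Q)                      # transition matrix P(t) = e^{tQ} at t = 50
assert np.allclose(P.sum(axis=1), 1.0)  # each row is a probability distribution

# For large t, every row approaches the stationary distribution pi.
pi = P[0]
assert np.allclose(P, np.tile(pi, (3, 1)))
assert np.allclose(pi @ Q, 0.0, atol=1e-10)   # pi solves pi Q = 0
```

Reading \pi off a row of P(t) for large t works here because the spectral gap of this Q makes the chain mix long before t = 50; in general one solves \pi Q = 0 directly.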

Introduction

Overview

A continuous-time Markov chain is a stochastic process \{X(t), t \geq 0\} taking values in a countable state space S, where the future evolution depends only on the current state, satisfying the Markov property: P(X(t+s) = j \mid X(u) \text{ for all } u \leq t) = P(X(t+s) = j \mid X(t) = i) for all t, s \geq 0 and i, j \in S. This property ensures that the process is memoryless, with the probability distribution of future states conditional on the present state alone. Continuous-time Markov chains extend the framework of discrete-time Markov chains, which operate over fixed time steps, to scenarios where transitions occur at arbitrary times on a continuous time axis. The foundational theory was developed by Andrey Kolmogorov in his 1931 paper, where he formalized analytical methods for such processes, building on earlier discrete-time concepts to handle continuous parameterizations. At their core, these chains model systems that remain in a state for a random duration governed by exponential holding times, after which they jump to another state according to specified probabilities, preserving the memoryless nature of exponential distributions. They find broad applications in modeling phenomena such as customer flows in queueing systems, component failure patterns in reliability engineering, and species interactions in ecology.

Prerequisites and notation

To understand continuous-time Markov chains (CTMCs), readers should have a foundational knowledge of probability theory, including conditional probability and the properties of the exponential distribution, along with basic concepts in stochastic processes and discrete-time Markov chains. These prerequisites enable comprehension of how CTMCs extend discrete-time models to continuous time while preserving the Markov property. The state space S of a CTMC is a countable set, which can be finite or countably infinite, such as the set of non-negative integers. The process is denoted by \{X(t) : t \geq 0\}, where X(t) represents the state at time t. Paths of the process are right-continuous with left limits, known as càdlàg paths. Conditional probabilities and expectations starting from an initial state i \in S are denoted P_i and E_i, respectively. CTMCs are generally assumed to be time-homogeneous, with transition rates independent of absolute time, though more general time-inhomogeneous cases exist. Specific assumptions, such as irreducibility (where every state is reachable from every other), may apply in certain analyses but are not universal. The paths of a CTMC almost surely feature only finitely many jumps over any finite time interval, reflecting the process's piecewise constant nature. Holding times between jumps follow exponential distributions, underpinning the memoryless transitions central to CTMCs.

Definitions

Transition probability approach

A continuous-time Markov chain (CTMC) can be defined through its transition probability matrix P(t), where the entry P_{ij}(t) = \mathbb{P}(X(t) = j \mid X(0) = i) represents the probability of being in state j at time t \geq 0, given the initial state i at time 0, for a stochastic process \{X(t): t \geq 0\} with countable state space. The Markov property in continuous time states that for any 0 \leq t_0 < t_1 < \cdots < t_n, the conditional distribution of X(t_n) given the history up to t_{n-1} depends only on X(t_{n-1}), formally:
\mathbb{P}(X(t_n) = j \mid X(t_k) = x_k, \, k = 0, \dots, n-1) = \mathbb{P}(X(t_n) = j \mid X(t_{n-1}) = x_{n-1}) = P_{x_{n-1} j}(t_n - t_{n-1}).
This implies that future states are independent of the past given the present, enabling the use of time-homogeneous transition probabilities.
The transition matrices satisfy the semigroup property: for all s, t \geq 0, P(s + t) = P(s) P(t), where matrix multiplication is used, reflecting the composition of probabilities over disjoint time intervals. The Chapman-Kolmogorov equations follow from the semigroup property and provide a way to compute transitions over longer intervals via summation: for 0 \leq s < u < t,
P_{ij}(t - s) = \sum_k P_{ik}(u - s) P_{kj}(t - u),
or in matrix form, P(t) = P(s) P(t - s) for t > s \geq 0. These equations ensure consistency in probability calculations across intermediate times.
The family \{P(t): t \geq 0\} is uniquely determined by the initial condition P(0) = I, the identity matrix, and right-continuity at t = 0, meaning \lim_{t \to 0^+} P(t) = I, which guarantees the process starts correctly and evolves continuously in probability. As an example, consider a simple pure death process on states \{0, 1, \dots, N\}, where from state i, the process moves to i-1 at rate i \mu for constant \mu > 0, modeling independent deaths of i individuals each dying at rate \mu. The transition probabilities are explicitly solvable as binomial probabilities: for j \leq i,
P_{ij}(t) = \binom{i}{j} (e^{-\mu t})^j (1 - e^{-\mu t})^{i - j},
reflecting the number of survivors following independent exponential waiting times, with P_{ij}(t) = 0 for j > i. This satisfies the semigroup property since the product of two such matrices yields the distribution after combined time.
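
The claimed semigroup property of the pure death process can be verified directly from the binomial formula; the following sketch (population size and death rate chosen arbitrarily) checks that P(s) P(t) = P(s + t):

```python
import numpy as np
from math import comb, exp

def P(t, N, mu):
    """Transition matrix of the pure death process on {0,...,N}:
    each of i individuals dies independently at rate mu, so the number
    of survivors at time t is Binomial(i, e^{-mu t})."""
    p = exp(-mu * t)
    M = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        for j in range(i + 1):
            M[i, j] = comb(i, j) * p**j * (1 - p)**(i - j)
    return M

N, mu = 4, 0.7
s, t = 0.3, 1.1
# Semigroup / Chapman-Kolmogorov property: P(s) P(t) = P(s + t).
assert np.allclose(P(s, N, mu) @ P(t, N, mu), P(s + t, N, mu))
assert np.allclose(P(0.0, N, mu), np.eye(N + 1))   # P(0) = I
```

The identity holds because binomial thinning composes: surviving time s with probability e^{-\mu s} and then time t with probability e^{-\mu t} is the same as surviving time s + t with probability e^{-\mu(s+t)}.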
The infinitesimal generator arises as the right-derivative of P(t) at t = 0, connecting to rate-based descriptions.

Infinitesimal generator approach

The infinitesimal generator provides a fundamental rate-based characterization of continuous-time Markov chains (CTMCs), focusing on the instantaneous transition rates between states rather than the global transition probabilities over time. For a CTMC \{X(t): t \geq 0\} with countable state space S, the infinitesimal generator is represented by the matrix Q = (q_{ij})_{i,j \in S}, where q_{ij} \geq 0 for i \neq j denotes the jump rate from state i to state j, and the diagonal entries satisfy q_{ii} = -\sum_{j \neq i} q_{ij} to ensure conservation of probability mass. This off-diagonal structure captures the positive rates of direct transitions, while the negative diagonal reflects the total outflow rate from each state. The generator Q relates directly to the transition probability matrix P(t) = (p_{ij}(t)), where p_{ij}(t) = \Pr(X(t) = j \mid X(0) = i), through the initial derivative condition \frac{d}{dt} P(t) \big|_{t=0} = Q. This leads to the Kolmogorov backward and forward differential equations, which govern the evolution of P(t): the backward equation is \frac{d}{dt} P(t) = Q P(t), emphasizing the influence of the initial state, while the forward equation is \frac{d}{dt} P(t) = P(t) Q, focusing on the distribution's propagation. For the state probability row vector \mu(t) at time t, starting from \mu(0), it evolves as \mu(t) = \mu(0) P(t), satisfying the forward equation \mu'(t) = \mu(t) Q, analogous to the Fokker-Planck equation for discrete-state processes. The backward equation, in turn, applies to expectations of functions of the state, such as \frac{d}{dt} \mathbb{E}_i [f(X(t))] \big|_{t=0} = \sum_{j \neq i} q_{ij} (f(j) - f(i)) for a bounded function f. Solutions to these Kolmogorov equations form a unique transition semigroup \{P(t): t \geq 0\}, satisfying P(0) = I, P(t+s) = P(t) P(s) for t,s \geq 0, and the generator property, under standard conditions like a row-finite Q ensuring finitely many jumps in finite time.
The conservativeness of Q requires that each row sums to zero, \sum_{j \in S} q_{ij} = 0 for all i, which guarantees that the semigroup preserves probability (\sum_j p_{ij}(t) = 1 for all t \geq 0). In finite-state cases, P(t) can be expressed via the matrix exponential P(t) = e^{tQ}, though this form is derived from the semigroup properties rather than assumed a priori.
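
Both Kolmogorov equations can be checked numerically against P(t) = e^{tQ}; in the sketch below (hypothetical two-state generator), a central finite difference of the matrix exponential is compared with P(t)Q and QP(t):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state generator.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])

P = lambda t: expm(t * Q)   # transition semigroup P(t) = e^{tQ}

t, h = 0.8, 1e-6
# Central-difference approximation of dP/dt at time t.
dP = (P(t + h) - P(t - h)) / (2 * h)

assert np.allclose(dP, P(t) @ Q, atol=1e-6)   # Kolmogorov forward equation
assert np.allclose(dP, Q @ P(t), atol=1e-6)   # Kolmogorov backward equation
assert np.allclose(P(t).sum(axis=1), 1.0)     # conservativeness: rows sum to 1
```

For a finite state space the forward and backward equations agree (P(t) and Q commute), which is exactly what the two assertions confirm.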

Jump process approach

A continuous-time Markov chain (CTMC) can be constructed as a jump process by decomposing its sample paths into alternating holding periods in states and instantaneous jumps between them. This approach emphasizes the piecewise constant nature of the process, where it remains in a state for a random holding time before transitioning to another state. The holding times are independent exponential random variables conditioned on the current state, with rate parameter \lambda_i = -q_{ii} for state i, where Q = (q_{ij}) is the infinitesimal generator matrix. Thus, the expected holding time in state i is 1/\lambda_i, and the distribution is P(H_i > t) = e^{-\lambda_i t} for t \geq 0. The exponential distribution ensures the memoryless property, which is crucial for the Markovian nature of the process: the expected remaining holding time in state i, given that at least time s has already elapsed, equals 1/\lambda_i regardless of s. This property implies that the future evolution depends only on the current state, not on the time spent there. Upon expiration of the holding time, the process jumps to a different state j \neq i according to the embedded jump chain, a discrete-time Markov chain \{Y_n\}_{n=0}^\infty that records the sequence of states visited immediately after each jump, starting with Y_0 = X(0). The transition probabilities of the jump chain are given by \pi_{ij} = q_{ij}/\lambda_i for j \neq i, with \pi_{ii} = 0. To construct the CTMC \{X(t): t \geq 0\} from this decomposition, begin at the initial state X(0) = i. Generate the first holding time H_1 \sim \text{Exp}(\lambda_i), set the first jump time T_1 = H_1, and let X(t) = i for 0 \leq t < T_1. Then, sample the next state Y_1 from the jump chain distribution starting at i, generate H_2 \sim \text{Exp}(\lambda_{Y_1}), set T_2 = T_1 + H_2, and let X(t) = Y_1 for T_1 \leq t < T_2. Repeat this process indefinitely, yielding jump times 0 = T_0 < T_1 < T_2 < \cdots and X(t) = Y_n for T_n \leq t < T_{n+1}.
The resulting paths are right-continuous with left limits, ensuring the process is càdlàg (continue à droite, limite à gauche). This jump process construction yields a CTMC with infinitesimal generator Q, as the holding rates \lambda_i and jump probabilities \pi_{ij} recover the off-diagonal entries q_{ij} = \lambda_i \pi_{ij} for i \neq j and diagonals q_{ii} = -\lambda_i. In countably infinite state spaces, the construction can in principle explode, with infinitely many jumps accumulating in finite time; under regularity conditions such as bounded exit rates, the process exhibits only finitely many jumps almost surely in any finite time interval. For finite state spaces, non-explosion always holds with probability 1. A special case is the Poisson process, viewed as a pure jump process that moves from state n to n+1 at constant rate \lambda, with holding times \text{Exp}(\lambda) independent of n. Here, the jump chain is deterministic (\pi_{n,n+1} = 1), and the process counts the cumulative number of jumps.
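
The holding-time/jump-chain construction translates directly into a simulation loop. The following sketch (hypothetical two-state generator, rates made up for illustration) samples one path and checks the long-run occupancy of state 0 against its stationary value \pi_0 = 1/3:

```python
import numpy as np

def simulate_ctmc(Q, x0, t_end, rng):
    """Sample one CTMC path by alternating exponential holding times
    and jumps from the embedded chain (construction sketch)."""
    t, x = 0.0, x0
    path = [(t, x)]
    while True:
        rate = -Q[x, x]
        if rate == 0.0:                      # absorbing state: stay forever
            break
        t += rng.exponential(1.0 / rate)     # holding time ~ Exp(rate)
        if t >= t_end:
            break
        probs = Q[x].clip(min=0.0) / rate    # jump-chain probabilities q_xj / rate
        x = rng.choice(len(Q), p=probs)
        path.append((t, x))
    return path

Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])                 # hypothetical two-state generator
rng = np.random.default_rng(0)
t_end = 5000.0
path = simulate_ctmc(Q, 0, t_end, rng)

times = np.array([t for t, _ in path] + [t_end])
states = np.array([x for _, x in path])
occupancy0 = np.diff(times)[states == 0].sum() / t_end
# Detailed balance gives pi = (1/3, 2/3) for this Q.
assert abs(occupancy0 - 1/3) < 0.05
```

The empirical time-average converges to \pi_0 by the ergodic theorem; the 0.05 tolerance is generous for a horizon of 5000 time units.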

Key Properties

Communication classes and recurrence

In a continuous-time Markov chain (CTMC), two states i and j communicate with each other if there exist times t > 0 and s > 0 such that the transition probability P_{ij}(t) > 0 and P_{ji}(s) > 0. This relation is symmetric, reflexive, and transitive, forming an equivalence relation that partitions the state space into disjoint communicating classes. Within a communicating class, every pair of states can reach each other with positive probability in finite time, while transitions between different classes are restricted by the class structure. A communicating class is closed if, starting from any state in the class, the process cannot reach any state outside the class with positive probability; otherwise, the class is open. Closed classes trap the process indefinitely once entered, whereas open classes allow escape to other classes. A CTMC is irreducible if its state space consists of a single closed communicating class, meaning all states communicate with each other. The concept of periodicity in CTMCs is adapted from discrete-time chains through analysis of the embedded jump chain, which tracks state changes at jump times; a class is aperiodic if the greatest common divisor of the lengths of possible return paths in the jump chain is 1. A state i in a CTMC is recurrent if, starting from i, the probability of returning to i at some time t > 0 is 1; otherwise, it is transient. Recurrence implies that the expected number of visits to i is infinite, quantified by the Green function G_{ii}(\alpha) = \int_0^\infty e^{-\alpha t} P_{ii}(t) \, dt, which diverges as \alpha \to 0^+ for recurrent states and remains finite for transient ones. Among recurrent states, i is positive recurrent if the mean return time \mathbb{E}_i[T_i^+] < \infty, where T_i^+ = \inf\{t > 0 : X(t) = i \mid X(0) = i\}, and null recurrent otherwise, in which case \mathbb{E}_i[T_i^+] = \infty. In an irreducible CTMC, all states are either recurrent or transient together, and if recurrent, all are positive or null recurrent together.
Criteria for recurrence and transience can be established using the infinitesimal generator Q. Solving the resolvent equation (\alpha I - Q) G(\alpha) = I for the Green function and analyzing the behavior of G_{ii}(\alpha) as \alpha \to 0^+ provides a direct test: a finite limit indicates transience, while divergence indicates recurrence.

Transient and absorption behavior

In continuous-time Markov chains (CTMCs), transient behavior describes the short-term evolution of the process before any long-run patterns emerge. For small times t > 0, the transition probability matrix P(t), whose (i,j)-entry gives the probability of being in state j at time t starting from state i, admits the approximation P(t) \approx I + t Q, where I is the identity matrix and Q is the infinitesimal generator matrix of the chain. This linear approximation follows from the Kolmogorov backward differential equations P'(t) = Q P(t) with initial condition P(0) = I, and it highlights the instantaneous transition rates encoded in the off-diagonal entries of Q. Such small-time asymptotics are essential for analyzing initial state changes and approximating behavior over brief intervals. A key aspect of transient dynamics involves hitting times, defined as T_j = \inf \{ t \geq 0 : X(t) = j \} for a target state j, starting from an initial state i \neq j. The distribution of T_j can be characterized by solving equations derived from the strong Markov property and the generator Q, typically via Laplace transforms or by conditioning on the first jump out of i. For instance, the Laplace transform f_i(s) = \mathbb{E}[e^{-s T_j} \mid X(0) = i] (with f_j(s) = 1) satisfies the system derived from first-step analysis: for i \neq j, f_i(s) = \frac{-q_{ii}}{s - q_{ii}} \sum_{k \neq i} \frac{q_{ik}}{-q_{ii}} f_k(s). Expected hitting times, in particular, solve linear systems Q m = - \mathbf{1} over the transient states, with boundary condition m_j = 0, yielding finite values when absorption or recurrence ensures hitting with probability 1. These quantities measure the typical duration to reach targets such as boundaries in diffusion approximations or failure states in reliability models. Absorption behavior arises when the state space includes absorbing states, characterized by q_{ii} = 0 (zero outgoing rates, so the process remains there indefinitely once entered). States from which eventual absorption occurs with probability 1 are typically transient states whose communicating classes contain no closed recurrent subsets other than the absorbing states.
The probability b_i of absorption in a specific absorbing state (or set of absorbing states B) starting from transient state i is found via first-step analysis: conditioning on the exponential holding time in i with rate \lambda_i = -q_{ii} > 0 and the subsequent jump to k \neq i with probabilities q_{ik}/\lambda_i, it satisfies the linear system b_i = \sum_{k \in B} \frac{q_{ik}}{\lambda_i} + \sum_{k \in \tilde{T}} \frac{q_{ik}}{\lambda_i} b_k for transient states i \in \tilde{T}, with b_k = 1 for k \in B and b_k = 0 for absorbing states outside B. This is solved directly for finite state spaces, providing ultimate absorption profiles essential in applications like queueing or reliability analysis. Similarly, the mean time m_i to absorption from transient state i obeys m_i = \frac{1}{\lambda_i} + \sum_{k \neq i} \frac{q_{ik}}{\lambda_i} m_k, with m_k = 0 for absorbing states; the solution scales with holding times and jump structure, remaining finite when absorption is certain. The full distribution of the time to absorption from transient states forms a phase-type (PH) distribution, a versatile class closed under convolutions and mixtures. Specifically, for a CTMC with transient states governed by the subgenerator T (the restriction of Q to the transient states) and initial distribution \alpha over the transient states, the absorption time \tau follows \mathrm{PH}(\alpha, T), with survival function P(\tau > t) = \alpha \exp(T t) \mathbf{1} (where \mathbf{1} is the column vector of ones) and density f(t) = \alpha \exp(T t) (-T) \mathbf{1}, \quad t > 0. Phase-type distributions arise naturally as absorption times in finite-state CTMCs with one or more absorbing states and are dense among all distributions on [0, \infty), making them ideal for fitting empirical waiting times in stochastic modeling. As an illustrative example, consider a simple two-state CTMC with states 0 (absorbing, q_{00} = 0) and 1 (transient, q_{11} = -\lambda < 0, q_{10} = \lambda > 0). Starting from state 1, absorption in 0 occurs with probability b_1 = 1, as the only jump leads there.
The mean time to absorption is m_1 = 1/\lambda, obtained directly from the first-step equation m_1 = 1/\lambda + (q_{10}/\lambda) m_0 with m_0 = 0. The absorption-time distribution is exponential with rate \lambda, a special case of phase-type \mathrm{PH}(1, [-\lambda]), with density f(t) = \lambda e^{-\lambda t}. This models basic decay processes, such as radioactive disintegration.
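
The phase-type machinery can be exercised on a small made-up example. Below, T is a hypothetical 3-phase subgenerator; the mean absorption time is computed from T m = -\mathbf{1} and cross-checked against the numerically integrated survival function \alpha e^{Tt}\mathbf{1}:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-phase subgenerator: the deficit in each row sum is the
# rate into the (implicit) absorbing state.
T = np.array([[-3.0,  2.0,  0.0],
              [ 0.0, -2.0,  1.5],
              [ 0.5,  0.0, -1.0]])
alpha = np.array([1.0, 0.0, 0.0])      # start in phase 0
ones = np.ones(3)

# Mean absorption time: solve T m = -1, i.e. m = -T^{-1} 1.
m = np.linalg.solve(T, -ones)

# Survival function of the PH(alpha, T) absorption time tau.
surv = lambda t: alpha @ expm(T * t) @ ones
assert 0.0 < surv(1.0) < surv(0.5) < 1.0       # decreasing, below 1

# Cross-check: E[tau] equals the integral of the survival function.
grid = np.linspace(0.0, 60.0, 6001)
vals = np.array([surv(t) for t in grid])
mean_numeric = ((vals[1:] + vals[:-1]) * np.diff(grid)).sum() / 2  # trapezoid
assert abs(mean_numeric - alpha @ m) < 1e-3
```

Here the integral identity \mathbb{E}[\tau] = \int_0^\infty P(\tau > t)\,dt ties the linear-system solution to the survival function; the integration horizon of 60 is ample because all eigenvalues of T have negative real part.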

Stationary distributions

A stationary distribution for a continuous-time Markov chain (CTMC) with infinitesimal generator matrix Q is a row vector \pi = (\pi_i)_{i \in S} satisfying \pi Q = 0 and \sum_{i \in S} \pi_i = 1, where \pi_i \geq 0 for all i. For a finite-state irreducible CTMC, such a stationary distribution exists and is unique. The stationary distribution satisfies the global balance equations \pi_i \lambda_i = \sum_{j \neq i} \pi_j q_{ji} for each state i, where \lambda_i = -q_{ii} is the total departure rate from i and q_{ji} are the transition rates. If the chain is reversible, these reduce to the detailed balance equations \pi_i q_{ij} = \pi_j q_{ji} for all i, j. A CTMC is reversible with respect to \pi if the generator Q is self-adjoint with respect to the \pi-weighted inner product \langle f, g \rangle_\pi = \sum_i \pi_i f(i) g(i), meaning \sum_i \pi_i f(i) (Q g)(i) = \sum_i \pi_i (Q f)(i) g(i) for suitable functions f, g. Kolmogorov's criterion provides a necessary and sufficient condition for reversibility: for every cycle i_0 \to i_1 \to \cdots \to i_n \to i_0, the product of forward rates equals the product of backward rates, \prod_{k=0}^{n-1} q_{i_k i_{k+1}} = \prod_{k=0}^{n-1} q_{i_{k+1} i_k}, with i_n = i_0. For an irreducible positive recurrent CTMC, the ergodic theorem states that the transition probabilities converge to the stationary distribution: p_{ij}(t) \to \pi_j as t \to \infty for all i, j. Moreover, for any bounded function f: S \to \mathbb{R}, the time average converges to the stationary expectation: \frac{1}{T} \int_0^T f(X(s)) \, ds \to \sum_{i \in S} \pi_i f(i) as T \to \infty, regardless of the initial distribution. To compute \pi, solve the linear system \pi Q = 0 subject to the normalization \sum_i \pi_i = 1; for birth-death processes, this yields the recursive relations \pi_{k} = \pi_0 \prod_{j=1}^k \frac{\lambda_{j-1}}{\mu_j} for k \geq 1, where \lambda_j and \mu_j are birth and death rates, with \pi_0 determined by normalization. In the infinite-state case, existence of a stationary distribution requires conditions beyond irreducibility, namely positive recurrence.
The Foster-Lyapunov drift criterion ensures positive recurrence (and thus existence of \pi) if there exist a non-negative function V: S \to [0, \infty), a finite (or petite) set C \subset S, and constants K < \infty and b > 0 such that (Q V)(i) \leq -b + K \mathbf{1}_C(i) for all i \in S. This drift condition bounds the expected change in V, forcing the process back toward C, and yields geometric ergodicity under stronger drift assumptions.
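
Detailed balance and Kolmogorov's cycle criterion can be verified on a small reversible chain. In the sketch below, the generator is constructed (hypothetically) from \pi = (0.5, 0.25, 0.25) and symmetric conductances c_{ij} = \pi_i q_{ij}, so detailed balance holds by design:

```python
import numpy as np

# Hypothetical reversible generator: q_ij = c_ij / pi_i with symmetric c_ij.
Q = np.array([[-4.0,  2.0,  2.0],
              [ 4.0, -6.0,  2.0],
              [ 4.0,  2.0, -6.0]])

# Solve pi Q = 0 with the normalization sum(pi) = 1 as a least-squares system.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Detailed balance pi_i q_ij = pi_j q_ji holds for every pair.
for i in range(3):
    for j in range(3):
        assert np.isclose(pi[i] * Q[i, j], pi[j] * Q[j, i])

# Kolmogorov's cycle criterion on the cycle 0 -> 1 -> 2 -> 0.
forward = Q[0, 1] * Q[1, 2] * Q[2, 0]
backward = Q[0, 2] * Q[2, 1] * Q[1, 0]
assert np.isclose(forward, backward)
```

Stacking the normalization row onto Q^T makes the otherwise rank-deficient system \pi Q = 0 uniquely solvable, a standard trick for finite chains.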

Embedded jump chain

The embedded jump chain of a continuous-time Markov chain (CTMC) \{X(t): t \geq 0\} with countable state space and infinitesimal generator matrix Q = (q_{ij}) is the discrete-time Markov chain \{Y_n: n \geq 0\} defined by Y_n = X(T_n), where T_0 = 0 and T_n = \sum_{k=1}^n S_k for n \geq 1. Here, the holding times S_k are exponential random variables with parameter \lambda_{Y_{k-1}}, where \lambda_i = -q_{ii} > 0 denotes the total departure rate from state i. The holding times are conditionally independent given the sequence \{Y_n\}. The transition matrix \Pi = (\pi_{ij}) of the embedded jump chain is stochastic and given by \pi_{ij} = \begin{cases} \frac{q_{ij}}{\lambda_i} & \text{if } i \neq j, \\ 0 & \text{if } i = j. \end{cases} This follows because, conditional on being in state i, the next state after the holding time is j \neq i with probability proportional to the rate q_{ij}, normalized by the total rate \lambda_i. If the CTMC is irreducible on its state space (meaning there exists a path of positive rates between any pair of states) and \lambda_i > 0 for all i, then the embedded jump chain is also irreducible. Moreover, if the CTMC is positive recurrent (i.e., it admits a unique stationary distribution \pi with \pi Q = 0 and \sum_i \pi_i = 1), then the embedded jump chain inherits positive recurrence, ensuring the existence of its own unique stationary distribution. The stationary distribution \mu = (\mu_i) of the embedded jump chain satisfies \mu \Pi = \mu and \sum_i \mu_i = 1. It is given explicitly by \mu_i = \frac{\pi_i \lambda_i}{\sum_j \pi_j \lambda_j} for each i, where \pi is the stationary distribution of the CTMC. This relation arises because the long-run proportion of jumps originating from i equals the product of the long-run proportion of time spent in i (given by \pi_i) and the departure rate from i (\lambda_i), normalized across all states. The global balance equations for the CTMC, \sum_{i \neq j} \pi_i q_{ij} = \pi_j \lambda_j for each j, confirm that \mu is stationary under \Pi. In the positive recurrent case, the mean number of jumps between consecutive visits to i is precisely 1/\mu_i.
This mean return time quantifies the average number of steps traversed in the jump chain before returning to i. Under positive recurrence, the ergodic theorem for discrete-time Markov chains implies that the asymptotic frequency of visits to state j in the embedded jump chain is \nu_j = \lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n \mathbf{1}_{\{Y_k = j\}} = \mu_j = \frac{\pi_j \lambda_j}{\sum_k \pi_k \lambda_k} almost surely, regardless of the initial state. Conditionally on starting from state i, the long-run frequency of visits to j converges to the same \mu_j due to irreducibility. Ratios such as \frac{\pi_j q_{ji}}{\pi_i \lambda_i} arise when comparing the frequencies of particular directed transitions, but the global form above governs the overall asymptotics. The embedded jump chain facilitates exact simulation of CTMC paths via the Gillespie algorithm: starting in state i, sample the holding time S \sim \text{Exp}(\lambda_i), then select the next state Y_{n+1} from the categorical distribution with probabilities \{\pi_{i j}\}_{j \neq i}, and repeat. This procedure generates unbiased trajectories by directly sampling jumps according to \Pi and holding times from the exponential distribution, avoiding discretization approximations even for stiff systems.
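
The relation \mu_i \propto \pi_i \lambda_i between the two stationary distributions can be checked on a small example (generator chosen arbitrarily):

```python
import numpy as np

# Hypothetical three-state generator.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 3.0, -4.0,  1.0],
              [ 1.0,  1.0, -2.0]])

lam = -np.diag(Q)                 # holding rates lambda_i
Pi = Q / lam[:, None]             # jump chain: pi_ij = q_ij / lambda_i ...
np.fill_diagonal(Pi, 0.0)         # ... with zero diagonal
assert np.allclose(Pi.sum(axis=1), 1.0)   # Pi is stochastic

# CTMC stationary distribution: pi Q = 0, normalized.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]

# Jump-chain stationary distribution: mu_i proportional to pi_i * lambda_i.
mu = pi * lam / (pi * lam).sum()
assert np.allclose(mu @ Pi, mu)   # mu is indeed stationary for the jump chain
```

The final assertion is exactly the global-balance computation from the text: summing \mu_i \pi_{ij} over i \neq j reproduces \mu_j because \sum_{i \neq j} \pi_i q_{ij} = \pi_j \lambda_j.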

Time-reversed chains

In a continuous-time Markov chain (CTMC) \{X(t): t \geq 0\} with finite or countable state space and infinitesimal generator matrix Q = (q_{ij}), the time-reversed process is defined as \{X'(t) = X(T - t): 0 \leq t \leq T\} for some fixed large T > 0. As T \to \infty, under the assumption that the original chain is stationary with unique stationary distribution \pi, the reversed process \{X'(t)\} converges in distribution to another homogeneous CTMC that also satisfies the Markov property. This reversed chain has the same stationary distribution \pi, establishing a duality between the forward and reversed processes in equilibrium. The infinitesimal generator Q' = (q'_{ij}) of the reversed chain is given by q'_{ij} = \frac{\pi_j}{\pi_i} q_{ji} \quad \text{for } i \neq j, \quad q'_{ii} = -\sum_{k \neq i} q'_{ik}, where \pi satisfies the global balance equations \pi Q = 0 and \sum_i \pi_i = 1. This form ensures that the reversed process preserves the stationary measure \pi, as the balance equations for Q' coincide with those for Q. In the non-reversible case, where Q' \neq Q, the reversed generator still defines a valid CTMC, which finds applications in analyzing approximations of queueing networks and deterministic models derived from scaling limits. A CTMC is reversible if the reversed process has the same distribution as the original, i.e., Q' = Q. This holds if and only if the detailed balance equations \pi_i q_{ij} = \pi_j q_{ji} are satisfied for all states i, j. Equivalently, reversibility is characterized by Kolmogorov's cycle condition: for every cycle i_0 \to i_1 \to \cdots \to i_n \to i_0 in the state space, \prod_{k=0}^{n} q_{i_k i_{k+1}} = \prod_{k=0}^{n} q_{i_{k+1} i_k}, where i_{n+1} = i_0, with the additional requirement that q_{ij} = 0 implies q_{ji} = 0. Many CTMCs exhibit reversibility, including birth-death processes (where transitions occur only between adjacent states) and tree-structured networks without cycles, in which no directed loop can violate the cycle condition.
Reversibility simplifies analysis in stochastic networks, such as queueing systems, by allowing the stationary distribution to be deduced from reversed processes via product-form solutions. For instance, in an M/M/1 queue, the reversed process equates arrivals and departures, both occurring at rate \lambda, enabling duality arguments to verify equilibrium behavior without solving the full balance equations. In tandem queueing networks, reversibility implies that the departure process from the first queue matches the arrival process to the second, facilitating throughput calculations and performance bounds.
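
The reversed-generator formula q'_{ij} = \pi_j q_{ji} / \pi_i can be exercised on a deliberately non-reversible chain (a one-way cycle, rates made up); the sketch verifies that Q' is a valid generator with the same stationary distribution, yet differs from Q:

```python
import numpy as np

# Hypothetical non-reversible generator: a one-way cycle 0 -> 1 -> 2 -> 0.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 0.0, -1.0,  1.0],
              [ 3.0,  0.0, -3.0]])

A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi = np.linalg.lstsq(A, b, rcond=None)[0]    # stationary distribution

# Reversed generator q'_ij = (pi_j / pi_i) q_ji; diagonals come out unchanged.
Qrev = (Q.T * pi[None, :]) / pi[:, None]

assert np.allclose(Qrev.sum(axis=1), 0.0, atol=1e-10)   # valid generator
assert np.allclose(pi @ Qrev, 0.0, atol=1e-10)          # same stationary pi
assert not np.allclose(Qrev, Q)                         # chain is not reversible
```

The row sums of Q' vanish precisely because \pi satisfies global balance for Q, which is the duality the text describes; the cycle bias (forward rates positive, backward rates zero) is what makes Q' \neq Q here.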

Uniformization technique

The uniformization technique, originally introduced by Jensen in 1953, provides a method to analyze and simulate continuous-time Markov chains (CTMCs) by embedding them into a dominating Poisson process. This approach transforms the CTMC into an equivalent discrete-time Markov chain (DTMC) with uniform jump times, facilitating numerical computations and exact simulations. The infinitesimal generator Q of the CTMC, where the diagonal entries satisfy q_{ii} = -\lambda_i and \lambda_i is the exit rate from state i, is used to define the embedding. To apply uniformization, select a rate \gamma \geq \max_i \lambda_i. The CTMC is then subordinated to a Poisson process with rate \gamma, where each event in the Poisson process triggers a potential jump. With probability 1 + q_{ii}/\gamma = 1 - \lambda_i/\gamma, the process remains in the current state (a "fictitious" or dummy self-jump), while actual transitions to other states occur according to the off-diagonal rates scaled by 1/\gamma. This yields a discrete skeleton: a DTMC with transition matrix R = I + Q / \gamma, where the self-loops in R account for the dummy jumps and ensure the chain is well-defined. The transition probability matrix P(t) of the original CTMC can be expressed exactly as a Poisson-compounded sum over the powers of R: P(t) = \sum_{n=0}^\infty e^{-\gamma t} \frac{(\gamma t)^n}{n!} R^n. This representation arises because the number of Poisson events up to time t follows a Poisson distribution with mean \gamma t, and conditional on n events, the state evolves according to n steps of the DTMC R. The formula enables computation via iterative matrix multiplications, making it suitable for numerical solution of the Kolmogorov forward or backward equations. For practical computation, the infinite sum is truncated after N terms, yielding an approximation P^{(N)}(t) with error bounded by the tail: \| P(t) - P^{(N)}(t) \| \leq \sum_{n=N+1}^\infty e^{-\gamma t} \frac{(\gamma t)^n}{n!} = O\left( e^{-\gamma t} \frac{(\gamma t)^{N+1}}{(N+1)!} \right).
The choice of N can be guided by ensuring the tail is below a desired tolerance, and typically scales with \gamma t. The truncation error decreases rapidly for moderate t and appropriate \gamma, providing an efficient alternative to direct integration of the differential equations governing P(t). Applications of uniformization include numerical solution of the Kolmogorov equations through the series expansion and steady-state analysis via the DTMC framework: R has the same stationary distribution as the CTMC, and quantities over transient states can be obtained from fundamental matrices of the form (I - R)^{-1} restricted to those states, where the inverse exists. Additionally, it supports exact simulation without discretization error: generate Poisson event times at rate \gamma, then apply transitions from R at each event, producing paths exactly distributed as the original CTMC. Compared to direct simulation methods that use state-dependent interarrival times, uniformization offers constant-rate event generation, enabling exact finite-horizon simulations with controlled computational effort.
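
A minimal implementation of the truncated uniformization series, compared against the matrix exponential (hypothetical generator; \gamma taken as the maximum exit rate):

```python
import numpy as np
from math import exp, factorial
from scipy.linalg import expm

# Hypothetical three-state generator.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  2.0, -4.0]])

gamma = max(-np.diag(Q))          # uniformization rate >= every exit rate
R = np.eye(3) + Q / gamma         # DTMC with fictitious self-jumps
assert np.allclose(R.sum(axis=1), 1.0) and (R >= 0).all()

def uniformized_P(t, N):
    """Truncated Poisson-weighted series P(t) ~ sum_n e^{-gt}(gt)^n/n! R^n."""
    P, Rn = np.zeros((3, 3)), np.eye(3)
    for n in range(N + 1):
        P += exp(-gamma * t) * (gamma * t) ** n / factorial(n) * Rn
        Rn = Rn @ R               # iterate powers of R, no matrix exponential
    return P

t = 1.5
assert np.allclose(uniformized_P(t, 60), expm(t * Q), atol=1e-12)
```

With \gamma t = 6, truncating at N = 60 leaves a Poisson tail far below machine precision, which is why the comparison with `expm` succeeds at tolerance 1e-12.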

Examples and Applications

Simple exponential examples

A continuous-time Markov chain with a single state is the simplest possible example, where the process remains constant over time, with no transitions occurring. In this trivial case, the state space consists of one element, say \{0\}, and the infinitesimal generator matrix is the 1 \times 1 zero matrix Q = (0), reflecting zero transition rates. The transition probability matrix is then P(t) = (1) for all t \geq 0, meaning the probability of staying in the state is always 1. A more illustrative example is the two-state continuous-time Markov chain with states \{0, 1\}, where transitions from 0 to 1 occur at rate \alpha > 0 and from 1 to 0 at rate \beta > 0. The infinitesimal generator is Q = \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix}, capturing the exponential holding times in each state. The transition probabilities P(t) = (p_{ij}(t)) can be obtained explicitly via the matrix exponential P(t) = e^{Qt}, yielding, for instance, p_{00}(t) = \frac{\beta + \alpha e^{-(\alpha + \beta)t}}{\alpha + \beta}. This formula arises from solving the Kolmogorov equations and demonstrates how the process relaxes toward equilibrium over time. The Poisson process provides another fundamental example, serving as a counting process N(t) that starts at 0 and increments by 1 at exponentially distributed interarrival times with rate \lambda > 0. Modeled as a continuous-time Markov chain on the countable state space \{0, 1, 2, \dots\}, the generator has entries q_{i,i+1} = \lambda for all i \geq 0 and q_{ii} = -\lambda on the diagonal, with all other off-diagonal entries zero. This structure reflects the pure birth nature of the process, where jumps only increase the state by 1.
An exponential race illustrates competing transitions in a multi-state setting, where the chain starts in an initial state and the time to the first transition is the minimum of independent random variables \operatorname{Exp}(\lambda_1), \dots, \operatorname{Exp}(\lambda_k) for k possible target states. Due to the memoryless property, the probability that the transition goes to the j-th target state is \lambda_j / \sum_{i=1}^k \lambda_i, independent of the value of the minimum, which is itself exponential with rate \sum \lambda_i. This setup underlies the jump dynamics of general continuous-time Markov chains. For the two-state chain, the stationary distribution \pi = (\pi_0, \pi_1) satisfies \pi Q = 0 and \pi_0 + \pi_1 = 1, giving \pi = \left( \frac{\beta}{\alpha + \beta}, \frac{\alpha}{\alpha + \beta} \right). This proportion reflects the long-run fraction of time spent in each state, equivalent to the ratio of mean holding times: the mean time in state 0 is 1/\alpha and in state 1 is 1/\beta, with mean cycle time 1/\alpha + 1/\beta. Transient behavior is captured by the time-dependent probabilities; starting from state 1, the probability of being in state 0 at time t is p_{10}(t) = \frac{\beta (1 - e^{-(\alpha + \beta)t})}{\alpha + \beta}, showing initial departure from the starting state followed by approach to stationarity.
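The race probabilities can be verified by simulation; this is an illustrative Monte Carlo sketch (the rates and sample size are arbitrary choices), using Python's random.expovariate for exponential draws:

```python
import random

random.seed(42)  # fixed seed for reproducibility
rates = [1.0, 2.0, 3.0]                    # illustrative competing Exp rates
theory = [r / sum(rates) for r in rates]   # winner j has probability lambda_j / sum_i lambda_i
n_trials = 100_000

wins = [0] * len(rates)
for _ in range(n_trials):
    draws = [random.expovariate(r) for r in rates]   # one sample per competitor
    wins[draws.index(min(draws))] += 1               # smallest draw wins the race

empirical = [w / n_trials for w in wins]
```

With 100,000 trials the empirical winning frequencies land well within 0.01 of the theoretical values (1/6, 1/3, 1/2).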

Birth-death processes

A birth-death process is a continuous-time Markov chain on the state space \{0, 1, 2, \dots\}, where the only possible transitions from state n \geq 1 are to n+1 (a "birth") at rate \lambda_n > 0 or to n-1 (a "death") at rate \mu_n > 0, while from state 0 the only possible transition (if any) is to 1 at rate \lambda_0 \geq 0 and \mu_0 = 0. These processes model phenomena such as population dynamics, where states represent population sizes, and births/deaths change the size by one unit. The infinitesimal generator matrix Q has entries q_{n,n+1} = \lambda_n, q_{n,n-1} = \mu_n for n \geq 1, q_{0,1} = \lambda_0, q_{nn} = -(\lambda_n + \mu_n), and all other off-diagonal entries zero. The Kolmogorov forward equations describe the time evolution of the transition probabilities p_{mn}(t) = \Pr(X(t) = n \mid X(0) = m). For fixed initial state m, letting p_n(t) = p_{mn}(t), these are the recursive system of ordinary differential equations: \frac{d}{dt} p_0(t) = -\lambda_0 p_0(t) + \mu_1 p_1(t), \quad p_0(0) = \mathbb{I}_{\{m=0\}}, \frac{d}{dt} p_n(t) = \lambda_{n-1} p_{n-1}(t) - (\lambda_n + \mu_n) p_n(t) + \mu_{n+1} p_{n+1}(t), \quad n \geq 1, \quad p_n(0) = \mathbb{I}_{\{m=n\}}. This infinite system can be solved recursively or via transforms in special cases. Under the global balance equations \pi Q = 0 with \sum \pi_n = 1, the stationary distribution (if it exists) satisfies the detailed balance relations \pi_n \lambda_n = \pi_{n+1} \mu_{n+1} for n \geq 0, yielding \pi_n = \pi_0 \prod_{k=1}^n \frac{\lambda_{k-1}}{\mu_k}, \quad n \geq 1, where \pi_0 = \left( 1 + \sum_{n=1}^\infty \prod_{k=1}^n \frac{\lambda_{k-1}}{\mu_k} \right)^{-1} provided the sum converges. The process is positive recurrent (and thus has a stationary distribution) if and only if \sum_{n=1}^\infty \prod_{k=1}^n \frac{\lambda_{k-1}}{\mu_k} < \infty; it is recurrent if this sum diverges but \sum_{n=1}^\infty \prod_{k=1}^n \frac{\mu_k}{\lambda_k} = \infty, and transient otherwise.
These convergence conditions, established by Karlin and McGregor, determine long-term behavior such as ergodicity in population models. The probability generating function P(z, t) = \sum_{n=0}^\infty p_n(t) z^n provides a compact way to analyze the distribution in solvable cases, such as the linear rates \lambda_n = n\lambda, \mu_n = n\mu of the simple (linear) birth-death process. In this case, P(z, t) satisfies the first-order partial differential equation \frac{\partial P}{\partial t} = (\lambda z - \mu)(z - 1) \frac{\partial P}{\partial z}, with P(z, 0) = z^m. The explicit solution is P(z, t) = \left[ \frac{\mu (1-z) - (\mu - \lambda z) e^{-(\lambda - \mu) t}}{\lambda (1-z) - (\mu - \lambda z) e^{-(\lambda - \mu) t}} \right]^m if \lambda \neq \mu, and a limiting form if \lambda = \mu. If \lambda_0 = 0 (making state 0 absorbing, as in extinction models), the probability of eventual absorption at 0 (or "ruin") starting from initial state i > 0, denoted \eta_i, satisfies the recurrence \eta_0 = 1, \mu_i \eta_{i-1} + \lambda_i \eta_{i+1} = (\lambda_i + \mu_i) \eta_i, \quad i \geq 1, with \lim_{i \to \infty} \eta_i = 0 if the process is transient to infinity. The minimal non-negative solution is \eta_i = \frac{\sum_{j=i}^\infty \prod_{k=1}^j \frac{\mu_k}{\lambda_k} }{\sum_{j=0}^\infty \prod_{k=1}^j \frac{\mu_k}{\lambda_k} } if the denominator is finite (transient case), and \eta_i = 1 otherwise (recurrent case). In the constant-rate case with \lambda_0 = 0, this simplifies to \eta_i = 1 if \lambda \leq \mu, and \eta_i = (\mu / \lambda)^i if \lambda > \mu. A prominent special case is the M/M/1 queue, where birth rates \lambda_n = \lambda and death rates \mu_n = \mu are constant for n \geq 1, modeling customer arrivals and service completions; its stationary distribution is geometric when the traffic intensity \rho = \lambda / \mu < 1, though full queueing applications are treated separately.
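Both the product-form stationary distribution and the minimal-solution ruin probabilities can be evaluated numerically in the constant-rate case; a sketch with illustrative rates (the truncation levels are arbitrary but chosen so the neglected geometric tails are negligible):

```python
# Stationary distribution: constant rates with lam < mu (positive recurrent case).
lam, mu = 2.0, 5.0
rho = lam / mu
N = 200  # truncation level; the neglected tail is O(rho^N)

# products[n] = prod_{k=1}^{n} lambda_{k-1}/mu_k, which here is just rho^n.
products = [rho ** n for n in range(N)]
pi0 = 1.0 / sum(products)
pi = [pi0 * p for p in products]
geometric = [(1 - rho) * rho ** n for n in range(N)]  # expected geometric form

# Ruin probabilities: constant rates with lam2 > mu2 (transient case, state 0 absorbing).
lam2, mu2 = 3.0, 2.0
r = mu2 / lam2
terms = [r ** j for j in range(300)]  # prod_{k=1}^{j} mu_k/lambda_k = r^j here
denom = sum(terms)

def eta(i):
    """Truncated evaluation of the minimal non-negative solution."""
    return sum(terms[i:]) / denom
```

The computed pi matches the geometric distribution (1 - rho) rho^n, and eta(i) matches the closed form (mu/lam)^i, confirming both reductions stated above.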

Queueing systems

Queueing systems model the number of customers or jobs in a system using continuous-time Markov chains (CTMCs), where states represent the queue length and transitions correspond to arrivals and departures. A foundational example is the M/M/1 queue, which features a single server, Poisson arrivals at rate \lambda, and exponentially distributed service times with rate \mu. The state space is the non-negative integers \{0, 1, 2, \dots\}, with birth rates \lambda_n = \lambda for all n \geq 0 and death rates \mu_n = \mu for n \geq 1 (and \mu_0 = 0). Under the stability condition \rho = \lambda / \mu < 1, the stationary distribution is geometric: \pi_n = (1 - \rho) \rho^n for n \geq 0. The mean queue length (number of customers in the system) is then L = \rho / (1 - \rho). The busy period, defined as the time from an empty system until it next becomes empty, has a distribution that can be derived by solving integral equations involving the service time distribution. An extension to multiple servers is the M/M/c queue, with c parallel servers, Poisson arrivals at rate \lambda, and exponential service at rate \mu per server. States again denote the total number in the system, with birth rates \lambda_n = \lambda for all n \geq 0 and death rates \mu_n = \min(n, c) \mu. The chain is stable for \lambda < c \mu. The stationary distribution \pi_n for n \leq c is \pi_n = \pi_0 \frac{(\lambda / \mu)^n}{n!}, while for n > c, it is \pi_n = \pi_0 \frac{(\lambda / \mu)^n}{c^{n-c} c!}, where \pi_0 is the normalization constant determined by \sum \pi_n = 1. Performance measures in these queues often leverage Little's law, which relates the long-run average number of customers in the system L to the arrival rate \lambda and the average time in the system W via L = \lambda W, applicable to any stable system.
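A sketch of the M/M/c stationary distribution by direct normalization over a truncated state space (the parameters and truncation level are illustrative; for c = 1 the formula collapses to the geometric M/M/1 distribution, which makes a convenient sanity check):

```python
import math

def mmc_pi(lam, mu, c, N=400):
    """M/M/c stationary distribution, normalized over a truncated state space.

    Uses pi_n proportional to a^n/n! for n <= c and a^n/(c^(n-c) c!) for n > c,
    where a = lam/mu is the offered load; requires lam < c*mu for stability."""
    a = lam / mu

    def w(n):
        if n <= c:
            return a ** n / math.factorial(n)
        return a ** n / (c ** (n - c) * math.factorial(c))

    Z = sum(w(n) for n in range(N))          # normalization constant
    return [w(n) / Z for n in range(N)]

lam, mu, c = 3.0, 2.0, 2                     # stable since lam < c * mu
pi = mmc_pi(lam, mu, c)
L = sum(n * p for n, p in enumerate(pi))     # mean number in system
W = L / lam                                   # mean time in system via Little's law
```

The truncation level only needs the geometric tail (ratio a/c < 1) to be negligible; 400 states is far more than enough for these loads.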
For networks of queues, Jackson networks consist of open networks of M/M/1-type queues connected in tandem or more general topologies, where customers route probabilistically after service; the joint queue-length process is a multidimensional CTMC with states given by the queue lengths at each node. Under Markovian routing and Poisson external arrivals, the stationary distribution takes a product form: the marginal for each queue is geometric, as in the isolated M/M/1 case, with effective arrival rates obtained from the traffic equations.
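The product form can be verified by checking global balance at an interior state of a two-node tandem network; all rates and the routing probability below are illustrative:

```python
# Two-node tandem Jackson network: external Poisson arrivals at rate gamma1 to
# node 1; after service at node 1 a customer moves to node 2 with probability p,
# otherwise leaves; customers leave the network after service at node 2.
gamma1, mu1, mu2, p = 1.0, 3.0, 4.0, 0.5

# Traffic equations give the effective arrival rates at each node.
lam1 = gamma1
lam2 = p * lam1
rho1, rho2 = lam1 / mu1, lam2 / mu2
assert rho1 < 1 and rho2 < 1  # stability at both nodes

def pi(n1, n2):
    """Product-form stationary probability: product of geometric marginals."""
    return (1 - rho1) * rho1 ** n1 * (1 - rho2) * rho2 ** n2

# Global balance at an interior state (n1, n2 >= 1): rate out equals rate in.
n1, n2 = 2, 3
rate_out = (gamma1 + mu1 + mu2) * pi(n1, n2)
rate_in = (gamma1 * pi(n1 - 1, n2)           # external arrival to node 1
           + mu1 * p * pi(n1 + 1, n2 - 1)    # service at node 1, routed to node 2
           + mu1 * (1 - p) * pi(n1 + 1, n2)  # service at node 1, departure
           + mu2 * pi(n1, n2 + 1))           # service at node 2, departure
```

The same balance check passes at any interior state, which is exactly what makes the product of geometric marginals stationary for the joint chain.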

References

  1. [1]
    [PDF] CONTINUOUS-TIME MARKOV CHAINS Definition 1. A continuous ...
    The transition probabilities of an infinite-state continuous-time Markov chain satisfy the backward equations, but not always the forward equations. Note ...
  2. [2]
    [PDF] Continuous Time Markov Chains - MIT OpenCourseWare
    As a result, if the M.c. Xn has the property X0 = π, then Xn = π for all n. ... X(t) is defined to be a continuous time Markov chain if for every j, i1 ...
  3. [3]
    [PDF] Notes for Math 450 Continuous-time Markov chains and Stochastic ...
    The basic data specifying a continuous-time Markov chain is contained in a matrix Q = (qij), i, j ∈ S, which we will sometimes refer to as the infinitesimal.
  4. [4]
    [PDF] Continuous-time Markov Chains - San Jose State University
    Assuming customers arrive according to a Poisson process with rate λ, this is a birth and death process with parameters: • arrival rates: λi = λ for each i ≥ 0, ...
  5. [5]
    [PDF] markov chains and queueing theory - UChicago Math
    Aug 12, 2011 · Birth-Death Processes. One example of a continuous-time Markov chain is the birth-death process. The birth-death process is a stochastic ...
  6. [6]
    Markov Chains - Cambridge University Press & Assessment
    J. R. Norris, University of Cambridge. Publisher: Cambridge University Press. Online publication date: June 2012. Print publication year: 1997. Online ISBN ...
  7. [7]
    Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung
    Über die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Published: December 1931. Volume 104, pages 415–458, (1931); Cite this article. Download PDF.
  8. [8]
    [PDF] A Mathematical Introduction to Markov Chains1 - Virginia Tech
    May 13, 2018 · ... Prerequisites, in addition to the standard freshman-sophomore ... continuous time Markov chain for which where it jumps to is also ...
  9. [9]
    [PDF] Markov Chains - Chris Wells
    Here is the definition of a continuous-time Markov chain in terms of its jump chain and holding times. Recall that a minimal process is one which is set ...
  10. [10]
    [PDF] UNIT 5 CONTINUOUS TIME MARKOV PROCESSES-I - eGyanKosh
    Before, we start continuous time Markov Process, let us discuss the prerequisites. 5.2.1 Continuous Time Markov Chain. Before we proceed to discuss ...
  11. [11]
    [PDF] CONTINUOUS TIME MARKOV CHAINS 1. Introduction Discrete-time ...
    Definition 1.1. A continuous-time Markov chain with finite or countable state space X is a family {Xt = X(t)}t≥0 of X ...
  12. [12]
    [PDF] MATH858D MARKOV CHAINS Contents 1. Discrete ... - UMD MATH
    A right-continuous process (Xt)t≥0 on S is a continuous-time Markov chain with initial distribution λ and generator matrix L if its jump chain is discrete-time.
  13. [13]
    [PDF] 1 Continuous Time Processes - 1.1 Continuous Time Markov Chains
    May 1, 2011 · PsPt = Ps+t = PtPs. A continuous time Markov chain is determined by the matrices Pt. The fact that we now have a continuous parameter for time ...
  14. [14]
    [PDF] CONTINUOUS-TIME MARKOV CHAINS - Columbia University
    Dec 13, 2012 · We now turn to continuous-time Markov chains (CTMC's), which are a natural sequel to the study of discrete-time Markov chains (DTMC's), ...
  15. [15]
    3. The Markov Property
    A continuous time stochastic process is said to have the Markov property if its past and future are independent given the current state.
  16. [16]
    [PDF] Lecture 14 : Continuous Time Markov Chains - ECE, IISc
    Transition probability matrix has the semi group property, i.e.. P(t + s) = P(t)P(s), s, t ≥ 0. 2.2 Chapman Kolmogorov Equation for CTMC. Theorem 2.6 ...
  17. [17]
    Continuous time Markov chains
    Suppose there exists a Markov chain ( X t ) t ≥ 0 with state space I = { 1 , 2 } such that, for all t > 0 , p 12 ( t ) = P ( X t = 2 ∣ X 0 = 1 ) = 1 2 − 1 2 e − ...
  18. [18]
    [PDF] 6.9 Continuous-time Markov chains
    The Chapman–Kolmogorov equations of part (c) are also known as the semigroup property. Proof. Part (a) is obvious. Page 2. DRAFT. 218.
  19. [19]
    [PDF] 6 Continuous-Time Birth and Death Chains - TTU Math
    Simple Death Process. ▷ The p.g.f had the form P(z,t)=(q + pz)N where p = e−µt and q = 1 − e−µt. ▷ This corresponds to a binomial distribution b(N,p).
  20. [20]
    [PDF] Birth-death processes
    Pure death process... λi. = 0. µi. = iµ i = 0,1,2,... πi. (0) ... Binomial distribution: the survival probability at time t is e. −µt inde ...
  21. [21]
    [PDF] 1 IEOR 6711: Continuous-Time Markov Chains
    Continuous-time Markov chains allow a chain to spend continuous time in a state, with holding times that are exponentially distributed, unlike discrete-time ...
  22. [22]
    15. Continuous-Time Chains - Random Services
    Continuous-time Markov chains are Markov processes in continuous time with discrete state spaces, studied with holding times and embedded discrete-time chains.
  23. [23]
    [PDF] Markov Chains and Stochastic Stability S.P. Meyn and R.L. Tweedie ...
    This book can serve as the text in most of these environments for a one-semester course on more general space applied Markov chain theory, pro- vided that some ...
  24. [24]
    [PDF] Continuous-time homogeneous Markov chains
    May 11, 2009 · Theorem 30 (Stationary distribution criteria) An irreducible continuous-time. Markov chain with infinitesimal generator Q is positive recurrent ...
  25. [25]
    [PDF] Integrals for Continuous-time Markov chains
    ... hitting time of state 0, starting in state i, for the Markov chain with transition rates Q∗ = (q∗ ij, i,j ∈ S) given by q∗ ij = qij/fi, for i ≥ 1, and q∗.
  26. [26]
    [PDF] Absorbing states in Markov chains. Mean time to absorption. Wright ...
    Dec 18, 2007 · Many functionals (including absorption probabilities) on Markov Chain are evaluated by a technique called first step analysis. This method ...
  27. [27]
    [PDF] Lecture notes on phase–type distributions for 02407 Stochastic ...
    A phase–type distribution is the distribution of the time to absorption in a finite Markov jump process (continuous time Markov chain) of dimension m+1 ...
  28. [28]
    [PDF] Markov Chains - CAPE
    Norris, J. R. (James R.) Markov chains / J. R. Norris. p. cm. – (Cambridge series on statistical and probabilistic mathematics ; no. 2).
  29. [29]
    [PDF] Lecture 4: Continuous-time Markov Chains
    From the jump chain and holding times we obtain a method to simulate exact realizations of X. Gillespie algorithm. Also known as the stochastic simulation ...
  30. [30]
    Reversibility and Stochastic Networks - Statistical Laboratory
    Reversibility and Stochastic Networks. F. P. Kelly. This is an experiment to compare different formats for the distribution of the text of this book, ...
  31. [31]
    [PDF] Continuous-time Markov Chains (CTMC)
    Jan 6, 2012 · Remark 6.1.2 The infinitesimal generator Q is often referred to as the rate matrix of the Markov chain and plays the same function as the ...
  32. [32]
    [PDF] STABILITY OF MARKOVIAN PROCESSES III: FOSTER- LYAPUNOV ...
    In this paper we develop criteria for these forms of stability for continuous-parameter Markovian processes on general state spaces, based on Foster-Lyapunov ...
  33. [33]
    [PDF] Book (PDF) - Continuous Time Markov Chains
    Oct 20, 2021 · Authors: Thomas J. Sargent and John Stachurski. These lectures provides a short introduction to continuous time Markov chains.
  34. [34]
    [PDF] CONTINUOUS-TIME MARKOV CHAINS - Columbia University
    Dec 4, 2013 · We now turn to continuous-time Markov chains (CTMC's), which are a natural sequel to the study of discrete-time Markov chains (DTMC's), ...
  35. [35]
    [PDF] Performance Evaluation Uniformization: Basics, extensions and ...
    Oct 16, 2017 · Uniformization, also referred to as randomization or Jensen's method, transfers continuous-time Markov chains (CTMCs) into discrete-time Markov ...
  36. [36]
    The Generator Matrix - Probability Course
    Here, we introduce the generator matrix. The generator matrix, usually shown by G, gives us an alternative way of analyzing continuous-time Markov chains.
  37. [37]
    [PDF] 5 Continuous-Time Markov Chains - TTU Math
    A Continuous-Time Markov Chain (CTMC) is a stochastic process where jumps can occur at any time t ≥ 0, with discrete random variables.
  38. [38]
    [PDF] Continuous Time Markov Chains (CTMCs)
    A continuous time Markov chain. (CTMC) has the following property: A CTMC is completely described by the initial probability distribution π(0) and the.
  39. [39]
    11.3.2 Stationary and Limiting Distributions - Probability Course
    Let {X(t),t≥0} be a continuous-time Markov chain with an irreducible positive recurrent jump chain. Suppose that the unique stationary distribution of the jump ...
  40. [40]
    [1301.1305] Birth-death processes - arXiv
    Jan 7, 2013 · In this review, we outline the basic mathematical theory for BDPs and demonstrate new tools for statistical inference using data from BDPs.
  41. [41]
    On the Generalized "Birth-and-Death" Process - Project Euclid
    In this paper, I shall give the complete solution of the equations governing the generalised birth-and-death process in which the birth and death rates λ(t) λ ...
  42. [42]
    The Classification of Birth and Death Processes - jstor
    THE CLASSIFICATION OF BIRTH AND DEATH PROCESSES. BY. SAMUEL KARLIN AND JAMES McGREGOR. The transition probability matrix P(t) = (Pii(t)),. Pij(t) = Pr {X(t + s) ...
  43. [43]
    [PDF] Continuous time Markov chains - Penn Engineering
    Oct 16, 2017 · ... continuous time Markov chain (CTMC) if. P [X(t + s) = j \\ X(s) = i, X(u) = x(u), u < s]. = P [X(t + s) = j \\ X(s) = i]. ▻ Memoryless property ...
  44. [44]
    [PDF] Lecture 34
    Nov 20, 2015 · Then the stationary distribution π is the shifted Geometric (p = 1 − λ/µ) distribution. This is the M/M/1 queue model, as follows. Customers ...