Stochastic matrix

A stochastic matrix is a square matrix with non-negative real entries in which the sum of the entries in each row equals 1. These matrices, also referred to as row-stochastic matrices, arise prominently in probability theory as transition matrices for finite-state Markov chains, where the entry in row i and column j denotes the probability of transitioning from state i to state j. An analogous structure, known as a column-stochastic matrix, has columns summing to 1 instead, though the row-stochastic convention is standard in most probabilistic contexts.

Stochastic matrices exhibit several important algebraic and spectral properties. The product of two row-stochastic matrices is again row-stochastic, so they form a monoid under matrix multiplication. The eigenvalue 1 always exists, all other eigenvalues satisfy |\lambda| \leq 1, and the powers P^k of a stochastic matrix represent k-step transition probabilities. For primitive stochastic matrices (those that are irreducible and aperiodic), the Perron–Frobenius theorem ensures a unique positive left eigenvector corresponding to the eigenvalue 1, normalized to sum to 1, which represents the long-term steady-state distribution of the Markov chain. Doubly stochastic matrices, whose rows and columns both sum to 1, form a special subclass with additional symmetries, such as the uniform vector being a steady state.

Beyond Markov chains, stochastic matrices find applications in diverse fields, including search engine algorithms such as Google's PageRank, which modifies a stochastic matrix to model web surfer behavior and compute page importance via the power method. They also model population dynamics in ecology, queueing systems in operations research, and random walks on graphs in network science.

Historical Development

Origins in Probability

The foundations of stochastic matrices trace back to early 19th-century advances in probability theory, particularly through Siméon Denis Poisson's explorations of random events and limiting behaviors in large populations. In his 1837 treatise Recherches sur la probabilité des jugements en matière criminelle et en matière civile, Poisson developed approximations for rare occurrences, introducing what became known as the Poisson distribution to model the probability of independent random events over time or space. This work established key concepts for analyzing random processes and contributed to the broader development of probability theory.

By the late 19th century, Henri Poincaré had extended probabilistic methods to dynamical systems, bridging probability with deterministic models in celestial mechanics. In his 1890 memoir on the three-body problem, Poincaré introduced coefficients \beta_{j,h,n} describing transition probabilities between molecular states or orbital configurations, ensuring these coefficients summed to unity for each fixed state, akin to row-stochastic entries. This approach, applied to assess the rarity of non-recurrent trajectories in planetary motion, marked an early adaptation of matrix-like notation for probabilistic transitions between discrete states, influenced by his recurrence theorem and his efforts to quantify stability in celestial systems.

The explicit precursor to stochastic matrices emerged in 1906 with Andrey Markov's investigation of dependent sequences. Analyzing letter patterns in Alexander Pushkin's Eugene Onegin, Markov demonstrated that the law of large numbers holds for non-independent variables linked through chains of conditional probabilities, implicitly defining transition structures that were later formalized as matrices. Although Markov did not use matrix notation at the time, favoring recursive probability calculations, this framework provided the first rigorous model of sequential dependencies and paved the way for matrix formulations in subsequent developments.

Evolution in Markov Chain Theory

The foundational work on Markov chains, which introduced sequences with dependent probabilities representable via matrices, was developed by Andrey Markov between 1906 and 1913. In his 1906 paper, Markov analyzed chains of dependent random variables to challenge the necessity of independence in the law of large numbers, demonstrating that limit theorems extend to such sequences. By 1913, he had formalized these ideas into a framework of transition probabilities between states, which were later represented as entries of a matrix to enable computation of multi-step probabilities through matrix multiplication, a development that laid the groundwork for stochastic matrix theory in discrete processes.

In the 1930s, Andrey Kolmogorov advanced the theory by embedding discrete Markov chains into continuous-time processes, providing a rigorous analytical foundation that influenced the matrix-based modeling of discrete systems; his 1931 work explicitly employed matrix notation for transition probabilities in developing the theory of continuous-time Markov processes. Kolmogorov's 1931 monograph on analytical methods in probability theory and subsequent works up to 1936 established the Kolmogorov forward and backward equations, bridging discrete transition matrices with infinitesimal generators for continuous evolutions. The resulting class of processes, named Markov processes in 1934 by Aleksandr Khinchin, allowed discrete stochastic matrices to be viewed as approximations of continuous dynamics, enhancing their applicability in probabilistic modeling.

Post-World War II, Joseph Doob's 1953 treatise on stochastic processes integrated martingale theory with Markov chain frameworks, using stochastic matrices to compute conditional expectations in non-independent sequences. Doob defined martingales as a class of processes whose expectations remain constant under conditioning, applying this to Markov chains via their transition matrices to analyze stopping times and convergence behaviors.
This synthesis elevated stochastic matrices from mere transition descriptors to tools for expectation-based prediction in broader stochastic settings. By the 1950s, stochastic matrices gained prominence in operations research for modeling queueing systems, where matrix analysis revealed long-term steady-state behaviors in service networks. Pioneering applications, such as James Jackson's 1957 analysis of open queueing networks, employed transition matrices to compute asymptotic distributions through powers of the matrix, addressing congestion in production and service systems. This era marked the shift of stochastic matrices toward practical optimization in dynamic systems.

Definitions

Row-Stochastic Matrices

A row-stochastic matrix is a square matrix with non-negative real entries such that the sum of the entries in each row equals 1. In the context of Markov chains, the entries p_{ij} of a row-stochastic matrix P represent the conditional probabilities of transitioning from state i to state j, with the row sums ensuring that the total probability of leaving state i and entering some state is exactly 1. To verify that a given matrix is row-stochastic, confirm that all entries are non-negative and compute the row sums. For instance, the matrix \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix} has first-row sum 0.7 + 0.3 = 1 and second-row sum 0.4 + 0.6 = 1, satisfying both conditions. This normalization distinguishes row-stochastic matrices from general non-negative matrices: the row-sum constraint gives them a direct probabilistic interpretation, with each row corresponding to a probability distribution over the possible next states. The row-sum property is stated equivalently in vector form as P \mathbf{1} = \mathbf{1}, where \mathbf{1} is the column vector of all ones with dimension equal to the number of states.
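The verification procedure described above can be sketched in NumPy; the helper name `is_row_stochastic` is illustrative, not a library function:

```python
import numpy as np

def is_row_stochastic(M, tol=1e-12):
    """Return True if M is square, non-negative, and each row sums to 1."""
    M = np.asarray(M, dtype=float)
    return (M.ndim == 2 and M.shape[0] == M.shape[1]
            and bool(np.all(M >= -tol))
            and bool(np.allclose(M.sum(axis=1), 1.0, atol=tol)))

P = [[0.7, 0.3], [0.4, 0.6]]   # the example matrix from the text
print(is_row_stochastic(P))    # True
```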

Column-Stochastic Matrices

A column-stochastic matrix, also known as a left stochastic matrix, is a square matrix P = (p_{ij}) with non-negative real entries such that the sum of the entries in each column equals 1, that is, \sum_{i} p_{ij} = 1 for every column index j. This normalization ensures that the columns can be interpreted as probability distributions. In probabilistic models, such as certain formulations of Markov chains, the entry p_{ij} represents the probability of transitioning from state j to state i, often in the context of backward or time-reversed processes where the focus is on incoming transitions to a state. This convention contrasts with forward transition matrices and is useful in dual representations of stochastic systems. Consider the 2×2 matrix P = \begin{pmatrix} 0.3 & 0.6 \\ 0.7 & 0.4 \end{pmatrix}. The first column sums to 0.3 + 0.7 = 1, and the second column sums to 0.6 + 0.4 = 1, verifying that P is column-stochastic. The transpose of a row-stochastic matrix is column-stochastic, since transposing swaps rows and columns, converting row sums of 1 into column sums of 1. This relationship is expressed algebraically by the equation \mathbf{1}^T P = \mathbf{1}^T, where \mathbf{1} is the all-ones vector of appropriate dimension, confirming the column-sum condition.
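A quick numerical check of the transpose relationship, using the example matrices from this section and the previous one:

```python
import numpy as np

P_row = np.array([[0.7, 0.3], [0.4, 0.6]])   # row-stochastic example
P_col = np.array([[0.3, 0.6], [0.7, 0.4]])   # column-stochastic example

assert np.allclose(P_col.sum(axis=0), 1.0)    # columns sum to 1
assert np.allclose(P_row.T.sum(axis=0), 1.0)  # transpose of row-stochastic is column-stochastic

# The vector identity 1^T P = 1^T for a column-stochastic matrix:
ones = np.ones(2)
assert np.allclose(ones @ P_col, ones)
print("column-stochastic checks passed")
```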

Properties

Algebraic Properties

A row-stochastic matrix P satisfies P \mathbf{1} = \mathbf{1}, where \mathbf{1} is the column vector of all ones, ensuring each row sums to 1. The set of all n \times n row-stochastic matrices is closed under matrix multiplication: if P and Q are row-stochastic, then PQ is row-stochastic because PQ \mathbf{1} = P(Q \mathbf{1}) = P \mathbf{1} = \mathbf{1}. The set is also convex, as any convex combination \sum \lambda_k P_k with \lambda_k \geq 0, \sum \lambda_k = 1, and each P_k row-stochastic satisfies \left( \sum \lambda_k P_k \right) \mathbf{1} = \sum \lambda_k (P_k \mathbf{1}) = \sum \lambda_k \mathbf{1} = \mathbf{1}. A principal submatrix of a row-stochastic matrix has row sums at most 1 but generally not equal to 1, making it substochastic rather than stochastic. Permutation similarities, however, preserve row-stochasticity: if S is a permutation matrix and P is row-stochastic, then S P S^T is row-stochastic because S P S^T \mathbf{1} = S P (S^T \mathbf{1}) = S P \mathbf{1} = S \mathbf{1} = \mathbf{1}, as permutation matrices are themselves row-stochastic. A doubly stochastic matrix is both row-stochastic and column-stochastic, satisfying P \mathbf{1} = \mathbf{1} and \mathbf{1}^T P = \mathbf{1}^T. The Birkhoff–von Neumann theorem states that every doubly stochastic matrix is a convex combination of permutation matrices.
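These closure properties are straightforward to confirm numerically; the sketch below uses a hypothetical helper that normalizes random non-negative rows to build row-stochastic matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_row_stochastic(n):
    # Hypothetical helper: normalize non-negative random rows to sum to 1.
    M = rng.random((n, n))
    return M / M.sum(axis=1, keepdims=True)

P, Q = random_row_stochastic(4), random_row_stochastic(4)

# Closure under multiplication: PQ 1 = P(Q 1) = P 1 = 1
assert np.allclose((P @ Q).sum(axis=1), 1.0)

# Convexity: any convex combination is again row-stochastic
lam = 0.3
assert np.allclose((lam * P + (1 - lam) * Q).sum(axis=1), 1.0)
print("closure checks passed")
```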

Spectral Properties

The Perron–Frobenius theorem provides fundamental insights into the spectral properties of stochastic matrices, which are non-negative matrices with row (or column) sums equal to 1. For an irreducible stochastic matrix, the theorem guarantees a unique positive real eigenvalue of largest modulus, known as the Perron root, which equals 1; this eigenvalue is simple, with a strictly positive corresponding eigenvector. All other eigenvalues have modulus strictly less than 1 if the matrix is primitive (irreducible and aperiodic). Regardless of irreducibility, every stochastic matrix has 1 as an eigenvalue, with the all-ones vector \mathbf{1} serving as the right eigenvector: for a row-stochastic matrix P, this is the equation P \mathbf{1} = \mathbf{1}. The corresponding left eigenvector \pi^T, normalized so that \pi^T \mathbf{1} = 1, satisfies \pi^T P = \pi^T and represents the invariant (stationary) measure. The spectral radius of any stochastic matrix is 1, ensuring that all eigenvalues lie within or on the unit disk in the complex plane. For periodic irreducible matrices, the theorem's strict dominance does not hold, and there may be additional eigenvalues on the unit circle, corresponding to the periodicity structure. For an irreducible matrix with period d > 1, there are d eigenvalues of magnitude 1, uniformly spaced around the unit circle. Aperiodicity ensures that 1 is the only eigenvalue on the unit circle.
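A short NumPy sketch illustrating these spectral facts for the 2×2 matrix used elsewhere in this article (the index-selection step simply picks the eigenvalue closest to 1):

```python
import numpy as np

P = np.array([[0.7, 0.3], [0.4, 0.6]])

eigvals = np.linalg.eigvals(P)
assert np.isclose(max(abs(eigvals)), 1.0)   # spectral radius is 1

# Right eigenvector for eigenvalue 1 is the all-ones vector:
assert np.allclose(P @ np.ones(2), np.ones(2))

# Left eigenvector for eigenvalue 1 (an eigenvector of P^T), normalized
# to sum to 1, gives the stationary distribution pi with pi P = pi.
w, V = np.linalg.eig(P.T)
idx = int(np.argmax(np.isclose(w, 1.0)))
pi = np.real(V[:, idx])
pi = pi / pi.sum()
print(pi)   # approximately [0.5714, 0.4286], i.e. (4/7, 3/7)
```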

Applications in Markov Chains

Transition Matrices

In discrete-time Markov chains with a finite state space, the one-step transition probabilities are represented by a row-stochastic matrix P, where the entry p_{ij} denotes the probability of moving from state i to state j, formally p_{ij} = \Pr(X_{n+1} = j \mid X_n = i) for all states i, j. This matrix serves as the transition kernel of the chain, capturing its dynamics through matrix multiplication: the distribution after one step is obtained by left-multiplying the current distribution vector by P. The n-step transition probabilities are given by the powers of P, with (P^n)_{ij} = \Pr(X_n = j \mid X_0 = i), allowing analysis of multi-step behavior via repeated application of the matrix. Key structural properties of the chain are reflected in P. Irreducibility means that from any state, every other state is reachable with positive probability in some finite number of steps; this holds exactly when P is an irreducible matrix, i.e., no permutation of the states puts P into a block-triangular form with disconnected components. Periodicity characterizes cyclic behavior: for a state i, the period is the greatest common divisor of all positive integers n such that p_{ii}^{(n)} > 0, where p_{ii}^{(n)} = (P^n)_{ii}; an irreducible chain is periodic with period d > 1 if returns to states occur only at multiples of d, detectable from the pattern of zero diagonal entries in the powers of P. Absorbing states occur when a state j satisfies p_{jj} = 1 and p_{jk} = 0 for all k \neq j, meaning the corresponding row of P is the unit vector with 1 in position j; once entered, the chain remains there indefinitely. For absorbing Markov chains, which have at least one absorbing state and possibly transient states, the transition matrix can be rearranged into canonical form by partitioning the states into m transient states followed by r absorbing states: P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}, where Q is the m \times m submatrix of transient-to-transient transitions, R is the m \times r submatrix of transient-to-absorbing transitions, 0 is the r \times m zero matrix, and I is the r \times r identity matrix.
This form facilitates analysis of absorption probabilities and expected times to absorption.
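As a sketch of these definitions (the 3-state chain and its probabilities below are hypothetical), the absorbing-row condition and n-step matrix powers look like this in NumPy:

```python
import numpy as np

# Hypothetical 3-state chain: states 0 and 1 are transient, state 2 is absorbing.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],   # absorbing: p_jj = 1 and all other entries in the row are 0
])

assert np.allclose(P.sum(axis=1), 1.0)           # row-stochastic
assert P[2, 2] == 1.0 and np.all(P[2, :2] == 0)  # absorbing-state condition

# n-step transition probabilities: (P^n)_{ij} = Pr(X_n = j | X_0 = i)
P10 = np.linalg.matrix_power(P, 10)
print(P10[0, 2])  # probability of having been absorbed by step 10, starting in state 0
```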

Stationary Distributions and Convergence

A stationary distribution of a Markov chain with row-stochastic transition matrix P is a probability row vector \pi satisfying \pi P = \pi, where \pi \geq 0 and \sum_i \pi_i = 1. This equation implies that if the chain's state distribution is \pi at some time, it remains \pi under further transitions governed by P. For an irreducible and aperiodic finite-state chain, the stationary distribution \pi exists and is unique. Irreducibility ensures that every state is reachable from every other, while aperiodicity prevents periodic cycling, guaranteeing that the chain mixes without oscillating. In such chains, the unique \pi captures the long-term proportion of time spent in each state, independent of the initial distribution. Convergence to the stationary distribution occurs for ergodic Markov chains, defined as finite-state, irreducible, aperiodic, and positive recurrent chains. In these cases, the matrix power P^n converges as n \to \infty to a matrix in which every row equals \pi. This limit can be expressed as \lim_{n \to \infty} P^n = \mathbf{1} \pi, where \mathbf{1} is the column vector of all ones (the outer-product form). Consequently, starting from any initial distribution \mu_0, the distribution after n steps, \mu_0 P^n, approaches \pi as n \to \infty. To compute the stationary distribution \pi for a row-stochastic matrix P, solve the linear system \pi (P - I) = 0 subject to the normalization \sum_i \pi_i = 1, or equivalently in column form, (I - P^T) \mu = 0 with \sum_i \mu_i = 1, where \mu = \pi^T. This system arises from the eigenvalue equation for the dominant eigenvalue 1 and is solvable by standard linear algebra methods for finite state spaces, such as Gaussian elimination, after replacing one equation with the normalization to handle the rank deficiency. The rate of convergence to \pi is quantified by the mixing time, which bounds how many steps are needed for the distribution to be close to \pi in total variation distance.
For reversible chains, the mixing time is asymptotically determined by the second-largest eigenvalue modulus |\lambda_2| of P, with the rate governed by 1 / (1 - |\lambda_2|), where |\lambda_2| < 1 due to aperiodicity and irreducibility. This eigenvalue gap provides a spectral measure of how quickly the chain forgets its initial state.
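A minimal computation of \pi and the eigenvalue gap for the 2×2 weather-style matrix, following the linear-system recipe above (one equation is replaced by the normalization):

```python
import numpy as np

P = np.array([[0.7, 0.3], [0.4, 0.6]])
n = P.shape[0]

# Solve pi (P - I) = 0 with sum(pi) = 1 by replacing the last equation
# of the transposed system with the normalization constraint.
A = (P - np.eye(n)).T
A[-1, :] = 1.0
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                      # approximately [4/7, 3/7]

# The second-largest eigenvalue modulus governs the convergence rate:
moduli = sorted(abs(np.linalg.eigvals(P)), reverse=True)
lam2 = moduli[1]               # here 0.3

# P^n converges to the rank-one matrix with every row equal to pi:
assert np.allclose(np.linalg.matrix_power(P, 50), np.outer(np.ones(n), pi))
```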

Examples

Simple Transition Examples

A simple yet illustrative example of a stochastic matrix arises in modeling daily weather patterns with two states: rainy (state 1) and sunny (state 2). The row-stochastic transition matrix P is given by P = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}, where the entry p_{ij} represents the probability of transitioning from state i to state j in one day. Thus, if it is rainy today, there is a 70% chance it remains rainy tomorrow and a 30% chance it becomes sunny; if sunny today, there is a 40% chance of rain tomorrow and a 60% chance it stays sunny. The one-step transitions are directly given by the rows of P. For two-step transitions, compute the matrix power P^2 = P \cdot P: P^2 = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix} \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix} = \begin{pmatrix} 0.7 \cdot 0.7 + 0.3 \cdot 0.4 & 0.7 \cdot 0.3 + 0.3 \cdot 0.6 \\ 0.4 \cdot 0.7 + 0.6 \cdot 0.4 & 0.4 \cdot 0.3 + 0.6 \cdot 0.6 \end{pmatrix} = \begin{pmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{pmatrix}. This shows, for instance, that starting from a rainy day, the probability of rain two days later is 0.61. Another fundamental setup is the gambler's ruin problem, modeled as a finite-state Markov chain with states representing the gambler's fortune from 0 to N, where 0 and N are absorbing barriers (ruin and the target fortune, respectively). Assuming fair odds (p = q = 0.5), the row-stochastic matrix P has 1s on the diagonal for the absorbing states 0 and N, and for transient states i = 1, \dots, N-1, p_{i,i-1} = 0.5 and p_{i,i+1} = 0.5, with all other entries zero. This tridiagonal structure (apart from the absorbing rows) captures the probability flows between transient states until absorption. The submatrix Q of transient-to-transient transitions is used to compute the probabilities of eventual absorption or the distribution after n steps before absorption.
To demonstrate numerical computation of multi-step transitions via matrix multiplication, consider the weather matrix P above and compute the three-step matrix P^3 = P^2 \cdot P: P^3 = \begin{pmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{pmatrix} \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix} = \begin{pmatrix} 0.61 \cdot 0.7 + 0.39 \cdot 0.4 & 0.61 \cdot 0.3 + 0.39 \cdot 0.6 \\ 0.52 \cdot 0.7 + 0.48 \cdot 0.4 & 0.52 \cdot 0.3 + 0.48 \cdot 0.6 \end{pmatrix} = \begin{pmatrix} 0.427 + 0.156 & 0.183 + 0.234 \\ 0.364 + 0.192 & 0.156 + 0.288 \end{pmatrix} = \begin{pmatrix} 0.583 & 0.417 \\ 0.556 & 0.444 \end{pmatrix}. These powers reveal how probabilities evolve over time, with the flows shifting gradually toward the stationary distribution in irreducible aperiodic cases. A specific example highlighting periodicity is a three-state cyclic chain with states 1, 2, 3 and P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}, where the chain deterministically cycles: from 1 to 2, 2 to 3, and 3 to 1. The one-step transitions follow this cycle, while P^2 = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} shifts by two states, and P^3 = I (the identity matrix) returns to the starting state with probability 1. This period of 3 means returns to any state occur only at multiples of 3 steps, illustrating how stochastic matrices can encode deterministic cycles within probabilistic frameworks.
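The periodicity claims can be checked directly: P^3 = I for the cycle matrix, and all of its eigenvalues lie on the unit circle, consistent with period d = 3:

```python
import numpy as np

# Deterministic 3-cycle from the text: 1 -> 2 -> 3 -> 1
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)

assert np.array_equal(np.linalg.matrix_power(P, 3), np.eye(3))  # P^3 = I

# All three eigenvalues have modulus 1 (the cube roots of unity),
# so there is no strictly dominant eigenvalue:
assert np.allclose(abs(np.linalg.eigvals(P)), 1.0)
print("periodicity checks passed")
```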

Cat and Mouse Problem

The cat and mouse problem serves as a classic illustration of an absorbing Markov chain modeled via a stochastic matrix. In this setup, the state space captures the positions of the mouse and the chasing cat on a discrete linear grid of five positions, with transient states representing non-coincident configurations and an absorbing state entered when the cat catches the mouse (the positions coincide). The mouse starts at the farthest position from the cat, and both move independently to an adjacent position chosen uniformly at random each step; at the ends of the grid, each moves to the only adjacent (inward) position with probability 1. The process terminates upon capture. The transition matrix P takes the canonical form for absorbing chains: P = \begin{pmatrix} Q & R \\ 0 & I \end{pmatrix}, where Q is the 4×4 submatrix governing transitions among the transient states, R captures transitions from transient to absorbing states (4×1), the zero block ensures no escape from the absorbing state, and I is the 1×1 identity for the single absorbing state. This 5×5 stochastic matrix encodes the probabilistic chase dynamics, with each row summing to 1. To compute absorption probabilities (the likelihood of eventual capture starting from each transient state), the fundamental matrix N = (I - Q)^{-1} is formed, yielding the matrix of expected visits to transient states before absorption. The absorption probabilities are then B = N R, where each entry b_{ij} gives the probability of absorption in the j-th absorbing state (here, the single capture state) from initial transient state i. For the standard five-position setup, the probability of eventual capture is 1 from every transient state, reflecting the inevitability of capture in a finite space. The mean time to absorption, the expected number of steps until the cat catches the mouse, is obtained as t = N \mathbf{1}, where \mathbf{1} is the column vector of ones; the i-th entry of t is the expected duration from state i.
In the five-state model, this time increases quadratically with initial separation, highlighting the scale of the chase. These computations via the stochastic matrix demonstrate the problem's utility in analyzing pursuit dynamics.
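These computations can be sketched for one standard formulation of the five-position chase; the transient-state labeling below is a common convention, assumed here rather than taken from this article:

```python
import numpy as np

# Transient (cat, mouse) configurations: 1:(1,3), 2:(1,5), 3:(2,4), 4:(3,5);
# the fifth state (capture) is absorbing.
Q = np.array([[0.00, 0.00, 0.50, 0.00],
              [0.00, 0.00, 1.00, 0.00],
              [0.25, 0.25, 0.00, 0.25],
              [0.00, 0.00, 0.50, 0.00]])
R = np.array([[0.50], [0.00], [0.25], [0.50]])

# Full 5x5 matrix in canonical form is row-stochastic:
P = np.block([[Q, R], [np.zeros((1, 4)), np.eye(1)]])
assert np.allclose(P.sum(axis=1), 1.0)

N = np.linalg.inv(np.eye(4) - Q)   # fundamental matrix: expected visit counts
B = N @ R                          # absorption probabilities
t = N @ np.ones(4)                 # expected steps until capture

assert np.allclose(B, 1.0)         # capture is certain from every transient state
print(t[1])                        # expected duration from the start (cat at 1, mouse at 5): 4.5
```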
