
Nonnegative matrix

A nonnegative matrix is a matrix whose entries are all real numbers greater than or equal to zero. This article primarily concerns square nonnegative matrices. These matrices form a fundamental class in linear algebra, generalizing positive matrices (where all entries are strictly positive) and playing a central role in the study of Markov chains, dynamical systems, and graph theory. Key subclasses include irreducible nonnegative matrices, which cannot be permuted into a block upper triangular form with more than one diagonal block, and primitive matrices, for which some power has all positive entries. The spectral properties of nonnegative matrices are governed by the Perron-Frobenius theorem, first established by Oskar Perron in 1907 for positive matrices and extended by Georg Frobenius in 1912 to irreducible nonnegative matrices. For an irreducible nonnegative matrix A, the theorem asserts that the spectral radius \rho(A) is a positive real eigenvalue of algebraic and geometric multiplicity one, with a corresponding positive eigenvector; all other eigenvalues \lambda satisfy |\lambda| \leq \rho(A). For primitive matrices, the strict inequality |\lambda| < \rho(A) holds, and the powers of A normalized by \rho(A)^m converge to a rank-one matrix of the form \mathbf{u} \mathbf{v}^T, where \mathbf{u} is the right Perron eigenvector and \mathbf{v} the left Perron eigenvector, normalized such that \mathbf{v}^T \mathbf{u} = 1. Nonnegative matrices arise naturally in modeling phenomena involving nonnegative quantities, such as population dynamics in ecology, where the dominant eigenvalue determines growth rates. They also underpin Markov chain theory through stochastic matrices (nonnegative matrices with row or column sums equal to one), whose stationary distributions are given by the Perron eigenvector. In computer science, the PageRank algorithm employs a primitive stochastic matrix to rank web pages, relying on the convergence guaranteed by the theorem. Further applications include ranking in tournaments and congestion control in communication networks.

Fundamentals

Definition

A nonnegative matrix is a matrix whose entries are nonnegative real numbers. Formally, an n \times m matrix A = (a_{ij}) is nonnegative if a_{ij} \geq 0 for all i = 1, \dots, n and j = 1, \dots, m. The notation A \geq 0 is commonly used to indicate that A is nonnegative, with the inequality understood entrywise; this extends to the partial ordering on matrices where A \geq B if and only if a_{ij} \geq b_{ij} for all i, j. When the matrix is square (n = m), it is often simply called a nonnegative square matrix. A related class is the positive matrix, defined as a matrix with all entries strictly positive (a_{ij} > 0 for all i, j). The study of nonnegative matrices originated in the context of the Perron-Frobenius theorem, whose first form was established by Oskar Perron in 1907 for positive matrices.

Basic Examples

A fundamental example of a nonnegative matrix is the identity matrix scaled by a nonnegative scalar c \geq 0. For n=2, the matrix cI_2 = \begin{pmatrix} c & 0 \\ 0 & c \end{pmatrix} has all entries nonnegative, satisfying the definition directly. Another common instance arises as the adjacency matrix of a directed graph where edge weights are nonnegative real numbers. Consider the 2×2 matrix A = \begin{pmatrix} 0 & 2 \\ 1 & 1 \end{pmatrix}, which encodes a graph with a directed edge of weight 2 from vertex 1 to vertex 2, an edge of weight 1 from vertex 2 to vertex 1, and a self-loop of weight 1 at vertex 2; all entries are nonnegative. For a positive matrix, where every entry exceeds zero, the all-ones matrix J_n provides a clear case. The 2×2 version is J_2 = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, with all entries positive. Nonnegative matrices need not be square; a non-square example is the biadjacency matrix of a bipartite graph, which records the presence (1) or absence (0) of edges between the partite sets. For a bipartite graph with partite sets of sizes 2 and 3, with edges connecting vertex 1 to the first two vertices of the second set and vertex 2 to the last two, the resulting 2×3 matrix is \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}, featuring only nonnegative entries.
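These examples translate directly into code. The following minimal sketch, assuming NumPy is available, constructs each matrix from this section and checks nonnegativity entrywise; the variable names are illustrative only.

```python
import numpy as np

# The example matrices above, with an entrywise nonnegativity check.
A = np.array([[0, 2],
              [1, 1]])      # weighted adjacency matrix of a digraph
J2 = np.ones((2, 2))        # all-ones (positive) matrix
B = np.array([[1, 1, 0],
              [0, 1, 1]])   # biadjacency matrix of a bipartite graph

for name, M in [("A", A), ("J2", J2), ("B", B)]:
    print(name, "is nonnegative:", bool((M >= 0).all()))
```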

Algebraic Properties

Sums and Products

A fundamental property of nonnegative matrices is their closure under entrywise addition and nonnegative scalar multiplication. If A and B are m \times n nonnegative matrices, then each entry of A + B is the sum of two nonnegative real numbers and hence nonnegative, making A + B nonnegative. Similarly, if c \geq 0 is a scalar and A is nonnegative, then cA has entries c a_{ij} \geq 0, preserving nonnegativity. These operations give the set of nonnegative matrices the structure of a convex cone. The set of nonnegative matrices is also closed under matrix multiplication. If A is an m \times p nonnegative matrix with entries a_{ij} and B is a p \times n nonnegative matrix with entries b_{jk}, then the product AB is an m \times n matrix whose (i,k)-th entry is given by (AB)_{ik} = \sum_{j=1}^p a_{ij} b_{jk} \geq 0, as each term a_{ij} b_{jk} \geq 0 and the sum of nonnegative terms is nonnegative. This property ensures that the product remains entrywise nonnegative, establishing the square nonnegative matrices of a given size as a semigroup under multiplication. For square nonnegative matrices, the trace provides another nonnegative scalar invariant. If A = (a_{ij}) is an n \times n nonnegative matrix, the trace is \operatorname{tr}(A) = \sum_{i=1}^n a_{ii} \geq 0, since it sums the nonnegative diagonal entries. This follows directly from the definition of the trace as a sum over the diagonal. Row and column sums offer useful summaries of the magnitude of nonnegative matrices. For an m \times n nonnegative matrix A = (a_{ij}), the i-th row sum is s_i = \sum_{j=1}^n a_{ij} \geq 0, and the j-th column sum is t_j = \sum_{i=1}^m a_{ij} \geq 0, as each is a finite sum of nonnegative entries. The maximum row sum, \max_i s_i, equals the induced infinity norm \|A\|_\infty, which bounds the action of A on vectors, satisfying \|Ax\|_\infty \leq \|A\|_\infty \|x\|_\infty for any vector x.
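A short sketch, assuming NumPy, makes these closure properties and the row-sum norm concrete; the matrices here are arbitrary illustrations.

```python
import numpy as np

# Closure of nonnegative matrices under sum, scaling, and product,
# plus row sums and the induced infinity norm (max row sum for A >= 0).
A = np.array([[0.0, 2.0],
              [1.0, 1.0]])
B = np.array([[3.0, 0.0],
              [0.5, 2.0]])

assert ((A + B) >= 0).all()      # entrywise sum stays nonnegative
assert ((2.5 * A) >= 0).all()    # nonnegative scaling stays nonnegative
assert ((A @ B) >= 0).all()      # product stays nonnegative

row_sums = A.sum(axis=1)         # s_i = sum_j a_ij
inf_norm = row_sums.max()        # equals np.linalg.norm(A, np.inf) for A >= 0
print(row_sums, inf_norm, np.linalg.norm(A, np.inf))
```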

Powers and Norms

For a square nonnegative matrix A \geq 0, the powers A^k for positive integers k are also nonnegative, as matrix multiplication preserves nonnegativity entrywise: each entry of A^k is a finite sum of products of nonnegative entries from A. The entries of A^k exhibit growth governed by the spectral radius \rho(A); for primitive matrices, the asymptotic behavior is (A^k)_{ij} \sim c_{ij} \rho(A)^k for positive constants c_{ij} determined by the Perron eigenvectors of A. The spectral radius \rho(A) of a nonnegative matrix A admits the Collatz–Wielandt characterization: \rho(A) = \max_{x > 0} \min_i \frac{(Ax)_i}{x_i}, where the maximum is over positive vectors x and the minimum is over indices i. This variational formula provides a tool for computing or bounding \rho(A) without computing eigenvalues directly. For positive matrices A > 0, an equivalent formulation is \rho(A) = \inf \{ \lambda > 0 : \exists x > 0 \text{ with } Ax \leq \lambda x \}, highlighting the smallest scaling factor for which A maps some positive vector below a scaled version of itself. Matrix norms play a key role in bounding powers and the spectral radius of nonnegative matrices. The infinity norm is defined as \|A\|_\infty = \max_i \sum_j |a_{ij}|; for nonnegative A, this simplifies to the maximum row sum \max_i \sum_j a_{ij}, since all entries are nonnegative. For any consistent matrix norm \|\cdot\|, the spectral radius satisfies \rho(A) \leq \|A\|, with equality for the infinity norm when all row sums are equal, so that the all-ones vector is an eigenvector of A. This inequality extends to powers: \rho(A)^k = \rho(A^k) \leq \|A^k\| \leq \|A\|^k. Gelfand's formula relates the spectral radius to the growth of powers via norms: \rho(A) = \lim_{k \to \infty} \|A^k\|^{1/k} for any matrix norm \|\cdot\|. In the general case, the proof combines the lower bound \rho(A) \leq \|A^k\|^{1/k}, valid for every k since \rho(A)^k = \rho(A^k) \leq \|A^k\|, with Jordan-form or resolvent estimates giving \limsup_{k \to \infty} \|A^k\|^{1/k} \leq \rho(A), so the limit exists and equals \rho(A). For nonnegative matrices, the formula holds identically, but nonnegativity simplifies verification: since A^k \geq 0, the powers remain in the nonnegative cone, and the Collatz–Wielandt formula sandwiches \rho(A) between ratios computed from the powers. For instance, using the infinity norm, \|A^k\|_\infty^{1/k} converges to \rho(A) because for irreducible A the row sums of A^k grow like \rho(A)^k.
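Gelfand's formula can be observed numerically. The following sketch, assuming NumPy and using an arbitrary 2×2 nonnegative matrix, compares \|A^k\|_\infty^{1/k} against the spectral radius computed from the eigenvalues.

```python
import numpy as np

# Gelfand's formula ||A^k||^(1/k) -> rho(A), observed with the infinity norm.
A = np.array([[0.0, 2.0],
              [1.0, 1.0]])                 # eigenvalues 2 and -1, so rho = 2
rho = max(abs(np.linalg.eigvals(A)))       # spectral radius, for comparison

for k in [1, 5, 20, 80]:
    Ak = np.linalg.matrix_power(A, k)
    print(k, np.linalg.norm(Ak, np.inf) ** (1.0 / k), "vs rho =", rho)
```

The printed estimates decrease toward 2, consistent with \rho(A^k) \leq \|A^k\| at every k.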

Spectral Theory

Perron-Frobenius Theorem for Positive Matrices

The Perron-Frobenius theorem for positive matrices, originally established by Oskar Perron in 1907, provides fundamental insights into the spectral properties of matrices with strictly positive entries. In his seminal paper "Zur Theorie der Matrices," Perron proved that such matrices possess a dominant positive eigenvalue with distinctive characteristics. This result laid the groundwork for broader applications in linear algebra and related fields, emphasizing the role of positivity in ensuring the existence of a real eigenvalue larger in magnitude than all others. For a square matrix A > 0 with all entries strictly positive, the theorem states that there exists a positive real eigenvalue \rho, known as the Perron root, which is simple and satisfies |\lambda| < \rho for every other eigenvalue \lambda of A. Moreover, \rho is the spectral radius of A, and it admits a corresponding positive eigenvector x > 0 such that A x = \rho x. The algebraic and geometric multiplicity of \rho is one, ensuring uniqueness (up to scaling) of the positive eigenvector. The Perron root \rho thus exceeds the modulus of every other eigenvalue, establishing strict dominance in the spectrum. A standard proof outline employs the Collatz-Wielandt function, which leverages variational principles on the positive cone. Define f(y) = \min_{1 \le i \le n} \frac{(A y)_i}{y_i} for y > 0; the maximum value of f(y) over all positive vectors y equals \rho, achieved precisely when y is a positive eigenvector for \rho. Similarly, the minimum of g(y) = \max_{1 \le i \le n} \frac{(A y)_i}{y_i} also yields \rho. To establish existence, one can exploit the compactness of the standard simplex within the cone of positive vectors and apply the Brouwer fixed-point theorem to a suitable map derived from A, confirming a fixed point corresponding to the eigenvector equation. Uniqueness and simplicity follow by contradiction: supposing another eigenvalue \lambda with |\lambda| \geq \rho leads to inconsistencies with positivity and the variational maximum, while multiplicity greater than one would imply a non-positive direction in the eigenspace, violating the positive eigenvector property.
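The sandwich f(y) \leq \rho \leq g(y) can be checked numerically. The sketch below, assuming NumPy and an arbitrary positive 2×2 matrix, iterates y \mapsto Ay (power iteration) and watches the Collatz-Wielandt bounds tighten toward the Perron root.

```python
import numpy as np

# Collatz-Wielandt bounds for a positive matrix: for any y > 0,
#   min_i (Ay)_i / y_i  <=  rho(A)  <=  max_i (Ay)_i / y_i,
# and power iteration tightens the sandwich toward the Perron root.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
y = np.ones(2)
for _ in range(30):
    ratios = (A @ y) / y
    lower, upper = ratios.min(), ratios.max()   # f(y) and g(y)
    y = A @ y
    y /= y.sum()                                # renormalize to avoid overflow

print(lower, upper, max(abs(np.linalg.eigvals(A))))  # all approx. 5.3723
```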

Extensions to Irreducible Nonnegative Matrices

A nonnegative matrix A \in \mathbb{R}^{n \times n} with A \geq 0 is defined to be irreducible if its associated directed graph G(A), where vertices correspond to indices and a directed edge from i to j exists if a_{ij} > 0, is strongly connected. This means there is a directed path from every vertex to every other vertex in G(A). Equivalently, for every pair of indices i, j, there exists a positive integer k (depending on i and j) such that (A^k)_{ij} > 0. The extension of the Perron-Frobenius theorem to irreducible nonnegative matrices, originally due to Frobenius, states that if A \geq 0 is irreducible of size n \times n, then the spectral radius \rho(A) is a positive real number that is a simple eigenvalue of A, with a corresponding positive eigenvector x > 0 such that Ax = \rho(A) x. Moreover, \rho(A) has strictly positive left and right eigenvectors, and for any other eigenvalue \lambda of A, |\lambda| \leq \rho(A), with equality holding exactly when \lambda = \rho(A) \omega for some h-th root of unity \omega, where h is the imprimitivity index of A. There are exactly h distinct eigenvalues of modulus \rho(A), each simple. An irreducible nonnegative matrix A is called primitive if there exists a positive integer k such that A^k > 0 (i.e., all entries of A^k are positive). For primitive matrices, the Perron-Frobenius eigenvalue \rho(A) is strictly dominant: |\lambda| < \rho(A) for all other eigenvalues \lambda. In this case, the imprimitivity index h = 1, so \rho(A) is the only eigenvalue on the circle |\lambda| = \rho(A) in the complex plane, and the normalized powers A^k / \rho(A)^k converge to the rank-one Perron projection formed from the left and right Perron eigenvectors. The imprimitivity index h of an irreducible nonnegative matrix A is defined as the greatest common divisor of the lengths of all directed cycles in its graph G(A), or equivalently, the number of distinct eigenvalues of modulus \rho(A). If h > 1, the matrix is imprimitive, and there are exactly h eigenvalues on the circle |\lambda| = \rho(A), located at \rho(A) e^{2\pi i j / h} for j = 0, 1, \dots, h-1. This periodicity reflects the cyclic structure in G(A), where the vertex set decomposes into h disjoint subclasses with transitions only between consecutive classes.
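Both properties admit finite tests: A is irreducible if and only if (I + A)^{n-1} > 0, and a primitive matrix satisfies A^k > 0 for some k \leq (n-1)^2 + 1 (Wielandt's bound). A minimal sketch, assuming NumPy, with hypothetical helper names:

```python
import numpy as np

def is_irreducible(A):
    # A is irreducible iff (I + A)^(n-1) > 0 (strong connectivity of G(A)).
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + (A > 0), n - 1)
    return bool((M > 0).all())

def is_primitive(A):
    # A is primitive iff A^k > 0 for some k <= (n-1)^2 + 1 (Wielandt's bound).
    n = A.shape[0]
    P = (A > 0).astype(float)               # zero pattern of A^1
    for _ in range((n - 1) ** 2 + 1):
        if (P > 0).all():
            return True
        P = np.minimum(P @ (A > 0), 1.0)    # zero pattern of the next power
    return False

C = np.array([[0, 1], [1, 0]])  # 2-cycle: irreducible but imprimitive (h = 2)
print(is_irreducible(C), is_primitive(C))   # True False
```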

Invertibility

Conditions for Invertibility

A square nonnegative matrix A \geq 0 is invertible if and only if its determinant is nonzero, \det(A) \neq 0. However, nonnegativity does not guarantee invertibility, as demonstrated by the all-ones matrix J, where every entry is 1; this matrix has rank 1 and \det(J) = 0 for dimensions greater than 1. The rank of a nonnegative matrix coincides with its conventional rank over the reals, and full rank is necessary for invertibility. A necessary condition for full rank is the absence of zero rows or columns, since such a row or column implies linear dependence. For irreducible nonnegative matrices, the Perron root \rho > 0, but this alone does not ensure full rank, as irreducible matrices can still be singular. The Gershgorin circle theorem provides a sufficient condition for invertibility adapted to nonnegative matrices. All eigenvalues of A lie in the union of disks centered at the diagonal entries a_{ii} \geq 0 with radii R_i = \sum_{j \neq i} a_{ij} \geq 0. If the union of these disks excludes 0 (e.g., if A is strictly diagonally dominant with positive diagonal, so a_{ii} > R_i for all i, placing all disks in the open right half-plane), then 0 is not an eigenvalue and A is invertible. For nonnegative matrices, the permanent satisfies \operatorname{per}(A) \geq 0 and bounds the determinant: |\det(A)| \leq \operatorname{per}(A), since each signed term in the Leibniz expansion of the determinant is bounded in absolute value by the corresponding nonnegative term of the permanent. The property \det(A) \geq 0 holds only for subclasses like totally nonnegative matrices; in general, \det(A) can be negative or zero (with zero indicating singularity). Counterexamples to invertibility also include nilpotent nonnegative matrices, such as the forward shift matrix N with 1's on the superdiagonal and zeros elsewhere, where N^k = 0 for k equal to the matrix dimension, ensuring \det(N) = 0.
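The diagonal-dominance test is a one-line check in practice. The following sketch, assuming NumPy and a nonnegative input matrix (the example matrix is illustrative), verifies a_{ii} > R_i for every row and compares against the determinant.

```python
import numpy as np

def gershgorin_invertible(A):
    # Sufficient condition for A >= 0: a_ii > R_i (off-diagonal row sum)
    # for all i puts every Gershgorin disk in the open right half-plane,
    # so 0 is not an eigenvalue and A is invertible.
    d = np.diag(A)
    R = A.sum(axis=1) - d
    return bool((d > R).all())

A = np.array([[5.0, 1.0, 2.0],
              [0.0, 4.0, 1.0],
              [1.0, 1.0, 3.0]])
print(gershgorin_invertible(A), np.linalg.det(A))   # True, nonzero determinant
```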

Nonnegative Inverses

A nonnegative matrix A is said to have a nonnegative inverse if A is invertible and A^{-1} \geq 0, meaning all entries of the inverse are nonnegative. Such matrices arise in applications like Markov chains and optimization where preserving nonnegativity under inversion is desirable. Unlike general nonnegative matrices, which may have inverses with negative entries, this property imposes a strict structural constraint on A. The class of invertible nonnegative matrices with nonnegative inverses is precisely the set of positive monomial matrices. A positive monomial matrix is the product of a diagonal matrix with positive diagonal entries and a permutation matrix, i.e., A = D P, where D = \operatorname{diag}(d_1, \dots, d_n) with d_i > 0 for all i, and P is a permutation matrix. In this form, each row and each column of A contains exactly one positive entry, with the rest being zero. The inverse is then A^{-1} = P^{-1} D^{-1}, which is also a positive monomial matrix, featuring nonnegative entries (specifically, positive in the permuted diagonal positions and zero elsewhere). This characterization ensures that the matrix is invertible, as the determinant is, up to the sign of the permutation, the product of the positive diagonal entries and hence nonzero. For instance, a diagonal matrix with positive entries is a trivial positive monomial matrix (with the identity permutation), and its inverse is the diagonal matrix of reciprocal entries, all positive. Similarly, a scaled permutation matrix such as A = \begin{pmatrix} 0 & 2 \\ 3 & 0 \end{pmatrix} has A^{-1} = \begin{pmatrix} 0 & 1/3 \\ 1/2 & 0 \end{pmatrix}, both nonnegative. These examples illustrate how the monomial structure limits off-diagonal interactions, preventing negative entries in the inverse that could arise from solving systems with coupled positive terms.
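A short sketch, assuming NumPy, builds the 2×2 example above as A = DP and confirms that the closed-form inverse P^{-1}D^{-1} (here P^{-1} = P^T, since P is a permutation matrix) is nonnegative.

```python
import numpy as np

# A positive monomial matrix A = D P and its nonnegative inverse P^T D^{-1}.
D = np.diag([2.0, 3.0])
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])           # swap permutation
A = D @ P                            # [[0, 2], [3, 0]]

A_inv = P.T @ np.diag(1.0 / np.diag(D))
print(A_inv)                                 # [[0, 1/3], [1/2, 0]]
print(np.allclose(A @ A_inv, np.eye(2)))     # True
```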

Special Classes

Stochastic Matrices

A stochastic matrix is a square matrix whose entries are nonnegative real numbers and whose rows each sum to 1; such matrices are termed row-stochastic. This normalization ensures that the matrix can represent transition probabilities in a Markov chain, where the entry p_{ij} denotes the probability of transitioning from state i to state j. The defining equation for an n \times n row-stochastic matrix P = (p_{ij}) is \sum_{j=1}^n p_{ij} = 1 for each row index i = 1, \dots, n. Column-stochastic matrices are defined analogously, with each column summing to 1 instead. Row-stochastic matrices possess several key spectral properties arising from their nonnegative entries and normalization. The spectral radius \rho(P) equals 1, and 1 is an eigenvalue with right eigenvector the all-ones vector \mathbf{1} = (1, \dots, 1)^T, satisfying P \mathbf{1} = \mathbf{1}. This follows directly from the row-sum condition, as the i-th entry of P \mathbf{1} is \sum_j p_{ij} = 1. For irreducible row-stochastic matrices, meaning those whose associated directed graph is strongly connected, the Perron-Frobenius theorem, originally established by Oskar Perron for positive matrices and extended by Georg Frobenius to irreducible nonnegative ones, implies that 1 is a simple eigenvalue and dominant in modulus, with a unique (up to scaling) positive right eigenvector \mathbf{1} and a unique positive left eigenvector \pi normalized such that \pi \mathbf{1} = 1, representing the stationary distribution of the corresponding Markov chain. If the irreducible row-stochastic matrix is primitive (i.e., some power P^k has all positive entries), then the powers P^k converge as k \to \infty to the rank-1 matrix whose rows are all equal to the stationary distribution \pi. This convergence reflects the long-term behavior of Markov chains, where the state distribution approaches the unique stationary measure regardless of the initial distribution. Irreducibility, as discussed in the spectral theory of nonnegative matrices, is essential for these uniqueness and convergence guarantees. A special subclass consists of doubly stochastic matrices, which are both row- and column-stochastic, so that \mathbf{1}^T P = \mathbf{1}^T as well. Such matrices preserve the uniform distribution as a stationary measure. The Birkhoff-von Neumann theorem asserts that every doubly stochastic matrix is a convex combination of permutation matrices, providing a combinatorial interpretation of these matrices as probabilistic mixtures of permutations. It is known that at most n^2 - 2n + 2 permutation matrices suffice for such a decomposition. This result, proved by Garrett Birkhoff in 1946, has applications in assignment problems and optimal transport.
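The convergence to the stationary distribution is easy to see numerically. The sketch below, assuming NumPy and an arbitrary primitive two-state transition matrix, iterates \pi_{k+1} = \pi_k P and also shows the rows of P^k collapsing to \pi.

```python
import numpy as np

# Stationary distribution of a primitive row-stochastic matrix via iteration.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])           # primitive: all entries of P are positive
pi = np.array([0.5, 0.5])            # any initial distribution works
for _ in range(200):
    pi = pi @ P                      # left multiplication: pi_{k+1} = pi_k P

print(pi)                            # approaches (0.8, 0.2), satisfying pi P = pi
print(np.linalg.matrix_power(P, 200))  # both rows approach pi
```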

M-Matrices

M-matrices, while not nonnegative themselves due to potentially negative off-diagonal entries, are closely related to nonnegative matrices via representations of the form A = sI - B, where B is a nonnegative matrix and s > \rho(B). An M-matrix is a square real matrix A = (a_{ij}) such that a_{ij} \leq 0 for all i \neq j (i.e., nonpositive off-diagonal entries) and all principal minors of A are nonnegative. Equivalently, a nonsingular M-matrix is a Z-matrix (nonpositive off-diagonals) whose eigenvalues all have positive real parts. The term "M-matrix" was first introduced by Alexander Ostrowski in 1937, in reference to Hermann Minkowski's work on related sign patterns and stability, though systematic characterizations emerged in the early 1950s through contributions by Lothar Collatz on monotone matrices and Hans Schneider on eigenvalue inequalities. A key representation of an M-matrix is A = sI - B, where B is a nonnegative matrix, s \geq 0, and s \geq \rho(B) (the spectral radius of B); for nonsingularity, the inequality is strict: s > \rho(B). If A is nonsingular, its inverse satisfies A^{-1} \geq 0 (entrywise nonnegative), a property that distinguishes M-matrices among Z-matrices and links them to inverse-positive operators. When s > \rho(B), the inverse can be expressed via the Neumann series: A^{-1} = \frac{1}{s} \sum_{k=0}^{\infty} \left( \frac{B}{s} \right)^k, which converges because \rho(B/s) < 1. This series expansion underscores the stability inherent in M-matrices, as their positive real-part eigenvalues ensure asymptotic stability of linear systems such as \dot{x} = -Ax. The Z-signature of a Z-matrix, defined as the maximum number of negative eigenvalues across all its principal submatrices, provides another characterization: an M-matrix has Z-signature zero, meaning no principal submatrix admits negative eigenvalues. This property, formalized in works by Fiedler and Pták in 1962, reinforces the all-positive-real-parts eigenvalue condition and facilitates comparisons with other matrix classes in matrix theory. Overall, these features position M-matrices as a foundational class in the study of sign-patterned matrices, with characterizations compiled in surveys like those by Plemmons in 1977.
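The Neumann-series representation of the inverse is directly computable. The following sketch, assuming NumPy and an arbitrary nonnegative B with s chosen above \rho(B), confirms both the entrywise nonnegativity of A^{-1} and the convergence of the truncated series.

```python
import numpy as np

# A nonsingular M-matrix A = sI - B with B >= 0 and s > rho(B); its inverse
# is nonnegative and reproduced by A^{-1} = (1/s) * sum_k (B/s)^k.
B = np.array([[0.0, 1.0],
              [2.0, 0.0]])
s = 3.0                                    # rho(B) = sqrt(2) < 3
A = s * np.eye(2) - B

inv_direct = np.linalg.inv(A)
inv_series = sum(np.linalg.matrix_power(B / s, k) for k in range(60)) / s

print((inv_direct >= 0).all())             # True: entrywise nonnegative inverse
print(np.allclose(inv_direct, inv_series)) # True: truncated series converges
```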
