Spectral radius
The spectral radius of a square matrix A \in \mathbb{C}^{n \times n} is defined as the largest absolute value among its eigenvalues, \rho(A) = \max \{ |\lambda| : \lambda \text{ is an eigenvalue of } A \}.[1] This quantity serves as a fundamental invariant in linear algebra, bounding the growth or decay of matrix powers and governing the convergence of iterative algorithms.[1] For any consistent matrix norm \|\cdot\|, the spectral radius satisfies \rho(A) \leq \|A\|; in fact, \rho(A) is the infimum of \|A\| over all induced norms, making it the tightest such bound.[1] Gelfand's formula provides a norm-independent characterization: \rho(A) = \lim_{k \to \infty} \|A^k\|^{1/k}, which holds for any matrix norm and underscores the spectral radius's intrinsic nature.[1] A pivotal theorem states that the sequence of powers A^k converges to the zero matrix if and only if \rho(A) < 1, making the spectral radius a key criterion for asymptotic stability in linear dynamical systems and the convergence of fixed-point iterations.[1] For Hermitian, or more generally normal, matrices the spectral radius equals the 2-norm, \rho(A) = \|A\|_2, while for unitary matrices it is exactly 1, and for nilpotent matrices it is 0.[1] In the theory of nonnegative matrices, the Perron–Frobenius theorem asserts that \rho(A) is itself a real eigenvalue with a nonnegative eigenvector (positive when A is irreducible), facilitating applications in Markov chains, population dynamics, and optimization problems where matrix entries represent transitions or interactions.[1] Beyond matrices, the concept extends to graph theory, where the spectral radius of a graph is the largest eigenvalue of its adjacency matrix; for a d-regular graph, this equals d, linking it to connectivity, expansion properties, and spectral graph theory.[2]
Definitions
Square matrices
The spectral radius of a square matrix A \in \mathbb{C}^{n \times n} is defined as \rho(A) = \max \{ |\lambda| : \lambda \text{ is an eigenvalue of } A \}.[1] This measures the largest absolute value among the eigenvalues of A, capturing the scale of the matrix's spectral behavior in finite dimensions. Since A is finite-dimensional, its eigenvalues are finite in number and arise as the roots of the characteristic polynomial: they are the complex numbers \lambda satisfying \det(A - \lambda I) = 0, where I is the n \times n identity matrix.[3] The characteristic polynomial p_A(\lambda) = \det(A - \lambda I) is a polynomial of degree n, and its roots constitute the spectrum of A, denoted \sigma(A).[4] Thus, the spectral radius can equivalently be expressed as \rho(A) = \sup \{ |\lambda| : \lambda \in \sigma(A) \}; since the spectrum is a finite, hence compact, set, the supremum is attained as a maximum.[1] The concept of the spectral radius traces its origins to David Hilbert's foundational work on integral equations during 1904–1906, where spectral considerations arose in solving Fredholm-type problems, though the precise term and its formalization for finite square matrices developed later in the context of linear algebra.[5] For example, consider the 2 \times 2 rotation matrix R(\theta) = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}. Its characteristic polynomial is p_{R(\theta)}(\lambda) = \lambda^2 - 2\cos\theta \, \lambda + 1, with roots \lambda = e^{i\theta} and \lambda = e^{-i\theta}. Both eigenvalues have modulus 1, so \rho(R(\theta)) = 1, reflecting the norm-preserving nature of rotations.
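As a quick numerical check of the rotation-matrix example, the eigenvalues and spectral radius can be computed with NumPy (a minimal sketch; the angle value is an arbitrary illustrative choice):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle (illustrative choice)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The eigenvalues of a rotation matrix are e^{+i*theta} and e^{-i*theta},
# both of modulus 1, so the spectral radius is 1.
eigvals = np.linalg.eigvals(R)
rho = max(abs(lam) for lam in eigvals)
print(rho)  # ≈ 1.0
```

The same computation works for any square matrix: `np.linalg.eigvals` returns the (possibly complex) eigenvalues, and the spectral radius is the maximum of their moduli.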
Bounded linear operators
In the context of infinite-dimensional spaces, the spectral radius extends naturally to bounded linear operators on Banach spaces. For a bounded linear operator T on a complex Banach space X, the spectral radius \rho(T) is defined as \rho(T) = \sup \{ |\lambda| : \lambda \in \sigma(T) \}, where \sigma(T) denotes the spectrum of T, the set of all complex numbers \lambda such that T - \lambda I is not invertible in the algebra of bounded operators on X.[6] This definition captures the growth rate of the operator's powers in a manner analogous to the finite-dimensional case, but relies on the resolvent set rather than direct eigenvalue computation.[7] Unlike the point spectrum, which consists solely of eigenvalues (points \lambda for which T - \lambda I has a non-trivial kernel), the full spectrum \sigma(T) may include additional components such as the approximate point spectrum (where T - \lambda I is injective but not bounded below) and the residual spectrum. Thus, \rho(T) is determined by the entire spectrum, not just the eigenvalues, which can lead to \rho(T) exceeding the maximum modulus of any actual eigenvalues in infinite dimensions.[6] This broader scope is essential for operators without point spectrum, such as shifts or multiplication operators on function spaces.[7] A key property is that \rho(T) \leq \|T\|, where \|T\| is the operator norm induced by the norm on X, reflecting the fact that the spectrum is contained in the closed disk of radius \|T\| centered at the origin. Equality holds for normal operators on Hilbert spaces, where the spectral theorem ensures the norm equals the supremum of the moduli of spectral values.[6] Further details on normal operators appear in the section on symmetric matrices. As an illustrative example, consider the multiplication operator T on the Hilbert space L^2[0,1] defined by (Tf)(x) = f(x) \cdot g(x), where g is a continuous complex-valued function on [0,1].
The spectrum \sigma(T) coincides with the image g([0,1]), a compact subset of \mathbb{C}, so \rho(T) = \max_{x \in [0,1]} |g(x)|. This case highlights how the spectral radius directly reflects the essential range of the multiplier function.[7]
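The multiplication-operator picture can be mimicked in finite dimensions: sampling g on a grid gives a diagonal matrix whose spectrum is exactly the set of sampled values. A sketch with an illustrative choice of multiplier (the function g and grid size here are assumptions, not from the source):

```python
import numpy as np

# Illustrative multiplier g(x) = (1 + x) e^{2*pi*i*x}; max |g| on [0,1] is 2.
def g(x):
    return (1.0 + x) * np.exp(2j * np.pi * x)

x = np.linspace(0.0, 1.0, 101)  # grid on [0, 1], endpoints included
T = np.diag(g(x))               # discretized multiplication operator

# The spectrum of a diagonal matrix is the set of its diagonal entries,
# so the spectral radius is max_k |g(x_k)|, approximating max |g| = 2.
rho = max(abs(lam) for lam in np.linalg.eigvals(T))
print(rho)  # ≈ 2.0
```

Because the grid includes x = 1, where |g| attains its maximum, the discretized spectral radius matches the operator value max_{[0,1]} |g| = 2 here; for a maximizer between grid points it would only approximate it.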
Graphs
In graph theory, the spectral radius of a finite undirected graph G is defined as \rho(G) = \rho(A), where A is the adjacency matrix of G, a real symmetric 0–1 matrix with zeros on the diagonal (for simple graphs without loops) and A_{ij} = 1 if vertices i and j are adjacent. Since A is symmetric and nonnegative, its eigenvalues are real, and \rho(G) coincides with the largest eigenvalue of A, which for a connected graph is simple and positive by the Perron–Frobenius theorem.[8] For directed graphs, the adjacency matrix A is defined similarly but need not be symmetric, so its eigenvalues may be complex; the spectral radius \rho(G) remains the maximum modulus among these eigenvalues.[9] The spectral radius \rho(G) has significant combinatorial interpretations, particularly in bounding the growth of walks in the graph. The total number of walks of length n in G equals the sum of the entries of A^n, while the number of closed walks of length n equals \operatorname{tr}(A^n) = \sum_k \lambda_k^n over the eigenvalues \lambda_k of A; for large n, these quantities grow on the order of c \cdot \rho(G)^n for some constant c > 0 depending on the Perron eigenvector.[10] A representative example is the cycle graph C_n on n vertices, whose adjacency matrix has eigenvalues 2\cos(2\pi k / n) for integers k = 0, 1, \dots, n-1; thus, \rho(C_n) = 2.[11] The spectral radius also connects to structural properties like diameter and connectivity: among connected graphs on n vertices, the path graph P_n minimizes the spectral radius, with \rho(P_n) = 2\cos(\pi/(n+1)).[12]
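The cycle-graph computation is easy to verify numerically; a minimal sketch (the choice n = 7 is arbitrary):

```python
import numpy as np

def cycle_adjacency(n):
    """Adjacency matrix of the cycle graph C_n (n >= 3)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = 1.0  # edge between consecutive vertices
        A[(i + 1) % n, i] = 1.0
    return A

n = 7
eigs = np.linalg.eigvalsh(cycle_adjacency(n))  # symmetric => real eigenvalues
rho = eigs.max()

# The eigenvalues are 2 cos(2*pi*k/n) for k = 0..n-1; the largest (k = 0) is 2,
# matching the degree of this 2-regular graph.
expected = sorted(2 * np.cos(2 * np.pi * k / n) for k in range(n))
print(rho)  # ≈ 2.0
```

Using `eigvalsh` (rather than `eigvals`) exploits the symmetry of the adjacency matrix and guarantees real eigenvalues returned in ascending order.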
Fundamental properties
Gelfand's formula
Gelfand's formula characterizes the spectral radius \rho(T) of a bounded linear operator T on a Banach space as \rho(T) = \lim_{n \to \infty} \|T^n\|^{1/n},
where \| \cdot \| denotes the operator norm induced by the norm on the Banach space. This limit exists and equals \inf_{n \geq 1} \|T^n\|^{1/n}.[13] For square matrices, the formula holds in the same form, \rho(A) = \lim_{n \to \infty} \|A^n\|^{1/n}, where \| \cdot \| is any matrix norm.[14] The value of the limit is independent of the choice of norm, a consequence of the equivalence of norms in finite-dimensional spaces for matrices or the submultiplicativity of the operator norm in the Banach space setting.[14][13] The formula is named after Israel Gelfand, who established it in 1941 as part of his foundational work on normed rings (Banach algebras), building on earlier contributions by John von Neumann to spectral theory in operator algebras and by Arne Beurling in 1938 for the matrix case.[15] As a simple example, consider a nilpotent matrix A satisfying A^2 = 0. Then \rho(A) = 0, and \|A^n\|^{1/n} = 0 for all n \geq 2, so the limit is 0.[14]
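Gelfand's formula can also be observed numerically for a non-normal matrix, where \|A\| overshoots \rho(A) but \|A^n\|^{1/n} still converges to it (a sketch; the matrix below is an arbitrary illustrative choice):

```python
import numpy as np

# Non-normal upper-triangular matrix: rho(A) = 0.5, but its 2-norm is larger.
A = np.array([[0.5, 3.0],
              [0.0, 0.5]])

rho = max(abs(lam) for lam in np.linalg.eigvals(A))  # spectral radius = 0.5
norm_A = np.linalg.norm(A, 2)                        # operator 2-norm > rho

# Gelfand: ||A^n||^(1/n) -> rho(A) as n -> infinity, for any matrix norm.
for n in (1, 10, 100, 1000):
    val = np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1.0 / n)
    print(n, val)  # approaches 0.5 as n grows
```

The gap between \|A\| and \rho(A) for the first few n illustrates why the limit, not any single power, is needed: a single norm evaluation only gives the upper bound \rho(A) \leq \|A\|.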