
Spectral decomposition

Spectral decomposition is a fundamental theorem in linear algebra that provides a way to factorize certain matrices using their eigenvalues and eigenvectors, expressing a normal matrix A over the complex numbers as A = U \Lambda U^*, where U is a unitary matrix whose columns are the eigenvectors of A, and \Lambda is a diagonal matrix containing the corresponding eigenvalues on its diagonal. This decomposition generalizes the diagonalization process and is guaranteed for normal matrices (those satisfying A A^* = A^* A), which include symmetric matrices over the reals and Hermitian matrices over the complexes, ensuring real eigenvalues for the latter cases and orthonormal bases of eigenvectors. The theorem, often treated as a special case of the broader spectral theorem, highlights the role of eigenvalues in capturing the matrix's scaling properties along eigenvector directions.

In the real symmetric case, the decomposition simplifies to A = Q \Lambda Q^T, where Q is an orthogonal matrix, allowing the matrix to be reconstructed as a sum of outer products A = \sum_{i=1}^n \lambda_i \mathbf{q}_i \mathbf{q}_i^T, with each term representing a rank-one projection scaled by an eigenvalue. This form reveals key properties, such as the reality of the eigenvalues and the orthogonality of the eigenvectors, which form a complete basis for the underlying space. For matrices with repeated eigenvalues, the decomposition remains valid but may involve eigenspaces rather than individual vectors, ensuring uniqueness up to the ordering of eigenvalues and choice of orthonormal bases within degenerate subspaces.

The spectral decomposition has profound implications across mathematics and its applications, enabling efficient computation of matrix powers (via A^k = U \Lambda^k U^*), solutions to differential equations, and dimensionality reduction techniques like principal component analysis (PCA), where it identifies variance directions in data. It also underpins quantum mechanics by diagonalizing observables represented as self-adjoint operators, and facilitates optimization problems through the Rayleigh quotient, which extremizes quadratic forms. Beyond finite dimensions, the concept extends to infinite-dimensional settings via the spectral theorem for bounded operators on Hilbert spaces, influencing fields like functional analysis and partial differential equations.

Prerequisites and Overview

Definition and motivation

Spectral decomposition arises from the need to simplify the analysis of linear transformations, particularly in solving systems of differential equations and understanding physical systems in quantum mechanics. In the early 19th century, Joseph Fourier's investigation of heat conduction led to the representation of solutions to the heat equation as sums of terms involving sines and cosines, effectively decomposing the problem into independent modes associated with eigenvalues of the underlying spatial differential operator. This approach highlighted how diagonalizing operators could separate variables and facilitate explicit solutions to partial differential equations. Similarly, in quantum mechanics, the spectral theorem for self-adjoint operators enables the representation of observables like the Hamiltonian in a basis of eigenvectors, allowing measurement probabilities and expectation values to be computed straightforwardly through diagonal forms.

Formally, the spectral decomposition of a self-adjoint operator A on a Hilbert space \mathcal{H} expresses A in terms of its spectrum via a projection-valued measure E: A = \int_{\sigma(A)} \lambda \, dE(\lambda), where \sigma(A) denotes the spectrum of A, a subset of the real numbers. This integral representation generalizes the finite-dimensional eigendecomposition, where A = \sum \lambda_i P_i with P_i as orthogonal projections onto eigenspaces. The primary motivation for spectral decomposition lies in its facilitation of functional calculus, permitting the definition of functions of the operator such as f(A) = \int f(\lambda) \, dE(\lambda) for suitable f. For instance, this yields A^n = \int \lambda^n \, dE(\lambda), which simplifies computations of powers and exponentials essential for dynamical systems and quantum evolution equations.

In the finite-dimensional setting, the concept is illustrated simply by a diagonal matrix, such as D = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} on \mathbb{R}^2, whose spectral decomposition is D = \lambda_1 |e_1\rangle\langle e_1| + \lambda_2 |e_2\rangle\langle e_2|, with |e_i\rangle\langle e_i| as rank-one projections onto the standard basis vectors. This trivial case underscores how the decomposition aligns the operator with its spectral properties, enabling efficient matrix operations like exponentiation.
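The diagonal example above can be checked numerically. The following sketch (using NumPy, with \lambda_1 = 2 and \lambda_2 = 5 chosen arbitrarily for illustration) reassembles D from its rank-one projections and applies the functional calculus to compute exp(D):

```python
import numpy as np

# Illustrative values: a 2x2 diagonal operator D = lambda_1 P_1 + lambda_2 P_2.
D = np.diag([2.0, 5.0])

eigvals, eigvecs = np.linalg.eigh(D)               # spectral decomposition
projections = [np.outer(v, v) for v in eigvecs.T]  # rank-one projections P_i

# Reassemble D from its spectral decomposition.
D_rebuilt = sum(lam * P for lam, P in zip(eigvals, projections))

# Functional calculus: f(D) = sum_i f(lambda_i) P_i, here f = exp.
exp_D = sum(np.exp(lam) * P for lam, P in zip(eigvals, projections))

print(np.allclose(D_rebuilt, D))                          # True
print(np.allclose(exp_D, np.diag(np.exp([2.0, 5.0]))))    # True
```

The same two lines reassembling D and exp(D) work unchanged for any Hermitian matrix, not just a diagonal one, which is the practical content of the finite-dimensional spectral theorem.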

Key concepts: Operators and spectra

Linear operators are mappings between vector spaces that preserve addition and scalar multiplication. In the finite-dimensional setting, a linear operator A: V \to V acts on a vector space V of finite dimension n, which can be identified with \mathbb{C}^n equipped with the standard inner product \langle x, y \rangle = x^* y, where x^* denotes the conjugate transpose. Such operators are represented by n \times n matrices and are always bounded, as all linear maps between finite-dimensional normed spaces are continuous. In the context of inner product spaces, which are complex vector spaces equipped with a sesquilinear inner product \langle \cdot, \cdot \rangle satisfying \langle x, y \rangle = \overline{\langle y, x \rangle}, linear operators extend the finite-dimensional case, but the inner product may not induce a complete norm unless the space is a Hilbert space. A Hilbert space H is a complete inner product space, meaning every Cauchy sequence converges, and linear operators T: H \to H are often studied in their bounded form, where \|T\| = \sup_{\|x\| \leq 1} \|Tx\| < \infty. Bounded linear operators on H form a Banach algebra under composition and the operator norm.

The spectrum of a bounded linear operator T on a complex Banach space (including Hilbert spaces) is the set \sigma(T) = \{\lambda \in \mathbb{C} \mid T - \lambda I \text{ is not invertible}\}, where I is the identity operator. The spectrum is a nonempty compact subset of \mathbb{C}. It decomposes into three disjoint parts: the point spectrum \sigma_p(T), consisting of eigenvalues \lambda where T - \lambda I is not injective (i.e., \ker(T - \lambda I) \neq \{0\}); the continuous spectrum \sigma_c(T), where T - \lambda I is injective with dense but not closed range; and the residual spectrum \sigma_r(T), where T - \lambda I is injective but the range is not dense. In finite dimensions, the spectrum reduces to the point spectrum, as the residual and continuous spectra are empty for matrices. The resolvent set \rho(T) is the complement \mathbb{C} \setminus \sigma(T), comprising all \lambda for which T - \lambda I is bijective with bounded inverse R(\lambda, T) = (T - \lambda I)^{-1}, an analytic function outside the spectrum. The spectral radius r(T) = \sup\{|\lambda| : \lambda \in \sigma(T)\} satisfies Gelfand's formula r(T) = \lim_{n \to \infty} \|T^n\|^{1/n}, where the limit exists and equals \inf_{n \geq 1} \|T^n\|^{1/n}. For normal operators, r(T) = \|T\|.

The adjoint of a bounded linear operator A: H \to H on a Hilbert space is the unique bounded operator A^*: H \to H satisfying \langle Ax, y \rangle = \langle x, A^* y \rangle for all x, y \in H. Its existence follows from the Riesz representation theorem, which also gives \|A^*\| = \|A\|. An operator is self-adjoint if A = A^*, equivalently \langle Ax, y \rangle = \langle x, Ay \rangle for all x, y \in H, implying that the spectrum lies on the real line. A normal operator satisfies AA^* = A^*A, generalizing both self-adjoint and unitary operators (where A^* = A^{-1}). In finite dimensions, normal matrices are exactly those that diagonalize unitarily.
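Gelfand's formula can be seen numerically in finite dimensions. The sketch below (the matrix T is a hypothetical non-normal example, not from the text) shows \|T^n\|^{1/n} approaching the spectral radius, which for a non-normal matrix is strictly smaller than the operator norm:

```python
import numpy as np

# A non-normal upper-triangular matrix: eigenvalues 0.5 and 0.4, but a
# large off-diagonal entry makes its operator norm much bigger than r(T).
T = np.array([[0.5, 10.0],
              [0.0,  0.4]])

r_exact = max(abs(np.linalg.eigvals(T)))     # spectral radius, = 0.5 here

n = 500
Tn = np.linalg.matrix_power(T, n)
r_gelfand = np.linalg.norm(Tn, 2) ** (1.0 / n)   # Gelfand: ||T^n||^{1/n}

print(r_exact)                                # ~0.5
print(abs(r_gelfand - r_exact) < 1e-2)        # True: converges (slowly) to r(T)
print(np.linalg.norm(T, 2) > r_exact)         # True: ||T|| >> r(T), T not normal
```

For a normal matrix the last comparison would be an equality, r(T) = \|T\|, as stated above.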

Finite-dimensional case

Hermitian matrices

A Hermitian matrix is a complex square matrix that equals its own conjugate transpose, satisfying H = H^*. In the finite-dimensional setting, the spectral theorem asserts that every such matrix admits a spectral decomposition: there exists a unitary matrix U whose columns form an orthonormal basis of eigenvectors and a real diagonal matrix D containing the eigenvalues of H on its diagonal, such that H = U D U^*. This decomposition diagonalizes H and highlights its real spectrum and orthogonal eigenspaces.

To establish this result, first note that all eigenvalues of H are real. If H x = \lambda x for a unit eigenvector x (i.e., \|x\| = 1), then \lambda = \langle H x, x \rangle. Since H is Hermitian, \langle H x, x \rangle = \langle x, H^* x \rangle = \langle x, H x \rangle = \overline{\langle H x, x \rangle}, implying \lambda = \overline{\lambda} and thus \lambda \in \mathbb{R}. The existence of an orthonormal basis of eigenvectors follows by induction on the dimension n of the underlying complex vector space \mathbb{C}^n. The base case n = 1 is immediate, as a 1 \times 1 Hermitian matrix is real, and its single entry is the eigenvalue with any unit scalar as eigenvector (unique up to phase). For the inductive step, assume the result holds for dimensions less than n. The characteristic polynomial \det(H - \lambda I) has degree n and thus at least one root \lambda_1, real by the argument above, yielding an eigenspace E_1. This eigenspace is orthogonal to the eigenspaces of any other distinct eigenvalues, as eigenvectors x \in E_1 and y for \lambda_2 \neq \lambda_1 satisfy \lambda_1 \langle x, y \rangle = \langle H x, y \rangle = \langle x, H y \rangle = \lambda_2 \langle x, y \rangle, forcing \langle x, y \rangle = 0. Moreover, the orthogonal complement E_1^\perp is invariant under H, since for z \in E_1^\perp, \langle H z, x \rangle = \langle z, H x \rangle = \lambda_1 \langle z, x \rangle = 0 for all x \in E_1, so H z \in E_1^\perp. The restriction of H to E_1^\perp is Hermitian on a space of dimension less than n, so by the inductive hypothesis it admits an orthonormal eigenbasis. Combining this with an orthonormal basis for E_1 yields the full basis for \mathbb{C}^n.

The eigenvalues are the roots of the characteristic polynomial and thus unique up to ordering and multiplicity. Eigenspaces corresponding to distinct eigenvalues are pairwise orthogonal, as established in the proof. The spectral decomposition itself is unique up to reordering of the eigenvalues along the diagonal of D and choice of orthonormal bases within degenerate eigenspaces.

For a concrete illustration, consider the 2 \times 2 Hermitian matrix H = \begin{pmatrix} 2 & -10 \\ -10 & 2 \end{pmatrix}. The eigenvalues are found by solving the characteristic equation \det(H - \lambda I) = 0: \det\begin{pmatrix} 2 - \lambda & -10 \\ -10 & 2 - \lambda \end{pmatrix} = (2 - \lambda)^2 - 100 = \lambda^2 - 4\lambda - 96 = 0, yielding roots \lambda_1 = 12 and \lambda_2 = -8. The eigenvector for \lambda_1 = 12 satisfies (H - 12 I) v = 0, giving v_1 = \begin{pmatrix} 1 \\ -1 \end{pmatrix}; for \lambda_2 = -8, v_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}. Normalizing these to unit length produces u_1 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \quad u_2 = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}. The unitary matrix is U = \begin{pmatrix} u_1 & u_2 \end{pmatrix}, and D = \operatorname{diag}(12, -8), so H = U D U^*. These eigenvectors are orthogonal, as \langle u_1, u_2 \rangle = 0.

The spectral decomposition implies key properties of Hermitian matrices. The trace equals the sum of the eigenvalues, \operatorname{tr}(H) = \sum_i \lambda_i, since the trace is similarity-invariant and \operatorname{tr}(D) = \sum_i \lambda_i. Similarly, the determinant is the product of the eigenvalues, \det(H) = \prod_i \lambda_i, as the determinant is also similarity-invariant and \det(D) = \prod_i \lambda_i. These relations hold for any diagonalizable matrix but underscore the real nature of the spectrum for Hermitian matrices.
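The worked 2 \times 2 example and the trace/determinant identities can be verified with NumPy's Hermitian eigensolver, which returns eigenvalues in ascending order with orthonormal eigenvectors:

```python
import numpy as np

# The 2x2 Hermitian example from the text.
H = np.array([[  2.0, -10.0],
              [-10.0,   2.0]])

eigvals, Q = np.linalg.eigh(H)        # eigvals ascending: [-8., 12.]
print(eigvals)

# Reconstruct H = Q D Q^T and check the trace/determinant identities.
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, H))    # True
print(np.isclose(np.trace(H), eigvals.sum()))        # True: 4 = 12 + (-8)
print(np.isclose(np.linalg.det(H), eigvals.prod()))  # True: -96 = 12 * (-8)
```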

Normal matrices

A normal matrix N over the complex numbers is a square matrix that commutes with its conjugate transpose, satisfying N N^* = N^* N. The spectral theorem for normal matrices states that N is unitarily diagonalizable, meaning there exists a unitary matrix U and a diagonal matrix D such that N = U D U^*, where the diagonal entries of D are the (possibly complex) eigenvalues of N. This decomposition extends the spectral theorem for Hermitian matrices, which form a special subclass of normal matrices where the eigenvalues are real. The proof relies on the Schur decomposition, which triangularizes any matrix via a unitary similarity: N = U T U^* with T upper triangular. For normal N, the triangular form T must also be normal, and a normal upper triangular matrix is necessarily diagonal, yielding the desired D. This follows from the property that normality implies orthogonal invariant subspaces, allowing construction of an orthonormal basis of eigenvectors. Unlike Hermitian matrices, where eigenvalues are real, normal matrices can have complex eigenvalues, though their moduli |\lambda| are preserved under unitary similarity transformations, reflecting the unitary invariance of the operator norm for normal operators. To compute the decomposition, first obtain the Schur form via algorithms like the QR iteration, then verify diagonality (which holds if and only if the matrix is normal) to extract the eigenvalues and eigenvectors. A representative example is the 2D rotation matrix by angle \theta, R(\theta) = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}, which is normal (as it is orthogonal, hence unitary over \mathbb{C}) but not Hermitian unless \theta = 0 or \pi. Its eigenvalues are the pair e^{i\theta} and e^{-i\theta}, with corresponding eigenvectors involving complex entries, enabling unitary diagonalization over \mathbb{C}.
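The rotation-matrix example can be checked directly: a general (non-Hermitian) eigensolver recovers the complex conjugate pair e^{\pm i\theta}, and because the matrix is normal with distinct eigenvalues, the normalized eigenvectors form a unitary matrix. The angle \theta = 0.7 below is an arbitrary choice for illustration:

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Normality: R R^* = R^* R (R is real, so R^* = R^T).
print(np.allclose(R @ R.T, R.T @ R))                  # True

eigvals, U = np.linalg.eig(R)
expected = np.array([np.exp(1j * theta), np.exp(-1j * theta)])
print(np.allclose(np.sort_complex(eigvals), np.sort_complex(expected)))  # True

# Unitary diagonalization over C: R = U D U^* with U^* U = I.
print(np.allclose(U @ np.diag(eigvals) @ U.conj().T, R))  # True
```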

Relation to singular value decomposition

The singular value decomposition (SVD) provides a factorization for any m \times n complex matrix A, expressed as A = U \Sigma V^*, where U is an m \times m unitary matrix, V is an n \times n unitary matrix, and \Sigma is an m \times n rectangular diagonal matrix with non-negative real singular values \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_p \geq 0 on its main diagonal (with p = \min(m, n)). This decomposition generalizes the spectral decomposition to arbitrary matrices, including non-square and non-normal ones, by revealing a "diagonal" form in an appropriate basis. For a Hermitian matrix A (where A = A^*), the spectral decomposition yields A = Q \Lambda Q^*, with Q unitary and \Lambda diagonal containing the real eigenvalues \lambda_i. In this case, the singular values of A are precisely the absolute values \sigma_i = |\lambda_i|, and the SVD takes the form A = U \Sigma V^* where \Sigma = |\Lambda| (with signs absorbed into U or V), thus recovering the spectral decomposition up to phase adjustments in the singular vectors. This connection highlights how SVD extends the unitary diagonalization of Hermitian matrices to a broader setting while preserving the non-negativity of the diagonal entries. When A is normal (satisfying A A^* = A^* A), the spectral theorem provides a unitary diagonalization A = Q \Lambda Q^* with complex eigenvalues \lambda_i on the diagonal of \Lambda. Here, the SVD aligns closely with this decomposition, as the singular values are again \sigma_i = |\lambda_i|, and the left and right singular vectors correspond to the eigenvectors of A, up to scaling by the argument of each \lambda_i. This alignment demonstrates SVD as a non-negative real analog of the spectral decomposition for normal operators. Computationally, the SVD of a general A is often obtained by applying the spectral decomposition to the Hermitian matrices A A^* (yielding left singular vectors and squared singular values) and A^* A (yielding right singular vectors), thereby leveraging the spectral theorem for Hermitian matrices.

As an illustration of how the SVD generalizes to non-square matrices, consider the 2 \times 3 matrix X = \begin{pmatrix} 1 & -1 & 0 \\ 1 & 1 & 0 \end{pmatrix}. Its economy-size SVD is X = U \Sigma V^T, where U = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \quad \Sigma = \begin{pmatrix} \sqrt{2} & 0 \\ 0 & \sqrt{2} \end{pmatrix}, \quad V^T = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}, with singular vectors derived from the eigendecompositions of X X^T and X^T X; this form diagonalizes X in the sense of revealing its action as a weighted sum of orthogonal rank-one terms, extending beyond the square-case spectral decomposition.
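The 2 \times 3 example, and the link between singular values and the eigenvalues of the Hermitian matrix X X^T, can be verified numerically:

```python
import numpy as np

# The 2x3 example from the text.
X = np.array([[1.0, -1.0, 0.0],
              [1.0,  1.0, 0.0]])

U, s, Vt = np.linalg.svd(X, full_matrices=False)   # economy-size SVD
print(s)                                           # [sqrt(2), sqrt(2)]

# Left singular vectors diagonalize X X^T, with eigenvalues sigma_i^2.
gram_eigvals = np.linalg.eigvalsh(X @ X.T)
print(np.allclose(np.sort(gram_eigvals), np.sort(s**2)))   # True

# The factorization reconstructs X.
print(np.allclose(U @ np.diag(s) @ Vt, X))                 # True
```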

Infinite-dimensional case for compact operators

Self-adjoint compact operators

In the context of infinite-dimensional separable Hilbert spaces, spectral decomposition for self-adjoint compact operators provides a diagonal representation analogous to that of Hermitian matrices in finite dimensions, but with eigenvalues accumulating only at zero to account for the infinite dimensionality. A bounded linear operator K: H \to H on a Hilbert space H is called compact if the image of the closed unit ball \{x \in H : \|x\| \leq 1\} under K is relatively compact, meaning its closure is compact in the norm topology of H. Such an operator K is self-adjoint if it equals its adjoint, K = K^*, which implies that its spectrum lies on the real line. The spectral theorem for these operators states that if K is a compact self-adjoint operator on a separable Hilbert space H, then there exists a countable orthonormal set \{e_n\}_{n=1}^\infty \subset H (possibly finite) consisting of eigenvectors with corresponding real eigenvalues \{\lambda_n\}_{n=1}^\infty satisfying \lambda_n \to 0 as n \to \infty, such that Kx = \sum_{n=1}^\infty \lambda_n \langle x, e_n \rangle e_n for every x \in H, where the series converges in the norm of H. The eigenspace for the eigenvalue 0 may be infinite-dimensional; an orthonormal basis of it completes \{e_n\} to an orthonormal basis of H if necessary.

A proof sketch proceeds by approximating K in the operator norm by a sequence of finite-rank self-adjoint operators K_m, each of which admits a finite spectral decomposition by the finite-dimensional spectral theorem. The eigenvalues are then characterized variationally via the Courant-Fischer minimax principle: for a positive compact operator, the nth largest eigenvalue satisfies \lambda_n = \max_{\dim V = n} \min_{\substack{x \in V \\ \|x\|=1}} \langle Kx, x \rangle = \min_{\dim W = n-1} \max_{\substack{x \perp W \\ \|x\|=1}} \langle Kx, x \rangle, allowing one to show that the eigenvalues of K_m converge to those of K and that the corresponding eigenspaces stabilize to yield the orthonormal system of eigenvectors.

The non-zero spectrum of K consists solely of discrete eigenvalues of finite multiplicity, with 0 as the only possible accumulation point; this follows from the Fredholm alternative, which ensures that non-zero spectral points cannot form continuous parts or accumulate elsewhere. A concrete example arises with Hilbert-Schmidt integral operators on L^2[a,b], defined by (Kf)(x) = \int_a^b k(x,y) f(y) \, dy where the kernel k is continuous, symmetric, and positive semi-definite, ensuring K is compact and self-adjoint. By Mercer's theorem, under these conditions, k(x,y) = \sum_{n=1}^\infty \lambda_n \phi_n(x) \phi_n(y) uniformly on [a,b] \times [a,b], where \{\lambda_n\} are the non-negative eigenvalues of K with \lambda_n \to 0 and \{\phi_n\} form an orthonormal system of eigenfunctions in L^2[a,b].
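The eigenvalue behavior of such an integral operator can be approximated by quadrature. The sketch below (an assumed midpoint-rule discretization, not from the text) uses the kernel k(x,y) = \min(x,y) on L^2[0,1], whose eigenvalues are known in closed form to be 1/((n - 1/2)\pi)^2, and shows the computed spectrum matching them and decaying toward zero:

```python
import numpy as np

# Midpoint discretization of (Kf)(x) = \int_0^1 min(x, y) f(y) dy.
n = 400
h = 1.0 / n
x = (np.arange(n) + 0.5) * h                 # midpoint quadrature nodes
K = h * np.minimum.outer(x, x)               # symmetric quadrature matrix

eigvals = np.linalg.eigvalsh(K)[::-1]        # descending order

# Exact operator eigenvalues: 1 / ((n - 1/2) pi)^2, n = 1, 2, ...
exact = 1.0 / ((np.arange(1, 6) - 0.5) * np.pi) ** 2
print(eigvals[:5])                           # ~0.4053, 0.0450, 0.0162, ...
print(np.allclose(eigvals[:5], exact, atol=1e-3))   # True
print(eigvals[200] < 1e-4)                   # True: tail accumulates at zero
```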

Spectral theorem for compact operators

The spectral theorem for compact self-adjoint operators asserts that if T is a compact self-adjoint operator on a separable Hilbert space \mathcal{H}, then there exists an orthonormal basis \{ e_n \} of \mathcal{H} consisting of eigenvectors of T, with corresponding eigenvalues \{ \lambda_n \} satisfying \lambda_n \to 0 as n \to \infty, and each nonzero eigenvalue having finite multiplicity. This discrete spectrum, accumulating only at zero, distinguishes compact operators from more general bounded self-adjoint operators. The decomposition is unique in the sense that the eigenvalues \lambda_n (counted with multiplicity) and the associated orthogonal projections P_n onto the finite-dimensional eigenspaces \ker(T - \lambda_n I) are uniquely determined by T. Specifically, the projections satisfy P_n P_m = \delta_{nm} P_n and \sum_n P_n = I in the strong operator topology, ensuring the representation T = \sum_n \lambda_n P_n. The series \sum_n \lambda_n P_n converges to T at least in the strong operator topology, meaning that for every x \in \mathcal{H}, Tx = \sum_n \lambda_n P_n x with the partial sums converging in norm; because the eigenvalues decay to zero and the projections onto nonzero eigenspaces are mutually orthogonal with finite rank, the series in fact converges in the operator norm as well. This result connects to Riesz-Fredholm theory, where the essential spectrum of any compact operator on a Banach space is precisely \{0\}, reflecting the finite-dimensional nature of eigenspaces away from zero and the possibly infinite-dimensional kernel. The index of T - \lambda I is zero for \lambda \neq 0, underscoring the discrete structure enforced by compactness.
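A useful consequence of the norm convergence of \sum_n \lambda_n P_n is that truncating the sum after k terms leaves an operator-norm error of exactly the next largest |eigenvalue|. The finite-dimensional sketch below (a "compact-like" matrix with the hypothetical decaying sequence \lambda_n = 1/n and a random orthonormal eigenbasis) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 30
lambdas = 1.0 / np.arange(1, N + 1)          # decaying eigenvalue sequence

# Random orthonormal eigenbasis via QR; T = sum_n lambda_n P_n.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
T = Q @ np.diag(lambdas) @ Q.T

k = 5
T_k = Q[:, :k] @ np.diag(lambdas[:k]) @ Q[:, :k].T   # truncated spectral sum
err = np.linalg.norm(T - T_k, 2)                     # operator (spectral) norm

print(np.isclose(err, lambdas[k]))   # True: error = lambda_6 = 1/6
```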

General infinite-dimensional case

Bounded self-adjoint operators

For bounded self-adjoint operators on a Hilbert space, the spectral theorem provides a canonical form that generalizes the finite-dimensional diagonalization. Specifically, every bounded self-adjoint operator A on a separable Hilbert space H admits a unique spectral measure E, which is a projection-valued measure on the Borel \sigma-algebra of the real line such that A = \int_{\sigma(A)} \lambda \, dE(\lambda), where \sigma(A) denotes the spectrum of A and the integral is understood in the strong operator topology. This resolution of the identity expresses A as an "integral" over its spectrum, weighted by the spectral projections. The theorem was established by John von Neumann in his foundational work on quantum mechanics. A key consequence is that A is unitarily equivalent to multiplication by the identity function \lambda on the Hilbert space L^2(\sigma(A), \mu), where \mu is the scalar spectral measure \mu(\Delta) = \langle E(\Delta) \xi, \xi \rangle for some cyclic vector \xi \in H and Borel set \Delta \subseteq \sigma(A). This multiplication operator form highlights how the action of A corresponds to pointwise multiplication by spectral values, distributed according to the measure \mu. In this representation, the spectrum \sigma(A) is precisely the essential range of \lambda with respect to \mu.

Unlike the compact case, where the spectrum is discrete and consists of eigenvalues with orthonormal eigenvectors, bounded self-adjoint operators can have continuous spectrum without point spectrum (eigenvalues). A classic example is the multiplication operator M_x on L^2[0,1] defined by (M_x f)(x) = x f(x) for f \in L^2[0,1] with Lebesgue measure. Here, \sigma(M_x) = [0,1] is purely continuous, and there are no eigenvalues: supposing M_x f = \lambda f for some \lambda \in [0,1] and f \neq 0, then (x - \lambda) f(x) = 0 for almost every x, so f(x) = 0 almost everywhere on [0,1] \setminus \{\lambda\}, implying f = 0 almost everywhere, a contradiction. Thus, M_x has no eigenvectors, yet its spectral measure is supported on the continuous spectrum [0,1].

The spectral measure E defines invariant subspaces via projections E(\Delta) for Borel sets \Delta \subseteq \mathbb{R}, where E(\Delta) is the orthogonal projection onto the spectral subspace associated with \Delta \cap \sigma(A); these projections satisfy E(\Delta_1) E(\Delta_2) = E(\Delta_1 \cap \Delta_2) and A E(\Delta) = E(\Delta) A. The operator norm satisfies \|A\| = \sup \{ |\lambda| : \lambda \in \sigma(A) \}, reflecting the boundedness of the spectrum. In the compact case, this framework specializes to a discrete sum over Dirac measures at the eigenvalues. An illustrative example from quantum mechanics is the position operator Q on L^2(\mathbb{R}), which acts as multiplication by x and has spectral decomposition Q = \int_{\mathbb{R}} \lambda \, dE(\lambda), where E(\Delta) projects onto functions supported in \Delta \subseteq \mathbb{R}; this captures the continuous position spectrum over all real numbers.

Unbounded self-adjoint operators

In functional analysis, an unbounded self-adjoint operator A on a Hilbert space H is defined as a densely defined linear operator with domain D(A) such that D(A) is dense in H, A = A^* where A^* denotes the adjoint operator, and A has no finite operator norm, meaning \sup_{\|x\|=1, x \in D(A)} \|Ax\| = \infty. This contrasts with bounded self-adjoint operators, where the domain is the entire space and the norm is finite; here, the domain restriction is essential to ensure self-adjointness. The spectral theorem for unbounded self-adjoint operators extends the finite-dimensional diagonalization to infinite dimensions, stating that A is unitarily equivalent to multiplication by the independent variable \lambda on a direct sum of copies of L^2(\sigma(A), \mu), where \sigma(A) \subseteq \mathbb{R} is the spectrum of A and \mu is a \sigma-finite measure. Specifically, there exists a unitary operator U: H \to L^2(\sigma(A), \mu) such that U A U^{-1} f(\lambda) = \lambda f(\lambda) for functions f in the domain \{f \in L^2(\sigma(A), \mu) \mid \int_{\sigma(A)} |\lambda f(\lambda)|^2 \, d\mu(\lambda) < \infty\}. This representation highlights how the domain of A corresponds to functions where the multiplication remains square-integrable after weighting by \lambda, accommodating the unbounded nature of the spectrum, which typically extends to \pm \infty. Spectral multiplicity arises from the direct sum structure, where the number of copies reflects the dimension of the eigenspaces or generalized eigenspaces at each \lambda, while the essential spectrum consists of those \lambda \in \sigma(A) that are either accumulation points of the spectrum or eigenvalues of infinite multiplicity, remaining invariant under compact perturbations. For symmetric but not necessarily self-adjoint operators, the Friedrichs extension provides a way to obtain a self-adjoint extension, particularly for positive symmetric operators.

If T is a densely defined symmetric operator on H with \langle Tx, x \rangle \geq 0 for all x \in D(T), the Friedrichs extension is constructed by considering the quadratic form q(x) = \langle Tx, x \rangle on the form domain, completing it to a new Hilbert space, and defining the extension via the associated sesquilinear form, yielding a positive self-adjoint operator that extends T. This extension, introduced by Kurt Friedrichs, ensures the resulting operator is self-adjoint and preserves positivity, facilitating the application of the spectral theorem. A classic example is the differential operator A = -\frac{d^2}{dx^2} on the Hilbert space L^2[0, \pi] with Dirichlet boundary conditions, defined on the domain D(A) = \{f \in H^2[0, \pi] \cap H_0^1[0, \pi] \mid f(0) = f(\pi) = 0\}. This operator is unbounded and self-adjoint, with spectral decomposition given by eigenvalues \lambda_n = n^2 and corresponding orthonormal eigenfunctions \phi_n(x) = \sqrt{\frac{2}{\pi}} \sin(nx) for n = 1, 2, \dots, so that any f \in L^2[0, \pi] expands as f(x) = \sum_{n=1}^\infty c_n \phi_n(x) with Af = \sum_{n=1}^\infty n^2 c_n \phi_n(x) for f \in D(A). Here, the spectrum is discrete \{n^2 \mid n \in \mathbb{N}\}, with empty essential spectrum and multiplicity one for each eigenvalue, illustrating how boundary conditions enforce self-adjointness and determine the spectral properties.
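The Dirichlet example can be approximated with a standard second-difference discretization (an assumption of this sketch, not part of the text): the tridiagonal matrix with stencil (-1, 2, -1)/h^2 on an interior grid of [0, \pi] encodes both -d^2/dx^2 and the boundary conditions f(0) = f(\pi) = 0, and its low eigenvalues reproduce n^2 to O(h^2):

```python
import numpy as np

N = 500                      # interior grid points
h = np.pi / (N + 1)          # grid spacing on [0, pi]

# Second-difference matrix for -f'' with Dirichlet boundary conditions.
main = 2.0 * np.ones(N)
off = -1.0 * np.ones(N - 1)
A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

eigvals = np.linalg.eigvalsh(A)       # ascending
print(eigvals[:4])                    # close to 1, 4, 9, 16
print(np.allclose(eigvals[:4], [1.0, 4.0, 9.0, 16.0], rtol=1e-4))   # True
```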

Spectral measures and functional calculus

In the spectral theorem for self-adjoint operators on a Hilbert space, the resolution of the identity, also known as the spectral measure E, plays a central role. It is a family of orthogonal projections \{E(\Delta)\}_{\Delta \in \mathcal{B}(\mathbb{R})}, where \mathcal{B}(\mathbb{R}) denotes the Borel \sigma-algebra on \mathbb{R}, such that E(\mathbb{R}) = I (the identity operator) and, for disjoint Borel sets \Delta_1, \Delta_2 \in \mathcal{B}(\mathbb{R}), E(\Delta_1 \cup \Delta_2) = E(\Delta_1) + E(\Delta_2). This measure is countably additive in the strong operator topology, meaning that for any countable collection of pairwise disjoint Borel sets \{\Delta_n\}_{n=1}^\infty, \sum_{n=1}^\infty E(\Delta_n) = E\left(\bigcup_{n=1}^\infty \Delta_n\right) strongly. The self-adjoint operator A is then represented as A = \int_{\mathbb{R}} \lambda \, dE(\lambda), where the integral is understood in the strong sense.

The Borel functional calculus arises naturally from the spectral measure E. For any Borel function f: \mathbb{R} \to \mathbb{C}, the operator f(A) is defined by f(A) = \int_{\mathbb{R}} f(\lambda) \, dE(\lambda), where the integral is again understood in the strong sense. This construction extends the polynomial functional calculus, as polynomials in A correspond to polynomial functions of \lambda, and it preserves the algebraic structure: if f and g are Borel functions, then f(A)g(A) = (fg)(A) and f(A) + g(A) = (f + g)(A). Moreover, f(A) commutes with A for any such f, and if f is real-valued, f(A) is self-adjoint. When f is unbounded, the domain of f(A) consists of those vectors \xi \in H for which \int_{\mathbb{R}} |f(\lambda)|^2 \, d\|E(\lambda)\xi\|^2 < \infty. Stone's formula provides an explicit way to recover the spectral projections from the resolvent for intervals. For -\infty < a < b < \infty, E((a,b)) = \frac{1}{2\pi i} \lim_{\varepsilon \to 0^+} \int_a^b \left[ R(\lambda + i\varepsilon, A) - R(\lambda - i\varepsilon, A) \right] d\lambda, with adjustments at the endpoints to obtain projections like E((a,b]).

For an isolated point \lambda \in \mathbb{R} of the spectrum, the projection onto the corresponding spectral subspace is given by the Riesz projection E(\{\lambda\}) = \frac{1}{2\pi i} \oint_{\gamma} (zI - A)^{-1} \, dz, where \gamma is a positively oriented closed contour enclosing \lambda but no other points of the spectrum; in the resolvent convention R(z, A) = (A - zI)^{-1} used above, this reads E(\{\lambda\}) = -\frac{1}{2\pi i} \oint_{\gamma} R(z, A) \, dz. The direct integral representation offers another perspective on the spectral decomposition. The Hilbert space H is unitarily equivalent to a direct integral \int^\oplus_{\sigma(A)} H_\lambda \, d\mu(\lambda), where \sigma(A) is the spectrum of A, \mu is a suitable measure on \sigma(A), and each H_\lambda is a fiber Hilbert space. In this representation, A acts as multiplication by the independent variable \lambda on the direct integral, and the spectral measure E corresponds to multiplication by indicator functions over these fibers. Regarding multiplicity, a self-adjoint operator A is called simple if there exists a cyclic vector \xi \in H such that the closed linear span of \{p(A)\xi \mid p \text{ polynomial}\} is all of H, or equivalently, the scalar spectral measure \langle E(\cdot)\xi, \xi \rangle has full support on \sigma(A). In general, the multiplicity function m: \mathbb{R} \to \mathbb{N} \cup \{\infty\} describes the dimension of the fibers H_\lambda in the direct integral, capturing the "degeneracy" of the spectrum with respect to the spectral measure.
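The contour-integral formula for an isolated eigenvalue can be evaluated numerically in finite dimensions. The sketch below (a hypothetical 3 \times 3 diagonal example, using the convention (zI - A)^{-1} inside the integral) applies trapezoidal quadrature on a small circle around the eigenvalue 1, which is spectrally accurate for periodic integrands, and recovers the rank-two projection onto its eigenspace:

```python
import numpy as np

A = np.diag([1.0, 1.0, 4.0])          # eigenvalue 1 with multiplicity 2
center, radius, M = 1.0, 0.5, 64      # circle enclosing only lambda = 1

# Trapezoidal rule for P = (1 / 2 pi i) \oint (zI - A)^{-1} dz.
P = np.zeros((3, 3), dtype=complex)
for t in 2 * np.pi * np.arange(M) / M:
    z = center + radius * np.exp(1j * t)
    dz = 1j * radius * np.exp(1j * t) * (2 * np.pi / M)   # dz along the circle
    P += np.linalg.inv(z * np.eye(3) - A) * dz
P /= 2j * np.pi

# P is the orthogonal projection onto the eigenspace for lambda = 1.
print(np.allclose(P, np.diag([1.0, 1.0, 0.0]), atol=1e-8))   # True
```

The eigenvalue 4 lies outside the contour, so its contribution to the integral vanishes, exactly as the Riesz formula predicts.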

Applications

In quantum mechanics

In quantum mechanics, physical observables such as position, momentum, and energy are represented by self-adjoint operators on a Hilbert space, ensuring that their eigenvalues are real numbers corresponding to the possible outcomes of measurements. The spectrum of such an operator determines the set of measurable values for the observable. The spectral decomposition theorem provides a rigorous framework for these operators, expressing a self-adjoint operator A as A = \int \lambda \, dE(\lambda), where E is a projection-valued measure (spectral measure) over the Borel subsets of the real line, with the integral taken with respect to this measure. For a state represented by a unit vector \psi in the Hilbert space, the probability of obtaining a measurement outcome in a Borel set \Delta \subseteq \mathbb{R} is given by \|E(\Delta) \psi\|^2 = \langle \psi | E(\Delta) | \psi \rangle, which quantifies the likelihood associated with the projection onto the corresponding spectral subspace.

In the Heisenberg picture, where states are time-independent and operators evolve, the time evolution of an observable A is governed by the unitary group generated by the Hamiltonian H, yielding A(t) = e^{i t H / \hbar} A e^{-i t H / \hbar}. This evolution leverages the functional calculus from the spectral theorem, allowing functions of operators like f(A) = \int f(\lambda) \, dE(\lambda) to define dynamics, such as the unitary propagator e^{-i t H / \hbar} for the Hamiltonian itself.

A canonical example is the quantum harmonic oscillator, whose Hamiltonian is H = \frac{p^2}{2m} + \frac{1}{2} m \omega^2 x^2, where p and x are the momentum and position operators (unbounded self-adjoint operators). Using ladder operators a = \sqrt{\frac{m \omega}{2 \hbar}} \left( x + \frac{i p}{m \omega} \right) and a^\dagger = \sqrt{\frac{m \omega}{2 \hbar}} \left( x - \frac{i p}{m \omega} \right), the Hamiltonian decomposes as H = \hbar \omega \left( a^\dagger a + \frac{1}{2} \right), revealing a discrete spectrum of eigenvalues (n + \frac{1}{2}) \hbar \omega for n = 0, 1, 2, \dots, with corresponding energy eigenstates forming a complete orthonormal basis.

The projections E(\Delta) in the spectral measure connect directly to the collapse postulate of measurement: upon observing an outcome in \Delta, the state collapses to the normalized projection E(\Delta) \psi / \|E(\Delta) \psi\|, reflecting the update of the quantum state consistent with the probabilistic interpretation.
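The oscillator spectrum can be reproduced with truncated ladder-operator matrices. The sketch below (units \hbar = m = \omega = 1, and a truncation dimension N = 60 chosen for illustration) builds x and p from a and a^\dagger, assembles H = p^2/2 + x^2/2, and recovers the low-lying levels n + 1/2; truncation only corrupts levels near the cutoff:

```python
import numpy as np

N = 60
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation: a|n> = sqrt(n)|n-1>
adag = a.T.copy()                          # creation operator

x = (a + adag) / np.sqrt(2)                # position operator
p = 1j * (adag - a) / np.sqrt(2)           # momentum operator

H = 0.5 * (p @ p + x @ x)                  # H = p^2/2 + x^2/2

levels = np.linalg.eigvalsh(H)             # ascending
print(levels[:5])                          # [0.5, 1.5, 2.5, 3.5, 4.5]
print(np.allclose(levels[:5], np.arange(5) + 0.5))   # True
```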

In numerical analysis and signal processing

In numerical analysis, the spectral decomposition of Hermitian matrices is computed using stable iterative algorithms that exploit the matrix's symmetry to ensure real eigenvalues and orthogonal eigenvectors. The QR algorithm, adapted for Hermitian matrices, iteratively applies QR factorizations with shifts to converge to diagonal form, achieving rapid (typically cubic, with Wilkinson shifts) convergence near eigenvalues. This method forms the basis for implementations in libraries like LAPACK, where it computes the full eigendecomposition for dense matrices up to moderate sizes (e.g., n ≈ 1000). For large sparse symmetric matrices, the Lanczos algorithm generates a tridiagonal matrix via Krylov subspace iteration, reducing the problem to a smaller dense eigendecomposition while approximating extremal eigenvalues efficiently. It requires O(k n) operations for k iterations on a sparse n × n matrix, making it suitable for large-scale simulations where only a few eigenvalues are needed. Reorthogonalization techniques mitigate the loss of orthogonality in the Lanczos vectors, ensuring numerical stability.

Error analysis in spectral decomposition emphasizes the conditioning of eigenvalues, which measures their sensitivity to perturbations: for a simple eigenvalue λ_i, the attainable accuracy degrades as the gap |λ_i - λ_j| to the closest distinct eigenvalue λ_j shrinks and as the condition number κ(V) of the eigenvector matrix grows. The Bauer-Fike theorem provides a disk-based bound, stating that each perturbed eigenvalue lies within a disk of radius proportional to κ(V) ε centered at an original eigenvalue, where V diagonalizes the matrix and ε is the norm of the perturbation. This quantifies risks in finite-precision arithmetic and motivates shift strategies such as Wilkinson's to improve stability. In signal processing, the Fourier transform realizes the spectral decomposition of the translation operator on L^2 functions, diagonalizing it with complex exponentials as eigenfunctions, and its discrete counterpart enables efficient frequency-domain analysis via the fast Fourier transform (FFT).
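The discrete version of the last point is that the DFT diagonalizes the cyclic shift operator, and more generally every circulant matrix. The following sketch conjugates the shift matrix by the unitary DFT matrix and confirms that the result is diagonal, with eigenvalues given by the FFT of the first column:

```python
import numpy as np

n = 8
S = np.roll(np.eye(n), 1, axis=0)        # cyclic shift (translation) operator

F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
D = F @ S @ F.conj().T                   # DFT conjugation of the shift

# D is diagonal, so the DFT vectors are the eigenvectors of S.
print(np.allclose(D, np.diag(np.diag(D)), atol=1e-12))   # True
print(np.allclose(np.diag(D), np.fft.fft(S[:, 0])))      # eigenvalues = FFT of column
```

The eigenvalues here are the unit-modulus phases e^{-2\pi i k/n}, one per discrete frequency k, which is the matrix analogue of diagonalizing translation by complex exponentials.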
Principal component analysis (PCA) applies eigendecomposition to the covariance matrix of data, projecting observations onto directions of maximum variance; the eigenvalues quantify explained variance, with the first few components often capturing over 90% in high-dimensional datasets such as images. For centered data, this is equivalent to the singular value decomposition (SVD) of the data matrix. A practical example is image compression, where the discrete cosine transform (DCT) performs block-wise spectral decomposition, approximating the Karhunen-Loève transform for stationary processes and relating to the eigendecomposition of circulant matrices through its cosine basis functions. Low-frequency coefficients are retained after quantization, achieving compression ratios of up to 20:1 with minimal perceptual loss, since higher frequencies correspond to fine detail.

Challenges arise with ill-conditioned matrices, where clustered eigenvalues amplify errors, potentially causing O(ε n^{3/2}) relative perturbations in computed eigenvalues, as shown by backward error analysis. Deflation techniques address this by isolating converged eigenpairs, either by subtracting rank-one updates or by applying implicit shifts, allowing accurate computation of the remaining eigenpairs without full recomputation; this is essential for handling multiple eigenvalues in applications such as vibration analysis.
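The PCA computation (eigendecomposition of the sample covariance matrix) can be sketched with NumPy on synthetic two-dimensional data; the latent variances, rotation angle, and sample size below are illustrative assumptions.

```python
import numpy as np

# Synthetic data: two independent latent directions with standard deviations
# 5 and 1, rotated by 30 degrees into the observed coordinates.
rng = np.random.default_rng(0)
n = 5000
latent = rng.standard_normal((n, 2)) * np.array([5.0, 1.0])
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = latent @ R.T

Xc = X - X.mean(axis=0)                  # center the data
C = Xc.T @ Xc / (n - 1)                  # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)     # ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # sort descending by variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
explained = eigvals / eigvals.sum()      # fraction of variance per component
scores = Xc @ eigvecs                    # projection onto principal directions
```

Since the latent variances are 25 and 1, the first component explains roughly 25/26 ≈ 96% of the variance, and the leading eigenvector recovers the planted rotation direction up to sign.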

Extensions to other decompositions

Spectral decomposition generalizes to non-normal operators via the Jordan canonical form, which replaces the diagonal matrix with a block structure consisting of Jordan blocks corresponding to defective eigenvalues, where each block features the eigenvalue on the diagonal and ones on the superdiagonal. This form expresses the operator as A = P J P^{-1}, with J upper triangular and P the matrix of generalized eigenvectors, extending the diagonal case of the spectral theorem for normal operators. Another extension is the polar decomposition, which factors any square matrix A as A = U |A|, where U is unitary and |A| = \sqrt{A^* A} is positive semi-definite; this links directly to the singular value decomposition (SVD), which provides the explicit form of |A| via its singular values and of U via the corresponding unitary factors.

For operators with continuous spectrum, frame and wavelet decompositions offer generalized spectral bases, using overcomplete, non-orthogonal systems to represent signals or functions in a manner analogous to eigenbasis expansions while accommodating redundancy and localization in both time and frequency. These approaches, rooted in frame theory, enable decompositions for multiplication or integral operators where point eigenvalues are absent. The foundational work on the infinite-dimensional spectral theorem for normal operators on Hilbert space was developed by John von Neumann in 1929, unifying earlier finite-dimensional results with the operator theory of quantum mechanics. Despite these extensions, limitations persist: not all bounded operators admit a diagonalizing spectral decomposition, particularly non-normal ones, for which the Jordan form reveals algebraic multiplicity exceeding geometric multiplicity. For such operators, pseudospectra (regions where the resolvent norm is large and the spectrum is therefore sensitive to perturbations) extend the analysis beyond eigenvalues, quantifying transient growth and instability not captured by the point spectrum alone.
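The polar factors can be obtained directly from the SVD, as noted above; a minimal NumPy sketch follows, where the random 4 × 4 real matrix is an illustrative choice (almost surely invertible, so U is orthogonal).

```python
import numpy as np

# Polar decomposition A = U |A| computed from the SVD A = W diag(s) V^T:
# U = W V^T is orthogonal, and |A| = V diag(s) V^T = sqrt(A^T A) is
# positive semi-definite.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))

W, s, Vh = np.linalg.svd(A)
U = W @ Vh                       # orthogonal (unitary) factor
P = Vh.T @ np.diag(s) @ Vh       # positive semi-definite factor |A|
```

By construction U @ P reproduces A exactly (up to rounding), and P squares to A^T A, confirming P = \sqrt{A^* A} for this real example.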

    Trefethen, Lloyd N. (Lloyd Nicholas). Spectra and pseudospectra: the behavior of nonnormal matrices and operators / Lloyd N. Trefethen and Mark Embree p. cm ...Missing: URL | Show results with:URL