
Hermitian matrix

In mathematics, a Hermitian matrix is a square matrix with complex entries that is equal to its own conjugate transpose, meaning A = A^\dagger, where A^\dagger denotes the conjugate transpose of A.[ScienceDirect] For real-valued matrices, this condition simplifies to symmetry, as the conjugate transpose reduces to the ordinary transpose.[ScienceDirect] Hermitian matrices play a central role in linear algebra and functional analysis due to their rich spectral properties: all eigenvalues are real, and the matrix can be unitarily diagonalized, with eigenvectors forming an orthonormal basis.[Purdue University Lecture Notes] Specifically, for any Hermitian matrix A, the eigenvalues \lambda satisfy \lambda \in \mathbb{R}, eigenvectors corresponding to distinct eigenvalues are orthogonal, and there exists a unitary matrix U such that U^\dagger A U is diagonal with the eigenvalues on the main diagonal.[Purdue University Lecture Notes] These properties extend the analogous results for real symmetric matrices and ensure that the quadratic form \mathbf{v}^\dagger A \mathbf{v} is real for any complex vector \mathbf{v}.[San Diego State University Lecture Notes] Beyond pure mathematics, Hermitian matrices are fundamental in quantum mechanics, where they represent observable physical quantities such as energy (via the Hamiltonian), position, and momentum, guaranteeing real measurement outcomes. They also arise in signal processing, statistics (e.g., covariance matrices), and optimization, where positive semi-definite Hermitian matrices define inner products and semi-norms in complex Hilbert spaces. The spectral theorem for Hermitian matrices is central to the analysis of numerical algorithms, such as the QR algorithm, for eigenvalue computation.

Definition and Characterizations

Definition

A complex matrix is a matrix whose entries are complex numbers from the field ℂ. The transpose of a matrix A = (a_{ij}) is the matrix A^T defined by (A^T)_{ij} = a_{ji}. The conjugate transpose of A, also known as the Hermitian adjoint and denoted A^H, is the matrix obtained by first transposing A and then taking the complex conjugate of each entry, so (A^H)_{ij} = \overline{a_{ji}}, where \overline{\cdot} denotes complex conjugation. A square matrix A \in \mathbb{C}^{n \times n} is Hermitian if it equals its conjugate transpose: A = A^H. In physics contexts, the conjugate transpose is often denoted by a dagger superscript, A^\dagger. The term "Hermitian matrix" is named after the French mathematician Charles Hermite (1822–1901), who proved in 1855 that the eigenvalues of such matrices are real, building on earlier 18th- and 19th-century work on quadratic forms.

Conjugate Transpose Equality

A Hermitian matrix A satisfies the defining equality A = A^H, where A^H denotes the conjugate transpose of A. This condition requires that the (i,j)-th entry of A equals the complex conjugate of the (j,i)-th entry, expressed as a_{ij} = \overline{a_{ji}} for all indices i, j. To derive this entry-wise relation, consider the entries of the conjugate transpose: the (i,j)-th entry of A^H is \overline{a_{ji}}. Thus, the equality A = A^H directly implies a_{ij} = \overline{a_{ji}} for every i and j. For the diagonal elements, setting i = j yields a_{ii} = \overline{a_{ii}}, so each diagonal entry must be a real number. Off-diagonal elements, where i \neq j, are complex conjugates of each other, ensuring the matrix's symmetry under conjugation and transposition. This conjugate transpose condition generalizes the real symmetric case, where A = A^T holds because complex conjugation has no effect on real entries; in the complex domain, however, the conjugation is essential to account for non-real components. In contrast, a skew-Hermitian matrix satisfies A = -A^H, leading to the entry-wise relation a_{ij} = -\overline{a_{ji}}.

Real-Valued Quadratic Forms

A square matrix A \in \mathbb{C}^{n \times n} is Hermitian if and only if the associated quadratic form x^H A x takes real values for every x \in \mathbb{C}^n. This characterization highlights the close relationship between Hermitian matrices and sesquilinear forms over complex spaces. The quadratic form arises naturally from the sesquilinear form defined by A, where \langle x, y \rangle_A = x^H A y, and evaluating it on the diagonal \langle x, x \rangle_A = x^H A x yields a real scalar precisely when A is Hermitian. The explicit expansion of the quadratic form is x^H A x = \sum_{i=1}^n \sum_{j=1}^n \bar{x}_i a_{ij} x_j. This double sum encodes the conjugate symmetry of the entries of A, since a_{ji} = \bar{a}_{ij} for Hermitian matrices. To verify that the form is real-valued when A = A^H, consider the complex conjugate: since x^H A x is a scalar, \overline{x^H A x} = (x^H A x)^H = x^H A^H x. Substituting A^H = A immediately shows \overline{x^H A x} = x^H A x, confirming the result holds for all x. For the converse, suppose x^H A x \in \mathbb{R} for all x \in \mathbb{C}^n. Let B = A - A^H, which is skew-Hermitian (B^H = -B). Then x^H B x = x^H A x - x^H A^H x = x^H A x - \overline{x^H A x} = 0, since the quadratic form is real. The vanishing of this quadratic form for the skew-Hermitian matrix B implies B = 0 via polarization: the full sesquilinear form can be recovered from the quadratic form using identities such as 4 \langle u, v \rangle = \langle u+v, u+v \rangle - \langle u-v, u-v \rangle - i \langle u + i v, u + i v \rangle + i \langle u - i v, u - i v \rangle, which must be zero for all u, v due to the reality condition, forcing A = A^H. Hermitian matrices also underpin positive semi-definite sesquilinear forms; specifically, if all eigenvalues of A are non-negative, then x^H A x \geq 0 for all x, defining a positive semi-definite form that generalizes the standard inner product on \mathbb{C}^n.
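
As a concrete numerical illustration of this characterization, the following minimal NumPy sketch (the test data and variable names are illustrative, not from the original text) checks that x^H A x is real up to floating-point error for a Hermitian A, while the same form for a generic complex matrix is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (M + M.conj().T) / 2                 # Hermitian part of M, so A = A^H
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

q_herm = x.conj() @ A @ x                # x^H A x with A Hermitian
q_gen = x.conj() @ M @ x                 # x^H M x with M generic

print(abs(q_herm.imag))                  # ~1e-16: real up to rounding
print(abs(q_gen.imag))                   # O(1): generically complex
```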

Spectral Characterization

A square matrix A over the complex numbers is Hermitian if and only if it has real eigenvalues and admits a unitary diagonalization, where the eigenvectors form an orthonormal basis. This equivalence highlights the deep connection between the Hermitian property and the spectral structure of such matrices. The spectral theorem provides a precise formulation: for any n \times n Hermitian matrix A, there exists a unitary matrix U (satisfying U^H U = I) and a real diagonal matrix D such that A = U D U^H, where the diagonal entries of D are the eigenvalues of A, all of which are real. The columns of U are the corresponding orthonormal eigenvectors of A, ensuring that eigenvectors associated with distinct eigenvalues are orthogonal with respect to the standard Hermitian inner product. The eigenvalues are uniquely determined up to ordering, reflecting the intrinsic spectral content of the matrix. In particular, if A is positive semi-definite—meaning x^H A x \geq 0 for all nonzero vectors x \in \mathbb{C}^n—then all eigenvalues of A are non-negative real numbers. In contrast, non-Hermitian complex matrices generally possess eigenvalues that may be complex and do not necessarily admit unitary diagonalization, often requiring more general similarity transformations that preserve neither orthonormality nor reality of the spectrum.

Elementary Properties

Real Diagonal Entries

A Hermitian matrix A \in \mathbb{C}^{n \times n} satisfies A = A^\dagger, where A^\dagger denotes the conjugate transpose of A. The diagonal entries of such a matrix are necessarily real numbers. To see this, consider the (i,i)-th entry: a_{ii} = (A^\dagger)_{ii} = \overline{a_{ii}}, where the overline denotes complex conjugation. This equality implies that the imaginary part of a_{ii} must be zero, as any nonzero imaginary component would violate the relation. Formally, if a_{ii} = x + iy with x, y \in \mathbb{R} and y \neq 0, then \overline{a_{ii}} = x - iy \neq a_{ii}, contradicting the Hermitian condition. Thus, y = 0, and a_{ii} = x \in \mathbb{R} for each i = 1, \dots, n. This property has direct implications for key invariants of the matrix. The trace of A, defined as \operatorname{Tr}(A) = \sum_{i=1}^n a_{ii}, is therefore a real number, as it is the sum of real scalars. Moreover, since the trace equals the sum of the eigenvalues of A (which are themselves real for Hermitian matrices), this reinforces the reality of the spectrum. In applications such as quantum mechanics, where Hermitian matrices represent observables, the real diagonal entries correspond to real-valued measurements in the basis where the matrix is expressed, ensuring physical quantities like expectation values remain real.
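
A quick numerical spot-check of these facts (a sketch assuming NumPy, with arbitrary test data) confirms that the diagonal and trace of a Hermitian matrix are real, and that the trace equals the sum of the real eigenvalues.

```python
import numpy as np

M = np.array([[1 + 2j, 3 - 1j],
              [0.5j, -4 + 0j]])
A = (M + M.conj().T) / 2         # Hermitianize: diagonal imaginary parts cancel

print(np.diag(A))                 # diagonal entries are real
w = np.linalg.eigvalsh(A)         # real eigenvalues of the Hermitian matrix
print(np.isclose(np.trace(A).real, w.sum()))   # trace = sum of eigenvalues
```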

Symmetry in Real Case

When all entries of a Hermitian matrix A are real numbers, the condition A = A^* reduces to A = A^T, meaning A is symmetric. This equivalence holds because the conjugate transpose A^* of a real matrix simplifies to its ordinary transpose A^T. To see this, note that for a real matrix, the complex conjugate satisfies \bar{A} = A, so A^* = \bar{A}^T = A^T. Thus, A = A^* if and only if A = A^T. The study of symmetric matrices predates that of Hermitian matrices, with foundational work including Augustin-Louis Cauchy's 1829 proof that the eigenvalues of a real symmetric matrix are real. Charles Hermite introduced the concept of Hermitian matrices in 1855 while investigating quadratic forms and their transformations. Real symmetric matrices therefore form a proper subset of the Hermitian matrices, as the latter also encompass matrices with genuinely complex entries satisfying a_{ij} = \overline{a_{ji}} for all i, j.

Inner Product Preservation

A positive definite Hermitian matrix A \in \mathbb{C}^{n \times n} induces an inner product on the complex vector space \mathbb{C}^n defined by \langle x, y \rangle_A = y^H A x, where ^H denotes the conjugate transpose. This construction generalizes the standard inner product \langle x, y \rangle = y^H x, which corresponds to the identity matrix I. The form \langle \cdot, \cdot \rangle_A satisfies the axioms of an inner product: it is conjugate symmetric, linear in the first argument, conjugate linear in the second, and positive definite. The Hermitian property of A (i.e., A^H = A) guarantees that \langle x, y \rangle_A = \overline{\langle y, x \rangle_A} and ensures the quadratic form \langle x, x \rangle_A = x^H A x is real-valued for all x \in \mathbb{C}^n. Positive definiteness of A, meaning x^H A x > 0 for all nonzero x, further implies \langle x, x \rangle_A > 0 for x \neq 0, establishing a norm \|x\|_A = \sqrt{\langle x, x \rangle_A}. This condition is equivalent to all eigenvalues of A being positive. The collection of all Hermitian matrices forms a real vector space under addition and scalar multiplication by real numbers, with dimension n^2. A basis consists of n(n+1)/2 real symmetric matrices for the real parts (including the diagonal) and n(n-1)/2 imaginary skew-symmetric matrices for the imaginary parts of the off-diagonal entries. In contrast, a non-Hermitian matrix B generally yields x^H B x that is complex-valued, failing to produce a real non-negative norm and thus not defining a valid inner product.
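
The following minimal sketch (assuming NumPy; the helper inner_A and the test matrices are illustrative, not library functions) implements the A-weighted form in the convention \langle x, y \rangle_A = y^H A x used above and checks conjugate symmetry and positivity.

```python
import numpy as np

def inner_A(x, y, A):
    """A-weighted form <x, y>_A = y^H A x (illustrative helper)."""
    return y.conj() @ A @ x

rng = np.random.default_rng(1)
n = 3
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M.conj().T @ M + n * np.eye(n)   # Hermitian positive definite by construction
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Conjugate symmetry: <x, y>_A equals the conjugate of <y, x>_A
print(np.isclose(inner_A(x, y, A), np.conj(inner_A(y, x, A))))
# Positivity: <x, x>_A is real and strictly positive for x != 0
q = inner_A(x, x, A)
print(q.real > 0, abs(q.imag) < 1e-10)
```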

Algebraic Operations

Addition and Scalar Multiplication

The set of Hermitian matrices is closed under addition. If A and B are n \times n Hermitian matrices, then A + B is also Hermitian because (A + B)^* = A^* + B^* = A + B, where * denotes the conjugate transpose. Hermitian matrices are also closed under multiplication by real scalars. For a real number \lambda and an n \times n Hermitian matrix A, the matrix \lambda A is Hermitian since (\lambda A)^* = \overline{\lambda} A^* = \lambda A, as \lambda is real. However, multiplication by a complex scalar \mu generally does not preserve hermiticity: (\mu A)^* = \overline{\mu} A^*, which equals \mu A only if \mu is real. These closure properties imply that the set of all n \times n Hermitian matrices forms a real vector space under addition and real scalar multiplication. This space has dimension n^2 over the reals, as there are n real parameters for the diagonal entries and n(n-1) real parameters for the strictly upper-triangular entries (each complex entry contributes two real parameters, and the lower triangle is then fixed by conjugation).
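
A short NumPy sketch (illustrative test matrices only) demonstrates these closure properties and the failure under a complex scalar.

```python
import numpy as np

def is_hermitian(X, tol=1e-12):
    # helper for this sketch only
    return np.allclose(X, X.conj().T, atol=tol)

A = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])
B = np.array([[0.0, 2j], [-2j, -1.0]])

print(is_hermitian(A + B))     # True: closed under addition
print(is_hermitian(2.5 * A))   # True: closed under real scalars
print(is_hermitian(1j * A))    # False: complex scalars break hermiticity
```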

Inverses

If a Hermitian matrix A is invertible, then its inverse A^{-1} is also Hermitian. To see this, note that the Hermitian adjoint satisfies (A^{-1})^H = (A^H)^{-1}. Since A is Hermitian, A^H = A, so (A^{-1})^H = A^{-1}, confirming that the inverse is Hermitian. The determinant of a Hermitian matrix is real. A Hermitian matrix is non-invertible if and only if it has a zero eigenvalue.
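
A quick check of both claims (a sketch assuming NumPy; the 2×2 example is arbitrary and invertible, with determinant 4):

```python
import numpy as np

A = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])    # Hermitian, det(A) = 4
Ainv = np.linalg.inv(A)

print(np.allclose(Ainv, Ainv.conj().T))         # True: inverse is Hermitian
print(np.isclose(np.linalg.det(A).imag, 0.0))   # True: determinant is real
```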

Products under Commutativity

The product of two Hermitian matrices A and B is Hermitian if and only if A and B commute, that is, AB = BA. To verify this, compute the conjugate transpose: (AB)^H = B^H A^H = BA, since A^H = A and B^H = B. Thus, (AB)^H = AB holds precisely when BA = AB. If A and B do not commute, their product AB is generally not Hermitian. A concrete counterexample arises with the Pauli matrices \sigma_x and \sigma_y, both of which are Hermitian, yet \sigma_x \sigma_y = i \sigma_z, where i \sigma_z is anti-Hermitian because (i \sigma_z)^H = -i \sigma_z. A notable form that yields a Hermitian result without requiring commutativity is A^H B A, where B is Hermitian (A may be any square matrix): (A^H B A)^H = A^H B^H A = A^H B A. For multiple matrices, if Hermitian matrices A, B, and C pairwise commute, then the associative product (AB)C (or A(BC)) is Hermitian, as each successive pairwise product preserves hermiticity under commutativity.
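
The Pauli-matrix counterexample above can be verified directly; this is a minimal NumPy sketch, with the standard Pauli matrices written out explicitly.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

P = sx @ sy                            # equals i * sigma_z
print(np.allclose(P, 1j * sz))         # True
print(np.allclose(P, P.conj().T))      # False: the product is not Hermitian
print(np.allclose(P.conj().T, -P))     # True: it is anti-Hermitian
```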

Spectral Theory

Normality

A matrix A \in \mathbb{C}^{n \times n} is defined as normal if it commutes with its conjugate transpose, that is, A A^H = A^H A, where A^H denotes the conjugate transpose of A. Every Hermitian matrix is normal. To see this, let A be Hermitian, so A^H = A. Then A A^H = A A = A^2 and A^H A = A A = A^2, hence A A^H = A^H A. The set of Hermitian matrices forms a proper subset of the set of normal matrices, as there exist normal matrices that are not Hermitian, such as unitary matrices with non-real eigenvalues. As a consequence, Hermitian matrices inherit the property of all normal matrices that they are unitarily diagonalizable. However, Hermitian matrices possess the additional feature that all their eigenvalues are real, distinguishing them within the class of normal matrices.
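
A tiny numerical confirmation (sketch assuming NumPy, random test matrix) that a Hermitian matrix commutes with its conjugate transpose:

```python
import numpy as np

rng = np.random.default_rng(9)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (M + M.conj().T) / 2                              # Hermitian
print(np.allclose(A @ A.conj().T, A.conj().T @ A))    # True: A is normal
```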

Unitary Diagonalizability

A fundamental property of Hermitian matrices is their unitary diagonalizability, encapsulated in the spectral theorem. Specifically, every n \times n Hermitian matrix A admits a unitary matrix U \in \mathbb{C}^{n \times n} and a real diagonal matrix D = \operatorname{diag}(\lambda_1, \dots, \lambda_n) such that A = U D U^H, where the \lambda_i are the eigenvalues of A. This result follows from the fact that Hermitian matrices are normal, meaning A A^H = A^H A, and the spectral theorem for normal matrices guarantees the existence of a unitary diagonalization; the Hermiticity of A further ensures that all eigenvalues are real, making D real-valued. The proof proceeds by first establishing that eigenvectors corresponding to distinct eigenvalues are orthogonal with respect to the standard Hermitian inner product \langle x, y \rangle = y^H x. To verify this, suppose A v = \lambda v and A w = \mu w with \lambda \neq \mu, both eigenvalues real. Then, \langle A v, w \rangle = \lambda \langle v, w \rangle and also equals \langle v, A w \rangle = \mu \langle v, w \rangle, yielding (\lambda - \mu) \langle v, w \rangle = 0, so \langle v, w \rangle = 0. For eigenvalues with algebraic multiplicity greater than one, the corresponding eigenspace is invariant under A, and since it is a subspace of the finite-dimensional space \mathbb{C}^n equipped with the standard inner product, one can apply the Gram-Schmidt process to obtain an orthonormal basis of eigenvectors for that eigenspace. Combining orthonormal bases from all eigenspaces yields a complete orthonormal basis of \mathbb{C}^n, whose vectors form the columns of the unitary matrix U that diagonalizes A.

Eigendecomposition

Every Hermitian matrix A \in \mathbb{C}^{n \times n} admits an eigendecomposition of the form A = U D U^H, where U is a unitary matrix whose columns are orthonormal eigenvectors of A, and D = \operatorname{diag}(\lambda_1, \dots, \lambda_n) is a real diagonal matrix containing the eigenvalues \lambda_i of A. This decomposition leverages the spectral theorem for Hermitian matrices, ensuring that the eigenvectors form an orthonormal basis for \mathbb{C}^n. The eigendecomposition is unique up to permutation of the eigenvalues and, for degenerate eigenvalues (multiplicities greater than 1), up to unitary transformations within the corresponding eigenspaces and phase factors in the individual eigenvectors. In practice, the eigendecomposition is computed numerically using algorithms such as the QR algorithm, which iteratively applies QR factorizations to converge to the eigenvalues and eigenvectors, or divide-and-conquer methods that exploit the tridiagonal structure after initial reduction. A key application is the simplification of matrix powers: A^k = U D^k U^H, where D^k = \operatorname{diag}(\lambda_1^k, \dots, \lambda_n^k), which facilitates efficient computation in areas like dynamical systems and quantum mechanics.
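
A minimal sketch using NumPy's linalg.eigh (which exploits the Hermitian structure; the random test matrix is illustrative) reconstructs A from its eigendecomposition and computes A^3 via the eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (M + M.conj().T) / 2

w, U = np.linalg.eigh(A)           # w: real eigenvalues, U: unitary eigenvectors
print(np.allclose(A, U @ np.diag(w) @ U.conj().T))        # A = U D U^H
print(np.allclose(np.linalg.matrix_power(A, 3),
                  U @ np.diag(w**3) @ U.conj().T))        # A^k = U D^k U^H
```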

Eigenvalues and Singular Values

A fundamental property of Hermitian matrices is that all their eigenvalues are real numbers. This follows from the spectral theorem for Hermitian matrices, which guarantees that the eigenvalues lie on the real line. For an n \times n Hermitian matrix A, the characteristic polynomial \det(A - \lambda I) has real coefficients, and the eigenvalues \lambda_1, \lambda_2, \dots, \lambda_n can be ordered such that \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_n. The determinant of a Hermitian matrix A is real and equals the product of its eigenvalues, \det(A) = \prod_{i=1}^n \lambda_i. Since each \lambda_i is real, their product is also real, providing a direct consequence of the reality of the eigenvalues. The singular values of a Hermitian matrix A are the absolute values of its eigenvalues, \sigma_i = |\lambda_i| for i = 1, \dots, n. This relationship arises because the singular value decomposition of A aligns with its eigendecomposition, where the singular values capture the magnitudes of the eigenvalues while being inherently non-negative. A Hermitian matrix A is positive semi-definite, denoted A \geq 0, if and only if all its eigenvalues are non-negative, \lambda_i \geq 0 for all i. Equivalently, A \geq 0 if and only if the quadratic form x^H A x \geq 0 for all vectors x \in \mathbb{C}^n. This characterization links the spectral properties directly to the matrix's action as a quadratic form.
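
The relation between singular values and eigenvalues can be spot-checked numerically; this is an illustrative NumPy sketch with a random Hermitian test matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (M + M.conj().T) / 2

eig = np.linalg.eigvalsh(A)                  # real eigenvalues, ascending
sv = np.linalg.svd(A, compute_uv=False)      # singular values, descending
print(np.allclose(sv, np.sort(np.abs(eig))[::-1]))   # sigma_i = |lambda_i|
```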

Decompositions

Cartesian Decomposition

Any square complex matrix A admits a unique decomposition into the sum of a Hermitian matrix H and a skew-Hermitian matrix S, referred to as the Cartesian decomposition: A = H + S. This splitting arises naturally from the properties of the adjoint operation and provides a way to separate the "symmetric" and "antisymmetric" components of A with respect to conjugation. The Hermitian component is explicitly given by H = \frac{A + A^H}{2}, where A^H denotes the conjugate transpose (adjoint) of A, ensuring H^H = H. The skew-Hermitian component is S = \frac{A - A^H}{2}, satisfying S^H = -S. These expressions follow directly from averaging A with its adjoint for the Hermitian part and differencing for the skew-Hermitian part, leveraging the involutive property of the map A \mapsto A^H. Uniqueness of the decomposition stems from the fact that the set of Hermitian matrices and the set of skew-Hermitian matrices form complementary real subspaces of the space of all n \times n complex matrices, with their sum spanning the entire space and their intersection being trivial (only the zero matrix). To see this, suppose A = H_1 + S_1 = H_2 + S_2 for Hermitian H_1, H_2 and skew-Hermitian S_1, S_2; then H_1 - H_2 = S_2 - S_1, and taking the adjoint yields H_1 - H_2 = -(S_2 - S_1), implying both sides are zero, so H_1 = H_2 and S_1 = S_2. When A itself is Hermitian, the decomposition simplifies trivially to H = A and S = 0, as A^H = A forces the skew-Hermitian part to vanish. The Hermitian part H in the general case captures the symmetric structure inherent to self-adjoint operators, while the skew-Hermitian part encodes the remaining antisymmetric behavior. Over the real numbers, the space of n \times n Hermitian matrices has dimension n^2, reflecting the n real diagonal entries and n(n-1)/2 complex off-diagonal entries (each contributing two real parameters), which aligns with the real dimension 2n^2 of the full matrix space when paired with the isomorphic skew-Hermitian subspace.
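
The Cartesian decomposition follows directly from the two formulas above; this minimal NumPy sketch (random test matrix, illustrative only) computes both parts and verifies their defining properties.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

H = (A + A.conj().T) / 2          # Hermitian part
S = (A - A.conj().T) / 2          # skew-Hermitian part

print(np.allclose(H, H.conj().T))    # H^H = H
print(np.allclose(S, -S.conj().T))   # S^H = -S
print(np.allclose(A, H + S))         # A = H + S
```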

Polar Decomposition

Any square complex matrix A \in \mathbb{C}^{n \times n} admits a polar decomposition A = UP, where P = |A| = \sqrt{A^H A} is the unique positive semi-definite Hermitian factor and U is a partial isometry. When A is invertible, U is unitary and the decomposition is unique. This decomposition generalizes the polar form of complex numbers. For a Hermitian matrix A = A^H, the polar decomposition simplifies because A^H A = A^2, so |A| = \sqrt{A^2}, where the square root is the unique positive semi-definite Hermitian matrix whose eigenvalues are the absolute values of those of A. The factor U then satisfies A = U |A|, and when A is invertible, U = A |A|^{-1}, which incorporates the "sign" of A in the sense that its eigenvalues are \pm 1, aligned with the eigenspaces of positive and negative eigenvalues of A. If A is positive semi-definite, the polar decomposition takes the trivial form A = I \cdot A, with U = I the identity matrix and P = A. In the general Hermitian case, the decomposition highlights the separation of magnitude (via |A|) and phase (via U), distinct from additive decompositions like the Cartesian form. The polar factors are unique for invertible Hermitian A, with the positive semi-definite P being the only such matrix satisfying the factorization.
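
For the Hermitian case, both polar factors can be built from the eigendecomposition, as this hedged NumPy sketch shows (it assumes A is invertible, i.e., has no zero eigenvalue; a random symmetric test matrix satisfies this almost surely, and scipy.linalg.polar offers a general-purpose alternative).

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                       # real symmetric, hence Hermitian

w, U = np.linalg.eigh(A)
absA = U @ np.diag(np.abs(w)) @ U.T     # |A| = sqrt(A^2), positive semi-definite
sign = U @ np.diag(np.sign(w)) @ U.T    # "sign" factor with eigenvalues +/-1

print(np.allclose(A, sign @ absA))                 # A = U P
print(np.allclose(sign @ sign.T, np.eye(4)))       # sign factor is unitary
```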

Variational Principles

Rayleigh Quotient

The Rayleigh quotient provides a variational characterization of the eigenvalues of a Hermitian matrix. For a Hermitian matrix A \in \mathbb{C}^{n \times n} and a nonzero vector x \in \mathbb{C}^n, it is defined as R(A, x) = \frac{x^H A x}{x^H x}, where x^H denotes the conjugate transpose of x. This quotient is the value of the quadratic form x^H A x normalized by the squared norm of x. Since A is Hermitian, its eigenvalues are real, and the Rayleigh quotient inherits this reality, ensuring R(A, x) is always real-valued for any x. A fundamental property of the Rayleigh quotient is that it bounds the eigenvalues of A. If the eigenvalues of A are ordered as \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n, then for any nonzero x, \lambda_n \leq R(A, x) \leq \lambda_1. This follows from the spectral theorem for Hermitian matrices, which decomposes A = U \Lambda U^H with unitary U and diagonal \Lambda, allowing R(A, x) to be expressed as a weighted average of the eigenvalues with nonnegative weights summing to 1. The minimum and maximum are achieved when x aligns with the corresponding extremal eigenvectors. The stationary points of the Rayleigh quotient occur precisely at the eigenvectors of A. Specifically, the critical points of R(A, x) with respect to variations in x (under the constraint x^H x = 1) satisfy A x = \lambda x, where \lambda = R(A, x) is the corresponding eigenvalue. At these points, R(A, u_i) = \lambda_i for eigenvector u_i. This underpins the use of the Rayleigh quotient in eigenvalue approximation, as gradients or iterative adjustments can drive x toward eigenspaces. In iterative methods for computing eigenvalues, the Rayleigh quotient serves as an efficient estimator for convergence monitoring. For instance, the power method approximates the dominant eigenvector by repeated multiplication with A, and evaluating the Rayleigh quotient on the iterates yields a refined estimate of the largest eigenvalue \lambda_1, often converging faster than the residual norm due to its direct tie to the variational characterization. This approach enhances the method's practical utility for Hermitian matrices, where eigenvalue reality simplifies analysis.
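
The following minimal power-method sketch (assuming NumPy; the shift and iteration count are illustrative choices) evaluates the Rayleigh quotient on the iterates to estimate the largest eigenvalue. The test matrix is shifted so that the eigenvalue of largest magnitude is also the largest eigenvalue, and a spectral gap is assumed for convergence.

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2 + 6 * np.eye(6)   # symmetric, shifted so lambda_1 dominates

x = rng.standard_normal(6)
for _ in range(200):                 # power iteration
    x = A @ x
    x /= np.linalg.norm(x)

rayleigh = x @ A @ x                 # R(A, x) = x^H A x with ||x|| = 1
print(rayleigh, np.linalg.eigvalsh(A)[-1])   # close agreement with lambda_1
```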

Min-Max Theorem

The Courant–Fischer min-max theorem characterizes the eigenvalues of a Hermitian matrix through extremal properties of the Rayleigh quotient over subspaces. Let A \in \mathbb{C}^{n \times n} be Hermitian with eigenvalues \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n. Then, for each k = 1, \dots, n, \lambda_k = \max_{\dim S = k} \min_{\substack{x \in S \\ \|x\|=1}} x^H A x = \min_{\dim T = n-k+1} \max_{\substack{x \in T \\ \|x\|=1}} x^H A x, where the norms are Euclidean and S, T range over subspaces of \mathbb{C}^n. A proof sketch proceeds via the spectral theorem, which diagonalizes A = U \Lambda U^H with unitary U and diagonal \Lambda. For the max-min form, consider the subspace S_k spanned by the first k vectors in the eigenbasis; the minimum over unit vectors in S_k equals \lambda_k. Any other k-dimensional S intersects the (n-k+1)-dimensional span of the eigenvectors for \lambda_k, \dots, \lambda_n nontrivially, so its minimum is at most \lambda_k, while the maximum over such minima achieves exactly \lambda_k by choosing S_k. The min-max form follows dually using orthogonal complements and the fact that \dim S^\perp = n - \dim S. This leverages the Rayleigh quotient's variational nature, where x^H A x / \|x\|^2 lies between \lambda_n and \lambda_1. The theorem enables eigenvalue bounds without full eigendecomposition, by evaluating the Rayleigh quotient over trial subspaces of appropriate dimension; for instance, the k-th largest eigenvalue is at least the minimum of the quotient over any k-dimensional subspace, providing lower bounds via optimization over feasible S. It also underpins Weyl's inequalities for perturbations: if A, B are Hermitian, then \lambda_{j+k-1}(A + B) \leq \lambda_j(A) + \lambda_k(B) for j + k \leq n+1, derived by applying the min-max characterization to A + B and subspaces aligned with those of A and B. For positive definite Hermitian matrices (where all \lambda_i > 0), the theorem applies directly, yielding positive quotients and enabling bounds on the condition number \kappa(A) = \lambda_1 / \lambda_n; specifically, \lambda_1 = \max_{\|x\|=1} x^H A x and \lambda_n = \min_{\|x\|=1} x^H A x, so \kappa(A) = (\max x^H A x) / (\min x^H A x), with variants tightening estimates for stability analysis in numerical methods.
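
Weyl's inequalities can be spot-checked numerically on random Hermitian matrices; the following sketch (assuming NumPy; eigenvalues sorted in descending order, indices 1-based as in the statement above) verifies every admissible (j, k) pair for one random draw.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5

def rand_herm():
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

A, B = rand_herm(), rand_herm()
lA = np.sort(np.linalg.eigvalsh(A))[::-1]    # descending eigenvalues
lB = np.sort(np.linalg.eigvalsh(B))[::-1]
lAB = np.sort(np.linalg.eigvalsh(A + B))[::-1]

# lambda_{j+k-1}(A+B) <= lambda_j(A) + lambda_k(B) for j + k <= n + 1
ok = all(lAB[j + k - 2] <= lA[j - 1] + lB[k - 1] + 1e-10
         for j in range(1, n + 1) for k in range(1, n + 1) if j + k <= n + 1)
print(ok)   # True
```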

Applications

Quantum Mechanics

In quantum mechanics, Hermitian matrices form the mathematical foundation for representing physical observables, as formalized by John von Neumann in his 1932 treatise on the subject. Von Neumann established that observables correspond to self-adjoint operators on a Hilbert space, ensuring that in finite-dimensional systems—such as those modeling spin systems or multi-level atoms—these operators are precisely Hermitian matrices with real spectra. This correspondence guarantees that measurement outcomes are real-valued quantities, aligning with empirical observations in physics. The expectation value of an observable represented by a Hermitian matrix A in a normalized state \psi is computed as \langle \psi | A | \psi \rangle = \psi^\dagger A \psi, where \dagger denotes the conjugate transpose. Hermiticity of A (A = A^\dagger) ensures this expectation value is always real, reflecting the fact that average measurements must yield real numbers. This property underpins the probabilistic interpretation of quantum states, where repeated measurements converge to the expectation value. The spectral theorem for Hermitian operators, as applied in this context, decomposes A into a sum over its projectors onto orthogonal eigenspaces: A = \sum_k \lambda_k P_k, where \lambda_k are the real eigenvalues and P_k are the projection operators onto the corresponding eigenspaces. In measurement, this decomposition identifies the eigenstates as the complete basis of possible outcomes: upon observing the system, it collapses to one of these eigenstates with probability given by the squared magnitude of the state's projection onto that eigenspace, yielding eigenvalue \lambda_k as the outcome. While canonical observables like position and momentum are unbounded operators in infinite-dimensional Hilbert spaces, finite-dimensional approximations are routinely employed in computational physics to model discretized systems, such as lattice simulations or tight-binding models, where these observables are represented as Hermitian matrices. For instance, in numerical studies of a particle on a finite grid, the position operator becomes a diagonal Hermitian matrix with grid-point values, and the momentum operator a Hermitian differentiation matrix, preserving essential commutation relations approximately.
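
An illustrative two-level (qubit) example in NumPy (the observable and state are standard textbook choices, used here as a sketch): the expectation value of a Hermitian observable in a normalized state is real, and the spectral projectors give the measurement probabilities.

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)   # Pauli-Z observable
psi = np.array([1, 1j]) / np.sqrt(2)              # normalized qubit state

exp_val = psi.conj() @ sz @ psi                   # <psi| A |psi>
print(exp_val)     # 0 (real): equal weight on eigenvalues +1 and -1

# Spectral projectors P_+ and P_- onto the eigenspaces of +1 and -1:
P_plus = np.array([[1, 0], [0, 0]], dtype=complex)
P_minus = np.array([[0, 0], [0, 1]], dtype=complex)
print((psi.conj() @ P_plus @ psi).real,           # probability 0.5
      (psi.conj() @ P_minus @ psi).real)          # probability 0.5
```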

Signal Processing and Statistics

In signal processing and statistics, Hermitian matrices play a fundamental role in modeling covariance structures for multivariate data. The sample covariance matrix, constructed from real-valued observations, is symmetric and thus Hermitian, ensuring real eigenvalues that facilitate principal component analysis (PCA). In PCA, these eigenvalues represent the variance explained by each principal component, allowing dimensionality reduction by retaining components with the largest eigenvalues. This property enables efficient data compression and noise reduction in applications like image processing and sensor arrays. Circulant matrices, which arise in the discrete Fourier transform (DFT) for periodic signals, are normal matrices diagonalized by the unitary Fourier matrix, sharing spectral properties with Hermitian matrices. For real-valued signals, the associated circulant matrices are symmetric (Hermitian), and their eigenvalues correspond to the DFT of the generating sequence, enabling efficient computation of frequency-domain features via fast Fourier transform algorithms. This is particularly useful in spectral estimation and filtering operations for time-series analysis. In filter design, Hermitian matrices are employed in optimization problems to ensure convexity and minimize quadratic forms representing error criteria. The eigenfilter method formulates finite impulse response (FIR) filter coefficients as the eigenvector of a Hermitian matrix derived from desired frequency responses, guaranteeing real-valued eigenvalues for monotonic convergence and optimal approximation in passband and stopband regions. This approach reduces computational cost compared to least-squares methods while maintaining linear phase for stable designs. The Wishart distribution extends to complex cases, where the complex Wishart matrix is Hermitian positive semi-definite, modeling the sample covariance of multivariate complex Gaussian vectors in radar and communications. For a complex Wishart matrix W \sim \mathcal{CW}_p(n, \Sigma) with n \geq p, the eigenvalues follow a joint distribution that quantifies uncertainty in high-dimensional covariance estimation. The condition number of a Hermitian matrix, defined as the ratio of its largest to smallest eigenvalue, measures sensitivity in statistical estimators like those from Wishart-distributed covariances. In complex Wishart matrices, the exact distribution of this condition number provides bounds on performance in high-dimensional settings, such as multiple-input multiple-output (MIMO) systems, where ill-conditioning can degrade performance. In modern machine learning, kernel matrices constructed from positive definite kernels are Hermitian positive semi-definite, enabling implicit mapping to high-dimensional feature spaces for algorithms like support vector machines. Seminal work established that such kernels ensure the Gram matrix's eigenvalues are non-negative, supporting convex optimization and generalization bounds in non-linear classification and regression tasks.
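
A minimal PCA sketch (assuming NumPy; the anisotropic synthetic data is illustrative): the eigendecomposition of the symmetric sample covariance matrix yields the component variances and the projection onto the leading components.

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.standard_normal((500, 3)) @ np.diag([3.0, 1.0, 0.1])   # anisotropic data

C = np.cov(X, rowvar=False)          # 3x3 sample covariance, symmetric Hermitian
w, V = np.linalg.eigh(C)             # real eigenvalues, ascending order
w, V = w[::-1], V[:, ::-1]           # reorder: largest variance first

print(w)                             # approx [9, 1, 0.01]: component variances
scores = (X - X.mean(axis=0)) @ V[:, :2]   # project onto top-2 components
print(scores.shape)                  # (500, 2): reduced representation
```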

Examples

Basic Constructions

The simplest Hermitian matrix is a 1×1 matrix of the form [a], where a is a real number, as its conjugate transpose is [\bar{a}], which equals [a] since \bar{a} = a. For 2×2 matrices, a diagonal matrix with real entries on the diagonal is Hermitian. For example, the matrix \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} has conjugate transpose \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, which matches the original. Another Hermitian matrix involves off-diagonal entries that are conjugates of each other. Consider H = \begin{pmatrix} 1 & i \\ -i & 2 \end{pmatrix}. The entrywise conjugate is \bar{H} = \begin{pmatrix} 1 & -i \\ i & 2 \end{pmatrix}, and transposing yields H^\dagger = \begin{pmatrix} 1 & i \\ -i & 2 \end{pmatrix}, which equals H, confirming it is Hermitian. Real symmetric matrices are a special case of Hermitian matrices, as the conjugate transpose reduces to the transpose for real entries. For instance, S = \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} satisfies S^T = S, hence S^\dagger = S. A non-example is N = \begin{pmatrix} 1 & i \\ i & 2 \end{pmatrix}. Its conjugate transpose is N^\dagger = \begin{pmatrix} 1 & -i \\ -i & 2 \end{pmatrix}, which differs from N because the off-diagonal entries i and i become -i and -i. To verify whether a matrix A is Hermitian, compute its conjugate transpose A^\dagger by taking the complex conjugate of each entry and then transposing, and check whether A^\dagger = A.
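
The verification recipe above translates directly into a small checker; this is an illustrative NumPy helper (not a library function), applied to the example H and non-example N.

```python
import numpy as np

def is_hermitian(A, tol=1e-12):
    """Return True if A equals its conjugate transpose (illustrative helper)."""
    A = np.asarray(A)
    if A.ndim != 2 or A.shape[0] != A.shape[1]:
        return False
    return np.allclose(A, A.conj().T, atol=tol)

H = np.array([[1, 1j], [-1j, 2]])    # Hermitian example from above
N = np.array([[1, 1j], [1j, 2]])     # non-example from above
print(is_hermitian(H), is_hermitian(N))   # True False
```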

Eigenvalue Computations

To compute the eigenvalues of a small Hermitian matrix, one solves the characteristic equation \det(A - \lambda I) = 0, where the roots \lambda are guaranteed to be real numbers. This direct algebraic approach is feasible for matrices up to size 3×3 or 4×4, as the resulting characteristic polynomial is low-degree and solvable explicitly. Consider the 2×2 real symmetric matrix A = \begin{pmatrix} 2 & 1 \\ 1 & 3 \end{pmatrix}, which is Hermitian since it equals its own transpose. The characteristic equation is \det(A - \lambda I) = \det\begin{pmatrix} 2 - \lambda & 1 \\ 1 & 3 - \lambda \end{pmatrix} = (2 - \lambda)(3 - \lambda) - 1 = \lambda^2 - 5\lambda + 5 = 0. The solutions are \lambda = \frac{5 \pm \sqrt{5}}{2}, approximately 3.618 and 1.382, both real as expected for a Hermitian matrix. For a complex Hermitian example, take A = \begin{pmatrix} 1 & i \\ -i & 1 \end{pmatrix}, where A = A^\dagger (the conjugate transpose) since the off-diagonal entries are complex conjugates of each other. The characteristic equation is \det(A - \lambda I) = \det\begin{pmatrix} 1 - \lambda & i \\ -i & 1 - \lambda \end{pmatrix} = (1 - \lambda)^2 - (i)(-i) = (1 - \lambda)^2 - 1 = \lambda^2 - 2\lambda = 0. The eigenvalues are \lambda = 0 and \lambda = 2, again real. A straightforward 3×3 diagonalization occurs when the Hermitian matrix is already diagonal, such as D = \operatorname{diag}(1, 2, 3). Here, the eigenvalues are precisely the diagonal entries 1, 2, and 3, and the eigendecomposition is D = U D U^\dagger with the unitary matrix U = I serving as the matrix of eigenvectors. For larger Hermitian matrices, explicit computation via the characteristic polynomial becomes impractical due to the high-degree polynomial equation, so numerical libraries such as NumPy's linalg.eigh function are employed, which exploits the Hermitian structure for stable and efficient eigenvalue extraction.
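
The two worked 2×2 examples can be confirmed with numpy.linalg.eigvalsh, mentioned above, which returns the eigenvalues of a Hermitian matrix in ascending order (a brief verification sketch).

```python
import numpy as np

A1 = np.array([[2.0, 1.0], [1.0, 3.0]])
print(np.linalg.eigvalsh(A1))    # [1.3819..., 3.6180...] = (5 -+ sqrt(5)) / 2

A2 = np.array([[1, 1j], [-1j, 1]])
print(np.linalg.eigvalsh(A2))    # [0., 2.]
```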