
Positive semidefinite matrix

In linear algebra, a positive semidefinite matrix (also called a positive semidefinite operator in more general settings) is a symmetric real matrix A such that all its eigenvalues are non-negative, or equivalently, such that the quadratic form x^T A x \geq 0 for every x \in \mathbb{R}^n; for Hermitian complex matrices, the condition is that all eigenvalues are non-negative, or equivalently, x^H A x \geq 0 for every x \in \mathbb{C}^n, where ^H denotes the conjugate transpose. This property ensures that the matrix induces a quadratic form that is always non-negative, distinguishing it from positive definite matrices, for which the inequality is strict (> 0) for non-zero x.

Positive semidefinite matrices form a convex cone in the space of symmetric (real) or Hermitian (complex) matrices, closed under non-negative linear combinations, which underpins their role in convex optimization and semidefinite programming (SDP), where constraints involve such matrices to model problems like max-cut or maximum stable set in combinatorial optimization. In statistics, covariance matrices of multivariate distributions are inherently positive semidefinite, reflecting the non-negativity of variances and the Cauchy-Schwarz inequality for covariances, enabling techniques like principal component analysis. In quantum mechanics and quantum information theory, density matrices representing mixed quantum states are positive semidefinite Hermitian operators with trace one, capturing probabilistic superpositions and entanglement measures. Other applications include graph Laplacians in spectral graph theory, where positive semidefiniteness ensures non-negative eigenvalues related to connectivity, and Hessian matrices in optimization, where positive semidefiniteness at a critical point is a necessary condition for a local minimum. These matrices also admit factorizations of the form A = B^H B for some (possibly rectangular) matrix B, where ^H is the conjugate transpose (reducing to the ordinary transpose for real matrices), facilitating numerical computations such as the Cholesky decomposition in the definite case.

Definitions

Real matrices

A real symmetric matrix A \in \mathbb{R}^{n \times n} is defined to be positive semidefinite if the associated quadratic form satisfies x^T A x \geq 0 for all vectors x \in \mathbb{R}^n. This condition ensures that the matrix encodes a nonnegative quadratic form over the real numbers. An equivalent characterization is that all principal minors of A are nonnegative. This criterion, known as Sylvester's criterion for positive semidefiniteness, provides a practical test without requiring computation of eigenvalues; note that, unlike the positive definite case, all principal minors must be checked, not only the leading ones. A direct consequence of the definition is that all diagonal entries of A must be nonnegative, as setting x to the i-th standard basis vector e_i yields x^T A x = a_{ii}.
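The definition can be checked numerically via the eigenvalues. A minimal numpy sketch (the helper name `is_psd` and the tolerance are illustrative choices, not standard API):

```python
import numpy as np

def is_psd(A, tol=1e-10):
    """Eigenvalue test for positive semidefiniteness of a real symmetric matrix."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T):
        return False                 # the definition applies to symmetric matrices
    return bool(np.all(np.linalg.eigvalsh(A) >= -tol))

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1 and 3: PSD
B = np.array([[1.0, 2.0], [2.0, 1.0]])     # eigenvalues -1 and 3: not PSD
```

The small negative tolerance absorbs floating-point roundoff; an exact test `>= 0` would spuriously reject matrices whose zero eigenvalues are computed as tiny negative numbers.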

Complex matrices

In the complex case, a matrix A \in \mathbb{C}^{n \times n} is called Hermitian if A = A^*, where A^* denotes the conjugate transpose of A. A Hermitian matrix A is positive semidefinite if the quadratic form x^* A x \geq 0 for all vectors x \in \mathbb{C}^n. This definition uses the standard inner product on \mathbb{C}^n; because A is Hermitian, the form x^* A x is automatically real-valued, so the nonnegativity condition is well posed. An equivalent characterization is that all principal minors of the Hermitian matrix A are nonnegative. This condition extends Sylvester's criterion, which for positive definiteness requires all leading principal minors to be positive; for semidefiniteness, the full set of principal minors (not just the leading ones) must be nonnegative to ensure consistency across submatrices. The complex definition contains the real symmetric case as a special instance: when all entries of A are real, the conjugate transpose A^* coincides with the ordinary transpose A^T, reducing the quadratic form to the real version x^T A x \geq 0 for x \in \mathbb{R}^n. Notation for the conjugate transpose varies: A^* is common in many texts, though A^\dagger and A^H are also used interchangeably.

Characterizations

Eigenvalues

A positive semidefinite matrix is characterized by its spectral properties: all of its eigenvalues are real and non-negative. This holds for both real symmetric and complex Hermitian cases, as the eigenvalues of Hermitian matrices are always real, and the non-negativity condition distinguishes positive semidefiniteness from mere Hermiticity. The spectral theorem for Hermitian matrices guarantees that any positive semidefinite matrix A admits a decomposition A = U D U^*, where U is a unitary matrix and D is a diagonal matrix with non-negative real entries along the diagonal, corresponding to the eigenvalues of A. This decomposition underscores the connection between the matrix's eigenvalues and its positive semidefiniteness, as the non-negative diagonal entries ensure that the quadratic form x^* A x \geq 0 for all vectors x. The trace of a positive semidefinite matrix equals the sum of its eigenvalues and is therefore non-negative. Moreover, the presence of zero eigenvalues indicates singularity: a positive semidefinite matrix is singular if and only if zero is an eigenvalue, with the multiplicity of the zero eigenvalue equal to the dimension of the kernel.
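The spectral characterization can be verified directly with numpy's Hermitian eigensolver; here a PSD matrix is built as a Gram matrix M M^T (an assumption of this sketch, chosen because the construction guarantees semidefiniteness):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T                      # Gram matrix: symmetric PSD by construction

# Spectral decomposition A = U diag(w) U^T with orthonormal eigenvector columns.
w, U = np.linalg.eigh(A)         # eigh returns ascending real eigenvalues
```

All entries of `w` come out non-negative (up to roundoff), reconstructing `U @ diag(w) @ U.T` recovers `A`, and the trace equals the eigenvalue sum, matching the properties above.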

Quadratic forms

A symmetric real matrix A is positive semidefinite if and only if the associated quadratic form q(\mathbf{x}) = \mathbf{x}^T A \mathbf{x} satisfies q(\mathbf{x}) \geq 0 for all \mathbf{x} \in \mathbb{R}^n. Similarly, a Hermitian complex matrix A is positive semidefinite if and only if the form q(\mathbf{x}) = \mathbf{x}^* A \mathbf{x} \geq 0 for all \mathbf{x} \in \mathbb{C}^n, where \mathbf{x}^* denotes the conjugate transpose. Geometrically, for a positive semidefinite A, the graph of z = q(\mathbf{x}) over \mathbb{R}^n describes an elliptic paraboloid (degenerate along the kernel in the semidefinite case), opening upwards from the origin and lying entirely in the non-negative half-space z \geq 0. This interpretation highlights the non-negativity inherent to positive semidefiniteness, contrasting with indefinite forms, whose surfaces cross the plane z = 0. The Rayleigh quotient provides a variational characterization linking the quadratic form to the eigenvalues of A: for \mathbf{x} \neq \mathbf{0}, R(\mathbf{x}) = \frac{q(\mathbf{x})}{\|\mathbf{x}\|^2} satisfies \lambda_{\min}(A) \leq R(\mathbf{x}) \leq \lambda_{\max}(A), where \lambda_{\min}(A) \geq 0 and \lambda_{\max}(A) \geq 0 for positive semidefinite A. Thus, the infimum of R(\mathbf{x}) over unit vectors is the smallest eigenvalue and the supremum is the largest, showing how these bounds reflect spectral non-negativity. A simple example is the identity matrix I_n, for which q(\mathbf{x}) = \|\mathbf{x}\|^2 \geq 0 for all \mathbf{x}, with equality only at \mathbf{x} = \mathbf{0}; here A = I_n is in fact positive definite, the strict case of semidefiniteness.
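The Rayleigh-quotient bounds can be probed empirically by sampling random vectors (the tridiagonal test matrix below is an illustrative choice; its eigenvalues 2 - \sqrt{2}, 2, 2 + \sqrt{2} are all positive):

```python
import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])        # symmetric tridiagonal, PSD
w = np.linalg.eigvalsh(A)                  # ascending: w[0] = lambda_min, w[-1] = lambda_max

rng = np.random.default_rng(1)
X = rng.standard_normal((1000, 3))         # random nonzero test vectors, one per row

# Rayleigh quotient R(x) = (x^T A x) / ||x||^2 for every sampled x
R = np.einsum('ij,jk,ik->i', X, A, X) / np.einsum('ij,ij->i', X, X)
```

Every sampled quotient lands inside [\lambda_{\min}, \lambda_{\max}] and, since A is positive semidefinite, is itself non-negative.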

Other characterizations

A Hermitian matrix A is positive semidefinite if and only if B^* A B is positive semidefinite for every matrix B of compatible dimensions. This condition connects to the Löwner partial order on the space of Hermitian matrices, where A \succeq B if and only if A - B is positive semidefinite, and A \succeq 0 precisely when A is positive semidefinite. The forward direction holds because congruence preserves non-negativity across all linear images of the form Bx, while the converse follows by taking B to be a single column vector x, recovering x^* A x \geq 0. For a Hermitian matrix A, an equivalent condition involves its eigenspaces. Let E_{\geq 0} denote the subspace spanned by eigenvectors of A with non-negative eigenvalues. Then A is positive semidefinite if and only if the column space (range) of A is contained in E_{\geq 0}. This holds because the range of a Hermitian matrix decomposes into components aligned with the positive and negative eigenspaces; containment in the non-negative part ensures that no negative spectral contribution affects the quadratic form. The Schur complement provides a block-wise criterion for positive semidefiniteness. Consider a Hermitian block matrix \begin{pmatrix} P & Q \\ Q^* & R \end{pmatrix}. If P is positive definite, then this matrix is positive semidefinite if and only if the Schur complement R - Q^* P^{-1} Q is positive semidefinite. Symmetrically, if R is positive definite, then it is positive semidefinite if and only if P - Q R^{-1} Q^* is positive semidefinite. This criterion facilitates recursive checks on submatrices, useful for verifying larger structures without a full eigenvalue computation; the positive definite case follows by requiring the blocks and their complements to be positive definite. A Hermitian matrix A is positive semidefinite if and only if all of its principal minors are non-negative. A principal minor is the determinant of a principal submatrix, obtained by deleting the same set of rows and columns.
This condition provides a purely determinantal characterization and is necessary and sufficient, extending Sylvester's criterion (which uses leading principal minors) from the positive definite to the semidefinite case. The numerical range of a matrix A, defined as W(A) = \{ x^* A x : x \in \mathbb{C}^n, \, \|x\| = 1 \}, offers another perspective. For a Hermitian matrix A, W(A) is a real interval, and A is positive semidefinite if and only if W(A) \subseteq [0, \infty). This follows from the fact that for Hermitian matrices the numerical range is the convex hull of the eigenvalues, so non-negativity of W(A) is equivalent to non-negativity of the spectrum and hence of the quadratic form everywhere. Geometrically, this presents W(A) as a one-dimensional image of the positive semidefinite cone.
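The determinantal characterization can be implemented by enumerating every principal submatrix. This brute-force sketch (exponential in n, so only for small matrices) also illustrates why *all* principal minors are needed: the second matrix below has non-negative leading principal minors (0 and 0) yet is not positive semidefinite.

```python
import numpy as np
from itertools import combinations

def principal_minors_nonneg(A, tol=1e-10):
    """Determinantal PSD test: every principal minor must be nonnegative."""
    n = A.shape[0]
    return all(
        np.linalg.det(A[np.ix_(idx, idx)]) >= -tol
        for k in range(1, n + 1)
        for idx in combinations(range(n), k)
    )

G = np.array([[0.0, 0.0], [0.0,  1.0]])    # PSD despite zero leading minors
H = np.array([[0.0, 0.0], [0.0, -1.0]])    # leading minors 0, 0 -- but not PSD
```

The minor of `H` on the index set {1} is det([-1]) = -1 < 0, which the full test catches while the leading-minors-only test would not.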

Decompositions

Cholesky decomposition

The Cholesky decomposition provides a factorization of a symmetric positive definite matrix A \in \mathbb{R}^{n \times n} into the form A = LL^T, where L is a lower triangular matrix with positive diagonal entries. This factorization exists and is unique for every symmetric positive definite matrix, as established by inductive proofs relying on positive definiteness to guarantee nonzero pivots during elimination. The standard algorithm computes L via a series of scalar updates, starting from the top-left entry and proceeding column by column: for the j-th column, the diagonal entry is l_{jj} = \sqrt{a_{jj} - \sum_{k=1}^{j-1} l_{jk}^2}, and the subdiagonal entries are l_{ij} = \left( a_{ij} - \sum_{k=1}^{j-1} l_{ik} l_{jk} \right) / l_{jj} for i > j. This process requires approximately \frac{1}{3} n^3 floating-point operations, i.e. O(n^3) complexity. For symmetric positive semidefinite matrices, which may have zero eigenvalues, the decomposition extends to a rank-revealing form A = P R^T R P^T (with P a permutation matrix and R upper triangular with nonnegative diagonal entries), but the standard algorithm can encounter zero pivots and fail in finite-precision arithmetic. To address this, complete pivoting, which selects the largest remaining diagonal element as the pivot at each step, is used to ensure numerical stability and reveal the rank r, resulting in a block form whose leading r \times r principal submatrix is positive definite. The decomposition of a positive semidefinite matrix is unique up to permutations of rows and columns corresponding to zero-diagonal entries, as different choices for zero pivots lead to equivalent representations after reordering. Implementations such as those in LINPACK incorporate complete pivoting to handle this, bounding the backward error so that \|A - \hat{R}^T \hat{R}\|_2 \leq 4r(r+1)(\|W\|_2 + 1)^2 u \|A\|_2 + O(u^2), where u is the unit roundoff, r is the rank, and \|W\|_2 \leq \sqrt{(n-r)(4^r-1)/3}.
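The column-by-column updates translate almost literally into code. A minimal, unpivoted sketch for the positive definite case (it does not handle the semidefinite zero-pivot situation; a non-positive pivot produces NaN, which is exactly the breakdown the pivoted variants avoid):

```python
import numpy as np

def cholesky_lower(A):
    """Column-by-column Cholesky factor L with A = L L^T, for symmetric
    positive definite A.  A non-positive pivot yields NaN: the plain
    algorithm breaks down outside the definite case."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])       # diagonal pivot
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = cholesky_lower(A)
```

On this example the hand-rolled factor agrees with `np.linalg.cholesky`, and L L^T reproduces A.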

Square root

Every positive semidefinite matrix A admits a unique positive semidefinite square root B such that B^2 = A. This uniqueness holds among all positive semidefinite matrices and follows from the spectral theorem for Hermitian matrices, which ensures that applying the principal branch of the square root function to the eigenvalues yields the only such B. The square root B is constructed via the spectral decomposition of A. Suppose A = U D U^*, where U is unitary and D is the diagonal matrix of nonnegative eigenvalues of A. Then B = U \sqrt{D} \, U^*, where \sqrt{D} denotes the diagonal matrix whose entries are the nonnegative square roots of those in D. This construction preserves positive semidefiniteness and ensures B^2 = A. The square root B commutes with A, as both share the same eigenspaces. Additionally, for an n \times n positive semidefinite matrix A, the trace satisfies \operatorname{trace}(B) \leq \sqrt{n \cdot \operatorname{trace}(A)}, with equality when all eigenvalues of A are equal; this bound follows from applying the Cauchy-Schwarz inequality to the eigenvalues \lambda_1, \dots, \lambda_n \geq 0 of A, yielding \left( \sum \sqrt{\lambda_i} \right)^2 \leq n \sum \lambda_i. Practical computation of B often employs iterative methods, such as the Denman–Beavers iteration, which starts with Y_0 = A, Z_0 = I and updates Y_{k+1} = \frac{1}{2} (Y_k + Z_k^{-1}), Z_{k+1} = \frac{1}{2} (Z_k + Y_k^{-1}), converging quadratically (for invertible A) to B and to the inverse square root A^{-1/2}, respectively. Alternatively, the Newton iteration X_{k+1} = \frac{1}{2} (X_k + X_k^{-1} A) from a suitable initial positive definite guess also achieves quadratic convergence under mild conditions. These methods are particularly useful for large-scale or ill-conditioned matrices where a direct spectral decomposition is inefficient.
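The Denman–Beavers iteration is short enough to sketch directly (this version assumes A is positive definite so every iterate is invertible; a fixed iteration count stands in for a proper convergence test):

```python
import numpy as np

def denman_beavers(A, iters=30):
    """Denman-Beavers iteration: Y_k -> A^{1/2} and Z_k -> A^{-1/2}.
    Assumes positive definite A so the inverses below exist."""
    Y = np.array(A, dtype=float)
    Z = np.eye(A.shape[0])
    for _ in range(iters):
        # both updates use the previous (Y, Z) pair, evaluated before assignment
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Y, Z

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B, Binv = denman_beavers(A)
```

After convergence B @ B recovers A and B @ Binv is the identity, matching the limits Y_k \to A^{1/2}, Z_k \to A^{-1/2} stated above.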

Unitary diagonalization

A Hermitian positive semidefinite matrix A \in \mathbb{C}^{n \times n} admits a unitary diagonalization via the spectral theorem, which guarantees the existence of a unitary matrix U and a real diagonal matrix D = \operatorname{diag}(\lambda_1, \dots, \lambda_n) with non-negative entries \lambda_i \geq 0 such that A = U D U^*, where the columns of U form an orthonormal basis of eigenvectors and the \lambda_i are the eigenvalues of A. This decomposition is a direct consequence of the spectral theorem for Hermitian matrices, specialized to the positive semidefinite case where eigenvalue non-negativity holds. The diagonal entries of D (the eigenvalues) are unique up to permutation, while the corresponding eigenvectors (columns of U) are unique up to a unitary change of basis within each eigenspace of multiplicity greater than one; for simple eigenvalues, eigenvectors are unique up to a unit-modulus phase factor. This partial uniqueness ensures that the spectral decomposition captures the essential structure of A, with orthogonal eigenspaces for distinct eigenvalues. The unitary diagonalization facilitates the definition and computation of matrix functions on positive semidefinite matrices: for a function f defined on the non-negative reals, f(A) = U f(D) U^*, where f(D) = \operatorname{diag}(f(\lambda_1), \dots, f(\lambda_n)). If f is non-negative on [0, \infty) (so that f(\lambda_i) \geq 0), then f(A) remains positive semidefinite, preserving the property through the spectral structure. In practice, unitary diagonalizations of Hermitian positive semidefinite matrices are computed using variants of the symmetric QR algorithm, which first reduces the matrix to tridiagonal form via Householder transformations and then applies implicit QR iterations with shifts (such as the Wilkinson shift) to deflate eigenvalues. These methods exploit the real spectrum for fast convergence and numerical stability, achieving O(n^3) complexity for dense matrices.
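The matrix-function recipe f(A) = U f(D) U^* is a few lines in numpy (the helper name `apply_fn` is an illustrative choice; clipping tiny negative eigenvalues to zero is a roundoff safeguard, not part of the mathematics):

```python
import numpy as np

def apply_fn(A, f):
    """f(A) = U f(D) U^* through the unitary diagonalization of Hermitian A.
    Eigenvalues are clipped at 0 to absorb roundoff below zero."""
    w, U = np.linalg.eigh(A)
    return (U * f(np.clip(w, 0.0, None))) @ U.conj().T   # U diag(f(w)) U^*

A = np.array([[2.0, 1.0], [1.0, 2.0]])
S = apply_fn(A, np.sqrt)           # the principal square root of A
```

With f = sqrt this reproduces the unique PSD square root of the previous section: S @ S recovers A and S itself has non-negative spectrum.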

Properties

Operations on matrices

The set of positive semidefinite matrices is closed under addition: if A and B are positive semidefinite, then so is A + B. This follows directly from the quadratic form definition, as x^* (A + B) x = x^* A x + x^* B x \geq 0 for all vectors x. It is also closed under non-negative scalar multiplication: for any scalar c \geq 0 and positive semidefinite matrix A, the matrix cA is positive semidefinite. The case c = 0 yields the zero matrix, which is positive semidefinite, while c > 0 preserves the non-negativity of the quadratic form. The ordinary product AB of two positive semidefinite matrices need not be positive semidefinite, since it generally fails to be Hermitian; its eigenvalues are nevertheless real and non-negative, because AB has the same spectrum as the positive semidefinite matrix A^{1/2} B A^{1/2}. The symmetrized (anticommutator) product \frac{1}{2}(AB + BA) is always Hermitian, but it can fail to be positive semidefinite when A and B do not commute; small 2 \times 2 counterexamples exist, such as A = \operatorname{diag}(1, \varepsilon) with small \varepsilon > 0 and B the all-ones matrix. Congruences of the form B^* A B, by contrast, always preserve positive semidefiniteness, since x^* (B^* A B) x = (Bx)^* A (Bx) \geq 0. Positive definite matrices, a strict subclass of positive semidefinite matrices, are invertible, and their inverses are also positive definite. In contrast, positive semidefinite matrices may be singular (with zero eigenvalues) and thus lack inverses, though the Moore-Penrose pseudoinverse preserves positive semidefiniteness in such cases. The trace of a positive semidefinite matrix A, defined as the sum of its diagonal entries or equivalently the sum of its non-negative eigenvalues, satisfies \operatorname{tr}(A) \geq 0, with equality if and only if A is the zero matrix. This property underscores the connection between the trace functional and the non-negativity inherent to positive semidefiniteness.
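These closure properties (and the one that fails) can be checked numerically; the matrices below are chosen for illustration. The sum and the congruence B^T A B come out positive semidefinite, while the symmetrized product of this particular non-commuting pair turns out indefinite, showing the anticommutator does not preserve the property in general:

```python
import numpy as np

eps = 0.01
A = np.diag([1.0, eps])         # positive definite
B = np.ones((2, 2))             # positive semidefinite, rank one

S = 0.5 * (A @ B + B @ A)       # Hermitian, yet indefinite for small eps
C = B.T @ A @ B                 # congruence: always PSD

evals_S = np.linalg.eigvalsh(S)
evals_C = np.linalg.eigvalsh(C)
```

Here det S = \varepsilon - (1+\varepsilon)^2/4 < 0, so S has one negative eigenvalue even though A and B are both positive semidefinite.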

Ordering and convexity

The Löwner order defines a partial ordering on the space of Hermitian matrices, where a Hermitian matrix A is greater than or equal to another Hermitian matrix B in the Löwner order, denoted A \succeq B, if and only if A - B is positive semidefinite. This ordering is reflexive, antisymmetric, and transitive, but not total, as not all pairs of Hermitian matrices are comparable under it. It plays a central role in matrix inequalities, semidefinite programming, and comparisons of quadratic forms. The set \mathcal{S}_n^+ of all n \times n positive semidefinite matrices forms a closed convex cone within the vector space of n \times n symmetric matrices equipped with the Frobenius inner product. Specifically, \mathcal{S}_n^+ is closed under addition and nonnegative scalar multiplication, so if A, B \in \mathcal{S}_n^+ and \alpha, \beta \geq 0, then \alpha A + \beta B \in \mathcal{S}_n^+. Convexity follows: for A, B \in \mathcal{S}_n^+ and 0 \leq t \leq 1, the matrix tA + (1-t)B is a nonnegative combination of positive semidefinite matrices and hence positive semidefinite. This structure underpins the convexity of spectrahedra in semidefinite optimization. The Hadamard product, also known as the Schur product, of two positive semidefinite matrices is itself positive semidefinite, as established by the Schur product theorem. For Hermitian positive semidefinite matrices A = (a_{ij}) and B = (b_{ij}), their entrywise product A \circ B = (a_{ij} b_{ij}) satisfies x^* (A \circ B) x = \sum_{i,j} a_{ij} b_{ij} \overline{x_i} x_j \geq 0 for all vectors x, leveraging the positive semidefiniteness of both factors. This theorem, originally proved by Issai Schur in 1911, extends to the positive definite case and has implications for Hadamard product inequalities and positive definite functions. The Kronecker product of two positive semidefinite matrices is positive semidefinite: if A \in \mathbb{C}^{m \times m} and B \in \mathbb{C}^{n \times n} are positive semidefinite, then for any z \in \mathbb{C}^{mn}, the quadratic form z^* (A \otimes B) z \geq 0, since the eigenvalues of A \otimes B are products of the nonnegative eigenvalues of A and B.
This preservation property facilitates the analysis of tensor products in quantum information theory and multivariable systems. The Frobenius inner product between two positive semidefinite matrices is nonnegative. Defined as \langle A, B \rangle_F = \operatorname{trace}(A^* B) for Hermitian A, B, this follows from the spectral decompositions A = U \Lambda U^* and B = V \Gamma V^* with \Lambda, \Gamma diagonal and nonnegative, yielding \langle A, B \rangle_F = \sum_{i,j} \lambda_i \gamma_j |\langle u_i, v_j \rangle|^2 \geq 0. This nonnegativity underscores the self-dual nature of the positive semidefinite cone under the Frobenius inner product.
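The three product results above (Hadamard, Kronecker, Frobenius) are easy to spot-check on random Gram matrices (the `random_psd` helper is an illustrative construction, not standard API):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_psd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T                       # Gram construction: always PSD

A, B = random_psd(3), random_psd(3)

hadamard  = A * B                        # entrywise (Schur) product
kronecker = np.kron(A, B)                # Kronecker product
frobenius = np.trace(A.T @ B)            # Frobenius inner product
```

All three quantities respect the stated sign conditions: both product matrices have non-negative spectra and the inner product is a non-negative scalar.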

Block matrices

A block matrix of the form M = \begin{pmatrix} A & B \\ B^* & C \end{pmatrix}, where A is an m \times m Hermitian matrix, C is an n \times n Hermitian matrix, and B is m \times n, is positive semidefinite if and only if A \succeq 0, the columns of B lie in the range of A, and the generalized Schur complement C - B^* A^\dagger B \succeq 0, where A^\dagger denotes the Moore-Penrose pseudoinverse of A. If A \succ 0, the range condition is automatically satisfied, and the block matrix is positive semidefinite if and only if C - B^* A^{-1} B \succeq 0. This characterization extends the notion of positive semidefiniteness to partitioned structures and is fundamental in matrix completion problems and numerical methods. A key property of positive semidefinite matrices is that all principal submatrices are also positive semidefinite. A principal submatrix is obtained by selecting a subset of indices and deleting the remaining rows and columns; for a positive semidefinite matrix A, the quadratic form restricted to the coordinate subspace corresponding to these indices remains nonnegative. This hereditary property ensures that positive semidefiniteness is preserved under simultaneous deletion of rows and columns, aiding algorithms that check semidefiniteness via principal minors. Bordered matrices arise when a positive semidefinite matrix is extended by adding rows and columns, and are often analyzed via the Schur complement. For a Hermitian matrix A \succeq 0 bordered by a vector b and a scalar c to form \begin{pmatrix} A & b \\ b^* & c \end{pmatrix}, the resulting matrix is positive semidefinite if and only if b lies in the range of A and c - b^* A^\dagger b \geq 0. If A \succ 0, this simplifies to c \geq b^* A^{-1} b. For multiple bordering blocks, iterated application of the Schur complement provides the conditions, ensuring the extension preserves semidefiniteness. In optimization, positive semidefiniteness of the Hessian matrix at a critical point of a twice-differentiable function is connected to local minimality.
Specifically, if the gradient vanishes at x_0, positive semidefiniteness of the Hessian H(x_0) is a necessary second-order condition for x_0 to be a local minimizer, while positive definiteness of H(x_0) is sufficient for a strict local minimum; when H(x_0) is singular, semidefiniteness alone does not settle the question. For constrained problems, the bordered Hessian, a block matrix incorporating the objective's Hessian and the constraint Jacobians, provides second-order conditions via sign patterns of its principal minors, linking back to block matrix semidefiniteness restricted to the tangent space of the constraints.
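The block criterion at the start of this section translates into a short numerical test (the helper name `block_psd` and the tolerances are illustrative; the range condition is checked via A A^+ B = B):

```python
import numpy as np

def block_psd(A, B, C, tol=1e-10):
    """PSD test for [[A, B], [B^T, C]] via the generalized Schur complement:
    A PSD, columns of B inside range(A), and C - B^T A^+ B PSD."""
    if np.any(np.linalg.eigvalsh(A) < -tol):
        return False
    Ap = np.linalg.pinv(A)
    if not np.allclose(A @ Ap @ B, B, atol=1e-8):    # range condition
        return False
    S = C - B.T @ Ap @ B                             # generalized Schur complement
    return bool(np.all(np.linalg.eigvalsh(S) >= -tol))

A = np.array([[2.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
M = np.block([[A, B], [B.T, C]])
```

On this 2 \times 2 example the Schur test agrees with a direct eigenvalue check of the assembled block matrix, and shrinking C below B^T A^{-1} B = 0.5 flips the verdict.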

Extensions and applications

Non-Hermitian extensions

In the context of non-Hermitian matrices, the notion of positive semidefiniteness is extended by considering the Hermitian part of the matrix, defined as H = \frac{A + A^*}{2}, where A^* denotes the conjugate transpose of A. A non-Hermitian matrix A is said to be positive semidefinite if its Hermitian part H is positive semidefinite in the standard sense, meaning x^* H x \geq 0 for all vectors x. This generalization arises in numerical linear algebra, particularly for solving systems A x = b where A is large, sparse, and non-Hermitian but satisfies this condition, enabling efficient iterative methods such as Hermitian and skew-Hermitian splitting. Equivalently, A is non-Hermitian positive semidefinite if the real part of its numerical range is non-negative, i.e., \operatorname{Re} W(A) \subseteq [0, \infty), where the numerical range is W(A) = \{ x^* A x \mid \|x\| = 1 \}. This follows directly from the fact that \operatorname{Re}(x^* A x) = x^* H x, so the projection of W(A) onto the real axis coincides with the numerical range of H. The numerical range characterization is particularly useful for analyzing spectral properties, as it provides a convex set containing all eigenvalues of A, ensuring that the real parts of the eigenvalues are non-negative. This extension finds applications in stability analysis, notably in the study of dissipative Hamiltonian differential-algebraic equations (DAEs), where the system matrix has a positive semidefinite dissipation term, leading to bounded growth rates and marginal stability properties.
In the context of Lyapunov equations, such as A^* P + P A = -Q with P > 0 and Q \geq 0, the condition A + A^* \succeq 0 implies that the system \dot{x} = A x cannot be asymptotically stable, as no such P exists satisfying the strict inequality for Q > 0; instead, it guarantees non-negative real parts for the eigenvalues, giving a zero stability margin: along trajectories of \dot{x} = A x the norm \|x(t)\| is non-decreasing, so convergence to the equilibrium is excluded. This property is leveraged in control theory to assess passivity and energy dissipation in non-symmetric systems. However, many properties of Hermitian positive semidefinite matrices do not transfer directly. For instance, while all eigenvalues of a Hermitian positive semidefinite matrix are real and non-negative, those of a non-Hermitian counterpart can be complex with non-negative real parts, and the matrix generally lacks a spectral decomposition into real orthogonal projections, complicating diagonalization and eigenvalue-based analyses. Additionally, operations like inversion or taking powers may not preserve the structure without additional assumptions, limiting direct analogies to the Hermitian case.
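The Hermitian-part definition is straightforward to test numerically; the non-symmetric matrix below is an illustrative example whose Hermitian part is positive semidefinite, so its eigenvalues must have non-negative real parts:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [0.0,  1.0]])           # non-symmetric

H = 0.5 * (A + A.T)                   # Hermitian part, here [[1, -1], [-1, 1]]
hermitian_part_psd = bool(np.all(np.linalg.eigvalsh(H) >= -1e-10))
eigs = np.linalg.eigvals(A)           # eigenvalues of A itself (may be complex)
```

Here H has eigenvalues 0 and 2, so A qualifies as non-Hermitian positive semidefinite, and consistently every eigenvalue of A (both equal to 1) has non-negative real part.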

Optimization and statistics

Semidefinite programming (SDP) is a subfield of convex optimization that uses the cone of positive semidefinite matrices as its feasible set, allowing complex problems to be formulated as linear objectives over matrix variables. SDP problems typically take the primal form of maximizing \operatorname{trace}(C X) subject to linear constraints \mathcal{A}(X) = b and the positive semidefiniteness condition X \succeq 0, where X is a symmetric matrix variable, C and the A_i defining \mathcal{A} are given symmetric matrices, and b is a vector. This structure generalizes linear programming by replacing nonnegativity constraints with the more expressive semidefiniteness condition, enabling applications in areas like combinatorial optimization and control theory. The development of polynomial-time interior-point algorithms in the 1990s made SDP computationally feasible, with solvers achieving high precision for matrices of moderate size.

In statistics, positive semidefiniteness is a foundational property of covariance matrices, ensuring that the variance of any linear combination of random variables is nonnegative. For a random vector X with mean \mu, the covariance matrix \Sigma = \mathbb{E}[(X - \mu)(X - \mu)^\top] satisfies z^\top \Sigma z \geq 0 for all z, as this quantity equals \operatorname{Var}(z^\top X). The sample covariance matrix, estimated from n observations as S = \frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^\top, is likewise positive semidefinite (and positive definite when the centered data have full rank), providing a reliable estimator for multivariate analysis. This property is crucial in principal component analysis (PCA), where the eigendecomposition of the covariance matrix yields orthogonal directions of maximum variance; the positive eigenvalues correspond to meaningful components, while zero eigenvalues indicate redundancy or noise.

Positive semidefiniteness also ensures physical consistency in models of heat conduction and diffusion, particularly for anisotropic materials where conductivity varies by direction.
The thermal conductivity tensor k, relating the heat flux q = -k \nabla T to the temperature gradient \nabla T, must be positive semidefinite to align with thermodynamic principles, including the second law (nonnegative entropy production) and Onsager reciprocity (symmetry under time reversal). Violations, such as indefinite tensors, would imply energy creation from temperature differences, which is unphysical; for instance, in finite element simulations of heat transfer, enforcing k \succeq 0 prevents numerical instabilities and guarantees realistic heat flow. This requirement extends to broader diffusion models, such as those for porous media or biological tissues, where semidefiniteness maintains bounded solutions.

In quantum mechanics, density matrices formalize the description of quantum states, especially mixed states arising from partial knowledge or subsystems. A density matrix \rho is a positive semidefinite Hermitian operator on a Hilbert space with \operatorname{Tr}(\rho) = 1, ensuring that measurement probabilities p_i = \operatorname{Tr}(\rho P_i) (for projectors P_i) are nonnegative and sum to 1. For pure states |\psi\rangle, \rho = |\psi\rangle\langle\psi| is a rank-1 positive semidefinite operator; mixed states are convex combinations of such projectors. This framework underpins quantum information theory, entanglement measures, and quantum algorithms, where positivity preserves coherence and information integrity. Seminal treatments emphasize that any such \rho admits a spectral decomposition \rho = \sum \lambda_i |\phi_i\rangle\langle\phi_i| with \lambda_i \geq 0, linking to probabilistic interpretations.

Machine learning applications, particularly kernel methods, exploit positive semidefinite kernels to enable nonlinear learning in high-dimensional spaces without explicitly computing feature maps. In support vector machines (SVMs), the kernel matrix K_{ij} = k(x_i, x_j) must be positive semidefinite to represent inner products in a reproducing kernel Hilbert space, allowing the dual optimization \max_\alpha \sum \alpha_i - \frac{1}{2} \alpha^\top K \alpha subject to constraints on \alpha.
Mercer's theorem establishes that any continuous symmetric positive semidefinite kernel function k admits an expansion k(x, y) = \sum \lambda_m \phi_m(x) \phi_m(y) with \lambda_m \geq 0, justifying the kernel trick for tasks like classification and regression. This property ensures convex quadratic programs in SVM training, with common kernels like the Gaussian (RBF) kernel k(x, y) = \exp(-\|x - y\|^2 / 2\sigma^2) inherently satisfying the condition and improving generalization on nonlinear data.
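The positive semidefiniteness of a Gaussian kernel matrix can be verified directly on sampled data (a minimal sketch; the point count, dimension, and bandwidth are arbitrary illustrative choices):

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise distances
    return np.exp(-sq / (2.0 * sigma ** 2))

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 2))      # 20 sample points in the plane
K = rbf_kernel(X)
```

The resulting Gram matrix is symmetric with unit diagonal (each point at distance zero from itself), and its spectrum is non-negative, which is exactly the condition the SVM dual problem needs to be convex.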

  28. [28]
    [PDF] 3 The positive semidefinite cone - DAMTP
    ... positive semidefinite cone. Let Sn denote the vector space of n × n real ... i Avi ≥ 0 for all i = 1,...,n and thus, since λi ≥ 0 we get Tr(AB) ≥ 0 ...
  29. [29]
    [PDF] Lecture 12: Positive semidefinite cone - CSE - IIT Kanpur
    Positive semidefinite matrices are symmetric matrices with non-negative eigenvalues. The positive semidefinite cone (Sn) is a convex cone of these matrices, ...
  30. [30]
    [PDF] The Geometry of Semidefinite Programming
    The set of positive semidefinite matrices is a convex cone. A spectrahedron is its intersection with an affine-linear space. Semidefinite programming maximizes ...
  31. [31]
    [PDF] sharp nonzero lower bounds for the schur product theorem - IISc Math
    Abstract. By a result of Schur [J. reine angew. Math. 140 (1911), pp. 1–. 28], the entrywise product M ◦ N of two positive semidefinite matrices M, N.
  32. [32]
    [PDF] arXiv:2004.03909v1 [math.CA] 8 Apr 2020
    Apr 8, 2020 · . A classical theorem of Schur, known as the Schur product theorem, tells us that the Hadamard product of two positive semidefinite matrices is.
  33. [33]
    [PDF] The Kronecker Product - Thomas T.C.K. Zhang
    Corollary 1.5 An immediate result from the previous theorem is that the Kronecker product of two positive (negative) semi-definite matrices is positive semi- ...
  34. [34]
    [PDF] The Schur Complement and Symmetric Positive Semidefinite (and ...
    Aug 24, 2019 · as claimed. We now return to our original problem, characterizing when a symmetric matrix,. M = ( A B. B>. C. ) , is positive semidefinite. Thus ...
  35. [35]
    [PDF] Lec3p1, ORF363/COS323
    A (strict) global minimum is of course also a (strict) local minimum. ... (i.e., the Hessian at is positive semidefinite.) "Little o" notation: see ...
  36. [36]
    [PDF] 19. Constrained Optimization II
    Nov 22, 2022 · This involves semidefinite matrices. There are also sufficient conditions using semidefinite matrices, analogous to Theorem 17.8.1 and Theorem.
  37. [37]
    Preconditioned Hermitian and skew-Hermitian splitting methods for ...
    Mar 16, 2004 · & Pan, JY. Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numer. Math ...
  38. [38]
    On Non-Hermitian Positive (Semi)Definite Linear Algebraic Systems ...
    Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. ... numerical range, and ...
  39. [39]
    [PDF] Semidefinite Programming - Stanford University
    [3] .,Optimization over the positive-definite cone: interiorpoint methods and combinatorial applications, in Advances in Optimization and Parallel Computing ...
  40. [40]
    Semidefinite Programming | SIAM Review
    This paper gives a survey of the theory and applications of semidefinite programs and an introduction to primaldual interior-point methods for their solution.Missing: Nemirovski | Show results with:Nemirovski
  41. [41]
    [PDF] Topic 5: Principal component analysis 5.1 Covariance matrices
    Covariance matrices are always symmetric by definition. Moreover, they are always positive semidefinite, since for any non-zero z ∈ Rd, z> cov(X)z = ...
  42. [42]
    [PDF] On the Necessity of Positive Semi-Definite Conductivity and ...
    Note that selection of a conductivity tensor which is more di- agonally dominant can render the tensor to be positive semi- definite. For example, changing k( ...
  43. [43]
    [PDF] quantum-computation-and-quantum-information-nielsen-chuang.pdf
    Containing a wealth of figures and exercises, this well-known textbook is ideal for courses on the subject, and will interest beginning graduate students and ...
  44. [44]
    [PDF] Learning the Kernel Matrix with Semidefinite Programming
    Kernel-based learning algorithms work by embedding the data into a Euclidean space, and then searching for linear relations among the embedded data points.