
Definite matrix

In linear algebra, a definite matrix is a real symmetric matrix (or, more generally, a complex Hermitian matrix) that is either positive definite or negative definite, meaning its associated quadratic form \mathbf{x}^T A \mathbf{x} is strictly positive (or strictly negative) for all non-zero vectors \mathbf{x}. Equivalently, all eigenvalues of a positive definite matrix are positive, while all eigenvalues of a negative definite matrix are negative. These matrices play a fundamental role in various fields, including optimization, where positive definiteness of the Hessian matrix at a critical point indicates a local minimum, and negative definiteness indicates a local maximum. Positive definite matrices are particularly ubiquitous in statistics and probability, often appearing as covariance matrices of multivariate normal distributions, ensuring that variances are positive and correlations are well-defined. They also admit unique Cholesky decompositions into a lower triangular factor and its transpose, facilitating numerical computations such as solving linear systems efficiently. Negative definite matrices share analogous properties but with sign reversals, such as the leading principal minors alternating in sign, starting with negative, for negative definiteness. Both types are invertible, with inverses that preserve definiteness (the inverse of a positive definite matrix is positive definite, and similarly for negative). Key tests for definiteness include Sylvester's criterion, which states that a symmetric matrix is positive definite if and only if all leading principal minors are positive, and negative definite if they alternate in sign starting with negative. Eigenvalue computation provides another definitive method, leveraging the spectral theorem for symmetric matrices, which guarantees real eigenvalues. In applications like control theory and physics, definiteness ensures stability and positive energy forms, such as in the analysis of quadratic potentials.

Definitions

Real symmetric matrices

A real symmetric matrix A \in \mathbb{R}^{n \times n} (i.e., A = A^T) is defined as positive definite if the associated quadratic form satisfies x^T A x > 0 for every nonzero real vector x \in \mathbb{R}^n. This condition ensures that the quadratic form is strictly positive, establishing a foundational notion of positivity tied to the matrix's spectrum. The concept extends to positive semi-definite matrices, where x^T A x \geq 0 for all real vectors x, allowing equality for some nonzero x. Similarly, A is negative definite if x^T A x < 0 for all nonzero x, and negative semi-definite if x^T A x \leq 0 for all x. These classifications rely on the sign of the quadratic form, with symmetry guaranteeing that A has real eigenvalues. An equivalent characterization for positive definiteness is that all eigenvalues of A are positive. For positive semi-definiteness, all eigenvalues are non-negative, while negative definiteness requires all eigenvalues to be negative, and negative semi-definiteness requires all to be non-positive. This spectral equivalence follows from the spectral theorem for symmetric matrices, which diagonalizes A orthogonally, preserving the quadratic form's sign properties. One early characterization of positive definiteness is Sylvester's criterion, introduced by James Joseph Sylvester in 1852, stating that a real symmetric matrix is positive definite if and only if all its leading principal minors are positive. For positive definite A, the quadratic form x^T A x is always positive, providing a basic inequality that underpins applications in optimization and stability analysis.
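The eigenvalue characterization translates directly into a numerical test. The following is a minimal sketch, assuming NumPy; the matrix shown is an illustrative example:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # symmetric example matrix
eigenvalues = np.linalg.eigvalsh(A)  # eigvalsh: real eigenvalues of a symmetric matrix
print(eigenvalues)                   # [1. 3.] -> all positive
print(np.all(eigenvalues > 0))       # True: A is positive definite
```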

Hermitian matrices

In the complex case, a Hermitian matrix H \in \mathbb{C}^{n \times n}, satisfying H = H^* where ^* denotes the conjugate transpose, is defined as positive definite if the quadratic form \mathbf{x}^* H \mathbf{x} > 0 for all nonzero complex vectors \mathbf{x} \in \mathbb{C}^n. Analogous definitions extend to the semi-definite cases: H is positive semi-definite if \mathbf{x}^* H \mathbf{x} \geq 0 for all \mathbf{x} \in \mathbb{C}^n, possibly with equality for some nonzero \mathbf{x}; negative definite if \mathbf{x}^* H \mathbf{x} < 0 for all nonzero \mathbf{x}; and negative semi-definite if \mathbf{x}^* H \mathbf{x} \leq 0 for all \mathbf{x}. A key spectral property of positive definite Hermitian matrices is that all their eigenvalues are real and strictly positive. This follows directly from the positive definiteness condition applied to eigenvectors, ensuring the eigenvalues \lambda satisfy \mathbf{u}^* H \mathbf{u} = \lambda \mathbf{u}^* \mathbf{u} > 0 for normalized eigenvectors \mathbf{u}, implying \lambda > 0. Conversely, if all eigenvalues of a Hermitian matrix are positive, then it is positive definite. Real symmetric matrices form a special case of Hermitian matrices, as the conjugate transpose reduces to the ordinary transpose over the reals, preserving the definiteness definitions and properties in the complex framework. The Rayleigh quotient provides a normalized measure of definiteness for Hermitian matrices, defined as R(\mathbf{x}) = \frac{\mathbf{x}^* H \mathbf{x}}{\mathbf{x}^* \mathbf{x}} for nonzero \mathbf{x} \in \mathbb{C}^n. For a positive definite Hermitian matrix H, R(\mathbf{x}) > 0 holds for all nonzero \mathbf{x}, reflecting the uniform positivity of the quadratic form relative to the vector's norm.

Notation and terminology

In the study of definite matrices, standard notation distinguishes between strict definiteness and semi-definiteness using the Loewner partial ordering. For a Hermitian matrix A, the symbol A \succ 0 denotes that A is positive definite, meaning the quadratic form x^* A x > 0 for all nonzero vectors x \in \mathbb{C}^n. Similarly, A \succeq 0 indicates positive semi-definiteness, where x^* A x \geq 0 for all x \in \mathbb{C}^n, allowing zero values. The notation A \prec 0 is used for negative definiteness, and A \preceq 0 for negative semi-definiteness. Terminology in matrix theory differentiates "definite" from "semi-definite" based on the strictness of the quadratic form's positivity or negativity. A matrix is definite if the associated quadratic form is strictly positive (or negative) for all nonzero vectors, whereas semi-definite allows non-strict inequality, including zero for some nonzero vectors. Matrices with eigenvalues of mixed signs, both positive and negative, are termed indefinite, contrasting with the uniform sign requirement for definite cases. In mathematical literature, the phrase "positive definite matrix" is predominantly used over the more general "definite matrix" to explicitly indicate the positive sign of eigenvalues, avoiding ambiguity with negative definite counterparts. This convention ensures clarity in contexts like optimization and stability analysis, where the sign directly impacts applications. Sylvester's law of inertia provides a canonical classification for real symmetric matrices under congruence, specifying the inertia as the triple (p, q, r), where p is the number of positive eigenvalues, q the number of negative eigenvalues, and r = n - p - q the multiplicity of the zero eigenvalue, with n the matrix dimension. This signature remains invariant under nonsingular congruence transformations.

Examples

Positive definite examples

A diagonal matrix with all positive diagonal entries is positive definite, as the associated quadratic form reduces to a weighted sum of squares with positive weights. For instance, consider the 2×2 diagonal matrix D = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}. The quadratic form is \mathbf{x}^T D \mathbf{x} = x_1^2 + 2 x_2^2, which is strictly positive for any nonzero \mathbf{x} = (x_1, x_2)^T \in \mathbb{R}^2 since both coefficients are positive. A common non-diagonal example arises in statistics as a covariance matrix, such as A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}, which models the variances and covariance of two correlated random variables. To verify positive definiteness, compute the eigenvalues by solving the characteristic equation \det(A - \lambda I) = (2 - \lambda)^2 - 1 = 0, yielding \lambda^2 - 4\lambda + 3 = 0 or (\lambda - 3)(\lambda - 1) = 0, so the eigenvalues are \lambda_1 = 3 > 0 and \lambda_2 = 1 > 0. Alternatively, evaluate the quadratic form \mathbf{x}^T A \mathbf{x} = 2x_1^2 + 2x_1 x_2 + 2x_2^2 = (x_1 + x_2)^2 + x_1^2 + x_2^2 > 0 for \mathbf{x} \neq \mathbf{0}, confirming the property directly. Covariance matrices of this form are positive definite when the variables are non-degenerate. In contrast, scaling the identity matrix by -1 yields -I = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, which is negative definite since \mathbf{x}^T (-I) \mathbf{x} = - (x_1^2 + x_2^2) < 0 for all nonzero \mathbf{x}.
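Both verifications for this example can be reproduced numerically; a minimal sketch assuming NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.linalg.eigvalsh(A))   # [1. 3.] -> both eigenvalues positive

rng = np.random.default_rng(0)
x = rng.standard_normal(2)     # an arbitrary nonzero vector
print(x @ A @ x > 0)           # True: the quadratic form is positive
```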

Indefinite and semi-definite cases

A symmetric matrix is classified as indefinite if its quadratic form takes both positive and negative values for different nonzero vectors, which occurs when the matrix has both positive and negative eigenvalues. A standard example is the 2×2 matrix A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, which is symmetric. The characteristic equation is \det(A - \lambda I) = \lambda^2 - 1 = 0, yielding eigenvalues \lambda = 1 and \lambda = -1. For the eigenvector corresponding to \lambda = 1, take \mathbf{x} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}; then \mathbf{x}^T A \mathbf{x} = 2 > 0. For the eigenvector corresponding to \lambda = -1, take \mathbf{y} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}; then \mathbf{y}^T A \mathbf{y} = -2 < 0, confirming that the quadratic form changes sign. In contrast to positive definite matrices, where all eigenvalues are positive and the quadratic form is always positive for nonzero vectors, semi-definite matrices allow zero eigenvalues. A symmetric matrix is positive semi-definite if all eigenvalues are nonnegative and the quadratic form is nonnegative for all vectors. An example is the rank-1 matrix B = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, which is symmetric. The characteristic equation is \det(B - \lambda I) = \lambda(\lambda - 2) = 0, yielding eigenvalues \lambda = 2 and \lambda = 0. To verify, consider the nonzero vector \mathbf{z} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}; then B \mathbf{z} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, so \mathbf{z}^T B \mathbf{z} = 0, satisfying the semi-definite condition without being strictly positive. For semi-definite matrices, the multiplicity of the zero eigenvalue equals the nullity, which is n - r where n is the matrix dimension and r is the rank; in the example above, the zero eigenvalue has multiplicity 1, matching the nullity since \operatorname{rank}(B) = 1. Negative semi-definite matrices follow analogously, with all eigenvalues nonpositive and the quadratic form nonpositive.
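A minimal classification routine based on eigenvalue signs, assuming NumPy and a small tolerance for floating-point zeros:

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    w = np.linalg.eigvalsh(A)
    if np.all(w > tol):   return "positive definite"
    if np.all(w < -tol):  return "negative definite"
    if np.all(w >= -tol): return "positive semi-definite"
    if np.all(w <= tol):  return "negative semi-definite"
    return "indefinite"

print(classify(np.array([[0.0, 1.0], [1.0, 0.0]])))  # indefinite (eigenvalues -1, 1)
print(classify(np.array([[1.0, 1.0], [1.0, 1.0]])))  # positive semi-definite (0, 2)
```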

Spectral properties

Eigenvalues

For real symmetric matrices, all eigenvalues are real numbers. This follows from the spectral theorem for symmetric matrices, which guarantees that such matrices are diagonalizable over the reals with orthogonal eigenvectors. Similarly, for Hermitian matrices, which generalize symmetric matrices to the complex case, all eigenvalues are also real. This property arises because the Hermitian adjoint preserves the inner product structure, ensuring that eigenvalues satisfy the reality condition \lambda = \mathbf{u}^* A \mathbf{u} for normalized eigenvectors \mathbf{u}, a quantity that is always real for Hermitian A. A real symmetric matrix A is positive definite if and only if all its eigenvalues \lambda_i satisfy \lambda_i > 0. It is negative definite if and only if \lambda_i < 0 for all i. It is positive semi-definite if \lambda_i \geq 0 for all i, with at least one zero eigenvalue precisely when it is semi-definite but not definite. It is negative semi-definite if \lambda_i \leq 0 for all i, again with a zero eigenvalue distinguishing the semi-definite case from the definite one. The same characterizations hold for Hermitian matrices, where definiteness is defined via the complex quadratic form x^* A x. These conditions link the spectral properties directly to the quadratic form x^* A x > 0 (or < 0) for all nonzero x in the positive (or negative) definite case, and \geq 0 (or \leq 0) in the semi-definite cases. The Gershgorin circle theorem provides bounds on the possible locations of eigenvalues for any square matrix, including definite ones. For a matrix A = (a_{ij}), every eigenvalue lies within at least one of the disks centered at a_{ii} with radius \sum_{j \neq i} |a_{ij}| in the complex plane. For positive definite matrices, where eigenvalues are positive reals, this theorem can confirm that all disks lie in the positive half-plane if the diagonal entries are positive and sufficiently dominant, thus supporting definiteness without full computation. Similarly, for negative definite matrices, all disks can lie in the negative half-plane if the diagonal entries are negative and sufficiently dominant. The min-max theorem, also known as the Courant-Fischer theorem, characterizes the eigenvalues of a symmetric or Hermitian matrix A through variational principles. The smallest eigenvalue is given by \lambda_{\min} = \min_{x \neq 0} \frac{x^* A x}{x^* x}, with the maximum of the same quotient yielding \lambda_{\max}. More generally, the k-th smallest eigenvalue satisfies \lambda_k = \min_{\dim S = k} \max_{x \in S, x \neq 0} \frac{x^* A x}{x^* x} = \max_{\dim T = n-k+1} \min_{x \in T, x \neq 0} \frac{x^* A x}{x^* x}, where S and T are subspaces. This theorem underscores how definiteness corresponds to the Rayleigh quotient being bounded below by a positive constant (positive definite) or by zero (positive semi-definite) over all directions, and bounded above by a negative constant or by zero for the negative cases (with \lambda_{\max} < 0 for negative definite). The spectral radius \rho(A) = \max_i |\lambda_i| plays a key role in convergence and stability analysis. For positive definite matrices, \rho(A) = \lambda_{\max} > 0. For negative definite matrices, \rho(A) = -\lambda_{\min} > 0. Bounds on \rho(A) (e.g., via norms like \rho(A) \leq \|A\|_2) can help locate the spectrum when combined with trace positivity (or negativity) or other conditions. This relation is particularly useful in stability analysis and iterative methods.
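A quick numerical check of the Gershgorin sufficient condition, assuming NumPy; note the test is one-directional (its failure does not disprove definiteness):

```python
import numpy as np

def gershgorin_positive(A):
    """Sufficient (not necessary) test: every Gershgorin disk lies in Re(z) > 0."""
    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))  # off-diagonal row sums
    return np.all(np.diag(A) - radii > 0)

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(gershgorin_positive(A))  # True: positive diagonal, diagonally dominant
```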

Trace and determinants

For a positive definite matrix A, the trace \operatorname{tr}(A) equals the sum of its eigenvalues \lambda_i, all of which are positive, so \operatorname{tr}(A) = \sum \lambda_i > 0. For a negative definite matrix, \operatorname{tr}(A) = \sum \lambda_i < 0. The determinant \det(A) is the product of the eigenvalues, yielding \det(A) = \prod \lambda_i > 0 for positive definite matrices. For negative definite matrices, \det(A) = \prod \lambda_i has sign (-1)^n, where n is the matrix dimension, and |\det(A)| > 0. The log-determinant \log \det(A) = \sum \log \lambda_i is a concave function on the cone of positive definite matrices and plays a key role in optimization problems, such as maximum likelihood estimation and entropy computations for multivariate Gaussians. For a positive semi-definite A, the determinant satisfies \det(A) \geq 0, with \det(A) = 0 exactly when A is singular (i.e., has at least one zero eigenvalue). For negative semi-definite matrices, \det(A) = 0 if singular, and otherwise (the negative definite case) the determinant has sign (-1)^n with positive magnitude. Hadamard's inequality states that for a positive definite matrix A = (a_{ij}), \det(A) \leq \prod_{i=1}^n a_{ii}, with equality if and only if A is diagonal.
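Since \det(A) = (\prod_i l_{ii})^2 for the Cholesky factor L of a positive definite A, the log-determinant can be computed stably from the factor's diagonal. A minimal sketch assuming NumPy:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
L = np.linalg.cholesky(A)                 # A = L L^T, valid since A is positive definite
logdet = 2.0 * np.sum(np.log(np.diag(L))) # log det(A) = 2 * sum(log L_ii)
print(logdet, np.log(np.linalg.det(A)))   # both ~ log(11) ~ 2.3979
```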

Decompositions

Cholesky decomposition

The Cholesky decomposition, also known as the Cholesky factorization, applies exclusively to positive definite matrices and provides a factorization into the product of a lower triangular matrix and its transpose. For a real symmetric positive definite matrix A \in \mathbb{R}^{n \times n}, there exists a unique lower triangular matrix L \in \mathbb{R}^{n \times n} with positive diagonal entries such that A = L L^T. This decomposition is guaranteed by the positive definiteness of A, which ensures all leading principal minors are positive and allows the square roots of the diagonal terms to be real and positive. The uniqueness follows from the requirement that the diagonal entries of L are strictly positive; without this normalization, the factorization would hold only up to sign changes in the columns of L, but the positive diagonal fixes it uniquely. The standard algorithm computes L column by column in a forward substitution manner, leveraging the symmetry of A to halve the storage and work compared to general LU factorization. For k = 1 to n, compute the k-th column of L as follows: l_{kk} = \sqrt{ a_{kk} - \sum_{m=1}^{k-1} l_{km}^2 }, and for i = k+1 to n, l_{ik} = \frac{1}{l_{kk}} \left( a_{ik} - \sum_{m=1}^{k-1} l_{im} l_{km} \right). If at any step the argument of the square root is non-positive, the matrix is not positive definite. This process exploits the positive definiteness to avoid pivoting, ensuring numerical stability in finite precision arithmetic under mild conditions on the matrix entries. The computational cost of the Cholesky algorithm is \frac{1}{3} n^3 + O(n^2) floating-point operations (flops), plus n square roots, making it roughly twice as efficient as LU decomposition for dense matrices of the same size. Storage requires only the lower triangular part of L, which is \frac{1}{2} n(n+1) entries. For the complex case, the decomposition extends to Hermitian positive definite matrices A \in \mathbb{C}^{n \times n}, where A = L L^* with L lower triangular and positive real diagonal entries; the proof of existence and uniqueness mirrors the real case via induction on the dimension, using the condition x^* A x > 0 for all nonzero x \in \mathbb{C}^n. The algorithm adapts by replacing transposes with conjugate transposes in the sums and using the same positive diagonal convention.
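The column-wise recurrences above translate directly into code. The following is a minimal sketch in Python with NumPy, for illustration only; library routines such as numpy.linalg.cholesky should be preferred in practice:

```python
import numpy as np

def cholesky_lower(A):
    """Cholesky factor L (A = L L^T) via the column-wise recurrences above.

    Raises ValueError if A is not positive definite (non-positive pivot)."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for k in range(n):
        pivot = A[k, k] - np.dot(L[k, :k], L[k, :k])
        if pivot <= 0:
            raise ValueError("matrix is not positive definite")
        L[k, k] = np.sqrt(pivot)
        for i in range(k + 1, n):
            L[i, k] = (A[i, k] - np.dot(L[i, :k], L[k, :k])) / L[k, k]
    return L

A = np.array([[4.0, 2.0], [2.0, 3.0]])
L = cholesky_lower(A)
print(np.allclose(L @ L.T, A))  # True
```

A failed pivot doubles as a definiteness test: attempting the factorization is among the cheapest ways to check whether a symmetric matrix is positive definite.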

Eigenvalue decomposition

The spectral theorem states that every Hermitian matrix A \in \mathbb{C}^{n \times n} admits an eigenvalue decomposition of the form A = U D U^*, where U is a unitary matrix, D is a real diagonal matrix containing the eigenvalues of A, and U^* denotes the conjugate transpose of U. This decomposition diagonalizes A using an orthonormal basis of eigenvectors, ensuring that the eigenvalues are real and the transformation preserves the inner product structure. For a positive definite Hermitian matrix A, the diagonal entries of D are all strictly positive, reflecting the property that all eigenvalues are positive. This eigenvalue positivity directly confirms the positive definiteness of A, as the quadratic form x^* A x > 0 for all nonzero x holds if and only if the eigenvalues are positive. Consequently, since A is invertible, its inverse admits the decomposition A^{-1} = U D^{-1} U^*, where D^{-1} is the diagonal matrix with reciprocal positive entries, preserving the positive definiteness of the inverse. The columns of U form an orthonormal set of eigenvectors corresponding to the eigenvalues in D, providing a complete orthonormal basis for \mathbb{C}^n that diagonalizes A. In practice, computing this decomposition for large Hermitian matrices relies on iterative numerical methods, such as the QR algorithm, which converges to the diagonal form by repeated QR factorizations and ensures stability for symmetric problems.

Square roots and functions

For a positive definite matrix A, there exists a unique positive definite matrix B such that B^2 = A. This B, the principal square root \sqrt{A}, inherits the positive definiteness of A, ensuring all its eigenvalues are positive. The uniqueness holds specifically among all positive semi-definite square roots of A. The principal square root can be constructed using the eigenvalue decomposition, which applies to Hermitian matrices like positive definite A. Specifically, if A = U D U^* is the eigenvalue decomposition with U unitary and D diagonal containing the positive eigenvalues \lambda_i > 0, then \sqrt{A} = U \sqrt{D} U^*, where \sqrt{D} is the diagonal matrix with entries \sqrt{\lambda_i}. This construction preserves positive definiteness, as the square roots of positive eigenvalues remain positive. More generally, analytic functions can be defined on positive definite matrices via the same spectral decomposition. For a function f on the positive reals, f(A) = U f(D) U^*, where f(D) applies f entrywise to the diagonal eigenvalues. For example, the matrix exponential \exp(A) = U \exp(D) U^* yields another positive definite matrix, since \exp(\lambda_i) > 0 for all real \lambda_i. This framework extends to other functions like powers A^r for r > 0, maintaining positive definiteness.
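A minimal sketch of the spectral construction of the principal square root, assuming NumPy:

```python
import numpy as np

def principal_sqrt(A):
    """Principal square root of a symmetric positive definite matrix via eigh."""
    w, U = np.linalg.eigh(A)           # A = U diag(w) U^T with w > 0
    return U @ np.diag(np.sqrt(w)) @ U.T

A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = principal_sqrt(A)
print(np.allclose(B @ B, A))           # True: B^2 = A
print(np.linalg.eigvalsh(B))           # [1., ~1.732]: B is itself positive definite
```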

Characterizations

Quadratic forms

A Hermitian matrix A is positive definite if the associated quadratic form Q(\mathbf{x}) = \mathbf{x}^* A \mathbf{x} satisfies Q(\mathbf{x}) > 0 for all nonzero vectors \mathbf{x}, negative definite if Q(\mathbf{x}) < 0 for all nonzero \mathbf{x}, positive semi-definite if Q(\mathbf{x}) \geq 0 for all \mathbf{x}, and negative semi-definite if Q(\mathbf{x}) \leq 0 for all \mathbf{x}. These sign conditions on Q(\mathbf{x}) provide the primary characterization of definiteness for Hermitian matrices, as the quadratic form captures the matrix's behavior under inner products. For real symmetric matrices, the form simplifies to Q(\mathbf{x}) = \mathbf{x}^T A \mathbf{x}. To illustrate for low dimensions, consider a 2×2 real symmetric matrix A = \begin{pmatrix} a & b \\ b & c \end{pmatrix}, where the quadratic form is Q(x, y) = a x^2 + 2 b x y + c y^2. Completing the square yields Q(x, y) = a \left( x + \frac{b}{a} y \right)^2 + \left( c - \frac{b^2}{a} \right) y^2, assuming a \neq 0. This expression is positive definite if a > 0 and c - b^2 / a > 0, as both terms are then nonnegative and vanish simultaneously only at the origin; otherwise, it may be indefinite or semi-definite depending on the signs. Sylvester's criterion offers an alternative test via principal minors: a symmetric matrix A is positive definite if and only if all leading principal minors are positive, and negative definite if they alternate in sign starting with negative. This criterion derives from the continuity of eigenvalues and properties of determinants, providing a practical computational check without computing eigenvalues. For positive semi-definite matrices, the quadratic form satisfies Q(\mathbf{x}) \geq \lambda_{\min} \|\mathbf{x}\|^2, where \lambda_{\min} is the smallest eigenvalue (which is nonnegative), establishing a lower bound tied to the spectrum. This inequality follows from the spectral theorem and bounds the form's minimum value relative to the Euclidean norm. An indefinite matrix produces a quadratic form Q(\mathbf{x}) that takes both positive and negative values, corresponding to saddle points of the surface defined by Q, where the form increases in some directions and decreases in others. For instance, the matrix \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} yields Q(x, y) = x^2 - y^2, which has a saddle point at the origin.
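Sylvester's criterion is straightforward to apply numerically; a minimal sketch assuming NumPy (determinants of leading blocks, adequate for small illustrative matrices):

```python
import numpy as np

def sylvester_positive_definite(A):
    """Test positive definiteness via leading principal minors (Sylvester's criterion)."""
    n = A.shape[0]
    return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, n + 1))

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # leading minors: 2 > 0, det(A) = 3 > 0
print(sylvester_positive_definite(A))     # True
```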

Schur complements and minors

A symmetric matrix A is positive definite if and only if all of its principal minors are positive. This condition provides a complete characterization of positive definiteness in terms of the determinants of the principal submatrices of A. For positive semidefiniteness, a symmetric matrix A has all principal minors nonnegative, and A is singular if at least one principal minor is zero. Another characterization involves Schur complements, which offer a recursive perspective on definiteness. Consider a symmetric matrix A partitioned in block form as A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, where A_{22} is nonsingular. The Schur complement of A_{22} in A, denoted A / A_{22}, is given by A / A_{22} = A_{11} - A_{12} A_{22}^{-1} A_{21}. If A is positive definite and A_{22} is positive definite, then the Schur complement A / A_{22} is also positive definite. Similarly, for positive semidefiniteness, the Schur complement inherits the nonnegative definiteness property under compatible partitioning. This propagation of definiteness through Schur complements enables a recursive test: the definiteness of A can be checked by confirming the definiteness of A_{22} and then recursively applying the test to the Schur complement A / A_{22}. A key identity linking the determinant of A to its Schur complement is \det(A) = \det(A_{22}) \cdot \det(A / A_{22}), assuming A_{22} is invertible. This formula underscores the recursive structure, as the sign of \det(A) follows from the signs of \det(A_{22}) and \det(A / A_{22}), aligning with the principal minor conditions for definiteness. For semidefinite cases, the identity holds with nonnegative determinants, and singularity arises when either factor is zero.
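As a sketch of the recursive test (assuming NumPy), one can eliminate a trailing 1×1 block at each step, which amounts to one step of Cholesky factorization:

```python
import numpy as np

def pd_by_schur(A):
    """Recursive positive-definiteness test: A is positive definite iff its
    trailing diagonal entry is positive and the Schur complement of that
    1x1 block is positive definite (a sketch of the recursion above)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0] > 0
    a = A[-1, -1]
    if a <= 0:
        return False
    b = A[:-1, -1:]                     # off-diagonal block
    S = A[:-1, :-1] - (b @ b.T) / a     # Schur complement of the 1x1 block
    return pd_by_schur(S)

print(pd_by_schur(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True
print(pd_by_schur(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False (indefinite)
```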

Algebraic properties

Addition and scaling

Positive definite matrices exhibit desirable closure properties under addition and scalar multiplication. If A \succ 0 and B \succ 0, where A and B are symmetric matrices, then their sum A + B is also positive definite. This follows from the quadratic form characterization: for any nonzero x, x^T (A + B) x = x^T A x + x^T B x > 0 since both terms are positive. Analogously, if A \prec 0 and B \prec 0, then A + B \prec 0. In contrast, the sum of positive semidefinite matrices is positive semidefinite but not necessarily definite. For instance, if A \succeq 0 and B \succeq 0, then x^T (A + B) x \geq 0 for all x, yet A + B may be singular: x^T (A + B) x = 0 exactly when x lies in the null spaces of both A and B. The sum is therefore positive definite precisely when those null spaces intersect only in the zero vector; in particular, adding a positive definite matrix to a positive semidefinite one always yields a positive definite matrix. Similar considerations apply to negative semidefinite matrices. Scalar multiplication preserves or reverses definiteness based on the sign of the scalar. Specifically, if A \succ 0 and c > 0, then cA \succ 0; conversely, if c < 0, then cA \prec 0. For A \prec 0, positive c yields cA \prec 0, while negative c yields cA \succ 0. For c = 0, the result is the zero matrix, which is positive semidefinite. These properties stem from scaling the quadratic form: x^T (cA) x = c (x^T A x), which maintains positivity only for positive scalars when A \succ 0. Adding a negative definite matrix to a positive definite one does not necessarily destroy positive definiteness if the perturbation is sufficiently small. For example, consider A = I_2 \succ 0 and B = -0.5 I_2 \prec 0; then A + B = 0.5 I_2 \succ 0. This stability is quantified by Weyl's inequality for the minimum eigenvalues of Hermitian matrices: \lambda_{\min}(A + B) \geq \lambda_{\min}(A) + \lambda_{\min}(B). Thus, if \lambda_{\min}(A) > -\lambda_{\min}(B), the sum remains positive definite. A symmetric result holds for adding a positive definite matrix to a negative definite one.

Multiplication and products

The product of two positive definite matrices A and B is not necessarily positive definite, as AB may fail to be Hermitian unless A and B commute. If A and B do commute, then AB = BA is Hermitian and shares a simultaneous eigenbasis with A and B, ensuring all eigenvalues of AB are positive products of those of A and B, thus positive definite. For two negative definite matrices that commute, the product AB has eigenvalues that are products of negative numbers, hence positive, making AB positive definite. A key exception arises with the Hadamard (or entrywise, Schur) product A \circ B, where (A \circ B)_{ij} = a_{ij} b_{ij}. The Schur product theorem states that if A \succ 0 and B \succ 0, then A \circ B \succ 0: the entrywise product preserves positive definiteness. This result, originally due to Issai Schur, extends to positive semidefinite matrices yielding semidefiniteness and has applications in covariance structures and kernel methods. For negative definite matrices, the Hadamard product is not necessarily negative definite; for example, the entrywise product of diagonal negative definite matrices results in positive diagonal entries, yielding a positive definite matrix. Another definiteness-preserving product is the Kronecker (tensor) product A \otimes B, defined blockwise as A \otimes B = \begin{pmatrix} a_{11} B & \cdots & a_{1n} B \\ \vdots & \ddots & \vdots \\ a_{n1} B & \cdots & a_{nn} B \end{pmatrix}. If A \succ 0 and B \succ 0, then A \otimes B \succ 0: the eigenvalues of A \otimes B are precisely the products \lambda_i \mu_j for eigenvalues \lambda_i of A and \mu_j of B, all positive, confirming definiteness. Similarly, if both are negative definite, the products of eigenvalues are positive, so A \otimes B \succ 0. The Frobenius inner product \langle A, B \rangle_F = \operatorname{tr}(A^* B) between two positive definite Hermitian matrices A and B satisfies \langle A, B \rangle_F > 0, as the trace of the product of two positive definite matrices is strictly positive. This property underscores the self-duality of the cone of positive definite matrices under the Frobenius metric, with applications in optimization and matrix inequalities. For two negative definite matrices, \langle A, B \rangle_F > 0 as well, since \operatorname{tr}(A^* B) = \operatorname{tr}((-A)(-B)) and both -A and -B are positive definite.
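A small numerical illustration of the Schur product theorem and the Kronecker eigenvalue rule, assuming NumPy and two illustrative positive definite matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])    # eigenvalues 1, 3
B = np.array([[3.0, 1.0], [1.0, 2.0]])    # eigenvalues (5 +/- sqrt(5))/2, both positive

print(np.linalg.eigvalsh(A * B))          # Hadamard product: all positive (Schur product theorem)
print(np.linalg.eigvalsh(np.kron(A, B)))  # Kronecker product: products lambda_i * mu_j, all positive
```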

Inverses and ordering

A positive definite matrix A is invertible, and its inverse A^{-1} is also positive definite. This follows from the fact that the eigenvalues of A^{-1} are the reciprocals of the positive eigenvalues of A, ensuring all eigenvalues of A^{-1} are positive. Similarly, the inverse of a negative definite matrix is negative definite, as eigenvalues are reciprocals of negatives, remaining negative. The Löwner partial order on the set of symmetric matrices is defined such that for symmetric matrices A and B, A \geq B if A - B \succeq 0 (positive semidefinite), and A > B if A - B \succ 0 (positive definite). This order induces a partial ordering on the cone of positive definite matrices, preserving properties like monotonicity under certain operations. For negative definite matrices, one considers the order in the opposite cone, where A \leq B < 0 if B - A \succ 0. The inversion operation is monotone decreasing with respect to the Löwner order: if A \geq B > 0, then A^{-1} \leq B^{-1}. This monotonicity arises from the order-reversing operator monotonicity of the reciprocal function on the positive reals. An analogous decreasing monotonicity holds in the negative definite cone. For positive definite matrices satisfying 0 < B \leq A, the largest eigenvalue satisfies \lambda_{\max}(B^{-1}) \geq \lambda_{\max}(A^{-1}). This follows because \lambda_{\max}(A^{-1}) = 1 / \lambda_{\min}(A) and \lambda_{\min}(B) \leq \lambda_{\min}(A), so 1 / \lambda_{\min}(B) \geq 1 / \lambda_{\min}(A). Similar bounds apply in the negative case, adjusting for signs. A trace inequality for an n \times n positive definite matrix A states that \operatorname{tr}(A^{-1}) \geq n^2 / \operatorname{tr}(A). This bound is obtained by applying the AM-HM inequality to the eigenvalues of A. For negative definite A, the eigenvalues are negative, so \operatorname{tr}(A) < 0 and \operatorname{tr}(A^{-1}) < 0; applying the inequality to -A \succ 0 gives \operatorname{tr}((-A)^{-1}) \geq n^2 / \operatorname{tr}(-A), equivalently \operatorname{tr}(A^{-1}) \leq n^2 / \operatorname{tr}(A) with both sides negative.
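A quick numerical check of the trace inequality, assuming NumPy and an illustrative positive definite matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])           # positive definite, n = 2
lhs = np.trace(np.linalg.inv(A))                 # tr(A^{-1}) = 7/11 ~ 0.636
rhs = A.shape[0] ** 2 / np.trace(A)              # n^2 / tr(A) = 4/7 ~ 0.571
print(lhs, rhs, lhs >= rhs)                      # True: AM-HM bound holds
```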

Block and submatrix properties

Principal submatrices

A principal submatrix of an n \times n Hermitian matrix A is the r \times r submatrix formed by selecting the same set of r indices for both rows and columns, for some r \leq n. If A is positive definite, then every principal submatrix of A is also positive definite. This property arises because the quadratic form x^* A x > 0 for nonzero x \in \mathbb{C}^n restricts to a positive definite form on the coordinate subspace corresponding to the submatrix. Similarly, if A is positive semi-definite, every principal submatrix is positive semi-definite. Conversely, a symmetric matrix A is positive definite if and only if all its principal minors are positive. This characterization, a form of Sylvester's criterion extended to all principal minors, ensures that the definiteness propagates through substructures. For positive semi-definiteness, all principal minors are nonnegative. Analogous results hold for negative definiteness and negative semi-definiteness, where signs are reversed. To illustrate, consider the 3 \times 3 positive definite matrix A = \begin{pmatrix} 4 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix}, which has eigenvalues approximately 1.27, 3.00, and 4.73, all positive. The 2 \times 2 principal submatrix formed by the first two rows and columns is B = \begin{pmatrix} 4 & 1 \\ 1 & 3 \end{pmatrix}, with eigenvalues approximately 2.38 and 4.62, also positive, confirming B is positive definite. The eigenvalues of principal submatrices exhibit interlacing with those of the original matrix. For an r \times r principal submatrix B of an n \times n Hermitian matrix A with eigenvalues \lambda_1 \leq \cdots \leq \lambda_n and \mu_1 \leq \cdots \leq \mu_r, respectively, the Cauchy interlacing theorem states that \lambda_i \leq \mu_i \leq \lambda_{i + n - r}, \quad i = 1, \dots, r. This inequality bounds the eigenvalues of B between those of A, preserving sign patterns consistent with definiteness. For the example above, the eigenvalues of B (2.38, 4.62) interlace those of A (1.27, 3.00, 4.73), satisfying 1.27 \leq 2.38 \leq 3.00 and 3.00 \leq 4.62 \leq 4.73.
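The interlacing in this example can be verified directly; a minimal sketch assuming NumPy:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
B = A[:2, :2]                        # leading 2x2 principal submatrix

lam = np.linalg.eigvalsh(A)          # ~[1.27, 3.00, 4.73]
mu = np.linalg.eigvalsh(B)           # ~[2.38, 4.62]
# Cauchy interlacing with n = 3, r = 2: lam[i] <= mu[i] <= lam[i + n - r]
print(np.all(lam[:2] <= mu) and np.all(mu <= lam[1:]))  # True
```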

Schur complements in blocks

In the context of block matrices, the Schur complement provides a crucial tool for determining the definiteness of the entire matrix based on the properties of its blocks. Consider a Hermitian matrix partitioned as M = \begin{pmatrix} A & B \\ B^* & C \end{pmatrix}, where A is Hermitian positive definite (A \succ 0). The Schur complement of A in M is defined as S = C - B^* A^{-1} B. A fundamental result states that M is positive definite (M \succ 0) if and only if A \succ 0 and S \succ 0. This equivalence propagates the definiteness property across the block structure, allowing recursive checks on smaller submatrices. The determinant of the block matrix M can be expressed using the Schur complement as \det(M) = \det(A) \cdot \det(S) = \det(A) \cdot \det(C - B^* A^{-1} B), provided A is invertible. This formula facilitates the computation of determinants in partitioned systems without full matrix inversion, and it underscores how the positive definiteness of S ensures all eigenvalues of M are positive. Block inverse formulas also rely on the Schur complement. Assuming M is invertible, the inverse is given by M^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B S^{-1} B^* A^{-1} & -A^{-1} B S^{-1} \\ -S^{-1} B^* A^{-1} & S^{-1} \end{pmatrix}. This expression highlights the role of S in decoupling the blocks during inversion, which is particularly useful in solving linear systems involving large partitioned matrices. For positive semidefiniteness, the conditions extend using the Moore-Penrose pseudo-inverse. Specifically, M \succeq 0 if A \succeq 0, the range condition (I - A^\dagger A) B = 0 holds, and the generalized Schur complement satisfies C - B^* A^\dagger B \succeq 0. Analogous conditions apply when starting from C \succeq 0. These generalized criteria account for potential singularity in the blocks while preserving semidefiniteness. Numerically, Schur complements arise in block Gaussian elimination, where eliminating the A-block yields the Schur complement S as the reduced system. This process allows efficient checks for definiteness by verifying the positive (semi)definiteness of successive complements, avoiding full factorization and reducing computational cost in algorithms for large-scale problems.
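A small numerical illustration of the block criterion and the determinant identity, assuming NumPy; the matrix M is an arbitrary positive definite example:

```python
import numpy as np

M = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.0],
              [0.5, 0.0, 2.0]])
A = M[:2, :2]; B = M[:2, 2:]; C = M[2:, 2:]
S = C - B.T @ np.linalg.inv(A) @ B        # Schur complement of A in M (real case: B^* = B^T)

pd = lambda X: np.all(np.linalg.eigvalsh(X) > 0)
print(pd(A) and pd(S), pd(M))             # True True: M > 0 iff A > 0 and S > 0
print(np.isclose(np.linalg.det(M),
                 np.linalg.det(A) * np.linalg.det(S)))  # True: det(M) = det(A) det(S)
```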

Simultaneous diagonalization

Simultaneous diagonalization refers to the process of finding a common invertible matrix P that transforms two definite matrices into diagonal form via congruence, specifically P^* A P = D_1 and P^* B P = D_2, where D_1 and D_2 are diagonal matrices and P^* denotes the conjugate transpose. For two positive definite Hermitian matrices A and B, simultaneous diagonalization by a unitary P is possible if and only if they commute, meaning AB = BA. The commuting condition ensures that A and B share a complete set of common eigenvectors, allowing a single orthonormal basis in which both are diagonal. In the context of the generalized eigenvalue problem, where B is positive definite, the equation A \mathbf{v} = \lambda B \mathbf{v} yields generalized eigenvalues \lambda and eigenvectors \mathbf{v} that form the columns of P. This decomposition satisfies P^* A P = \operatorname{diag}(\lambda_1, \dots, \lambda_n) and P^* B P = I, the identity matrix, so a (generally non-unitary) congruence always diagonalizes the pair simultaneously; achieving two arbitrary diagonal forms D_1 and D_2 with a unitary P, however, requires the commutativity assumption. If A and B do not commute, simultaneous diagonalization via a common unitary P generally fails, as their eigenspaces do not align. This framework is particularly useful in optimizing quadratic forms, such as maximizing \mathbf{x}^* A \mathbf{x} subject to the constraint \mathbf{x}^* B \mathbf{x} = 1, where the extrema correspond to the generalized eigenvalues of the pair (A, B).
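In practice, the congruence P realizing P^* A P = \operatorname{diag}(\lambda) and P^* B P = I is returned by generalized symmetric eigensolvers. A minimal sketch using SciPy's eigh (here A need only be symmetric and B positive definite; the matrices are illustrative):

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[3.0, 1.0], [1.0, 2.0]])      # positive definite

w, P = eigh(A, B)                           # solves A v = lambda B v
print(np.allclose(P.T @ B @ P, np.eye(2)))  # True: P^* B P = I
print(np.allclose(P.T @ A @ P, np.diag(w))) # True: P^* A P = diag(lambda)
```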

Applications

Statistics and covariance

In statistics, the covariance matrix of a random vector captures the pairwise covariances between its components and is always symmetric and positive semi-definite, meaning that for any non-zero vector \mathbf{a}, the quadratic form \mathbf{a}^T \Sigma \mathbf{a} \geq 0, where \Sigma is the covariance matrix. This property arises because the covariance matrix can be expressed as an expectation of outer products of centered random vectors, ensuring non-negativity. If the random vector is non-degenerate (no nontrivial linear combination of its components has zero variance), the covariance matrix is positive definite, with all eigenvalues strictly positive. For the multivariate normal distribution, the precision matrix, which is the inverse \Sigma^{-1} of the covariance matrix, is positive definite when \Sigma is positive definite, parameterizing the distribution in terms of conditional independencies and partial correlations among variables. The Mahalanobis distance between a point \mathbf{x} and the mean \boldsymbol{\mu} is defined as the square root of the quadratic form (\mathbf{x} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}), which generalizes the Euclidean distance by accounting for correlations and scales in the data, and is non-negative due to the positive definiteness of \Sigma^{-1}. The sample covariance matrix, estimated from n observations in a p-dimensional space, is given by \mathbf{S} = \frac{1}{n} \mathbf{X}^T \mathbf{X}, where \mathbf{X} is the centered data matrix (with rows as mean-centered observations), and \mathbf{S} \succeq 0 (positive semi-definite) for any n \geq 1. It becomes positive definite if n > p and the data span the full space. The Wishart distribution arises as the distribution of n \mathbf{S} when observations are independent draws from a multivariate normal distribution with positive definite covariance, providing a standard model for sample covariance matrices in multivariate inference and ensuring samples are positive definite with probability one under sufficient sample size.
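A minimal sketch of the (squared) Mahalanobis distance computation, assuming NumPy and an illustrative covariance matrix:

```python
import numpy as np

mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 1.0], [1.0, 2.0]])        # positive definite covariance
x = np.array([1.0, -1.0])

d2 = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)   # squared Mahalanobis distance
print(d2, np.sqrt(d2))                            # 2.0, ~1.414
```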

Optimization and control theory

In optimization, positive definite matrices play a crucial role in ensuring the convexity of problems. Consider the standard quadratic programming form of minimizing \frac{1}{2} \mathbf{x}^T Q \mathbf{x} + \mathbf{c}^T \mathbf{x} subject to linear constraints, where Q \succ 0 guarantees that the objective function is strictly convex, allowing for efficient solution via methods like active-set or interior-point algorithms. This property stems from the quadratic form \mathbf{x}^T Q \mathbf{x} being positive definite, which implies the Hessian of the objective is positive definite everywhere, ensuring a unique minimum. More broadly, in unconstrained nonlinear optimization, the second-order sufficient condition for a critical point \bar{\mathbf{x}} (where \nabla f(\bar{\mathbf{x}}) = 0) to be a strict local minimum requires the Hessian \nabla^2 f(\bar{\mathbf{x}}) \succ 0. This confirms the function is locally strictly convex around \bar{\mathbf{x}}, enabling trust-region or Newton-based methods to converge quadratically near the solution. In conic optimization, particularly semidefinite programming (SDP), the Karush-Kuhn-Tucker (KKT) conditions involve Lagrange multipliers that are positive semi-definite matrices to enforce positive semi-definiteness constraints on decision variables. For an SDP of the form \min \mathbf{C} \bullet X subject to \mathcal{A}(X) = \mathbf{b} and X \succeq 0, the dual multipliers Z \succeq 0 satisfy stationarity and complementarity, with positive definiteness in the interior ensuring strict feasibility and duality gaps approaching zero. Interior-point methods for problems over the positive definite cone, such as SDPs, rely on self-concordant barrier functions like -\log \det(X) to maintain strict feasibility while approaching the boundary. This barrier penalizes proximity to the boundary of the cone \{X \succ 0\}, and its Hessian is positive definite, facilitating steps that achieve polynomial-time convergence. In control theory, positive definite matrices are essential for analyzing stability via the Lyapunov equation A^T P + P A = -Q, where P \succ 0 and Q \succ 0 confirm asymptotic stability of the linear system \dot{\mathbf{x}} = A \mathbf{x}. Solving for P yields a Lyapunov function V(\mathbf{x}) = \mathbf{x}^T P \mathbf{x} > 0 whose derivative \dot{V} = \mathbf{x}^T (A^T P + P A) \mathbf{x} = -\mathbf{x}^T Q \mathbf{x} < 0 for \mathbf{x} \neq 0, proving exponential decay to the origin. This framework extends to linear quadratic regulators, where P is the solution to a related algebraic Riccati equation, ensuring optimal stabilizing feedback.
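The Lyapunov stability test can be sketched numerically; here scipy.linalg.solve_continuous_lyapunov is used with an illustrative Hurwitz matrix A:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov  # solves a X + X a^T = q

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # Hurwitz: eigenvalues -1, -3
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)     # A^T P + P A = -Q
print(np.all(np.linalg.eigvalsh(P) > 0))   # True: P > 0 certifies asymptotic stability
```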

Physics and engineering

In the finite element method for structural analysis, the stiffness matrix represents the relationship between forces and displacements in a discretized model of a physical structure. For stable structures without rigid body modes, this matrix is symmetric and positive definite, ensuring that the strain energy is always positive for any non-zero deformation and that the system has a unique solution for displacements under applied loads. This property guarantees structural stability, as zero or negative eigenvalues would indicate modes of unconstrained motion or instability, respectively. In heat conduction problems, Fourier's law describes the heat flux \mathbf{q} as proportional to the negative temperature gradient \nabla T, expressed as \mathbf{q} = - \mathbf{K} \nabla T, where \mathbf{K} is the thermal conductivity tensor. For physical consistency in anisotropic materials, \mathbf{K} must be symmetric and positive definite, ensuring that heat flows from higher to lower temperatures and that the second law of thermodynamics is satisfied through non-negative entropy production. This positive definiteness prevents unphysical behaviors, such as heat flowing against the gradient, and is crucial for the well-posedness of the resulting elliptic partial differential equation -\nabla \cdot (\mathbf{K} \nabla T) = g in steady-state heat transfer. In quantum mechanics, the Hamiltonian operator \hat{H} governing the time-independent Schrödinger equation \hat{H} \psi = E \psi is Hermitian, guaranteeing real eigenvalues E that represent observable energies. For systems exhibiting bound states, such as particles in confining potentials, the Hamiltonian is positive definite (or positive semi-definite when shifted appropriately), ensuring a discrete spectrum bounded below by a positive ground-state energy and stability against collapse. This property manifests in the positive definiteness of the quadratic form \langle \psi | \hat{H} | \psi \rangle > 0 for non-zero wavefunctions, reflecting the dominance of kinetic energy over attractive potentials in stable bound configurations. Williamson's theorem provides a framework for the symplectic diagonalization of positive definite matrices in Hamiltonian mechanics, stating that any positive definite 2n \times 2n matrix, such as a covariance matrix in continuous-variable quantum systems, can be decomposed as V = S^\top D S, where S is a symplectic matrix and D is diagonal with positive entries (the symplectic eigenvalues) representing normal-mode frequencies. In classical and quantum optics, this theorem is applied to covariance matrices of Gaussian states, enabling the identification of squeezed modes and entanglement in quadrature distributions for light fields. For instance, in multimode optical systems, it facilitates the analysis of beam propagation and noise, where the positive definite covariance matrix ensures physical realizability and compliance with uncertainty principles. In mass-spring systems modeling mechanical vibrations, the dynamical matrix, often formed as D = M^{-1} K where M is the mass matrix and K is the stiffness matrix, governs the equation of motion \ddot{\mathbf{x}} + D \mathbf{x} = 0. For oscillatory modes in stable configurations with positive spring constants, D is symmetric (after a symmetrizing transformation such as M^{-1/2} K M^{-1/2} when M is not a multiple of the identity) and positive definite, yielding positive eigenvalues \lambda and purely oscillatory solutions with real frequencies \omega = \sqrt{\lambda}, without damping or divergence. This ensures bounded motion, as verified by the positive definiteness of K implying positive strain energy for deformations.
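For the mass-spring discussion, the normal-mode frequencies come from the generalized eigenvalue problem K \mathbf{v} = \lambda M \mathbf{v}. A minimal sketch with SciPy, for an illustrative configuration of two unit masses coupled by three unit springs with fixed walls at both ends:

```python
import numpy as np
from scipy.linalg import eigh

M = np.eye(2)                               # mass matrix (positive definite)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])    # stiffness matrix (positive definite)

lam, modes = eigh(K, M)                     # generalized problem K v = lambda M v
print(np.sqrt(lam))                         # natural frequencies omega = sqrt(lambda): [1., ~1.732]
```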

Generalizations

Non-symmetric matrices

The concept of definiteness for matrices extends beyond the Hermitian (symmetric for real matrices) case to non-Hermitian square matrices, where traditional quadratic forms and spectral properties require careful adaptation due to the possibility of complex eigenvalues. In this context, the numerical range, also known as the field of values W(A) = \{ x^* A x : x \in \mathbb{C}^n, \, \|x\| = 1 \}, serves as a key analog for definiteness. For a non-Hermitian matrix A, an analog of positive definiteness is that W(A) lies in the open right half-plane \{ z \in \mathbb{C} : \operatorname{Re}(z) > 0 \}, ensuring all Rayleigh quotients have positive real parts. This property holds if and only if the Hermitian part (A + A^*)/2 is positive definite, as the real parts of elements in W(A) coincide with the numerical range of the Hermitian part. For real non-symmetric matrices, a related notion is that x^T A x > 0 for all real x \neq 0, which is equivalent to the symmetric part A + A^T being positive definite. However, even under this condition, the eigenvalues of A may be complex, though their real parts are all positive, bounded below by the smallest eigenvalue of (A + A^T)/2. A matrix A satisfying A + A^T \succ 0 is positive stable, meaning all its eigenvalues have strictly positive real parts. Unlike the Hermitian case, where the spectral theorem guarantees diagonalization by a unitary matrix and all eigenvalues are real and positive, non-Hermitian positive stable matrices lack such a complete spectral characterization; instead, stability is characterized via the Routh-Hurwitz criterion adapted for positive real parts (all eigenvalues in the open right half-plane). These concepts find applications in dynamical systems, particularly for analyzing the growth or instability of solutions to \dot{x} = A x, where positive stability implies divergence rather than convergence, as seen in positive linear systems modeled by Metzler matrices. In contrast to the Hermitian ideal, where definiteness directly ties to quadratic forms and optimization, non-symmetric extensions emphasize spectral location over form positivity.

Indefinite quadratic forms

A symmetric matrix A is called indefinite if it has at least one positive eigenvalue and at least one negative eigenvalue. The associated quadratic form Q(\mathbf{x}) = \mathbf{x}^T A \mathbf{x} then takes both positive and negative values for nonzero \mathbf{x}, reflecting the mixed signs of the eigenvalues. The signature of an indefinite A is the pair (p, q), where p is the number of positive eigenvalues and q the number of negative eigenvalues (with p + q = n for an n \times n matrix of full rank), and the inertia includes the multiplicity r of the zero eigenvalue such that p + q + r = n. By Sylvester's law of inertia, these quantities (the numbers of positive, negative, and zero eigenvalues) are invariants under congruence transformations, meaning that if B = P^T A P for some invertible P, then A and B share the same inertia. Any real symmetric matrix A can be reduced by congruence to a canonical diagonal form: there exists an invertible matrix P such that P^T A P = \begin{pmatrix} I_p & 0 & 0 \\ 0 & -I_q & 0 \\ 0 & 0 & 0_r \end{pmatrix}, with I_p and I_q the p \times p and q \times q identity matrices, respectively; this form directly encodes the signature and inertia of A. The Courant-Fischer min-max theorem provides a variational characterization of the eigenvalues that enables counting their signs: the k-th largest eigenvalue \lambda_k(A) satisfies \lambda_k(A) = \max_{\dim S = k} \min_{\mathbf{x} \in S, \|\mathbf{x}\| = 1} \mathbf{x}^T A \mathbf{x} = \min_{\dim T = n-k+1} \max_{\mathbf{x} \in T, \|\mathbf{x}\| = 1} \mathbf{x}^T A \mathbf{x}, allowing the inertia to be determined by counting the eigenvalues exceeding a threshold via sign variations in the sequence of Rayleigh quotients over nested subspaces. A prominent example of indefinite quadratic forms arises in the hyperbolic case, particularly with Lorentzian metrics in general relativity, where the metric tensor has indefinite signature (1, 3) or (3, 1), corresponding to one timelike direction and three spacelike directions (the assignment of eigenvalue signs depends on the chosen sign convention); this structure underpins the geometry of spacetime, enabling the description of causal structures like light cones.
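The inertia can be computed directly from the eigenvalues; a minimal sketch assuming NumPy, using a Lorentzian-signature diagonal matrix as illustration:

```python
import numpy as np

def inertia(A, tol=1e-12):
    """Return (p, q, r): counts of positive, negative, zero eigenvalues of symmetric A."""
    w = np.linalg.eigvalsh(A)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)), int(np.sum(np.abs(w) <= tol)))

# Minkowski-type metric diag(1, -1, -1, -1): signature (1, 3)
print(inertia(np.diag([1.0, -1.0, -1.0, -1.0])))  # (1, 3, 0)
```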

    A matrix satisfying this property for all u is called positive semi- definite. The covariance matrix is always both symmetric and positive semi- definite.
  59. [59]
    [PDF] 2. Positive semidefinite matrices
    • the covariance matrix is positive semidefinite: for all a,. a. TΣa ... an example is the polynomial kernel function f (x) = (1 + x). d (133A ...
  60. [60]
    Proof: Positive semi-definiteness of the covariance matrix
    Sep 26, 2022 · A positive semi-definite matrix is a matrix whose eigenvalues are all non-negative or, equivalently, Mpos. semi-def. ⇔xTMx≥0for allx∈Rn.
  61. [61]
    [PDF] The multivariate normal distribution - MyWeb
    Definition: Let x = (X1 X2 ···Xd) denote a vector of random variables, then E(x)=(EX1 EX2 ···EXd). Meanwhile, Vx is a d × d matrix: Vx = E{(x − µ)(x − µ)>} with ...
  62. [62]
    [PDF] Mahalanobis Distance - Indian Academy of Sciences
    The quadratic form (1) has the effect of transforming the variables to uncorrelated standardised variables. Yand computing the (squared) Euclidean distance ...
  63. [63]
    [PDF] Lecture 2. The Wishart distribution
    In this lecture, we define the Wishart distribution, which is a family of distributions for symmetric positive definite matrices, and show its relation to ...
  64. [64]
    Wishart distribution | Properties, proofs - StatLect
    The Wishart distribution is a multivariate continuous distribution which generalizes the Gamma distribution.How the distribution is derived · Covariance matrix · Review of matrix algebra
  65. [65]
    [PDF] Convex Optimization Overview - CS229
    Oct 17, 2008 · Convex optimization is a method to efficiently find the global solution of a function, where the function is convex, and the domain is a convex ...
  66. [66]
    [PDF] Basic Optimization
    = 0, the Hessian is used to classify it. • If 𝐻(𝑐∗. ) is positive definite, then 𝑐. ∗ is a local minimum. • If 𝐻(𝑐∗. ) is negative definite, then 𝑐. ∗ is a ...
  67. [67]
    [PDF] Introduction to Optimization, and Optimality Conditions for ...
    If ¯ x)=0 and H(¯ x is a local minimum, then ∇f(¯ x) is positive semidefinite. Proof: From the first order necessary condition, ∇f(¯ x) x) = 0. Suppose H ...
  68. [68]
    [PDF] Unconstrained Optimization - Stanford Computer Graphics Laboratory
    If Hf is positive definite, then ~x∗ is a local minimum of f. • If Hf is negative definite, then ~x∗ is a local maximum of f. • If Hf is indefinite, then ~ ...
  69. [69]
    [PDF] Positive-Definite Programming - Stanford University
    Secondly, many convex optimization problems, e.g., linear programming and (convex) quadratically constrained quadratic programming, can be cast as PDPs.
  70. [70]
    [PDF] Yinyu Ye, Stanford, MS&E211 Lecture Notes #15
    barrier function regularization: –log(det(Z)) min. C • Z. s.t.. Aij. • Z ... and positive definite. Then, ∇ f (xk) = c + Qxk, and the step size has ...
  71. [71]
    [PDF] A Faster Interior Point Method for Semidefinite Programming - arXiv
    Sep 21, 2020 · In contrast, interior point methods add a barrier function to the objective and, by adjusting the weight of this barrier function, solve a ...
  72. [72]
  73. [73]
    [PDF] Lecture 13 Linear quadratic Lyapunov theory
    • V will be positive definite, so it is a Lyapunov function that proves A is stable in particular: a linear system is stable if and only if there is a quadratic.
  74. [74]
    [PDF] Structured and Simultaneous Lyapunov Functions for System ...
    Jan 22, 2001 · 1 A famous result of Lyapunov theory states that A is stable if and only if there is a P = PT > 0 such that AT P + PA ≤ 0. In this case we say.
  75. [75]
    [PDF] Mathematical Properties of Stiffness Matrices - Duke People
    A stiffness matrix relates forces to displacements, is symmetric, and can be ill-conditioned if its determinant is close to zero.
  76. [76]
    [PDF] Heat flow - UC Davis Math
    for a suitable conductivity tensor A : Ω → L(Rn, Rn), which is required to be symmetric and positive definite. Explicitly, if x ∈ Ω, then A(x) : Rn → Rn ...
  77. [77]
    [PDF] On the Necessity of Positive Semi-Definite Conductivity and ...
    For such a mate- rial, a general form of the first law of thermodynamics is posed along with constitutive models for internal energy and diffusive heat flux. A ...
  78. [78]
    [PDF] Gaussian States
    The restriction to positive definite Hamiltonian matrices corresponds to considering. 'stable' systems – i.e., Hamiltonian operators bounded from below – and ...
  79. [79]
    An alternative Hamiltonian formulation for the Pais–Uhlenbeck ...
    If α 0 and α 1 are both positive, then one has a positive-definite Hamiltonian. This obviously leads to the ghost-free quantum theory for the fourth-order PU ...
  80. [80]
    Williamson theorem in classical, quantum, and statistical physics
    Dec 1, 2021 · The question addressed by the Williamson theorem is the diagonalization of positive definite matrices through symplectic matrices. Before the ...
  81. [81]
    [PDF] arXiv:2106.11965v2 [quant-ph] 22 Nov 2021
    Nov 22, 2021 · The question addressed by the Williamson theorem is the diagonalization of positive definite matrices through symplectic matrices. Before ...
  82. [82]
    [PDF] Linear Systems of Equations. . . in a Nutshell - MIT OpenCourseWare
    Nov 19, 2014 · For positive spring constants we know that any stretching of either spring will result in positive potential energy: a physical “proof” that K ...
  83. [83]
    [PDF] Lecture notes on Numerical Range - Chi-Kwong Li
    The numerical range W(A) of a matrix A is the collection of complex numbers x∗Ax, where x is a unit vector, and is defined as {x∗Ax : x ∈ Cn,x∗x = 1}.Missing: analog | Show results with:analog
  84. [84]
    Relation between real part of eigenvalues of A and (A+AT)/2
    Jun 10, 2017 · For matrix A, the real part of its eigenvalues (λ) satisfies m ≤ Re(λ) ≤ M, where m and M are the minimum and maximum eigenvalues of (A+AT)/2.
  85. [85]
    Stabilising the Metzler matrices with applications to dynamical systems
    Jan 16, 2019 · Metzler matrices play a crucial role in positive linear dynamical systems. Finding the closest stable Metzler matrix to an unstable one (and ...
  86. [86]
    Sylvester's Inertia Law -- from Wolfram MathWorld
    Sylvester's Inertia Law: The numbers of eigenvalues that are positive, negative, or 0 do not change under a congruence transformation.
  87. [87]
    [PDF] Math 416, Spring 2010 Congruence; Sylvester's Law of Inertia
    Apr 22, 2010 · We say that the index of D is the number of positive entries in D, and the signature of D is the the number of positive entries of D minus the ...
  88. [88]
    [PDF] Further linear algebra. Chapter V. Bilinear and quadratic forms.
    Theorem 6.1 (Sylvester's Law of Inertia) Let V be a finite dimensional vector space over R and let q be a quadratic form on V . Then q has exactly one (real) ...
  89. [89]
    COURANT-FISCHER MIN-MAX THEOREMS
    If fk(x) and fk+1(x) have opposite signs, it signifies that Ak+1 has one more eigenvalue on the right of x than Ak. Hence the number N(x) of sign changes of ...
  90. [90]
    [PDF] Part 3 General Relativity - DAMTP
    3.2 Lorentzian signature . ... In GR, we are interested in Lorentzian metrics,. i.e., those with signature − + +...+. This can be motivated by the ...
  91. [91]
    [PDF] Lorentzian Geometry
    A Lorentzian manifold is a pair (Mn+1,g), where Mn+1 is a (n+1)-dimensional differentiable manifold and g a Lorentz metric such that g assigns to each point p ∈ ...