In linear algebra, a symmetric bilinear form on a vector space V over a field F (typically of characteristic not equal to 2) is a function B: V \times V \to F that is linear in each argument separately and satisfies the symmetry condition B(u, v) = B(v, u) for all u, v \in V.[1] This structure generalizes the notion of a dot product and plays a fundamental role in defining geometric and algebraic properties of vector spaces.[2]

With respect to a basis of V, any symmetric bilinear form can be represented by a symmetric matrix A, where B(x, y) = x^T A y and A^T = A.[3] The matrix entries a_{ij} = B(e_i, e_j) capture the form's values on basis vectors, and the rank of A equals the rank of the form, measuring its nondegeneracy (the form is nondegenerate if its kernel is trivial).[3] A key associated concept is the quadratic form Q: V \to F defined by Q(v) = B(v, v), which recovers the bilinear form via the polarization identity B(u, v) = \frac{1}{2} [Q(u + v) - Q(u) - Q(v)] over fields of characteristic not 2.[1]

Over the real numbers \mathbb{R}, the spectral theorem ensures that every symmetric matrix A is orthogonally diagonalizable, so nondegenerate symmetric bilinear forms can be classified up to congruence by their signature (p, q), where p is the number of positive eigenvalues, q the number of negative ones, and p + q = \dim V.[2] Positive definite forms (signature (n, 0)) induce inner products on V, enabling notions of orthogonality and norms essential in Euclidean geometry.[2] Indefinite forms, such as the Minkowski metric in special relativity, arise in pseudo-Euclidean spaces with signature (p, q) where both p > 0 and q > 0.[1]

Symmetric bilinear forms find applications across mathematics and physics, including the analysis of quadratic forms in optimization, where positive definiteness of the Hessian matrix determines local minima, and in the classification of conic sections via their associated matrices.[2] Over other fields, such as finite fields or the complex numbers, classification depends on invariants like the discriminant \det A modulo squares; over \mathbb{C}, all nondegenerate forms are equivalent to the standard sum of squares.[1]
Definition and Properties
Formal Definition
A bilinear form on a vector space V over a field F is a map B: V \times V \to F that is linear in each argument separately.[4] Specifically, for all scalars a, b, c, d \in F and vectors u, v, w, x \in V, it satisfies

B(au + bv, w) = a B(u, w) + b B(v, w)

and

B(u, cw + dx) = c B(u, w) + d B(u, x).[4]

Such forms are the scalar-valued case of bilinear maps from linear algebra, with linearity holding in each coordinate separately.[5]

A symmetric bilinear form is a bilinear form that additionally satisfies the symmetry condition B(u, v) = B(v, u) for all u, v \in V, distinguishing it from general (possibly skew-symmetric) bilinear forms.[4] These are typically studied on finite-dimensional vector spaces V over fields F of characteristic not equal to 2, ensuring compatibility with associated quadratic structures.[5]

Symmetric bilinear forms are often denoted by B(u, v) or \langle u, v \rangle_B.[3]
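The defining properties can be checked numerically. The following NumPy sketch evaluates a symmetric bilinear form given by an illustrative symmetric matrix A (not one fixed by the text) and verifies linearity in the first argument and symmetry:

```python
import numpy as np

# Illustrative symmetric matrix representing B(u, v) = u^T A v on R^3.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, -1.0],
              [0.0, -1.0, 1.0]])

def B(u, v):
    """Evaluate the bilinear form B(u, v) = u^T A v."""
    return u @ A @ v

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))
a, b = 1.5, -0.5

# Linearity in the first argument: B(au + bv, w) = a B(u, w) + b B(v, w).
assert np.isclose(B(a * u + b * v, w), a * B(u, w) + b * B(v, w))
# Symmetry: B(u, v) = B(v, u), which holds because A^T = A.
assert np.isclose(B(u, v), B(v, u))
```

Linearity in the second argument follows the same pattern with the roles of the arguments exchanged.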
Basic Properties
A symmetric bilinear form B: V \times V \to F on a vector space V over a field F inherits the core properties of bilinearity, which ensure additivity and homogeneity in each argument separately. Specifically, for all u, v, w \in V and c \in F, additivity in the first argument gives B(u + v, w) = B(u, w) + B(v, w), while homogeneity yields B(cu, w) = c B(u, w).[1] Similarly, additivity in the second argument states B(u, v + w) = B(u, v) + B(u, w), and homogeneity provides B(u, cv) = c B(u, v).[6] These properties establish that B is F-linear in each variable when the other is held fixed, making it an F-bilinear map.[7]

The symmetry condition B(u, v) = B(v, u) for all u, v \in V further implies that the values B(u, u) along the "diagonal" behave as quadratic terms, associating to each vector a scalar that scales quadratically, B(cu, cu) = c^2 B(u, u).[1] This self-pairing structure of the diagonal values B(u, u) underlies the form's connection to quadratic maps, developed in later sections.[7]

However, over fields of characteristic 2, the symmetry of a bilinear form does not guarantee the standard association with quadratic forms via polarization, since the identity relating B(u, v) to values of the form B(w, w) involves division by 2, which is not possible when 2 = 0.[1] In such fields the symmetric and skew-symmetric conditions on a bilinear form coincide, limiting the distinct algebraic implications of symmetry alone.[1]
Examples and Applications
Concrete Examples
One fundamental example of a symmetric bilinear form is the standard dot product on the Euclidean space \mathbb{R}^n, defined by B(\mathbf{u}, \mathbf{v}) = \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i for \mathbf{u} = (u_1, \dots, u_n) and \mathbf{v} = (v_1, \dots, v_n).[1] This form is symmetric since B(\mathbf{u}, \mathbf{v}) = B(\mathbf{v}, \mathbf{u}) and non-degenerate, as the only vector orthogonal to all of \mathbb{R}^n is the zero vector.[1]

An example of an indefinite symmetric bilinear form arises on \mathbb{R}^2, given by B((x_1, y_1), (x_2, y_2)) = x_1 x_2 - y_1 y_2.[1] This form is symmetric and has signature (1,1), corresponding to the diagonal matrix \operatorname{diag}(1, -1) under the standard basis, with one positive and one negative eigenvalue.[1][8]

A degenerate symmetric bilinear form on \mathbb{R}^2 is B(\mathbf{u}, \mathbf{v}) = u_1 v_1, where \mathbf{u} = (u_1, u_2) and \mathbf{v} = (v_1, v_2), so only the first coordinates contribute.[9] This form has a nontrivial kernel, consisting of all vectors of the form (0, u_2), and is represented by the matrix \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, which has rank 1.[9]

In matrix notation, a general symmetric bilinear form on \mathbb{R}^n can be expressed as B(\mathbf{u}, \mathbf{v}) = \mathbf{u}^T A \mathbf{v}, where A is an n \times n symmetric matrix.[1] For instance, with A = \operatorname{diag}(1, -1) on \mathbb{R}^2, this recovers the indefinite form B((x_1, y_1), (x_2, y_2)) = x_1 x_2 - y_1 y_2.[1]

In abstract settings, such as Lie algebras, the Killing form provides an example of a symmetric bilinear form. For a Lie algebra \mathfrak{g} over a field of characteristic zero, it is defined by \kappa(X, Y) = \operatorname{Tr}(\operatorname{ad}_X \circ \operatorname{ad}_Y) for X, Y \in \mathfrak{g}, where \operatorname{ad} denotes the adjoint representation.[10] This form is symmetric and invariant under automorphisms of \mathfrak{g}.[10]
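The three concrete forms above can be written as matrices and evaluated side by side; this is a small NumPy sketch with illustrative input vectors:

```python
import numpy as np

# Matrix representatives of the three forms on R^2 discussed above.
I2    = np.eye(2)                           # standard dot product
indef = np.diag([1.0, -1.0])                # indefinite form x1*x2 - y1*y2
degen = np.array([[1.0, 0.0], [0.0, 0.0]])  # degenerate form u1*v1, rank 1

def form(A, u, v):
    return u @ A @ v

u, v = np.array([3.0, 4.0]), np.array([1.0, 2.0])
print(form(I2, u, v))     # 3*1 + 4*2 = 11.0
print(form(indef, u, v))  # 3*1 - 4*2 = -5.0
print(form(degen, u, v))  # 3*1       = 3.0

# The kernel of the degenerate form contains (0, 1), and its matrix has rank 1.
assert form(degen, np.array([0.0, 1.0]), v) == 0
assert np.linalg.matrix_rank(degen) == 1
```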
Geometric and Physical Applications
Symmetric bilinear forms play a fundamental role in Euclidean geometry, where the standard inner product on \mathbb{R}^n serves as a prototypical example. This form, defined by B(\mathbf{u}, \mathbf{v}) = \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i, is symmetric and positive definite, enabling the measurement of lengths via |\mathbf{u}| = \sqrt{B(\mathbf{u}, \mathbf{u})} and angles through \cos \theta = \frac{B(\mathbf{u}, \mathbf{v})}{|\mathbf{u}| |\mathbf{v}|}.[11] Such forms underpin the geometric structure of Euclidean spaces, facilitating concepts like orthogonality and projections essential to classical geometry.[1]

In differential geometry, the metric tensor on a manifold is a symmetric bilinear form defined on each tangent space, providing a way to measure distances, angles, and curvatures intrinsically, without embedding in a higher-dimensional space. For a Riemannian manifold (M, g), the metric g_p: T_p M \times T_p M \to \mathbb{R} at each point p \in M is smooth, symmetric, and positive definite, with the line element ds^2 = g_{ij} dx^i dx^j in local coordinates defining arc lengths along curves.[12] This structure generalizes the Euclidean inner product to curved spaces, forming the basis for the geometry of geodesics and the study of manifolds in general relativity.[13]

Physically, symmetric bilinear forms appear in special relativity through the Minkowski spacetime metric, a nondegenerate symmetric bilinear form of indefinite signature on \mathbb{R}^{1,3}, given by B(x, y) = x_0 y_0 - x_1 y_1 - x_2 y_2 - x_3 y_3.
This Lorentz form, introduced by Hermann Minkowski in his 1908 lecture "Space and Time," unifies space and time into a four-dimensional continuum, invariant under Lorentz transformations, and defines the causal structure via light cones where B(x, x) = 0.[14][15] In classical mechanics, the kinetic energy in Lagrangian formulations often derives from a symmetric bilinear form associated with the mass matrix; for generalized coordinates q, the kinetic energy is T = \frac{1}{2} \dot{q}^T M(q) \dot{q}, where M(q) is the symmetric positive definite inertia matrix, yielding the Lagrangian L = T - V.[16] This bilinear structure simplifies deriving equations of motion via the Euler-Lagrange equations for systems like multi-body dynamics.[17]
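Both physical examples above reduce to evaluating v^T A v for a suitable symmetric matrix. The sketch below uses the Minkowski matrix from the text and a hypothetical 2x2 mass matrix M (an illustrative choice, not from any cited source):

```python
import numpy as np

# Minkowski form on R^{1,3}: B(x, y) = x0*y0 - x1*y1 - x2*y2 - x3*y3.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def minkowski(x, y):
    return x @ eta @ y

# A lightlike vector lies on the light cone: B(x, x) = 0.
x = np.array([1.0, 1.0, 0.0, 0.0])
assert minkowski(x, x) == 0

# Kinetic energy T = (1/2) qdot^T M qdot for a hypothetical symmetric
# positive definite mass matrix M and generalized velocities qdot.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])
qdot = np.array([1.0, 2.0])
T = 0.5 * qdot @ M @ qdot
print(T)  # 0.5 * 8.0 = 4.0
```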
Quadratic Forms
Correspondence to Quadratic Forms
A symmetric bilinear form B: V \times V \to F on a vector space V over a field F induces an associated quadratic form Q: V \to F defined by Q(v) = B(v, v) for all v \in V.[18] This quadratic form is homogeneous of degree 2, meaning Q(\lambda v) = \lambda^2 Q(v) for \lambda \in F.[1]

Conversely, given a quadratic form Q on V, a unique symmetric bilinear form B can be recovered via the relation

Q(u + v) = Q(u) + Q(v) + 2B(u, v)

for all u, v \in V, assuming the characteristic of F is not 2.[19] Solving for B, it follows that

B(u, v) = \frac{Q(u + v) - Q(u) - Q(v)}{2}.[20]

This recovery formula ensures the uniqueness of the associated symmetric bilinear form.[21]

Under the assumption that \operatorname{char} F \neq 2, the map sending a symmetric bilinear form to its associated quadratic form establishes a bijection between the set of symmetric bilinear forms on V and the set of quadratic forms on V.[18] This correspondence highlights the close theoretical link between the two structures in linear algebra.[22]

A bilinear form B is degenerate exactly when its associated quadratic form Q is degenerate, meaning the matrix representation of B (or equivalently, of Q) has rank less than \dim V.[21] In geometric contexts, such degenerate quadratic forms correspond to degenerate quadric surfaces, such as cylinders, where the defining equation lacks full rank and results in unbounded or ruled surfaces.[23]
Polarization Identity
The polarization identity expresses a symmetric bilinear form B in terms of its associated quadratic form Q, over a field of characteristic not equal to 2. It states that

B(u, v) = \frac{1}{4} \left[ Q(u + v) - Q(u - v) \right],

or, in an equivalent additive form,

B(u, v) = \frac{1}{2} \left[ Q(u + v) - Q(u) - Q(v) \right].[24][1]

A proof proceeds by expanding the quadratic form using its defining relation with the bilinear form. Specifically, bilinearity and symmetry of B imply

Q(u + v) = Q(u) + Q(v) + 2 B(u, v), \quad Q(u - v) = Q(u) + Q(v) - 2 B(u, v).

Subtracting these equations yields Q(u + v) - Q(u - v) = 4 B(u, v), confirming the first formula upon division by 4. The additive version follows directly from the first expansion by solving for B(u, v).[1][24]

This identity implies a bijective correspondence between symmetric bilinear forms and quadratic forms over fields of characteristic not 2, facilitating the analysis of quadratic forms via the algebraic structure of bilinear forms. It is useful in geometry for recovering inner products from norms through related laws like the parallelogram identity, and in optimization for techniques such as phase retrieval, where relative phases are reconstructed from magnitude data via quadratic evaluations.[1][25]

In fields of characteristic 2, however, the polarization identity fails: the coefficient 2 vanishes, so the bilinear form associated to a quadratic form is alternating and does not determine it. Alternative approaches, such as treating quadratic forms as primary objects (quadratic modules), are then required.[1]
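Both versions of the identity are easy to verify numerically. This sketch uses an arbitrary illustrative symmetric matrix to define B and checks that polarization recovers it from Q:

```python
import numpy as np

# Illustrative symmetric matrix; B(u, v) = u^T A v and Q(v) = B(v, v).
A = np.array([[1.0, 2.0],
              [2.0, 3.0]])
B = lambda u, v: u @ A @ v
Q = lambda v: B(v, v)

u, v = np.array([1.0, -2.0]), np.array([0.5, 3.0])

# Additive form: B(u, v) = (Q(u+v) - Q(u) - Q(v)) / 2.
assert np.isclose(B(u, v), (Q(u + v) - Q(u) - Q(v)) / 2)
# Difference form: B(u, v) = (Q(u+v) - Q(u-v)) / 4.
assert np.isclose(B(u, v), (Q(u + v) - Q(u - v)) / 4)
```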
Matrix Representation
Symmetric Matrix Representation
In a finite-dimensional vector space V over a field F of characteristic not equal to 2 (such as \mathbb{R} or \mathbb{C}), a symmetric bilinear form B: V \times V \to F can be represented by a matrix with respect to a chosen ordered basis \{e_1, \dots, e_n\}. The matrix A = (a_{ij}) is defined by its entries a_{ij} = B(e_i, e_j) for i, j = 1, \dots, n. Since B is symmetric, B(e_i, e_j) = B(e_j, e_i), which implies that A is a symmetric matrix, satisfying A^T = A.[1]

For arbitrary vectors u = \sum_{i=1}^n u_i e_i and v = \sum_{j=1}^n v_j e_j in V, the bilinear form evaluates to B(u, v) = \mathbf{u}^T A \mathbf{v}, where \mathbf{u} = (u_1, \dots, u_n)^T and \mathbf{v} = (v_1, \dots, v_n)^T are the coordinate column vectors of u and v with respect to the basis. The diagonal entries of A are given explicitly by a_{ii} = B(e_i, e_i) for each i. This construction follows directly from the bilinearity of B and the basis expansion.[1][26]

Several properties of the matrix A relate to invariants of the bilinear form. The trace of A, denoted \operatorname{tr}(A), equals the sum \sum_{i=1}^n B(e_i, e_i), which captures the diagonal contributions in the chosen basis. Additionally, the bilinear form B is non-degenerate if and only if the matrix A is invertible, meaning \det(A) \neq 0; in this case, B induces an isomorphism from V to its dual space.[1][27]

A concrete example is the standard dot product on \mathbb{R}^n, which defines a symmetric bilinear form B(\mathbf{u}, \mathbf{v}) = \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i. With respect to the standard basis \{e_1, \dots, e_n\}, where each e_i has 1 in the i-th position and 0 elsewhere, the matrix A is the n \times n identity matrix I_n, since B(e_i, e_j) = \delta_{ij} (the Kronecker delta). This representation highlights the form's role in defining the Euclidean inner product.[1]
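The construction a_{ij} = B(e_i, e_j) can be carried out mechanically. The following sketch builds the matrix of the standard dot product on \mathbb{R}^3 from its values on the standard basis and confirms it is the identity:

```python
import numpy as np

def dot(u, v):
    """The standard dot product on R^3, taken as the form B."""
    return float(np.dot(u, v))

basis = np.eye(3)  # standard basis e_1, e_2, e_3 as rows
n = len(basis)

# Matrix of the form: a_ij = B(e_i, e_j).
A = np.array([[dot(basis[i], basis[j]) for j in range(n)] for i in range(n)])

assert np.allclose(A, np.eye(3))  # dot product -> identity matrix, as in the text
assert np.allclose(A, A.T)        # symmetry of B forces A^T = A
```

The same loop works for any form B and any basis; only the `dot` function and `basis` rows change.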
Congruence and Change of Basis
When a basis for the vector space is changed, the matrix representation of a symmetric bilinear form transforms accordingly. Suppose the form is represented by a symmetric matrix A with respect to basis \mathcal{B}, and let \mathcal{C} be another basis with transition matrix P (whose columns are the coordinates of the \mathcal{C}-basis vectors in \mathcal{B}). Then the matrix A' of the form with respect to \mathcal{C} is given by A' = P^T A P. This transformation preserves the symmetry of the matrix, as (P^T A P)^T = P^T A^T (P^T)^T = P^T A P since A is symmetric.[26][6]

The relation defined by this transformation leads to the concept of matrix congruence. Two symmetric matrices A and B are said to be congruent if there exists an invertible matrix P such that B = P^T A P. This equivalence means that A and B represent the same symmetric bilinear form up to a change of basis, capturing the intrinsic properties of the form independent of the chosen basis. Congruence is an equivalence relation on the set of symmetric matrices, partitioning them into classes that correspond to isomorphic bilinear forms.[26][28]

Under congruence, certain invariants of the symmetric bilinear form are preserved, including its rank and signature, while properties like eigenvalues are not directly preserved (unlike under similarity transformations). The rank of the matrix, which equals the dimension of the image of the associated linear map, remains unchanged because P^T A P has the same nullity as A for invertible P. Similarly, the signature, which counts the positive and negative eigenvalues in the real case, is invariant, reflecting the form's type (positive definite, indefinite, etc.) across bases.[26]

Over the real numbers, symmetric bilinear forms can be diagonalized via congruence, simplifying computations and revealing canonical forms. For instance, consider the matrix A = \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} representing a form on \mathbb{R}^2.
Using the change-of-basis matrix P = \begin{pmatrix} 1 & -2 \\ 0 & 1 \end{pmatrix}, the congruent matrix is A' = P^T A P = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, which is diagonal with entries 1 and -1, illustrating an indefinite form. This process involves finding a basis where the form takes a diagonal representation, achievable through methods like completing the square or orthogonalization adapted for bilinear forms.[26]
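The worked example above is directly checkable; this sketch verifies the congruence computation and that rank is preserved:

```python
import numpy as np

# The matrices from the worked example above.
A = np.array([[1.0, 2.0],
              [2.0, 3.0]])
P = np.array([[1.0, -2.0],
              [0.0,  1.0]])

Aprime = P.T @ A @ P
print(Aprime)  # [[1, 0], [0, -1]]
assert np.allclose(Aprime, np.diag([1.0, -1.0]))

# Congruence by an invertible P preserves rank (and, over R, signature).
assert np.linalg.matrix_rank(Aprime) == np.linalg.matrix_rank(A)
```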
Orthogonality and Degeneracy
Orthogonal Vectors and Bases
In the context of a symmetric bilinear form B on a vector space V over a field F, two vectors u, v \in V are said to be orthogonal, denoted u \perp v, if B(u, v) = 0.[24] This relation is symmetric due to the symmetry of B, meaning u \perp v if and only if v \perp u.[1]

Orthogonality extends to subspaces: a subspace W \subseteq V is orthogonal to another subspace U \subseteq V, written W \perp U, if B(w, u) = 0 for all w \in W and u \in U.[26]

An orthogonal basis for V with respect to B is a basis \{e_1, \dots, e_n\} such that e_i \perp e_j for all i \neq j, or equivalently, B(e_i, e_j) = 0 whenever i \neq j.[24] With respect to such a basis, the matrix representation of B is diagonal, with diagonal entries B(e_i, e_i).[1]

Over fields F of characteristic not equal to 2, every symmetric bilinear form on a finite-dimensional vector space admits an orthogonal basis; equivalently, every such form is diagonalizable by congruence.[26] The existence can be established by induction on the dimension of V: if \dim V = n > 0 and B is nonzero, select a vector v \in V with B(v, v) \neq 0 (such a vector exists by polarization, since otherwise B would vanish identically), decompose V = \operatorname{Span}\{v\} \oplus v^\perp, and apply the induction hypothesis to the restriction of B to the orthogonal complement v^\perp, which has dimension n-1.[24] Orthogonal bases need not exist over fields of characteristic 2, however: a nonzero alternating form is symmetric in that setting but satisfies B(v, v) = 0 for every v, so no diagonalizing basis is possible.[1]

In an orthogonal basis \{e_1, \dots, e_n\}, the bilinear form decouples completely: for any u = \sum_{i=1}^n a_i e_i and v = \sum_{i=1}^n b_i e_i, we have

B(u, v) = \sum_{i=1}^n a_i b_i B(e_i, e_i).

This separation highlights how the form acts independently along each basis direction, simplifying computations and revealing the intrinsic structure of B.[26]
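Over \mathbb{R} one concrete way to produce an orthogonal basis (a sketch using the spectral theorem rather than the inductive argument) is to take the eigenvectors of the representing matrix: if A v_i = \lambda_i v_i with the v_i orthonormal for the dot product, then B(v_i, v_j) = \lambda_j (v_i \cdot v_j) = 0 for i \neq j, so the eigenvectors are B-orthogonal. The matrix A here is an illustrative choice:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 3.0]])

# Columns of V are orthonormal eigenvectors of the symmetric matrix A.
lam, V = np.linalg.eigh(A)

# Gram matrix of B in the eigenbasis: diagonal, with B(v_i, v_i) = lam_i.
G = V.T @ A @ V
assert np.allclose(G, np.diag(lam), atol=1e-10)
```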
Singularity and Non-degeneracy
A symmetric bilinear form B: V \times V \to F on a vector space V over a field F (with \operatorname{char} F \neq 2) is associated with its radical, defined as the subspace

\operatorname{Rad}(B) = \{ u \in V \mid B(u, v) = 0 \ \forall v \in V \}.

This radical, also called the kernel of B, consists of all vectors orthogonal to the entire space under B.[29][30]

The form B is degenerate if \operatorname{Rad}(B) \neq \{0\}, meaning there exists a nonzero vector annihilating all others via B. Conversely, B is non-degenerate if \operatorname{Rad}(B) = \{0\}, so that no such nontrivial kernel exists. In finite-dimensional settings with a basis representation, non-degeneracy is equivalent to the associated symmetric matrix A (where B(u, v) = u^T A v) being invertible, i.e., \det A \neq 0.[29][30]

For general bilinear forms, one distinguishes the left radical \operatorname{Rad}_L(B) = \{ u \in V \mid B(u, v) = 0 \ \forall v \in V \} and the right radical \operatorname{Rad}_R(B) = \{ v \in V \mid B(u, v) = 0 \ \forall u \in V \}; the symmetry of B (i.e., B(u, v) = B(v, u)) implies \operatorname{Rad}_L(B) = \operatorname{Rad}_R(B) = \operatorname{Rad}(B). The rank of B is then \operatorname{rank}(B) = \dim V - \dim \operatorname{Rad}(B), which coincides with the matrix rank of A.[31]

The form B also induces a linear map \phi_B: V \to V^* to the dual space, given by \phi_B(u)(v) = B(u, v). This map is an isomorphism if and only if B is non-degenerate, and its kernel is exactly the radical: \ker \phi_B = \operatorname{Rad}(B). In finite dimensions, identifying V \cong V^{**} shows that \phi_B coincides with its own transpose, which is another expression of the symmetry of B.[29][30]
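The radical of a form in coordinates is the null space of its matrix, so \operatorname{rank}(B) = \dim V - \dim \operatorname{Rad}(B) can be checked numerically. This sketch uses an illustrative degenerate symmetric matrix and extracts the null space from the SVD:

```python
import numpy as np

# Illustrative degenerate symmetric matrix on R^3 (rank 1).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

# SVD: right-singular vectors with zero singular value span the null space.
U, s, Vt = np.linalg.svd(A)
tol = 1e-10
rank = int(np.sum(s > tol))
radical = Vt[rank:]               # rows form a basis of Rad(B)

assert rank == 1
assert radical.shape[0] == A.shape[0] - rank   # dim Rad(B) = 3 - 1 = 2
# Every radical vector pairs to zero with the whole space: A r = 0.
assert np.allclose(A @ radical.T, 0)
```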
Classification
Signature
The signature of a non-degenerate symmetric bilinear form B on a finite-dimensional real vector space V of dimension n is the pair (p, q), where p is the number of positive entries and q the number of negative entries on the diagonal of the matrix representing B with respect to an orthogonal basis \{e_1, \dots, e_n\} in which B(e_i, e_j) = 0 for i \neq j and B(e_i, e_i) = \pm 1, with p + q = n.[32][1] This classification invariant captures the form's indefinite nature, distinguishing it from positive or negative definite cases by counting the signs of the diagonalized entries.[32]

A special case is the neutral signature, occurring when p = q in even-dimensional spaces, which characterizes hyperbolic forms as orthogonal direct sums of hyperbolic planes, each a 2-dimensional subspace with basis vectors u, v satisfying B(u, u) = B(v, v) = 0 and B(u, v) = 1, contributing one positive and one negative eigenvalue upon diagonalization.[30][1] For example, the standard hyperbolic plane on \mathbb{R}^2 with matrix \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} has neutral signature (1, 1).[30]

The signature is invariant under congruence, meaning that if two symmetric bilinear forms are represented by matrices A and A' = P^T A P for some invertible matrix P, then they share the same (p, q), independent of the choice of orthogonal basis.[32][1]

For degenerate forms, where the radical \operatorname{Rad}(B) = \{v \in V \mid B(v, w) = 0 \ \forall w \in V\} has positive dimension r = \dim \operatorname{Rad}(B), the extended notation (p, q, r) is used, with p + q + r = n and the restriction of the form to a complement of the radical yielding the non-degenerate signature (p, q).[1][32]
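Over \mathbb{R} the signature can be read off from the signs of the eigenvalues of the representing matrix. This sketch computes the extended signature (p, q, r) and checks, on the hyperbolic plane from the text, that congruence preserves it:

```python
import numpy as np

def signature(A, tol=1e-10):
    """Return (p, q, r): counts of positive, negative, and zero eigenvalues."""
    lam = np.linalg.eigvalsh(A)
    p = int(np.sum(lam > tol))
    q = int(np.sum(lam < -tol))
    r = lam.size - p - q          # dimension of the radical
    return p, q, r

# Standard hyperbolic plane: neutral signature (1, 1).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert signature(H) == (1, 1, 0)

# Congruence P^T H P by an invertible P changes the eigenvalues
# but not the signature (Sylvester's law of inertia).
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
assert signature(P.T @ H @ P) == (1, 1, 0)
```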
Sylvester's Law of Inertia
Sylvester's law of inertia provides a complete classification of non-degenerate symmetric bilinear forms on finite-dimensional real vector spaces up to congruence. Specifically, for a non-degenerate symmetric bilinear form B on a real vector space V of dimension n, there exists a basis of V with respect to which the matrix of B is the diagonal matrix \operatorname{diag}(I_p, -I_q), where I_p is the p \times p identity matrix, I_q is the q \times q identity matrix, and p + q = n. The pair (p, q), known as the inertia of B, is uniquely determined by B and remains invariant under change of basis. This inertia corresponds to the signature of the associated quadratic form, defined as p - q.[33]

The proof of this theorem relies on induction on the dimension n of V. For the base case n = 1, the form is simply multiplication by a nonzero real number, which can be scaled to +1 or -1, yielding inertia (1, 0) or (0, 1). Assume the result holds for dimension n-1. Since B is non-degenerate, there exists a nonzero vector v \in V such that B(v, v) \neq 0. Without loss of generality, scale v so that B(v, v) = 1 (if B(v, v) < 0, scale to -1 instead). The orthogonal complement W = \{ w \in V \mid B(v, w) = 0 \} is a subspace of dimension n-1, and the restriction of B to W is non-degenerate. By the induction hypothesis, W admits an orthogonal basis with respect to which B|_W has matrix \operatorname{diag}(I_{p'}, -I_{q'}) for some p' + q' = n-1. Extending this basis by adjoining v yields an orthogonal basis for V in which the matrix of B is \operatorname{diag}(1, I_{p'}, -I_{q'}) if B(v, v) = 1, so the inertia is (p' + 1, q'); the negative case adjusts accordingly to (p', q' + 1).
Uniqueness of the inertia follows from the fact that congruent forms share the same maximal dimensions of positive- and negative-definite subspaces.[33]

This classification extends to fields of characteristic not equal to 2 in which -1 is not a sum of squares, ensuring that positive and negative directions remain distinguishable; however, the result is most prominently applied over the real numbers.[34]

The theorem was first enunciated and demonstrated by James Joseph Sylvester in 1852, in his paper on the reduction of homogeneous quadratic polynomials via real orthogonal substitutions.[35] Later refinements, including explicit proofs and generalizations, appeared in works by contemporaries such as Charles Hermite and Carl Gustav Jacob Jacobi.[34]
Field Extensions
Real Symmetric Forms
Over the real numbers, symmetric bilinear forms possess particularly rich structure due to the properties of their matrix representations. A symmetric bilinear form B on a finite-dimensional real vector space V is represented by a symmetric matrix A, and the associated quadratic form is Q(\mathbf{v}) = B(\mathbf{v}, \mathbf{v}) = \mathbf{v}^T A \mathbf{v}. The key theorem governing their behavior is the spectral theorem, which asserts that every real symmetric matrix A is orthogonally diagonalizable: there exists an orthogonal matrix Q (with Q^T Q = I) and a diagonal matrix \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n) with real eigenvalues \lambda_i \in \mathbb{R} such that

A = Q \Lambda Q^T.

This decomposition reveals that the eigenvalues \lambda_i are real and determine the signs in the quadratic form, with the counts of positive, negative, and zero eigenvalues corresponding to the signature of the form.[36]

A symmetric bilinear form is positive definite if all eigenvalues satisfy \lambda_i > 0, which is equivalent to the condition Q(\mathbf{v}) > 0 for all nonzero \mathbf{v} \in V. In this case, the signature is (n, 0), and such forms induce Euclidean inner products on V, enabling the definition of norms, angles, and orthogonality in a natural way. For computational purposes, positive definite forms admit a Cholesky decomposition: the matrix A factors uniquely as A = L L^T, where L is a lower triangular matrix with positive diagonal entries. This factorization is particularly useful for solving linear systems A \mathbf{x} = \mathbf{b} efficiently via forward and backward substitution, avoiding the need for pivoting and reducing numerical instability in algorithms.[37][38]

In contrast, indefinite symmetric bilinear forms arise when the eigenvalues have mixed signs, leading to quadratic forms that take both positive and negative values.
The level sets \{ \mathbf{v} \in V \mid Q(\mathbf{v}) = c \} for c \neq 0 are hyperboloids in the eigenbasis coordinates, reflecting the hyperbolic geometry associated with the form's signature (p, q) where p + q = n and both p, q > 0. By Sylvester's law of inertia, this signature is an invariant under change of basis, classifying real symmetric forms up to congruence.[39]
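The spectral and Cholesky decompositions described above are both available in NumPy; this sketch applies them to an illustrative positive definite matrix and solves a linear system through the factorization:

```python
import numpy as np

# Illustrative symmetric positive definite matrix.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# Spectral theorem: A = Q diag(lam) Q^T with orthogonal Q and real lam.
lam, Qm = np.linalg.eigh(A)
assert np.all(lam > 0)                      # positive definite: all eigenvalues > 0
assert np.allclose(Qm @ np.diag(lam) @ Qm.T, A)

# Cholesky factorization: A = L L^T with L lower triangular.
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)

# Solve A x = b via forward then backward substitution
# (here with a generic triangular-capable solver for brevity).
b = np.array([1.0, 2.0])
y = np.linalg.solve(L, b)
x = np.linalg.solve(L.T, y)
assert np.allclose(A @ x, b)
```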
Complex Symmetric Forms
In the complex case, a symmetric bilinear form on a finite-dimensional vector space V over the field \mathbb{C} is defined as a map B: V \times V \to \mathbb{C} that is linear in each argument separately and satisfies B(u, v) = B(v, u) for all u, v \in V.[1] The matrix representation of such a form with respect to a basis is a complex symmetric matrix, satisfying A^T = A, but not necessarily Hermitian (A^* = A).[1] This distinguishes symmetric bilinear forms from the more commonly used Hermitian forms over \mathbb{C}, which are sesquilinear (linear in the first argument and conjugate-linear in the second) and serve as inner products in complex Hilbert spaces.[1] Symmetric bilinear forms over \mathbb{C} arise in certain algebraic and geometric contexts, such as the Killing form on complex Lie algebras used for classifying semisimple Lie algebras, but are less prevalent in applications than sesquilinear forms, since the latter provide the positivity and orthogonality structure needed in complex analysis.[40][1]

Non-degeneracy for a symmetric bilinear form B over \mathbb{C} is defined analogously to the real case: B is non-degenerate if its radical \{v \in V \mid B(v, w) = 0 \ \forall w \in V\} is \{0\}, or equivalently, if the associated matrix is invertible.[1] The radical coincides with the kernel of the linear map V \to V^* induced by B, where V^* is the dual space.[1] As in the real case, V admits an orthogonal basis with respect to B, that is, a basis \{e_1, \dots, e_n\} such that B(e_i, e_j) = 0 for i \neq j, which diagonalizes B.[1] Over \mathbb{C}, however, the non-zero diagonal entries can all be normalized to 1 by rescaling the basis vectors, since every non-zero complex number has a square root.[1]

The classification of symmetric bilinear forms over \mathbb{C} simplifies significantly compared to the real case, with no sign invariants analogous to those in Sylvester's law of inertia.[1] Every non-degenerate symmetric bilinear form on
\mathbb{C}^n is congruent to the standard dot product form \sum_{i=1}^n x_i y_i, represented by the identity matrix, via a change of basis P such that P^T A P = I_n.[1] For degenerate forms, congruence classes are determined solely by the rank r (the dimension of the image, or number of non-zero diagonal entries after diagonalization), yielding a form congruent to the direct sum of the standard non-degenerate form on \mathbb{C}^r and the zero form on \mathbb{C}^{n-r}.[1] This rank-invariance follows from the algebraic closure of \mathbb{C}, allowing arbitrary non-zero diagonal entries to be scaled uniformly without introducing signs or other invariants.[1]
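The normalization step is easy to see for a diagonal complex form: rescaling each basis vector by an inverse complex square root of its diagonal entry produces the identity. A sketch, with an illustrative non-degenerate diagonal matrix:

```python
import numpy as np

# Illustrative non-degenerate diagonal complex symmetric form:
# diagonal entries may be negative or non-real, unlike over R.
D = np.diag([2.0 + 0.0j, -1.0 + 0.0j, 1j])

# Rescale basis vector i by 1/sqrt(d_i) (principal complex square root);
# then P^T D P has diagonal entries d_i / d_i = 1.
P = np.diag(1.0 / np.sqrt(np.diag(D)))
assert np.allclose(P.T @ D @ P, np.eye(3))
```

Over \mathbb{R} this fails for negative entries, since -1 has no real square root; that obstruction is exactly what the signature records.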
Polarities
Orthogonal Polarities
A non-degenerate symmetric bilinear form B: V \times V \to F on a finite-dimensional vector space V over a field F induces a polarity \phi_B: V \to V^*, where V^* is the dual space of V, defined by \phi_B(u)(v) = B(u, v) for all u, v \in V.[1][41] This map is a linear isomorphism because non-degeneracy ensures that the left kernel of B is trivial, making \phi_B injective, and by dimension counting, it is also surjective.[42][1] Since B is symmetric, \phi_B coincides with the map induced by the right action, v \mapsto B(\cdot, v).[1]

The orthogonal polarity associated with B sends each nonzero vector u \in V to its polar hyperplane u^\perp = \{v \in V \mid B(u, v) = 0\}, the kernel of \phi_B(u).[41] By non-degeneracy, (W^\perp)^\perp = W for any subspace W \subseteq V, so taking the polar twice recovers the span of u.[1][42] This construction relates points in V to hyperplanes via orthogonality, establishing a duality that underlies the polarity.[41]

A key property of such polarities is the existence of absolute points, which are vectors u \in V satisfying u \perp u, or equivalently B(u, u) = 0.[41] These points lie on the quadric defined by the associated quadratic form Q(u) = B(u, u).
In vector spaces of odd dimension, non-degenerate orthogonal polarities over fields of characteristic not 2 typically admit absolute points, depending on the field's properties and the form's signature; their existence varies across fields such as the reals or finite fields.[41][1]

Geometrically, in the associated projective space \mathbb{P}(V), an orthogonal polarity defines a correspondence between points and hyperplanes, where the absolute points form a conic (in dimension 3) or, more generally, a quadric hypersurface.[41] This structure captures the incidence relations preserved by the duality induced by B.[41]
In projective geometry, a non-degenerate symmetric bilinear form B on an n-dimensional vector space V over a field F (with characteristic not equal to 2) induces a projective polarity on the associated projective space \mathbb{P}^{n-1}(F). This polarity maps each point, represented by a one-dimensional subspace \langle v \rangle of V, to the hyperplane consisting of all points \langle w \rangle such that B(v, w) = 0. Such a map is a bijection between points and hyperplanes that preserves incidence relations in a dual manner.[41]

Due to the symmetry of B, the induced polarity is self-dual, meaning it is an involution: applying the polarity twice returns the original element, as the orthogonal complement operation satisfies (U^\perp)^\perp = U for subspaces U. This property distinguishes polarities arising from symmetric forms from more general correlations.[41]

The locus of the polarity, defined as the set of points x \in \mathbb{P}^{n-1}(F) satisfying B(x, x) = 0, forms a quadric hypersurface, which plays a central role in the geometry of the space. This quadric is the absolute of the polarity, the set of self-conjugate points under the form.[41]

In the plane \mathbb{P}^2(F), corresponding to a 3-dimensional vector space, the polarity induced by a non-degenerate 3×3 symmetric matrix has a conic section as its quadric locus. For instance, over the reals, the matrix \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} defines a polarity whose absolute is the nonempty real conic x^2 + y^2 - z^2 = 0, illustrating the hyperbolic type.[41]

Projective polarities in dimensions at least 3 arise from non-degenerate reflexive sesquilinear forms on the underlying vector space, with symmetric bilinear forms corresponding to the orthogonal case over fields of odd characteristic. This provides a foundational link between algebraic forms and geometric dualities.[41]
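In coordinates, the polarity of \mathbb{P}^2(\mathbb{R}) induced by \operatorname{diag}(1, 1, -1) sends a point with representative v to the line with coefficient vector Av. A small sketch, checking that an absolute point is self-conjugate and that applying the polarity twice returns the point:

```python
import numpy as np

# Matrix of the polarity from the example above.
A = np.diag([1.0, 1.0, -1.0])

def polar_line(v):
    """Coefficient vector of the polar line {w : (A v) . w = 0} of the point [v]."""
    return A @ v

v = np.array([1.0, 0.0, 1.0])    # absolute point: 1 + 0 - 1 = 0
assert v @ A @ v == 0
# An absolute point lies on its own polar line (self-conjugacy).
assert polar_line(v) @ v == 0

# The pole of the polar line of v is v again (up to scale), since A is
# symmetric and invertible: the polarity is an involution.
w = np.linalg.solve(A, polar_line(v))
assert np.allclose(w, v)
```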