
Symmetric bilinear form

In linear algebra, a symmetric bilinear form on a vector space V over a field F (typically of characteristic not equal to 2) is a function B: V \times V \to F that is linear in each argument separately and satisfies the symmetry condition B(u, v) = B(v, u) for all u, v \in V. This structure generalizes the notion of a dot product and plays a fundamental role in defining geometric and algebraic properties of vector spaces. With respect to a basis of V, any symmetric bilinear form can be represented by a symmetric matrix A, where B(x, y) = x^T A y and A^T = A. The matrix entries a_{ij} = B(e_i, e_j) capture the form's values on basis vectors, and the rank of A equals the rank of the form, measuring its nondegeneracy (the form is nondegenerate if its kernel is trivial). A key associated concept is the quadratic form Q: V \to F defined by Q(v) = B(v, v), which recovers the bilinear form via the polarization identity B(u, v) = \frac{1}{2} [Q(u + v) - Q(u) - Q(v)] over fields of characteristic not 2. Over the real numbers \mathbb{R}, the spectral theorem ensures that every real symmetric matrix A is orthogonally diagonalizable, so symmetric bilinear forms can be classified up to congruence by their signature (p, q), where p is the number of positive eigenvalues, q the number of negative ones, and p + q = \dim V for nondegenerate forms. Positive definite forms (signature (n, 0)) induce inner products on V, enabling notions of orthogonality and norms essential in Euclidean geometry. Indefinite forms, such as the Minkowski metric in special relativity, arise in pseudo-Euclidean spaces with signature (p, q) where both p, q > 0. Symmetric bilinear forms find applications across mathematics and physics, including the analysis of quadratic forms in optimization—where positive definiteness of the Hessian determines local minima—and in the classification of conic sections via their associated matrices. Over other fields, such as finite fields or the complex numbers, the classification depends on invariants like the determinant \det A modulo squares, with all nondegenerate forms over \mathbb{C} equivalent to the standard form \sum_i x_i y_i.

Definition and Properties

Formal Definition

A bilinear form on a vector space V over a field F is a map B: V \times V \to F that is linear in each argument separately. Specifically, for all scalars a, b, c, d \in F and vectors u, v, w \in V, it satisfies B(au + bv, w) = a B(u, w) + b B(v, w) and B(u, cv + dw) = c B(u, v) + d B(u, w). Such forms generalize the notion of bilinear maps from linear algebra, where linearity holds in each coordinate. A symmetric bilinear form is a bilinear form that additionally satisfies the symmetry condition B(u, v) = B(v, u) for all u, v \in V, distinguishing it from general (possibly skew-symmetric) bilinear forms. These are typically studied on finite-dimensional vector spaces V over fields F of characteristic not equal to 2, ensuring compatibility with associated quadratic forms. Symmetric bilinear forms are often denoted by B(u, v) or \langle u, v \rangle_B.

Basic Properties

A symmetric bilinear form B: V \times V \to F on a vector space V over a field F inherits the core properties of bilinearity, which ensure additivity and homogeneity in each argument separately. Specifically, for all u, v, w \in V and c \in F, additivity in the first argument gives B(u + v, w) = B(u, w) + B(v, w), while homogeneity yields B(cu, w) = c B(u, w). Similarly, additivity in the second argument states B(u, v + w) = B(u, v) + B(u, w), and homogeneity provides B(u, cv) = c B(u, v). These properties establish that B is F-linear in each variable when the other is held fixed, making it an F-bilinear map. The symmetry condition B(u, v) = B(v, u) for all u, v \in V further implies that the values B(u, u) along the "diagonal" behave as quadratic terms, associating to each vector a scalar that scales quadratically: B(cu, cu) = c^2 B(u, u). This self-pairing structure of the diagonal elements B(u, u) underlies the form's connection to quadratic maps, without fully deriving the latter here. However, over fields of characteristic 2, the symmetry of a bilinear form does not guarantee the standard association with quadratic forms via polarization, as the identity relating B(u, v) to differences of values B(w, w) involves division by 2, which is impossible. In such fields, the notions of symmetric and skew-symmetric bilinear forms coincide (since -1 = 1), limiting the distinct algebraic implications of symmetry alone.

Examples and Applications

Concrete Examples

One fundamental example of a symmetric bilinear form is the standard dot product on the Euclidean space \mathbb{R}^n, defined by B(\mathbf{u}, \mathbf{v}) = \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i for \mathbf{u} = (u_1, \dots, u_n) and \mathbf{v} = (v_1, \dots, v_n). This form is symmetric since B(\mathbf{u}, \mathbf{v}) = B(\mathbf{v}, \mathbf{u}) and non-degenerate, as the only vector orthogonal to all of \mathbb{R}^n is the zero vector. An example of an indefinite symmetric bilinear form arises on \mathbb{R}^2, given by B((x_1, y_1), (x_2, y_2)) = x_1 x_2 - y_1 y_2. This form is symmetric and has signature (1,1), corresponding to the matrix \operatorname{diag}(1, -1) with respect to the standard basis, with one positive and one negative eigenvalue. A degenerate symmetric bilinear form on \mathbb{R}^2 is B(\mathbf{u}, \mathbf{v}) = u_1 v_1, where \mathbf{u} = (u_1, u_2) and \mathbf{v} = (v_1, v_2), so only the first coordinates contribute. This form has a nontrivial radical, consisting of all vectors of the form (0, u_2), and is represented by the matrix \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, which has rank 1. In matrix notation, a general symmetric bilinear form on \mathbb{R}^n can be expressed as B(\mathbf{u}, \mathbf{v}) = \mathbf{u}^T A \mathbf{v}, where A is an n \times n symmetric matrix. For instance, with A = \operatorname{diag}(1, -1) on \mathbb{R}^2, this recovers the indefinite form B((x_1, y_1), (x_2, y_2)) = x_1 x_2 - y_1 y_2. In abstract settings, such as Lie algebras, the Killing form provides an example of a symmetric bilinear form. For a Lie algebra \mathfrak{g} over a field of characteristic zero, it is defined by \kappa(X, Y) = \operatorname{Tr}(\operatorname{ad}_X \circ \operatorname{ad}_Y) for X, Y \in \mathfrak{g}, where \operatorname{ad} denotes the adjoint representation. This form is symmetric and invariant under automorphisms of \mathfrak{g}.
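The three example forms above can be checked numerically. The following is a minimal Python sketch; the helper `bform` and the specific test vectors are illustrative choices, not from the text:

```python
def bform(A, u, v):
    """Evaluate B(u, v) = u^T A v for a matrix A given as nested lists."""
    n = len(u)
    return sum(u[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

# Standard dot product on R^2: Gram matrix is the identity
I2 = [[1, 0], [0, 1]]
assert bform(I2, [1, 2], [3, 4]) == 1*3 + 2*4          # = 11

# Indefinite form x1*x2 - y1*y2: Gram matrix diag(1, -1)
M = [[1, 0], [0, -1]]
assert bform(M, [1, 2], [3, 4]) == 1*3 - 2*4           # = -5
assert bform(M, [1, 1], [1, 1]) == 0                   # (1, 1) is isotropic

# Degenerate form u1*v1: the radical contains (0, 1)
D = [[1, 0], [0, 0]]
assert all(bform(D, [0, 1], v) == 0 for v in ([1, 0], [0, 1]))
```

The degenerate case confirms that (0, 1) pairs to zero with every basis vector, hence with all of \mathbb{R}^2.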

Geometric and Physical Applications

Symmetric bilinear forms play a fundamental role in Euclidean geometry, where the standard inner product on \mathbb{R}^n serves as a prototypical example. This form, defined by B(\mathbf{u}, \mathbf{v}) = \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i, is symmetric and positive definite, enabling the measurement of lengths via |\mathbf{u}| = \sqrt{B(\mathbf{u}, \mathbf{u})} and angles through \cos \theta = \frac{B(\mathbf{u}, \mathbf{v})}{|\mathbf{u}| |\mathbf{v}|}. Such forms underpin the geometric structure of Euclidean spaces, facilitating concepts like orthogonality and projections essential to classical geometry. In Riemannian geometry, the metric tensor on a manifold is a symmetric bilinear form defined on each tangent space, providing a way to measure distances, angles, and curvatures intrinsically without embedding in a higher-dimensional Euclidean space. For a Riemannian manifold (M, g), the metric g_p: T_p M \times T_p M \to \mathbb{R} at point p \in M varies smoothly with p and is symmetric and positive definite, with the line element ds^2 = g_{ij} dx^i dx^j in local coordinates defining arc lengths along curves. This structure generalizes the inner product to curved spaces, forming the basis for geodesics and the study of curved manifolds in differential geometry. Physically, symmetric bilinear forms appear in special relativity through the Minkowski spacetime metric, a nondegenerate symmetric bilinear form of indefinite signature on \mathbb{R}^{1,3}, given by B(x, y) = x_0 y_0 - x_1 y_1 - x_2 y_2 - x_3 y_3. This Lorentz form, introduced by Hermann Minkowski in his 1908 lecture "Space and Time," unifies space and time into a four-dimensional continuum, invariant under Lorentz transformations, and defines the causal structure via light cones where B(x, x) = 0. In classical mechanics, the kinetic energy in Lagrangian formulations often derives from a symmetric bilinear form associated with the mass matrix; for generalized coordinates q, the kinetic energy is T = \frac{1}{2} \dot{q}^T M(q) \dot{q}, where M(q) is the symmetric positive definite inertia matrix, yielding the Lagrangian L = T - V.
This bilinear structure simplifies deriving equations of motion via the Euler-Lagrange equations for systems like multi-body dynamics.
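The kinetic-energy expression T = \frac{1}{2} \dot{q}^T M(q) \dot{q} is a direct evaluation of a positive definite symmetric bilinear form. A small sketch, with a hypothetical 2-degree-of-freedom mass matrix chosen only for illustration:

```python
def kinetic_energy(M, qdot):
    """T = 1/2 * qdot^T M qdot for a symmetric mass matrix M."""
    n = len(qdot)
    return 0.5 * sum(qdot[i] * M[i][j] * qdot[j]
                     for i in range(n) for j in range(n))

# Hypothetical symmetric positive definite inertia matrix (illustrative values)
M = [[2.0, 0.5],
     [0.5, 1.0]]
qdot = [1.0, 2.0]

T = kinetic_energy(M, qdot)
# T = 0.5 * (2*1 + 2*0.5*1*2 + 1*4) = 4.0
assert abs(T - 4.0) < 1e-12
assert T > 0   # positive definiteness: T > 0 for any nonzero velocity
```

Positive definiteness of M guarantees the kinetic energy is positive for every nonzero velocity vector, as required physically.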

Quadratic Forms

Correspondence to Quadratic Forms

A symmetric bilinear form B: V \times V \to F on a vector space V over a field F induces an associated quadratic form Q: V \to F defined by Q(v) = B(v, v) for all v \in V. This quadratic form is homogeneous of degree 2, meaning Q(\lambda v) = \lambda^2 Q(v) for \lambda \in F. Conversely, given a quadratic form Q on V, a unique symmetric bilinear form B can be recovered via the relation Q(u + v) = Q(u) + Q(v) + 2B(u, v) for all u, v \in V, assuming the characteristic of F is not 2. Solving for B, it follows that B(u, v) = \frac{Q(u + v) - Q(u) - Q(v)}{2}. This recovery formula ensures the uniqueness of the associated symmetric bilinear form. Under the assumption that \operatorname{char} F \neq 2, the map sending a symmetric bilinear form to its associated quadratic form establishes a bijection between the set of symmetric bilinear forms on V and the set of quadratic forms on V. This correspondence highlights the close theoretical link between the two structures in linear algebra. A bilinear form B is degenerate if and only if its associated quadratic form Q is degenerate, meaning the matrix representation of B (or equivalently, of Q) has rank less than \dim V. In geometric contexts, such degenerate quadratic forms correspond to degenerate quadric surfaces, such as cylinders, where the defining equation lacks full rank and results in unbounded or ruled surfaces.

Polarization Identity

The polarization identity expresses a symmetric bilinear form B in terms of its associated quadratic form Q, over a field of characteristic not equal to 2. It states that B(u, v) = \frac{1}{4} \left[ Q(u + v) - Q(u - v) \right], or, in an equivalent additive form, B(u, v) = \frac{1}{2} \left[ Q(u + v) - Q(u) - Q(v) \right]. A proof proceeds by expanding the quadratic form using its defining relation with the bilinear form. Specifically, bilinearity and symmetry of B imply Q(u + v) = Q(u) + Q(v) + 2 B(u, v), \quad Q(u - v) = Q(u) + Q(v) - 2 B(u, v). Subtracting these equations yields Q(u + v) - Q(u - v) = 4 B(u, v), confirming the first formula upon division by 4. The additive version follows directly by substituting into the first expansion and solving for B(u, v). These identities hold for all vectors u, v by bilinearity alone, with no basis computation required. The polarization identity implies a bijective correspondence between symmetric bilinear forms and quadratic forms over fields of characteristic not 2, facilitating the analysis of quadratic forms via the theory of bilinear forms. It is useful in functional analysis for recovering inner products from norms through related identities like the parallelogram law, and in optimization and signal processing for techniques such as phase retrieval, where relative phases are reconstructed from magnitude data akin to quadratic evaluations. In fields of characteristic 2, however, the polarization identity fails, as the coefficient 2 is not invertible, rendering the associated bilinear form alternating rather than fully recoverable from the quadratic form; alternative approaches, such as quadratic modules, are then required.
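Both versions of the identity can be verified exactly over the integers by multiplying through to avoid division. A minimal sketch, with an arbitrary symmetric matrix and test vectors chosen for illustration:

```python
def Q(A, v):
    """Quadratic form Q(v) = v^T A v for a symmetric matrix A (nested lists)."""
    n = len(v)
    return sum(v[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

def Bform(A, u, v):
    """Bilinear form B(u, v) = u^T A v."""
    n = len(u)
    return sum(u[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

A = [[1, 2], [2, 3]]                  # illustrative symmetric matrix
u, v = [1, -2], [3, 5]
up = [a + b for a, b in zip(u, v)]    # u + v
um = [a - b for a, b in zip(u, v)]    # u - v

# Multiplied-through forms of the two polarization identities:
assert 4 * Bform(A, u, v) == Q(A, up) - Q(A, um)
assert 2 * Bform(A, u, v) == Q(A, up) - Q(A, u) - Q(A, v)
```

Because the arithmetic is exact, the assertions confirm the identities term by term rather than up to floating-point error.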

Matrix Representation

Symmetric Matrix Representation

In a finite-dimensional vector space V over a field F of characteristic not equal to 2 (such as \mathbb{R} or \mathbb{C}), a symmetric bilinear form B: V \times V \to F can be represented by a matrix with respect to a chosen ordered basis \{e_1, \dots, e_n\}. The matrix A = (a_{ij}) is defined by its entries a_{ij} = B(e_i, e_j) for i, j = 1, \dots, n. Since B is symmetric, B(e_i, e_j) = B(e_j, e_i), which implies that A is a symmetric matrix, satisfying A^T = A. For arbitrary vectors u = \sum_{i=1}^n u_i e_i and v = \sum_{j=1}^n v_j e_j in V, the bilinear form evaluates to B(u, v) = \mathbf{u}^T A \mathbf{v}, where \mathbf{u} = (u_1, \dots, u_n)^T and \mathbf{v} = (v_1, \dots, v_n)^T are the coordinate column vectors of u and v with respect to the basis. The diagonal entries of A are given explicitly by a_{ii} = B(e_i, e_i) for each i. This construction follows directly from the bilinearity of B and the basis expansion. Several properties of the matrix A relate to invariants of the bilinear form. The trace of A, denoted \operatorname{tr}(A), equals the sum \sum_{i=1}^n B(e_i, e_i), which captures the diagonal contributions in the chosen basis. Additionally, the bilinear form B is non-degenerate if and only if the matrix A is invertible, meaning \det(A) \neq 0; in this case, B induces an isomorphism from V to its dual space. A concrete example is the standard dot product on \mathbb{R}^n, which defines a symmetric bilinear form B(\mathbf{u}, \mathbf{v}) = \mathbf{u} \cdot \mathbf{v} = \sum_{i=1}^n u_i v_i. With respect to the standard basis \{e_1, \dots, e_n\}, where each e_i is the vector with 1 in the i-th coordinate and 0 elsewhere, the matrix A is the n \times n identity matrix I_n, since B(e_i, e_j) = \delta_{ij} (the Kronecker delta). This representation highlights the form's role in defining the Euclidean inner product.
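The construction a_{ij} = B(e_i, e_j) can be carried out directly in code. A short sketch, using an illustrative form on \mathbb{R}^2 (the helper `gram_matrix` is an assumption of this sketch, not a library function):

```python
def gram_matrix(B, basis):
    """Matrix A with entries a_ij = B(e_i, e_j) for an ordered basis."""
    return [[B(ei, ej) for ej in basis] for ei in basis]

# An illustrative symmetric bilinear form on R^2, chosen for this sketch
B = lambda u, v: u[0]*v[0] + 2*(u[0]*v[1] + u[1]*v[0]) + 3*u[1]*v[1]

basis = [[1, 0], [0, 1]]
A = gram_matrix(B, basis)
assert A == [[1, 2], [2, 3]]          # symmetric: a_ij = a_ji

# B(u, v) = u^T A v for arbitrary coordinate vectors
u, v = [1, 2], [3, 4]
uAv = sum(u[i] * A[i][j] * v[j] for i in range(2) for j in range(2))
assert uAv == B(u, v)                 # both equal 47
```

The final assertion confirms that the matrix built from basis values reproduces B on arbitrary vectors, exactly as bilinearity predicts.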

Congruence and Change of Basis

When a basis for the vector space is changed, the matrix representation of a symmetric bilinear form transforms accordingly. Suppose the form is represented by a symmetric matrix A with respect to basis \mathcal{B}, and let \mathcal{C} be another basis with transition matrix P (whose columns are the coordinates of the \mathcal{C}-basis vectors in \mathcal{B}). Then, the matrix A' of the form with respect to \mathcal{C} is given by A' = P^T A P. This transformation preserves the symmetry of the matrix, as (P^T A P)^T = P^T A^T (P^T)^T = P^T A P since A is symmetric. The relation defined by this transformation leads to the concept of matrix congruence. Two symmetric matrices A and B are said to be congruent if there exists an invertible matrix P such that B = P^T A P. This equivalence means that A and B represent the same symmetric bilinear form up to a change of basis, capturing the intrinsic properties of the form independent of the chosen basis. Congruence is an equivalence relation on the set of symmetric matrices, partitioning them into classes that correspond to isomorphic bilinear forms. Under congruence, certain invariants of the symmetric bilinear form are preserved, including its rank and signature, while properties like eigenvalues are not directly preserved (unlike under similarity transformations). The rank of the form, which equals the dimension of the image of the associated linear map V \to V^*, remains unchanged because the transformation P^T A P has the same nullity as A for invertible P. Similarly, the signature—a count of the positive and negative eigenvalues in the real case—is preserved, reflecting the form's type (positive definite, indefinite, etc.) across bases. Over the real numbers, symmetric bilinear forms can be diagonalized via congruence, simplifying computations and revealing canonical forms. For instance, consider the symmetric matrix A = \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} representing a form on \mathbb{R}^2.
Using the change-of-basis matrix P = \begin{pmatrix} 1 & -2 \\ 0 & 1 \end{pmatrix}, the congruent matrix is A' = P^T A P = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, which is diagonal with entries 1 and -1, illustrating an indefinite form. This process amounts to finding a basis in which the form takes a diagonal representation, achievable through methods like completing the square or Gram–Schmidt-style orthogonalization adapted for bilinear forms.
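The worked example above can be verified with a few lines of Python; the helpers `transpose` and `matmul` are minimal stand-ins written for this sketch:

```python
def transpose(M):
    """Transpose of a matrix given as nested lists."""
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    """Matrix product X Y."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2], [2, 3]]
P = [[1, -2], [0, 1]]

Ap = matmul(matmul(transpose(P), A), P)   # A' = P^T A P
assert Ap == [[1, 0], [0, -1]]            # diagonal, entries 1 and -1
assert Ap == transpose(Ap)                # congruence preserves symmetry
```

Note that A' has different eigenvalues than A: congruence preserves rank and signature but not the spectrum, unlike similarity.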

Orthogonality and Degeneracy

Orthogonal Vectors and Bases

In the context of a symmetric bilinear form B on a vector space V over a field F, two vectors u, v \in V are said to be orthogonal, denoted u \perp v, if B(u, v) = 0. This relation is symmetric due to the symmetry of B, meaning u \perp v if and only if v \perp u. Orthogonality extends to subspaces: a subspace W \subseteq V is orthogonal to another subspace U \subseteq V, written W \perp U, if B(w, u) = 0 for all w \in W and u \in U. An orthogonal basis for V with respect to B is a basis \{e_1, \dots, e_n\} such that e_i \perp e_j for all i \neq j, or equivalently, B(e_i, e_j) = 0 whenever i \neq j. With respect to such a basis, the matrix representation of B is diagonal, with diagonal entries B(e_i, e_i). Over fields F of characteristic not equal to 2, every symmetric bilinear form on a finite-dimensional vector space admits an orthogonal basis; this follows from the fact that such forms are diagonalizable by congruence. The existence can be established by induction on the dimension of V: if \dim V = n > 0 and B is nonzero, select a vector v \in V with B(v, v) \neq 0, decompose V = \operatorname{Span}\{v\} \oplus v^\perp, and apply the induction hypothesis to the restriction of B to the orthogonal complement v^\perp, which has dimension n-1. However, orthogonal bases do not always exist in characteristic 2: for example, the hyperbolic plane with B(u, u) = B(v, v) = 0 and B(u, v) = 1 admits no orthogonal basis over such fields. In an orthogonal basis \{e_1, \dots, e_n\}, the bilinear form decouples completely: for any u = \sum_{i=1}^n a_i e_i and v = \sum_{i=1}^n b_i e_i, we have B(u, v) = \sum_{i=1}^n a_i b_i B(e_i, e_i). This separation highlights how the form acts independently along each basis direction, simplifying computations and revealing the intrinsic structure of B.
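The inductive construction can be mimicked by a Gram–Schmidt-style procedure adapted to B. A sketch in exact rational arithmetic, assuming no intermediate vector is isotropic (the same restriction as the induction step's choice of v with B(v, v) \neq 0):

```python
from fractions import Fraction

def B(A, u, v):
    """B(u, v) = u^T A v."""
    n = len(u)
    return sum(u[i] * A[i][j] * v[j] for i in range(n) for j in range(n))

def orthogonalize(A, basis):
    """Gram-Schmidt-style orthogonalization with respect to B(u, v) = u^T A v.
    Assumes no intermediate vector w has B(w, w) = 0 (no isotropic pivots)."""
    ortho = []
    for v in basis:
        w = [Fraction(x) for x in v]
        for u in ortho:
            c = B(A, w, u) / B(A, u, u)       # projection coefficient
            w = [wi - c * ui for wi, ui in zip(w, u)]
        ortho.append(w)
    return ortho

A = [[1, 2], [2, 3]]
e1, e2 = orthogonalize(A, [[1, 0], [0, 1]])
assert B(A, e1, e2) == 0                       # cross term vanishes
assert (B(A, e1, e1), B(A, e2, e2)) == (1, -1) # diagonal entries 1 and -1
```

The resulting diagonal entries (1, -1) match the congruence-diagonalization example from the matrix-representation discussion.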

Singularity and Non-degeneracy

A symmetric bilinear form B: V \times V \to F on a vector space V over a field F (with \operatorname{char} F \neq 2) is associated with its radical, defined as the subspace \operatorname{Rad}(B) = \{ u \in V \mid B(u, v) = 0 \ \forall v \in V \}. This radical, also called the kernel of B, consists of all vectors orthogonal to the entire space under B. The form B is degenerate if \operatorname{Rad}(B) \neq \{0\}, meaning there exists a nonzero vector annihilating all others via B. Conversely, B is non-degenerate if \operatorname{Rad}(B) = \{0\}, ensuring no such nontrivial vector exists. In finite-dimensional settings, with respect to a basis, non-degeneracy is equivalent to the associated matrix A (where B(u, v) = u^T A v) being invertible, i.e., \det A \neq 0. For general bilinear forms, one distinguishes the left radical \operatorname{Rad}_L(B) = \{ u \in V \mid B(u, v) = 0 \ \forall v \in V \} and right radical \operatorname{Rad}_R(B) = \{ v \in V \mid B(u, v) = 0 \ \forall u \in V \}; however, symmetry of B (i.e., B(u, v) = B(v, u)) implies \operatorname{Rad}_L(B) = \operatorname{Rad}_R(B) = \operatorname{Rad}(B). The rank of B is then \operatorname{rank}(B) = \dim V - \dim \operatorname{Rad}(B), which coincides with the matrix rank of A. The form B induces a linear map \phi_B: V \to V^* (the dual space), given by \phi_B(u)(v) = B(u, v). This map is an isomorphism if and only if B is non-degenerate, and it is singular precisely when B is degenerate, with \ker \phi_B = \operatorname{Rad}(B). In finite dimensions, identifying V \cong V^{**}, a non-degenerate B likewise induces a non-degenerate symmetric bilinear form on the dual space V^*.
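The relation \operatorname{rank}(B) = \dim V - \dim \operatorname{Rad}(B) can be checked by row-reducing the representing matrix. A sketch in exact rational arithmetic (the helper `matrix_rank` is written for this example):

```python
from fractions import Fraction

def matrix_rank(A):
    """Rank of a matrix over the rationals via Gauss-Jordan elimination."""
    M = [[Fraction(x) for x in row] for row in A]
    rank = 0
    for col in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][col] != 0:
                f = M[r][col] / M[rank][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

n = 2
A_deg = [[1, 0], [0, 0]]                 # degenerate form u1*v1 on R^2
assert matrix_rank(A_deg) == 1           # rank(B) = 1
assert n - matrix_rank(A_deg) == 1       # dim Rad(B) = 1

A_nd = [[1, 2], [2, 3]]                  # det = -1, so non-degenerate
assert matrix_rank(A_nd) == 2            # trivial radical
```

For the degenerate example the radical is spanned by (0, 1), consistent with the rank count.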

Classification

Signature

The signature of a non-degenerate symmetric bilinear form B on a finite-dimensional real vector space V of dimension n is the pair (p, q), where p is the number of positive entries and q the number of negative entries on the diagonal of the matrix representing B with respect to an orthogonal basis \{e_1, \dots, e_n\} in which B(e_i, e_j) = 0 for i \neq j and B(e_i, e_i) = \pm 1, with p + q = n. This classification invariant captures the form's definite or indefinite nature, distinguishing the positive and negative definite cases by counting the signs of the diagonalized entries. A special case is the neutral signature, occurring when p = q in even-dimensional spaces, which characterizes hyperbolic forms as orthogonal direct sums of hyperbolic planes—each a 2-dimensional subspace with basis vectors u, v satisfying B(u, u) = B(v, v) = 0 and B(u, v) = 1, contributing one positive and one negative eigenvalue upon diagonalization. For example, the standard hyperbolic plane on \mathbb{R}^2 with Gram matrix \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} has neutral signature (1, 1). The signature is invariant under congruence, meaning that if two symmetric bilinear forms are represented by matrices A and A' = P^T A P for some invertible matrix P, then they share the same (p, q), independent of the choice of basis. For degenerate forms, where the radical \operatorname{Rad}(B) = \{v \in V \mid B(v, w) = 0 \ \forall w \in V\} has positive dimension r = \dim \operatorname{Rad}(B), the extended notation (p, q, r) is used, with p + q + r = n and the form restricted to a complement of the radical yielding the non-degenerate signature (p, q).
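The signature can be computed by diagonalizing the Gram matrix with paired row/column operations (congruence transformations) and counting signs. A sketch in exact rational arithmetic; the pivoting strategy is one reasonable choice, not a canonical algorithm:

```python
from fractions import Fraction

def signature(A):
    """Diagonalize a symmetric matrix by congruence (paired row/column
    eliminations) and return (p, q): counts of positive and negative
    diagonal entries. Zero entries left over belong to the radical."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    p = q = 0
    for k in range(n):
        if M[k][k] == 0:
            # try to bring a nonzero entry onto the diagonal
            j = next((j for j in range(k + 1, n) if M[j][j] != 0), None)
            if j is not None:                       # swap basis vectors k, j
                M[k], M[j] = M[j], M[k]
                for row in M:
                    row[k], row[j] = row[j], row[k]
            else:
                j = next((j for j in range(k + 1, n) if M[k][j] != 0), None)
                if j is not None:                   # replace e_k by e_k + e_j
                    M[k] = [a + b for a, b in zip(M[k], M[j])]
                    for row in M:
                        row[k] += row[j]
        d = M[k][k]
        if d == 0:
            continue                                # radical direction
        p += d > 0
        q += d < 0
        for i in range(k + 1, n):                   # clear column k below pivot
            f = M[i][k] / d
            M[i] = [a - f * b for a, b in zip(M[i], M[k])]
        for i in range(k + 1, n):                   # matching column operations
            f = M[k][i] / d
            for r in range(n):
                M[r][i] -= f * M[r][k]
    return p, q

assert signature([[0, 1], [1, 0]]) == (1, 1)   # hyperbolic plane: neutral
assert signature([[1, 2], [2, 3]]) == (1, 1)   # indefinite (det = -1)
assert signature([[2, 0], [0, 5]]) == (2, 0)   # positive definite
```

The hyperbolic-plane case exercises the isotropic-pivot branch: both diagonal entries start at zero, so a basis vector is replaced by a sum before elimination proceeds.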

Sylvester's Law of Inertia

Sylvester's law of inertia provides a complete classification of non-degenerate symmetric bilinear forms on finite-dimensional real vector spaces up to congruence. Specifically, for a non-degenerate symmetric bilinear form B on a real vector space V of dimension n, there exists a basis of V with respect to which the matrix of B is the diagonal matrix \operatorname{diag}(I_p, -I_q), where I_p is the p \times p identity matrix, I_q is the q \times q identity matrix, and p + q = n. The pair (p, q), known as the inertia of B, is uniquely determined by B and remains invariant under change of basis. This inertia corresponds to the signature of the associated quadratic form, defined as p - q. The proof of this theorem relies on induction on the dimension n of V. For the base case n = 1, the form is simply multiplication by a nonzero real number, which can be scaled to +1 or -1, yielding inertia (1, 0) or (0, 1). Assume the result holds for dimension n-1. Since B is non-degenerate, there exists a nonzero vector v \in V such that B(v, v) \neq 0. Without loss of generality, scale v so that B(v, v) = 1 (if B(v, v) < 0, scale to -1 instead). The orthogonal complement W = \{ w \in V \mid B(v, w) = 0 \} is a subspace of dimension n-1, and the restriction of B to W is non-degenerate. By the induction hypothesis, W admits an orthogonal basis with respect to which B|_W has matrix \operatorname{diag}(I_{p'}, -I_{q'}) for some p' + q' = n-1. Extending this basis by adjoining v yields an orthogonal basis for V in which the matrix of B is \operatorname{diag}(1, I_{p'}, -I_{q'}) if B(v, v) = 1, so the inertia is (p' + 1, q'); the negative case adjusts accordingly to (p', q' + 1). Uniqueness of the inertia follows from the fact that congruent forms share the same maximal positive- and negative-definite subspaces. 
This classification extends to fields of characteristic not equal to 2 in which -1 is not a sum of squares (formally real fields), ensuring that positive and negative directions remain distinguishable; however, the result is most prominently applied over the real numbers. The theorem was first enunciated and demonstrated by James Joseph Sylvester in 1852, in his paper on the reduction of homogeneous quadratic polynomials via real orthogonal substitutions. Later refinements, including explicit proofs and generalizations, appeared in the work of his contemporaries.

Field Extensions

Real Symmetric Forms

Over the real numbers, symmetric bilinear forms possess particularly rich structure due to the properties of their matrix representations. A symmetric bilinear form B on a finite-dimensional real vector space V is represented by a symmetric matrix A, and the associated quadratic form is Q(\mathbf{v}) = B(\mathbf{v}, \mathbf{v}) = \mathbf{v}^T A \mathbf{v}. The key theorem governing their behavior is the spectral theorem, which asserts that every real symmetric matrix A is orthogonally diagonalizable: there exists an orthogonal matrix U (with U^T U = I) and a diagonal matrix \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n) with real eigenvalues \lambda_i \in \mathbb{R} such that A = U \Lambda U^T. This decomposition reveals that the eigenvalues \lambda_i determine the signs appearing in the quadratic form, with the counts of positive, negative, and zero eigenvalues corresponding to the signature of the form. A symmetric bilinear form is positive definite if all eigenvalues satisfy \lambda_i > 0, which is equivalent to the condition Q(\mathbf{v}) > 0 for all nonzero \mathbf{v} \in V. In this case, the signature is (n, 0), and such forms induce inner products on V, enabling the definition of norms, angles, and orthogonality in a natural way. For computational purposes, positive definite forms admit a Cholesky factorization: the matrix A factors uniquely as A = L L^T, where L is a lower triangular matrix with positive diagonal entries. This factorization is particularly useful for solving linear systems A \mathbf{x} = \mathbf{b} efficiently via forward and backward substitution, avoiding the need for pivoting and reducing numerical instability. In contrast, indefinite symmetric bilinear forms arise when the eigenvalues have mixed signs, leading to quadratic forms that take both positive and negative values.
The level sets \{ \mathbf{v} \in V \mid Q(\mathbf{v}) = c \} for c \neq 0 are hyperboloids in the eigenbasis coordinates, reflecting the pseudo-Euclidean geometry associated with the form's signature (p, q), where p + q = n and both p, q > 0. By Sylvester's law of inertia, the signature is invariant under congruence, classifying real symmetric forms up to congruence.
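The Cholesky factorization mentioned above is short to implement directly. A minimal sketch with a small illustrative matrix; production code would use a tested library routine instead:

```python
import math

def cholesky(A):
    """Return lower-triangular L with A = L L^T for a symmetric positive
    definite matrix A; raises ValueError if A is not positive definite."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = A[i][i] - s
                if d <= 0:
                    raise ValueError("matrix is not positive definite")
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 3.0]]          # symmetric positive definite
L = cholesky(A)                        # L = [[2, 0], [1, sqrt(2)]]
for i in range(2):
    for j in range(2):
        assert abs(sum(L[i][k] * L[j][k] for k in range(2)) - A[i][j]) < 1e-12

# An indefinite form has no Cholesky factorization:
try:
    cholesky([[1.0, 0.0], [0.0, -1.0]])
    factored = True
except ValueError:
    factored = False
assert not factored
```

The failure on diag(1, -1) illustrates that Cholesky doubles as a positive-definiteness test: the algorithm aborts exactly when a pivot d \le 0 appears.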

Complex Symmetric Forms

In the complex case, a symmetric bilinear form on a finite-dimensional vector space V over the field \mathbb{C} is defined as a map B: V \times V \to \mathbb{C} that is linear in each argument separately and satisfies B(u, v) = B(v, u) for all u, v \in V. The matrix representation of such a form with respect to a basis is a complex symmetric matrix, satisfying A^T = A, but not necessarily Hermitian (A^* = A). This distinguishes symmetric bilinear forms from the more commonly used Hermitian forms over \mathbb{C}, which are sesquilinear (linear in the first argument and conjugate-linear in the second) and serve as inner products in complex Hilbert spaces. Symmetric bilinear forms over \mathbb{C} arise in certain algebraic and geometric contexts, such as the Killing form on complex Lie algebras used for classifying semisimple Lie algebras, but are less prevalent in applications compared to sesquilinear forms, as the latter preserve the structure needed for notions like positivity and orthogonality in complex analysis. Non-degeneracy for a symmetric bilinear form B over \mathbb{C} is defined analogously to the real case: B is non-degenerate if its radical \{v \in V \mid B(v, w) = 0 \ \forall w \in V\} is \{0\}, or equivalently, if the associated matrix is invertible. The rank of the form coincides with the rank of the linear map V \to V^* induced by B, where V^* is the dual space. An orthogonal basis for V exists with respect to B, meaning a basis \{e_1, \dots, e_n\} such that B(e_i, e_j) = 0 for i \neq j. Unlike in the real case, the non-zero diagonal entries of the resulting diagonal matrix can all be normalized to 1 via rescaling, as every non-zero complex number has a square root. The classification of symmetric bilinear forms over \mathbb{C} therefore simplifies significantly compared to the real case, with no direct analog to Sylvester's law of inertia. Every non-degenerate symmetric bilinear form on \mathbb{C}^n is congruent to the standard form \sum_{i=1}^n x_i y_i, represented by the identity matrix I_n, via an invertible matrix P such that P^T A P = I_n.
For degenerate forms, congruence classes are determined solely by the rank r (the rank of the representing matrix, or the number of non-zero diagonal entries after diagonalization), yielding a form congruent to the orthogonal direct sum of the standard non-degenerate form on \mathbb{C}^r and the zero form on \mathbb{C}^{n-r}. This rank-invariance follows from the algebraic closedness of \mathbb{C}, which allows any non-zero diagonal entry to be rescaled to 1 without introducing signs or other invariants.
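The rescaling argument is a one-liner to verify: replacing a basis vector e by e / \sqrt{d} multiplies the diagonal entry d by (1/\sqrt{d})^2, giving 1. A numerical sketch with arbitrary non-zero complex diagonal entries:

```python
import cmath

# Diagonal of a hypothetical non-degenerate complex symmetric form
diag = [2.0, -3.0, 1j]

# Rescale each basis vector e_i by 1/sqrt(d_i); congruence acts on the
# diagonal as d -> s * d * s with s = 1/sqrt(d), which always yields 1 over C.
scales = [1 / cmath.sqrt(d) for d in diag]
normalized = [s * d * s for s, d in zip(scales, diag)]

assert all(abs(x - 1) < 1e-12 for x in normalized)
```

Over \mathbb{R} the same trick fails for negative entries, since \sqrt{d} is not real; that obstruction is exactly what the signature records.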

Polarities

Orthogonal Polarities

A non-degenerate symmetric bilinear form B: V \times V \to F on a finite-dimensional vector space V over a field F induces a map \phi_B: V \to V^*, where V^* is the dual space of V, defined by \phi_B(u)(v) = B(u, v) for all u, v \in V. This map is a linear isomorphism because non-degeneracy ensures that the radical of B is trivial, making \phi_B injective, and by dimension counting, it is also surjective. Since B is symmetric, \phi_B coincides with the map induced by the right action, v \mapsto B(\cdot, v). The orthogonal polarity associated with B is the map that sends each vector u \in V to the orthogonal complement of its polar hyperplane, specifically u \mapsto \{v \in V \mid B(u, v) = 0\}^\perp. Here, the polar hyperplane of u is the kernel of \phi_B(u), which is \{v \in V \mid B(u, v) = 0\} = u^\perp, and its orthogonal complement (u^\perp)^\perp recovers the span of u due to non-degeneracy, as (W^\perp)^\perp = W for any subspace W \subseteq V. This construction relates points in V to hyperplanes via orthogonality, establishing a duality that underlies the polarity. A key property of such polarities is the existence of absolute points, which are vectors u \in V satisfying u \perp u, or equivalently B(u, u) = 0. These points lie on the quadric defined by the associated quadratic form Q(u) = B(u, u). Whether absolute points exist depends on the field and the form: over the reals, an indefinite form admits nonzero isotropic vectors while a definite form admits none, and over finite fields of odd characteristic every non-degenerate form in dimension at least 3 admits them. Geometrically, in the associated projective space \mathbb{P}(V), an orthogonal polarity defines a duality between points and hyperplanes, where the absolute points form a conic (in dimension 3) or more generally a quadric. This structure captures the incidence relations preserved by the duality induced by B.

Polarities in Projective Geometry

In projective geometry, a non-degenerate symmetric bilinear form B on an n-dimensional vector space V over a field F (with characteristic not equal to 2) induces a projective polarity on the associated projective space \mathbb{P}^{n-1}(F). This polarity maps each point, represented by a one-dimensional subspace \langle v \rangle of V, to the hyperplane consisting of all points \langle w \rangle such that B(v, w) = 0. Such a map is a correspondence between points and hyperplanes that preserves incidence relations in a dual manner. Due to the symmetry of B, the induced polarity is self-dual, meaning it is an involution: applying the polarity twice returns the original element, as the orthogonal complement operation satisfies (U^\perp)^\perp = U for subspaces U. This property distinguishes polarities arising from symmetric forms from more general correlations. The absolute locus of the polarity, defined as the set of points x \in \mathbb{P}^{n-1}(F) satisfying B(x, x) = 0, forms a quadric, which plays a central role in the geometry of the space. This is the absolute conic or the set of self-orthogonal points under the form. In the projective plane \mathbb{P}^2(F), corresponding to a 3-dimensional vector space, the polarity induced by a non-degenerate 3×3 symmetric matrix yields conic sections as the absolute loci. For instance, over the reals, the matrix \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} defines a polarity whose absolute locus is the conic x^2 + y^2 = z^2, illustrating the hyperbolic type. Projective polarities in dimensions at least 3 arise from non-degenerate reflexive sesquilinear forms on the underlying vector space, with symmetric bilinear forms corresponding to the orthogonal case over fields of odd characteristic. This provides a foundational link between algebraic forms and geometric dualities.