Gram matrix
In linear algebra, the Gram matrix (also known as the Gramian matrix) of a finite set of vectors in an inner product space is the Hermitian matrix whose entries are given by the inner products of pairs of those vectors.[1] It is named after the Danish mathematician and actuary Jørgen Pedersen Gram (1850–1916), who contributed to its development in the context of orthogonalization processes and least squares approximations.[2] For a set of m vectors \{v_1, \dots, v_m\} in \mathbb{R}^n, the Gram matrix G is the m \times m matrix with entries G_{ij} = v_i^T v_j, or more generally G_{ij} = \langle v_i, v_j \rangle in an arbitrary inner product space.[3]

The Gram matrix is always positive semi-definite, meaning all its eigenvalues are nonnegative, and it is positive definite if and only if the vectors are linearly independent.[3] Its rank equals the dimension of the span of the vectors, which yields a test for linear dependence: the vectors are linearly independent if and only if the determinant of the Gram matrix (known as the Gram determinant) is nonzero.[4] If the vectors are represented as the columns of a matrix A, then the Gram matrix is G = A^* A (or A^T A in the real Euclidean case), which preserves key algebraic properties such as the nullity and invertibility relations between A and G.[3]

Gram matrices play a fundamental role in several areas of mathematics and its applications. In the Gram–Schmidt orthogonalization process, they facilitate the construction of orthonormal bases from linearly independent sets by enabling projections and normalizations.[5] They are essential in least squares problems, where the normal equations take the form G x = A^T b for approximations in overdetermined systems.[4] In numerical linear algebra and statistics, Gram matrices underpin analyses of vector correlations and covariance structures.[6] Beyond pure mathematics, they are central to kernel methods in machine learning, where the Gram (or kernel) matrix encodes pairwise similarities in a high-dimensional feature space without explicit computation of the features, enabling algorithms such as support vector machines and kernel principal component analysis.[7]
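The properties above, including the factorization G = A^T A, positive semi-definiteness, the rank criterion, and the normal equations, can be illustrated numerically. The following is a minimal sketch in Python with NumPy; the matrix A and vector b are arbitrary illustrative choices, not taken from a specific source.

import numpy as np

# Columns of A are the vectors v_1, ..., v_m in R^n (here n = 3, m = 2).
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 2.0]])

# Gram matrix G = A^T A; entry (i, j) is the dot product <v_i, v_j>.
G = A.T @ A

# Positive semi-definite: all eigenvalues are nonnegative
# (a small tolerance absorbs floating-point rounding).
eigenvalues = np.linalg.eigvalsh(G)
assert np.all(eigenvalues >= -1e-12)

# The Gram determinant is nonzero iff the columns of A are linearly
# independent, and rank(G) equals the dimension of their span.
print(np.linalg.det(G))          # nonzero here: the columns are independent
print(np.linalg.matrix_rank(G))  # 2, matching rank(A)

# Least squares for an overdetermined system via the normal
# equations G x = A^T b.
b = np.array([1.0, 2.0, 3.0])
x = np.linalg.solve(G, A.T @ b)

# Agrees with NumPy's built-in least-squares solver.
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])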
Fundamentals
Definition
In mathematics, particularly in the field of linear algebra, an inner product space is a vector space V over the real numbers \mathbb{R} or the complex numbers \mathbb{C} equipped with an inner product \langle \cdot, \cdot \rangle: V \times V \to \mathbb{R} (or \mathbb{C}). The inner product satisfies specific axioms: it is linear in the first argument, conjugate symmetric (i.e., \langle u, v \rangle = \overline{\langle v, u \rangle}, so that it is a bilinear form in the real case and a sesquilinear form in the complex case), and positive definite (\langle v, v \rangle \geq 0 for all v \in V, with equality if and only if v = 0).[8] These properties generalize the notions of length and angle from Euclidean space, enabling geometric interpretations in abstract settings.[9]

Given a finite set of vectors \{v_1, \dots, v_n\} in an inner product space V, the Gram matrix G associated with this set is the n \times n matrix whose (i,j)-th entry is the inner product G_{ij} = \langle v_i, v_j \rangle.[1] This construction captures the pairwise "similarities", or projections, between the vectors via the inner product. In the real case, where the inner product is symmetric (\langle u, v \rangle = \langle v, u \rangle), the resulting Gram matrix G is symmetric. In the complex case, the conjugate symmetry of the inner product ensures that G is Hermitian, meaning G_{ij} = \overline{G_{ji}}.[10]

To illustrate the construction, consider two vectors in the inner product space \mathbb{R}^2 equipped with the standard dot product \langle u, v \rangle = u_1 v_1 + u_2 v_2. Let v_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} and v_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. The entries are computed as follows:

G_{11} = \langle v_1, v_1 \rangle = 1 \cdot 1 + 1 \cdot 1 = 2,
G_{12} = \langle v_1, v_2 \rangle = 1 \cdot 1 + 1 \cdot 0 = 1,
G_{21} = \langle v_2, v_1 \rangle = 1 \cdot 1 + 0 \cdot 1 = 1,
G_{22} = \langle v_2, v_2 \rangle = 1 \cdot 1 + 0 \cdot 0 = 1.
Thus, the Gram matrix is G = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}.
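This hand computation can be reproduced numerically. The following short sketch, again assuming Python with NumPy, builds the same Gram matrix from the example vectors and confirms its symmetry and nonzero Gram determinant.

import numpy as np

# The two example vectors in R^2, stored as the columns of A.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, 0.0])
A = np.column_stack([v1, v2])

# Gram matrix with respect to the standard dot product: G_ij = <v_i, v_j>.
G = A.T @ A
print(G)  # [[2. 1.]
          #  [1. 1.]]

# G is symmetric, and the Gram determinant 2*1 - 1*1 = 1 is nonzero,
# so v1 and v2 are linearly independent.
assert np.allclose(G, G.T)
assert not np.isclose(np.linalg.det(G), 0.0)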