Orthogonal complement

In linear algebra, the orthogonal complement of a subset S of an inner product space V is the set S^\perp = \{ v \in V \mid \langle v, s \rangle = 0 \text{ for all } s \in S \}, consisting of all elements in V that are orthogonal to every element of S. This concept generalizes the notion of perpendicularity from Euclidean geometry to vector spaces equipped with an inner product, such as the dot product in \mathbb{R}^n. When S is a subspace W of a finite-dimensional inner product space V, W^\perp is itself a subspace of V, and V decomposes orthogonally as the direct sum V = W \oplus W^\perp, meaning every vector in V can be uniquely expressed as the sum of a vector in W and a vector in W^\perp. Furthermore, the double orthogonal complement satisfies (W^\perp)^\perp = W, and the dimensions relate additively by \dim W + \dim W^\perp = \dim V.

In the specific case of \mathbb{R}^n with the standard dot product, the orthogonal complement of the row space of a matrix A is the null space of A, and vice versa for the column space and left null space, underpinning key results like the rank-nullity theorem. The orthogonal complement plays a central role in applications such as orthogonal projections, where the projection of a vector onto W is the closest point in W to the vector, with the error vector lying in W^\perp; this is foundational in least squares problems and approximation. In Hilbert spaces, which are complete inner product spaces, the orthogonal complement extends to infinite dimensions, enabling decompositions essential for spectral theory and Fourier analysis.

Fundamentals

Definition

In an inner product space, which is a vector space equipped with an inner product—a positive-definite form that generalizes the dot product and allows measurement of lengths and angles—orthogonality between vectors is defined via this structure. Specifically, two vectors u and v in the space are orthogonal if their inner product satisfies \langle u, v \rangle = 0, indicating perpendicularity in the geometric sense induced by the inner product.

For a subspace W of an inner product space V, the orthogonal complement of W, denoted W^\perp, is the set of all vectors in V that are orthogonal to every vector in W. Formally, W^\perp = \{ v \in V \mid \langle v, w \rangle = 0 \ \forall \, w \in W \}. This definition captures the collection of all elements perpendicular to the entire subspace W with respect to the inner product \langle \cdot, \cdot \rangle. The notation W^\perp is standard in linear algebra texts, though in some contexts involving dual spaces, the orthogonal complement relates to the annihilator of W under the identification provided by the inner product.

Example

Consider the plane \mathbb{R}^2 equipped with the standard inner product, which is the dot product. Let W be the one-dimensional subspace spanned by the vector (1,0), corresponding to the x-axis. To compute the orthogonal complement W^\perp algebraically, identify all vectors (x, y) \in \mathbb{R}^2 such that \langle (x, y), (1, 0) \rangle = x \cdot 1 + y \cdot 0 = x = 0. This condition holds for all vectors of the form (0, y), so W^\perp = \operatorname{span}\{(0,1)\}, which is the y-axis.

Geometrically, W^\perp is the line through the origin perpendicular to W; in this case, it is the vertical line along the y-axis, orthogonal to the horizontal x-axis. This example demonstrates how the orthogonal complement partitions the plane into mutually perpendicular directions. In higher dimensions, the orthogonal complement of a one-dimensional subspace like this generalizes to an (n-1)-dimensional hyperplane orthogonal to the original direction.
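The computation can be reproduced numerically by collecting a spanning set of W as the rows of a matrix and taking its null space. The following is a minimal sketch assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import null_space

# Rows span W = span{(1, 0)} in R^2.
W_rows = np.array([[1.0, 0.0]])

# null_space returns an orthonormal basis of {x : W_rows @ x = 0},
# which is exactly W^perp under the standard dot product.
perp_basis = null_space(W_rows)
print(perp_basis)  # a single column proportional to (0, 1): the y-axis
```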

Inner Product Spaces

Properties

In an inner product space V, if W is a subspace, then the orthogonal complement W^\perp satisfies W \cap W^\perp = \{0\}. To see this, suppose x \in W \cap W^\perp; then \langle x, x \rangle = 0, which implies \|x\|^2 = 0 and thus x = 0, using the positive-definiteness of the inner product.

The set W^\perp is itself a subspace of V, closed under addition and scalar multiplication. For vectors x, y \in W^\perp and scalar \alpha, linearity of the inner product gives \langle x + y, w \rangle = \langle x, w \rangle + \langle y, w \rangle = 0 + 0 = 0 for all w \in W, and similarly \langle \alpha x, w \rangle = \alpha \langle x, w \rangle = 0.

For a closed subspace W of a Hilbert space V, the double complement property holds: (W^\perp)^\perp = W. This follows from showing W \subseteq (W^\perp)^\perp (since if w \in W, then \langle w, z \rangle = 0 for all z \in W^\perp) and using the closedness to ensure equality via the orthogonal projection onto W.

In a Hilbert space, every closed subspace W admits an orthogonal decomposition: V = W \oplus W^\perp. For any v \in V, the orthogonal projection P v \in W satisfies v - P v \in W^\perp, and uniqueness arises because if v = w_1 + z_1 = w_2 + z_2 with w_i \in W and z_i \in W^\perp, then w_1 - w_2 = z_2 - z_1 \in W \cap W^\perp = \{0\}.

If \{w_1, \dots, w_k\} is a basis for the subspace W, then W^\perp is the null space of the matrix whose rows are the coordinates of the w_i with respect to an orthonormal basis of V. Equivalently, x \in W^\perp if and only if \langle x, w_i \rangle = 0 for each i = 1, \dots, k, which uses the linearity of the inner product to extend from the basis to the entire span of W.
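In a finite-dimensional setting these properties can be checked numerically. The sketch below, assuming the standard dot product on \mathbb{R}^4 and an arbitrary example basis for W, verifies the double complement property by comparing orthogonal projectors:

```python
import numpy as np
from scipy.linalg import null_space, orth

# Columns of W_basis span an example subspace W of R^4.
W_basis = np.array([[1.0, 2.0, 0.0, 1.0],
                    [0.0, 1.0, 1.0, 3.0]]).T

perp = null_space(W_basis.T)    # orthonormal basis of W^perp (columns)
perp_perp = null_space(perp.T)  # orthonormal basis of (W^perp)^perp

# Two subspaces are equal iff their orthogonal projectors coincide.
Q = orth(W_basis)               # orthonormal basis of W
print(np.allclose(Q @ Q.T, perp_perp @ perp_perp.T))  # True: (W^perp)^perp = W
```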

Finite Dimensions

In finite-dimensional inner product spaces, the orthogonal complement exhibits particularly tractable properties due to the existence of bases and the ability to compute dimensions directly. For an inner product space V of dimension n and a subspace W \subseteq V, the orthogonal complement W^\perp satisfies the dimension theorem: \dim W + \dim W^\perp = n. This result follows from the direct sum decomposition V = W \oplus W^\perp, which always holds in finite dimensions, ensuring that every vector in V can be expressed as the sum of a unique component in W and one in W^\perp.

The dimension of the orthogonal complement also connects to the rank-nullity theorem in matrix terms. If W is the column space of a matrix A \in \mathbb{R}^{n \times k} whose columns form a basis for W, then W^\perp is the null space of A^T, so \dim W^\perp = n - \rank(A). This relation highlights how the "deficiency" in the spanning power of the basis vectors for W directly determines the size of its orthogonal complement.

A key application in finite dimensions is the orthogonal projection onto W. For an orthonormal basis \{u_1, \dots, u_k\} of W, the orthogonal projection of a vector v \in V onto W is given by \proj_W v = \sum_{i=1}^k \langle v, u_i \rangle u_i. This formula provides the unique vector in W closest to v in the inner product norm, with the error v - \proj_W v lying in W^\perp. It is computationally efficient when an orthonormal basis is available, often obtained via the Gram-Schmidt process.

To illustrate, consider \mathbb{R}^3 with the standard dot product and the subspace W spanned by \{(1,0,0), (0,1,0)\}, which is the xy-plane. The orthogonal complement W^\perp consists of the vectors (0,0,z) for z \in \mathbb{R}, forming the z-axis, a line perpendicular to the plane. Here, \dim W = 2 and \dim W^\perp = 1, verifying the dimension theorem.

The codimension of W in V, defined as \codim W = n - \dim W, equals \dim W^\perp, offering an interpretation of the orthogonal complement as measuring the "dimension deficiency" of W relative to the full space. This perspective is useful in applications like solving systems of linear equations, where W^\perp captures the solution space of homogeneous constraints.
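The projection formula translates directly into code. Below is a minimal sketch for the xy-plane example, assuming the standard dot product on \mathbb{R}^3; it also confirms that the error vector lies in W^\perp:

```python
import numpy as np

# Rows form an orthonormal basis of W = the xy-plane in R^3.
U = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

v = np.array([3.0, -2.0, 5.0])

# proj_W v = sum_i <v, u_i> u_i
proj = sum(np.dot(v, u) * u for u in U)
err = v - proj  # error vector, expected to lie in W^perp (the z-axis)

print(proj)                                   # [ 3. -2.  0.]
print(np.dot(err, U[0]), np.dot(err, U[1]))   # 0.0 0.0: err is orthogonal to W
```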

Generalizations

Bilinear Forms

In the context of a vector space V equipped with a bilinear form B: V \times V \to \mathbb{F}, where \mathbb{F} is a field, the orthogonal complement of a subspace W \subseteq V is defined as W^\perp_B = \{ v \in V \mid B(v, w) = 0 \ \forall w \in W \}. This generalizes the standard notion from inner product spaces, where B is a symmetric positive-definite form, to arbitrary bilinear forms that may not possess such properties. The set W^\perp_B is always a subspace of V.

The radical of the form B, denoted \mathrm{rad}(B) = V^\perp_B, consists of all vectors in V orthogonal to the entire space and measures the degeneracy of B; specifically, B is non-degenerate if and only if \mathrm{rad}(B) = \{0\}. In finite dimensions, for any subspace W, the dimensions satisfy \dim W + \dim W^\perp_B \geq \dim V, with equality holding whenever B is non-degenerate; for degenerate forms the inequality can be strict.

When B is alternating (hence skew-symmetric), as in symplectic forms, or symmetric, as in quadratic forms, the orthogonal complement plays a key role in identifying isotropic subspaces, which are subspaces W satisfying W \subseteq W^\perp_B. For non-degenerate alternating forms on even-dimensional spaces, maximal isotropic subspaces have dimension equal to half the dimension of V.

Consider the plane \mathbb{R}^2 with the bilinear form B((x_1, y_1), (x_2, y_2)) = x_1 y_2 - y_1 x_2, which is the standard symplectic (alternating) form given by the determinant. This form is non-degenerate, as its Gram matrix \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} is invertible. For W = \mathrm{span}\{(1, 0)\}, the orthogonal complement is W^\perp_B = \{ (a, b) \in \mathbb{R}^2 \mid B((a, b), (1, 0)) = -b = 0 \} = \mathrm{span}\{(1, 0)\} = W, illustrating that W is isotropic since B vanishes on W \times W.
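In coordinates, writing B(v, w) = v^\top G w for a Gram matrix G, a vector v lies in W^\perp_B exactly when v^\top G w_i = 0 for each basis vector w_i of W, so W^\perp_B is the null space of (G M)^\top where the columns of M form a basis of W. A minimal sketch for the symplectic example, assuming NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import null_space

# Gram matrix of the standard symplectic form B(v, w) = v^T G w on R^2.
G = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
M = np.array([[1.0],
              [0.0]])  # column spans W = span{(1, 0)}

# W^perp_B = {v : v^T G w = 0 for all w in W} = null((G @ M)^T)
perp_B = null_space((G @ M).T)
print(perp_B)  # proportional to (1, 0): W^perp_B = W, so W is isotropic
```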

Banach Spaces

In normed linear spaces, the orthogonal complement generalizes the inner product notion through duality with the continuous dual space X^*. For a subspace W of a normed space X, the annihilator W^0 is the closed subspace of X^* consisting of all continuous linear functionals f \in X^* such that f(w) = 0 for every w \in W. The orthogonal complement is then defined as the preannihilator of this annihilator: W^\perp = \{ v \in X \mid \langle f, v \rangle = 0 \ \forall \, f \in W^0 \}. This set W^\perp is always a closed subspace of X.

The Hahn-Banach theorem plays a central role in characterizing this construction. It ensures the existence of non-zero continuous linear functionals that separate a given point from a proper closed subspace, implying that the preannihilator of the annihilator precisely recovers the closure: W^\perp = \overline{W}, the closure of W in the norm topology of X. If W is already closed, then W^\perp = W, which is thus closed. This topological property highlights the emphasis on continuity and completeness in Banach spaces, where X is complete.

In relation to the weak topology on X, induced by the seminorms | \langle f, \cdot \rangle | for f \in X^*, the orthogonal complement W^\perp coincides with the weak closure of W, since weak and norm closures agree for convex sets. It can also be described through the inclusion map i: W \hookrightarrow X: the adjoint i^*: X^* \to W^* has kernel W^0, and W^\perp is the set of elements of X annihilated by every functional in that kernel, aligning with weak continuity properties of bounded operators.

A concrete example arises in the Banach space \ell^\infty of bounded real sequences with the supremum norm, where the subspace c_0 consists of sequences converging to zero. Since c_0 is closed in \ell^\infty, its annihilator c_0^0 in (\ell^\infty)^* leads to the orthogonal complement c_0^\perp = c_0. However, unlike in Hilbert spaces, this does not yield a decomposition \ell^\infty = c_0 \oplus c_0^\perp with a bounded projection onto c_0; in fact, by a theorem of Phillips, c_0 admits no closed topological complement in \ell^\infty.

In contrast to Hilbert spaces, where the Riesz representation theorem identifies X^* with X via the inner product and guarantees an orthogonal decomposition X = W \oplus W^\perp for closed subspaces W, general Banach spaces lack such a decomposition. Thus, while W^\perp provides a topological description of the closure, it does not supply a complementary subspace that is "orthogonal" in a geometric sense, underscoring the absence of guaranteed orthogonal projections without an inner product structure.
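The infinite-dimensional closure phenomena above have no finite analogue, but the annihilator and preannihilator constructions themselves can be illustrated in finite dimensions, where every subspace is closed. A minimal sketch in \mathbb{R}^3, representing functionals as row vectors, with W = \mathrm{span}\{(1,1,0)\} as an assumed example:

```python
import numpy as np
from scipy.linalg import null_space

# Column spans W = span{(1, 1, 0)} in R^3.
W_cols = np.array([[1.0], [1.0], [0.0]])

# Annihilator W^0: row-vector functionals f with f @ w = 0 for all w in W.
ann = null_space(W_cols.T).T   # rows: a basis of W^0

# Preannihilator of W^0: vectors killed by every functional in W^0.
pre = null_space(ann)
print(pre)  # one column, proportional to (1, 1, 0)/sqrt(2): recovers W
```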

Applications

Linear Algebra

In finite-dimensional linear algebra, orthogonal complements play a key role in solving systems of linear equations Ax = b, where A is an m \times n matrix. By the fundamental theorem of linear algebra, the orthogonal complement of the column space of A in \mathbb{R}^m is the null space of A^T, consisting of all vectors y such that A^T y = 0. This relationship ensures that the system Ax = b is consistent if and only if b is orthogonal to the null space of A^T, meaning no vector in \operatorname{Null}(A^T) has a nonzero inner product with b. Similarly, the orthogonal complement of the row space of A in \mathbb{R}^n is the null space of A, which describes the solution set as a particular solution plus homogeneous solutions orthogonal to the rows of A.

The Gram-Schmidt process utilizes orthogonal complements to construct orthonormal bases from a linearly independent set of vectors in an inner product space, such as \mathbb{R}^n. Given vectors \{v_1, v_2, \dots, v_k\}, the process iteratively projects each v_i onto the span of the previous orthogonalized vectors and subtracts the projection, effectively placing the result in the orthogonal complement of the previous span. For instance, the second vector becomes v_2^\perp = v_2 - \frac{v_1 \cdot v_2}{v_1 \cdot v_1} v_1, which is orthogonal to v_1. Normalizing these yields an orthonormal set, enabling efficient computations in algorithms reliant on orthogonality.

Orthogonal complements are central to the least squares method for approximating solutions to overdetermined systems Ax = b, where no exact solution exists. The goal is to minimize \|Ax - b\|^2 by finding the projection of b onto the column space of A, denoted \operatorname{proj}_{\operatorname{Col}(A)} b = A \hat{x}, where \hat{x} = (A^T A)^{-1} A^T b assuming A has full column rank. The error vector b - \operatorname{proj}_{\operatorname{Col}(A)} b then lies in the orthogonal complement of \operatorname{Col}(A), which is \operatorname{Null}(A^T), ensuring the residual is perpendicular to every column of A.

This projection property extends to the QR decomposition, where a matrix A \in \mathbb{R}^{m \times n} with full column rank is factored as A = QR, with Q having orthonormal columns spanning \operatorname{Col}(A) and R upper triangular. In the full decomposition A = [Q_1 \, Q_2] \begin{bmatrix} R_1 \\ 0 \end{bmatrix}, the columns of Q_2 form an orthonormal basis for the orthogonal complement \operatorname{Col}(A)^\perp = \operatorname{Null}(A^T), providing a complete decomposition of \mathbb{R}^m into orthogonal subspaces. This factorization aids in least squares computations by solving R_1 x = Q_1^T b.
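A small numerical example ties these facts together. The sketch below, assuming an arbitrary 4 \times 2 matrix A with full column rank, solves the least squares problem from the full QR decomposition and verifies that the residual lies in \operatorname{Null}(A^T):

```python
import numpy as np

# Example overdetermined system (assumed data): fit a line to four points.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0, 4.0])

# Full QR: Q[:, :2] spans Col(A); Q[:, 2:] spans Col(A)^perp = Null(A^T).
Q, R = np.linalg.qr(A, mode="complete")
x_hat = np.linalg.solve(R[:2, :2], Q[:, :2].T @ b)  # solves R1 x = Q1^T b

residual = b - A @ x_hat
print(np.allclose(A.T @ residual, 0.0))  # True: residual is in Null(A^T)
```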

Functional Analysis

In functional analysis, the orthogonal complement plays a central role in Hilbert spaces, where every closed subspace M admits an orthogonal projection P_M: H \to H onto M, defined by P_M x = y where y \in M minimizes \|x - y\|_H. This projection is a bounded linear operator with \|P_M\| \leq 1, self-adjoint (P_M^* = P_M), and idempotent (P_M^2 = P_M), satisfying H = M \oplus M^\perp with M^\perp = \ker P_M. The Riesz representation theorem further connects orthogonal complements to duality: for a Hilbert space H, the dual H^* is isometrically isomorphic to H via \ell \in H^* \mapsto z \in H where \ell(x) = \langle x, z \rangle_H, and the kernel of \ell is the orthogonal complement of the span of z.

The spectral theorem for compact self-adjoint operators on Hilbert spaces decomposes the space into orthogonal eigenspaces: for a compact self-adjoint operator T: H \to H, H is the closure \overline{\bigoplus_{\lambda \in \sigma(T)} \ker(T - \lambda I)}, where the eigenspaces \ker(T - \lambda I) are pairwise orthogonal, closed, and finite-dimensional (except possibly for \lambda = 0). For non-compact self-adjoint operators, the decomposition is more general, involving both point and continuous spectrum; the orthogonal complement of the closure of the span of the eigenvectors corresponds to the subspace associated with the continuous spectrum. This decomposition relies on the completeness of H to ensure the direct sum is a Hilbert space, enabling the representation T x = \sum_{\lambda} \lambda \langle x, e_\lambda \rangle_H e_\lambda for an orthonormal basis of eigenvectors \{e_\lambda\} in the compact self-adjoint case.

In Sobolev spaces, which are Hilbert spaces of functions with square-integrable weak derivatives, orthogonal complements appear in weak formulations of partial differential equations (PDEs). For the Dirichlet problem -\Delta u = f on a domain \Omega with u = 0 on \partial \Omega, the weak formulation seeks u \in H^1_0(\Omega) such that \int_\Omega \nabla u \cdot \nabla v \, dx = \int_\Omega f v \, dx for all test functions v \in H^1_0(\Omega), where H^1_0(\Omega) is the closure of the compactly supported smooth functions in the H^1 norm. The variational problem is well-posed via the Lax-Milgram theorem, as the bilinear form is continuous and coercive on H^1_0(\Omega) with respect to its inner product.

For Fredholm operators T: H \to H on Hilbert spaces, which are bounded operators with finite-dimensional kernel and cokernel and closed range, the index \operatorname{ind}(T) = \dim \ker T - \dim \coker T is invariant under compact perturbations. The cokernel identifies with the orthogonal complement of the range, \coker T \cong (\operatorname{ran} T)^\perp \cong \ker T^*, via the Riesz representation theorem, since H is self-dual. This connection via orthogonal complements in Hilbert spaces underpins index theory, as in the Atiyah-Singer index theorem for elliptic operators on manifolds.

A concrete example arises in the Hilbert space L^2[0,1] with inner product \langle f, g \rangle = \int_0^1 f(t) \overline{g(t)} \, dt: the subspace of constant functions, spanned by the constant function 1, has orthogonal complement consisting of the mean-zero functions \{f \in L^2[0,1] : \int_0^1 f(t) \, dt = 0\}, since \langle f, 1 \rangle = 0 precisely when the integral vanishes. This decomposition L^2[0,1] = \mathbb{C} \cdot 1 \oplus (\mathbb{C} \cdot 1)^\perp exhibits the orthogonal projection onto the constants as averaging, a self-adjoint projection of norm 1.
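The mean-zero decomposition can be approximated by discretizing [0,1]. The sketch below, assuming a uniform grid and the sample average as a stand-in for the L^2 inner product with 1, projects an example function onto the constants and checks that the remainder is numerically mean-zero:

```python
import numpy as np

# Uniform grid on [0, 1]; the sample mean approximates <f, 1>.
t = np.linspace(0.0, 1.0, 1001)
f = np.sin(2 * np.pi * t) + 3.0  # example function (assumed)

proj_const = np.mean(f)  # projection of f onto the constants
g = f - proj_const       # component in the orthogonal complement

print(proj_const)              # approximately 3.0
print(abs(np.mean(g)) < 1e-9)  # True: g is (numerically) mean-zero
```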
