
Linear subspace

In linear algebra, a linear subspace (or simply subspace) of a vector space V over a field F is a nonempty subset W \subseteq V that contains the zero vector and is closed under the operations of vector addition and scalar multiplication defined on V, thereby inheriting the structure of a vector space itself. This means that for any vectors \mathbf{u}, \mathbf{v} \in W and any scalar c \in F, both \mathbf{u} + \mathbf{v} and c\mathbf{u} must also belong to W. The trivial subspaces are the zero subspace \{\mathbf{0}\} and the entire space V itself.

Subspaces play a central role in understanding the structure and geometry of vector spaces, as they represent the "flats" that pass through the origin and are preserved under linear transformations. For example, in the Euclidean space \mathbb{R}^n, a one-dimensional subspace is a line through the origin, while a two-dimensional subspace is a plane through the origin; these are precisely the spans of sets of one or two linearly independent vectors. More abstractly, every subspace can be expressed as the span of some set of vectors from V, and the dimension of a subspace is the number of vectors in any of its bases, a quantity independent of the particular basis chosen.

The concept of subspaces is foundational to many applications of linear algebra, including the solution of linear systems, where the solution set of a non-homogeneous system forms an affine subspace (a translate of a linear subspace), and in matrix theory, where key subspaces such as the column space (the image of the associated linear map) and the null space (its kernel) determine the rank and solvability of equations. Gilbert Strang emphasizes that virtually all algorithms and applications in linear algebra, from Gaussian elimination to eigenvalue decompositions, operate by projecting onto or analyzing these subspaces, making them indispensable for computational and theoretical work. Subspaces also underpin more advanced topics, such as quotient spaces and direct sums, which decompose spaces into simpler components for solving complex problems in fields such as physics, engineering, and computer science.

Fundamentals

Definition

In the context of linear algebra, a vector space W over a field F is a set equipped with operations of vector addition and scalar multiplication that satisfy the standard vector space axioms, including associativity, commutativity, existence of a zero vector, and distributivity. A linear subspace of W is a subset V \subseteq W that inherits these operations and itself forms a vector space over F. Formally, V is a linear subspace if it satisfies three conditions: (1) V contains the zero vector of W; (2) V is closed under vector addition, meaning that if \mathbf{u}, \mathbf{v} \in V, then \mathbf{u} + \mathbf{v} \in V; and (3) V is closed under scalar multiplication, meaning that if \mathbf{u} \in V and c \in F, then c\mathbf{u} \in V. These conditions ensure that V preserves the vector space structure of W. An equivalent characterization is that V is a nonempty subset of W closed under arbitrary linear combinations: if \mathbf{v}_1, \dots, \mathbf{v}_k \in V and c_1, \dots, c_k \in F, then c_1 \mathbf{v}_1 + \dots + c_k \mathbf{v}_k \in V. This follows directly from the closure properties under addition and scalar multiplication, as linear combinations can be built iteratively from these operations.
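The three conditions translate into a mechanical test. As a minimal numerical sketch (not a proof), assuming NumPy is available, the following spot-checks the conditions for the candidate subspace \{(x, 0) \mid x \in \mathbb{R}\} \subseteq \mathbb{R}^2; the helper in_W and the sampled vectors are illustrative choices, not part of the formal definition:

```python
# Spot-check of the three subspace conditions for W = {(x, 0)} in R^2.
import numpy as np

def in_W(v, tol=1e-12):
    # Hypothetical membership test: second coordinate must be (numerically) zero.
    return abs(v[1]) < tol

rng = np.random.default_rng(0)
u = np.array([rng.standard_normal(), 0.0])  # a sample vector in W
v = np.array([rng.standard_normal(), 0.0])  # another sample vector in W
c = rng.standard_normal()                   # a sample scalar

assert in_W(np.zeros(2))  # (1) W contains the zero vector
assert in_W(u + v)        # (2) W is closed under addition
assert in_W(c * u)        # (3) W is closed under scalar multiplication
```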

Examples

A prominent example of a linear subspace is the x-axis in the vector space \mathbb{R}^2, consisting of all vectors of the form \{(x, 0) \mid x \in \mathbb{R}\}. This set satisfies the subspace axioms: it contains the zero vector (0,0); it is closed under addition, as (x,0) + (y,0) = (x+y, 0); and it is closed under scalar multiplication, as c(x,0) = (cx, 0) for any scalar c \in \mathbb{R}. In \mathbb{R}^3, the xy-plane forms a linear subspace, defined by \{(x, y, 0) \mid x, y \in \mathbb{R}\}. It includes the zero vector (0,0,0); addition preserves the form, since (x_1, y_1, 0) + (x_2, y_2, 0) = (x_1 + x_2, y_1 + y_2, 0); and scalar multiplication yields c(x, y, 0) = (cx, cy, 0), maintaining membership in the set.

Consider the vector space of all polynomials of degree at most n over \mathbb{R}, denoted P_n(\mathbb{R}). The subset of even polynomials, where p(-x) = p(x) for all x, forms a linear subspace. This set contains the zero polynomial; the sum of two even polynomials is even, as (p + q)(-x) = p(-x) + q(-x) = p(x) + q(x) = (p + q)(x); and scalar multiples preserve evenness, since (cp)(-x) = c p(-x) = c p(x) = (cp)(x). In the vector space of continuous functions C(\mathbb{R}, \mathbb{R}), the set of functions f such that \int_{-1}^{1} f(x) \, dx = 0 is a linear subspace. It includes the zero function, whose integral is zero; the integral of a sum is the sum of integrals, so if both integrals are zero, their sum's integral is zero; and scaling by c multiplies the integral by c, preserving zero.

Every vector space W has two trivial linear subspaces: the zero subspace \{0\}, which satisfies the axioms trivially, as it contains only the zero vector and is closed under the operations; and W itself, which is a subspace by definition.

A non-example is the unit circle in \mathbb{R}^2, given by \{(x, y) \in \mathbb{R}^2 \mid x^2 + y^2 = 1\}. This set fails to be a subspace because it is not closed under addition: for instance, (1,0) and (-1,0) are on the circle, but their sum (0,0) is not, as 0^2 + 0^2 = 0 \neq 1. It also excludes the zero vector.
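The unit-circle non-example admits the same kind of numerical spot-check. A short sketch, again assuming NumPy, confirms that (1,0) and (-1,0) lie on the circle while their sum does not; the helper on_circle is introduced only for illustration:

```python
# The unit circle fails closure under addition, so it is not a subspace.
import numpy as np

def on_circle(v, tol=1e-12):
    # Hypothetical membership test for {(x, y) : x^2 + y^2 = 1}.
    return abs(v @ v - 1.0) < tol

p = np.array([1.0, 0.0])
q = np.array([-1.0, 0.0])
print(on_circle(p), on_circle(q))  # True True: both points lie on the circle
print(on_circle(p + q))            # False: (0, 0) is not on the circle
```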

Properties

Basic Properties

A linear subspace V of a vector space W is closed under linear combinations, meaning that for any finite collection of vectors \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k in V and scalars c_1, c_2, \dots, c_k from the underlying field, the combination c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \dots + c_k \mathbf{v}_k remains in V. This follows directly from the subspace axioms of closure under addition and scalar multiplication, as repeated applications yield any finite linear combination. The relation of inclusion among subspaces is transitive: if U is a subspace of V and V is a subspace of W, then U is a subspace of W. To see this, note that since U satisfies the subspace axioms within V, and V does so within W, the operations in U inherit the required closures from W. Additionally, the intersection of any collection of subspaces of W is itself a subspace of W. For instance, if U and V are subspaces, their intersection contains the zero vector (as both do), and is closed under addition and scalar multiplication: if \mathbf{u}, \mathbf{v} are in U \cap V, then \mathbf{u} + \mathbf{v} and c \mathbf{u} lie in both U and V, hence in the intersection. Subspaces are invariant under translation only by the zero vector, as they must contain the origin: for any \mathbf{v} in a subspace V, setting the scalar to zero in the closure axiom yields 0 \cdot \mathbf{v} = \mathbf{0} \in V. Consequently, a translated subspace (e.g., V + \mathbf{b} for \mathbf{b} \neq \mathbf{0}) fails the subspace test unless \mathbf{b} = \mathbf{0}, distinguishing linear subspaces from affine subspaces, which do not necessarily pass through the origin. These properties hold in both finite- and infinite-dimensional settings, though examples often assume finite dimensionality for concreteness, such as lines or planes through the origin in \mathbb{R}^n.

Basis, Independence, and Dimension

A set of vectors \{ \mathbf{v}_1, \dots, \mathbf{v}_k \} in a vector space V is said to be linearly independent if the only solution to the equation a_1 \mathbf{v}_1 + \dots + a_k \mathbf{v}_k = \mathbf{0} is a_1 = \dots = a_k = 0. Equivalently, no vector in the set can be expressed as a linear combination of the others. A basis for a vector space V is a set of vectors that is both linearly independent and spans V, meaning every vector in V can be uniquely expressed as a finite linear combination of the basis vectors. This uniqueness ensures that the representation of any vector with respect to the basis is well-defined. The dimension of a vector space V, denoted \dim(V), is the number of vectors in any basis for V. This value is well-defined because all bases of V have the same size, a result established by showing that if two bases had different sizes, say m < n, then the smaller basis could not span the space generated by the larger one, leading to a contradiction via linear dependence arguments. In a finite-dimensional vector space, any linearly independent set of vectors can be extended to a basis by adding additional vectors from the space until the set spans V. This extension guarantees the existence of bases and facilitates the construction of coordinates in the space. If V is a subspace of a finite-dimensional vector space W, then \dim(V) \leq \dim(W), with equality if and only if V = W. This follows from the fact that any basis for V can be extended to a basis for W.
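Since all bases of a subspace have the same size, the dimension of a span can be computed in practice as a matrix rank. A small sketch, assuming NumPy: the third vector below is the sum of the first two, so the span has dimension 2 rather than 3:

```python
# dim Span{v1, v2, v3} computed as the rank of the matrix with these columns.
import numpy as np

v1, v2, v3 = (1, 0, 0), (0, 1, 0), (1, 1, 0)   # v3 = v1 + v2: dependent set
A = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))                 # 2: any basis has two vectors
```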

Characterizations

Systems of Linear Equations

A homogeneous system of linear equations is given by Ax = 0, where A is an m \times n matrix over a field F, and x \in F^n. The solution set \{ x \in F^n \mid Ax = 0 \} forms a subspace of F^n. This set satisfies the subspace axioms: the zero vector x = 0 is always a solution, as A \cdot 0 = 0; if x_1 and x_2 are solutions, then A(x_1 + x_2) = Ax_1 + Ax_2 = 0 + 0 = 0, ensuring closure under addition; and for any scalar c \in F, A(cx) = c(Ax) = c \cdot 0 = 0, ensuring closure under scalar multiplication. The solution set is precisely the kernel (or null space) of the linear transformation defined by A. In contrast, the solution set of a non-homogeneous system Ax = b with b \neq 0 is an affine subspace (a translate of the kernel), which is not a linear subspace unless b = 0, as it does not contain the zero vector. For example, the system x + y = 0 in \mathbb{R}^2 has solution set \{ (x, -x) \mid x \in \mathbb{R} \}, which is a line through the origin and thus a subspace of \mathbb{R}^2.
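A hedged computational sketch of this characterization, assuming NumPy and SciPy are available: scipy.linalg.null_space returns an orthonormal basis for the kernel, here for the single equation x + y = 0 in \mathbb{R}^2:

```python
# The solution set of Ax = 0 is the kernel of A, computed here numerically.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 1.0]])    # encodes the single equation x + y = 0
K = null_space(A)             # columns form an orthonormal basis of ker(A)
print(K)                      # proportional to (1, -1)/sqrt(2): the line y = -x
print(np.allclose(A @ K, 0))  # True: every basis vector solves Ax = 0
```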

Null Space and Span of Vectors

In linear algebra, the null space, also known as the kernel, of a matrix A is defined as the set N(A) = \{ \mathbf{x} \mid A\mathbf{x} = \mathbf{0} \}, where \mathbf{x} is a vector in the domain space. This set consists of all vectors that are mapped to the zero vector under the linear transformation represented by A, and it forms a subspace of the domain. The span of a set of vectors S, denoted \operatorname{Span}(S), is the set of all possible linear combinations of the vectors in S. It represents the smallest subspace containing S, as it is closed under vector addition and scalar multiplication and includes the zero vector. Every subspace V of a vector space can be expressed both as the span of some set of vectors and as the null space of some linear transformation. Specifically, V = \operatorname{Span}(B) where B is a basis for V, and V = N(A) for an appropriately chosen matrix A. This duality highlights that subspaces arise naturally from both generative constructions (via spans) and solution sets of homogeneous equations (via null spaces). A key property relating span to matrix rank is that the dimension of \operatorname{Span}(S) equals the rank of the matrix whose columns are the vectors in S. This dimension, often called the rank of S, measures the linear independence within S and determines the subspace's size. For example, in \mathbb{R}^3, the span of the standard basis vectors \mathbf{e}_1 = (1, 0, 0) and \mathbf{e}_2 = (0, 1, 0) is the xy-plane, consisting of all vectors of the form (a, b, 0) for scalars a, b \in \mathbb{R}. This subspace has dimension 2, matching the rank of the matrix with these columns.
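The xy-plane example can be checked numerically. In this sketch, assuming NumPy, the rank of the matrix with \mathbf{e}_1 and \mathbf{e}_2 as columns gives the dimension of the span, and appending a vector of the form (a, b, 0) leaves the rank unchanged, confirming membership:

```python
# dim Span{e1, e2} via matrix rank, plus a rank-based membership check.
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
M = np.column_stack([e1, e2])
print(np.linalg.matrix_rank(M))                        # 2: the span is the xy-plane
w = np.array([3.0, -1.0, 0.0])                         # a vector of the form (a, b, 0)
print(np.linalg.matrix_rank(np.column_stack([M, w])))  # still 2: w lies in the span
```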

Column Space and Row Space

The column space of an m \times n matrix A over a field F, denoted \operatorname{Col}(A), is the span of the columns of A, forming a subspace of F^m. This space consists of all linear combinations of the column vectors, representing the image of the linear transformation defined by A. The row space of A, denoted \operatorname{Row}(A), is the span of the rows of A, which is a subspace of F^n. Equivalently, \operatorname{Row}(A) = \operatorname{Col}(A^T), where A^T is the transpose of A, since the rows of A are the columns of A^T. A key relation between these spaces and the null space is given by the rank-nullity theorem: for an m \times n matrix A, \dim(\operatorname{Col}(A)) + \dim(\operatorname{N}(A)) = n, where \operatorname{N}(A) is the null space of A and \dim(\operatorname{Col}(A)) is the rank of A. In the context of an inner product space, such as \mathbb{R}^n with the standard dot product, the row space of A is orthogonal to the null space of A; that is, every vector in \operatorname{Row}(A) is perpendicular to every vector in \operatorname{N}(A). This orthogonality arises because if \mathbf{x} \in \operatorname{N}(A), then A\mathbf{x} = \mathbf{0}, implying that each row of A dotted with \mathbf{x} is zero. For example, consider the 2 \times 2 matrix A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} over \mathbb{R}. Here, \operatorname{Col}(A) = \mathbb{R}^2 and \operatorname{Row}(A) = \mathbb{R}^2, both full-dimensional subspaces.
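Both the rank-nullity theorem and the orthogonality between row space and null space can be spot-checked numerically. A sketch assuming NumPy and SciPy, using an arbitrary 2 \times 3 matrix:

```python
# Numerical check of rank-nullity and of Row(A) being orthogonal to N(A).
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
r = np.linalg.matrix_rank(A)           # dim Col(A) = dim Row(A)
N = null_space(A)                      # orthonormal basis of N(A), as columns
print(r + N.shape[1] == A.shape[1])    # True: rank + nullity = n
print(np.allclose(A @ N, 0))           # True: every row of A is orthogonal to N(A)
```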

Operations and Relations

Inclusion and Intersection

In linear algebra, one subspace V is included in another subspace U of a vector space if every vector belonging to V also belongs to U, denoted V \subseteq U. This relation implies that the dimension of V satisfies \dim V \leq \dim U, as any basis for V can be extended to a basis for U. The intersection of two subspaces V and U, defined as the set V \cap U = \{\mathbf{x} \mid \mathbf{x} \in V \text{ and } \mathbf{x} \in U\}, consists of all vectors common to both and forms a subspace of the ambient space. To verify this, note that the zero vector belongs to both V and U, hence to their intersection; if \mathbf{x}, \mathbf{y} \in V \cap U, then \mathbf{x} + \mathbf{y} \in V and \mathbf{x} + \mathbf{y} \in U, so \mathbf{x} + \mathbf{y} \in V \cap U; and for any scalar c and \mathbf{x} \in V \cap U, c\mathbf{x} \in V and c\mathbf{x} \in U, so c\mathbf{x} \in V \cap U. For finite-dimensional subspaces, the dimensions of the intersection and the sum (the subspace generated by vectors from both) are related by \dim(V \cap U) + \dim(V + U) = \dim V + \dim U, known as Grassmann's formula, which quantifies the overlap between V and U. A standard example in \mathbb{R}^3 is the intersection of the xy-plane, spanned by \{(1,0,0), (0,1,0)\}, and the xz-plane, spanned by \{(1,0,0), (0,0,1)\}, which yields the x-axis, spanned by \{(1,0,0)\} and of dimension 1. For a chain of inclusions V \subseteq U \subseteq W among finite-dimensional subspaces, the dimension of the quotient space U/V (cosets of V in U) satisfies \dim(U/V) + \dim V = \dim U, providing a measure of how U extends V; details of quotient spaces are deferred.
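Grassmann's formula can be verified on the plane example above. In this sketch, assuming NumPy, each dimension is computed as a matrix rank, and the dimension of the intersection is recovered from the formula:

```python
# Spot-check of Grassmann's formula for the xy- and xz-planes in R^3.
import numpy as np

V = np.column_stack([(1, 0, 0), (0, 1, 0)])  # basis of the xy-plane
U = np.column_stack([(1, 0, 0), (0, 0, 1)])  # basis of the xz-plane
dim_V = np.linalg.matrix_rank(V)
dim_U = np.linalg.matrix_rank(U)
dim_sum = np.linalg.matrix_rank(np.hstack([V, U]))  # dim(V + U)
dim_cap = dim_V + dim_U - dim_sum                   # Grassmann's formula
print(dim_V, dim_U, dim_sum, dim_cap)               # 2 2 3 1: intersection is the x-axis
```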

Sum of Subspaces

The sum of two subspaces U and V of a vector space W over a field F is the set U + V = \{ u + v \mid u \in U, \, v \in V \}. This construction yields the smallest subspace of W that contains both U and V as subsets, and it satisfies the subspace axioms: it is nonempty (containing the zero vector), closed under vector addition, and closed under scalar multiplication by elements of F. Equivalently, U + V is the span of the union of the generating sets of U and V, i.e., U + V = \operatorname{Span}(U \cup V). A special case is the direct sum, denoted U \oplus V, which occurs when U \cap V = \{ 0 \}. In this situation, every vector in U + V admits a unique decomposition as a sum of an element from U and an element from V, with no overlap beyond the zero vector. For finite-dimensional subspaces, the dimension formula simplifies to \dim(U \oplus V) = \dim U + \dim V; in general, \dim(U + V) = \dim U + \dim V - \dim(U \cap V). This property highlights how the direct sum decomposes the larger space without redundancy. Consider \mathbb{R}^2 as the ambient space, with U the x-axis subspace \{ (x, 0) \mid x \in \mathbb{R} \} and V the y-axis subspace \{ (0, y) \mid y \in \mathbb{R} \}. Their sum is U + V = \mathbb{R}^2, a direct sum since U \cap V = \{ (0, 0) \}, and \dim(U \oplus V) = 1 + 1 = 2. In contrast, if U and V coincide (e.g., both the x-axis), then U + V = U, with the nontrivial intersection leading to non-uniqueness in decompositions. The sum operation is specific to linear subspaces, which must contain the zero vector and be closed under the operations. For affine subspaces, such as two lines in \mathbb{R}^2 not passing through the origin, the analogous "Minkowski sum" generates an affine subspace (a translate of a linear subspace), which is not a linear subspace unless both lines pass through the origin.
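Uniqueness of the decomposition in a direct sum reduces to solving a linear system with the combined bases as columns. A minimal sketch, assuming NumPy, for the x-axis/y-axis example:

```python
# Unique decomposition w = u + v for the direct sum of the x- and y-axes in R^2.
import numpy as np

U = np.array([[1.0], [0.0]])    # basis of the x-axis
V = np.array([[0.0], [1.0]])    # basis of the y-axis
B = np.hstack([U, V])           # columns of B span U + V = R^2
w = np.array([3.0, -2.0])
coeffs = np.linalg.solve(B, w)  # unique solution because U and V meet only at 0
print(coeffs)                   # [ 3. -2.]: w = 3*(1,0) + (-2)*(0,1)
```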

Lattice of Subspaces

The collection of all subspaces of a finite-dimensional vector space V over a field F, ordered by the inclusion relation \subseteq, forms a partially ordered set (poset). In this poset, the order reflects the hierarchical structure of subspaces, where smaller subspaces are contained within larger ones, and incomparable elements exist, such as two distinct lines through the origin in \mathbb{R}^3. This poset is a lattice, with the meet operation defined as the intersection of subspaces and the join as the sum of subspaces; every pair of subspaces possesses both a unique meet and a unique join, ensuring the lattice is complete for finite collections. The intersection serves as the greatest lower bound under inclusion, while the sum acts as the least upper bound, generating the smallest subspace containing both. Unlike distributive lattices, the subspace lattice is generally non-distributive for dimensions greater than or equal to 2, as demonstrated by the diamond sublattice M_3 formed by three distinct lines through the origin in a plane.

The subspace lattice is modular, satisfying the modular law: for subspaces with V \subseteq W, the identity V + (U \cap W) = (V + U) \cap W holds. Modularity is consistent with the additivity of dimension expressed by Grassmann's formula \dim(V + U) + \dim(V \cap U) = \dim V + \dim U, and it distinguishes modular lattices from more general ones, ensuring compatibility with geometric intuitions in higher dimensions. For the finite-dimensional case over a finite field \mathbb{F}_q, the full lattice of subspaces of \mathbb{F}_q^n is particularly structured; the number of k-dimensional subspaces, known as the points of the Grassmannian \mathrm{Gr}(k, n), is given by the Gaussian binomial coefficient \dbinom{n}{k}_q = \prod_{i=0}^{k-1} \frac{q^{n-i} - 1}{q^{k-i} - 1}. This enumerative result underpins combinatorial aspects of the lattice, highlighting its finite yet richly interconnected nature. Applications of the subspace lattice appear in coding theory, where linear subspace codes form sublattices closed under intersection and sum, enabling bounds on code size and error correction in network coding scenarios. In algebraic geometry, the lattice relates to Grassmannians, which parameterize k-dimensional subspaces and originated in the 19th-century work of Hermann Grassmann on extension theory, influencing modern geometry.
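The Gaussian binomial coefficient is straightforward to evaluate exactly. A sketch in pure Python (the function name is an illustrative choice), using rational arithmetic so the result is an exact integer:

```python
# Count of k-dimensional subspaces of F_q^n via the Gaussian binomial coefficient.
from fractions import Fraction

def gaussian_binomial(n, k, q):
    # Product of (q^(n-i) - 1)/(q^(k-i) - 1) for i = 0, ..., k-1.
    result = Fraction(1)
    for i in range(k):
        result *= Fraction(q**(n - i) - 1, q**(k - i) - 1)
    return int(result)  # the product is always an integer

print(gaussian_binomial(2, 1, 2))  # 3: lines through the origin in F_2^2
print(gaussian_binomial(4, 2, 3))  # 130: planes through the origin in F_3^4
```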

Orthogonal Complements

In an inner product space W, the inner product \langle \cdot, \cdot \rangle is a positive definite symmetric bilinear form on W (Hermitian in the complex case). For a subspace V \subseteq W, the orthogonal complement of V, denoted V^\perp, is the set of all vectors w \in W such that \langle v, w \rangle = 0 for every v \in V. This set V^\perp forms a subspace of W. Key properties of the orthogonal complement include the double-complement identity (V^\perp)^\perp = V when W is finite-dimensional. Additionally, V \cap V^\perp = \{0\}, ensuring the intersection is trivial. In finite dimensions, W = V \oplus V^\perp, meaning every vector in W decomposes uniquely as a sum of elements from V and V^\perp. The dimension formula states that \dim(V) + \dim(V^\perp) = \dim(W) for finite-dimensional inner product spaces. For example, in \mathbb{R}^3 equipped with the standard dot product, the orthogonal complement of the xy-plane (spanned by (1,0,0) and (0,1,0)) is the z-axis, consisting of all scalar multiples of (0,0,1). The orthogonal complement relates to orthogonal projection: the projection of any w \in W onto V is the unique vector in V such that the difference w minus the projection lies in V^\perp.
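Computationally, w \in V^\perp exactly when B^T w = 0 for a matrix B whose columns span V, so the orthogonal complement is the null space of B^T. A sketch of the xy-plane example, assuming NumPy and SciPy:

```python
# Orthogonal complement of V = Col(B) computed as the null space of B^T.
import numpy as np
from scipy.linalg import null_space

B = np.column_stack([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])  # basis of the xy-plane
perp = null_space(B.T)             # w is in V^perp iff B^T w = 0
print(perp)                        # proportional to (0, 0, 1): the z-axis
print(np.allclose(B.T @ perp, 0))  # True: the complement is orthogonal to V
```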

Algorithms

Basis Computation for Row and Column Spaces

To compute a basis for the row space of an m \times n matrix A, apply Gaussian elimination to reduce A to its reduced row echelon form (RREF). The nonzero rows of this RREF provide a basis for the row space of A, as they span the same space while being linearly independent. This process involves row operations, namely swapping rows, multiplying rows by nonzero scalars, and adding multiples of one row to another, all of which preserve the row space. For the column space, identify the pivot positions (leading nonzero entries) in the RREF. The corresponding original columns of A at these pivot indices form a basis for the column space, since they are linearly independent and span the column space. The number of such pivots equals the rank of A, which is the dimension of both the row and column spaces, by the rank theorem. As an example, consider the matrix A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}. Gaussian elimination yields the RREF \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, so a basis for the row space is \{ [1, 0], [0, 1] \}. The pivot columns are the first and second columns of A, giving a basis \{ [1, 3]^\top, [2, 4]^\top \} for the column space; the rank is 2. The computational complexity of Gaussian elimination on an m \times n matrix is O(m n \min(m, n)), making it efficient for moderate-sized matrices. For numerical computations in floating-point arithmetic, where roundoff errors can affect pivot selection and stability, the QR decomposition is often preferred: it factors A = QR with Q orthogonal and R upper triangular, where the first r columns of Q (with r the rank) form an orthonormal basis for the column space, offering better stability without explicit pivoting in many cases.
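A hedged sketch of this procedure using SymPy's exact rref, which returns the RREF together with the pivot column indices; the nonzero rows of the RREF give a row-space basis and the pivot indices select original columns of A as a column-space basis:

```python
# Row- and column-space bases for A via the reduced row echelon form.
import sympy as sp

A = sp.Matrix([[1, 2], [3, 4]])
R, pivots = A.rref()                                # RREF and pivot column indices
row_basis = [R.row(i) for i in range(len(pivots))]  # nonzero rows of the RREF
col_basis = [A.col(j) for j in pivots]              # pivot columns of the original A
print(R)       # Matrix([[1, 0], [0, 1]]): the rank is 2
print(pivots)  # (0, 1): both columns are pivot columns
```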

Subspace Membership and Coordinates

To determine whether a vector \mathbf{v} \in \mathbb{R}^n belongs to a subspace V = \operatorname{Span}(B), where B = \{\mathbf{b}_1, \dots, \mathbf{b}_k\} is a spanning set of k vectors in \mathbb{R}^n, form the equation B \mathbf{x} = \mathbf{v}, with B the n \times k matrix whose columns are the vectors \mathbf{b}_i. The equation has a solution \mathbf{x} if and only if \mathbf{v} \in V. If a solution exists and B is a basis for V, then \mathbf{x} provides the unique coordinates of \mathbf{v} with respect to B, satisfying \mathbf{v} = \sum_{i=1}^k x_i \mathbf{b}_i, so [\mathbf{v}]_B = \mathbf{x}. The standard algorithm to test membership and compute coordinates augments the matrix B with \mathbf{v} to form [B \mid \mathbf{v}] and applies Gaussian elimination to reduce it to row echelon form. Consistency holds if the rank of B equals the rank of [B \mid \mathbf{v}]; in this case, back-substitution on the reduced system yields the coordinates \mathbf{x}. This process has computational complexity O(k n^2), where n is the ambient dimension and k \leq n the number of spanning vectors.

In inner product spaces equipped with an inner product \langle \cdot, \cdot \rangle, membership can also be tested via orthogonal projection onto V. For an orthogonal basis \{\mathbf{b}_1, \dots, \mathbf{b}_k\} of V, the projection is \operatorname{proj}_V \mathbf{v} = \sum_{i=1}^k \frac{\langle \mathbf{v}, \mathbf{b}_i \rangle}{\|\mathbf{b}_i\|^2} \mathbf{b}_i, and \mathbf{v} \in V if and only if \operatorname{proj}_V \mathbf{v} = \mathbf{v}. If the basis is orthonormal (so \|\mathbf{b}_i\| = 1 and \langle \mathbf{b}_i, \mathbf{b}_j \rangle = 0 for i \neq j), the formula simplifies to \operatorname{proj}_V \mathbf{v} = \sum_{i=1}^k \langle \mathbf{v}, \mathbf{b}_i \rangle \mathbf{b}_i, with coordinates given by the scalars \langle \mathbf{v}, \mathbf{b}_i \rangle. For example, in \mathbb{R}^2 with the standard basis B = \{(1,0)^T, (0,1)^T\} spanning V = \mathbb{R}^2, test \mathbf{v} = (1,2)^T. The augmented matrix [B \mid \mathbf{v}] is the 2 \times 3 matrix whose left block is the 2 \times 2 identity; it is already in reduced form and consistent, confirming \mathbf{v} \in V with coordinates [1, 2]^T.
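The rank test and the coordinate computation can be combined in a few lines. A sketch assuming NumPy, where least squares stands in for back-substitution once consistency is established:

```python
# Membership test v in Span(B) via rank([B | v]) == rank(B), then coordinates.
import numpy as np

B = np.column_stack([(1.0, 0.0), (0.0, 1.0)])  # basis of V = R^2
v = np.array([1.0, 2.0])
aug = np.column_stack([B, v])                  # the augmented matrix [B | v]
if np.linalg.matrix_rank(B) == np.linalg.matrix_rank(aug):
    x, *_ = np.linalg.lstsq(B, v, rcond=None)  # coordinates [v]_B
    print(x)                                   # [1. 2.]
```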

Basis for Null Space and Subspace Operations

The null space of a matrix A \in \mathbb{R}^{m \times n}, denoted \mathcal{N}(A), consists of all vectors x \in \mathbb{R}^n such that Ax = 0. To compute a basis for \mathcal{N}(A), first perform Gaussian elimination to obtain the reduced row echelon form (RREF) of A, denoted R. The pivot columns of R correspond to the basic (dependent) variables, while the non-pivot columns identify the free variables. The dimension of the null space equals the number of free variables, which is n - r where r = \operatorname{rank}(A). For each free variable, construct a special solution by setting that free variable to 1, all other free variables to 0, and solving for the basic variables; these special solutions form a basis for \mathcal{N}(A). Consider the matrix A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 0 & 1 \end{pmatrix}, which is already in RREF with pivots in columns 1 and 3. The free variable is x_2. Setting x_2 = 1 yields x_1 = -2 and x_3 = 0, so a basis for \mathcal{N}(A) is the vector \begin{pmatrix} -2 \\ 1 \\ 0 \end{pmatrix}.

For the sum of two subspaces V and U of \mathbb{R}^n, where V = \operatorname{Span}\{v_1, \dots, v_k\} and U = \operatorname{Span}\{u_1, \dots, u_l\}, a basis can be obtained by forming the matrix B whose columns are the vectors v_1, \dots, v_k, u_1, \dots, u_l, then computing the RREF of B and selecting the columns corresponding to the pivot positions as the basis vectors for V + U. This process extends a basis of V by including vectors from a basis of U that are not in V, using subspace membership tests via solving linear systems. The dimension satisfies \dim(V + U) = \dim V + \dim U - \dim(V \cap U), so the basis size follows directly.

To find a basis for the intersection V \cap U, where V is the column space of a matrix A and U is the column space of a matrix B, form the combined matrix [A \mid -B] and compute a basis for its null space; if \{z_1, \dots, z_p\} is this basis, partitioned as (x_i, y_i) where x_i corresponds to variables for A and y_i for B, then \{A x_1, \dots, A x_p\} (or equivalently \{B y_1, \dots, B y_p\}) forms a basis for V \cap U. This approach identifies vectors expressible as linear combinations of both sets of columns by solving A x = B y. The computational cost of these basis computations is dominated by the Gaussian elimination steps, which require O(m n^2) arithmetic operations for an m \times n matrix with m \geq n.
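The intersection algorithm above can be sketched directly, assuming NumPy and SciPy: compute the null space of [A \mid -B], take the x-parts of its basis vectors, and map them through A:

```python
# Basis for V ∩ U via the null space of [A | -B], where V = Col(A), U = Col(B).
import numpy as np
from scipy.linalg import null_space

A = np.column_stack([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])  # V = the xy-plane
B = np.column_stack([(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)])  # U = the xz-plane
Z = null_space(np.hstack([A, -B]))   # columns (x, y) satisfy A x = B y
X = Z[:A.shape[1], :]                # x-parts of the null-space basis vectors
basis = A @ X                        # these vectors span V ∩ U
print(basis)                         # proportional to (1, 0, 0): the x-axis
```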
