
Row and column spaces

In linear algebra, the row space of an m \times n matrix A is the subspace of \mathbb{R}^n (or \mathbb{C}^n) spanned by the row vectors of A, equivalently defined as the column space of the transpose A^T. The column space of A, denoted C(A), is the subspace of \mathbb{R}^m (or \mathbb{C}^m) spanned by the column vectors of A, representing the image or range of the linear transformation associated with A. These spaces are two of the four fundamental subspaces of a matrix, which also include the nullspace N(A) (solutions to Ax = 0) and the left nullspace N(A^T) (solutions to A^T y = 0). The dimensions of the row space and column space are equal and define the rank r of A, the number of linearly independent rows or columns. This rank satisfies the rank-nullity theorem: for the column space and nullspace, \dim C(A) + \dim N(A) = n, and similarly \dim C(A^T) + \dim N(A^T) = m for the row space and left nullspace. Key properties include the invariance of the row space under row operations (row-equivalent matrices have the same row space) and the fact that a vector b lies in the column space if and only if the system Ax = b is consistent. Bases for these spaces can be obtained from the reduced row-echelon form of A: the pivot columns of the original A form a basis for the column space, while the nonzero rows of the echelon form form a basis for the row space. For a square nonsingular matrix, both spaces coincide with the full ambient space \mathbb{R}^n. These concepts underpin applications in solving linear systems, determining matrix invertibility, and analyzing linear transformations.

Introduction

Intuitive overview

In linear algebra, the row space of a matrix is the subspace generated by taking all possible linear combinations of its row vectors, while the column space is the analogous subspace formed by linear combinations of its column vectors. These spaces provide insight into the structure of the matrix and the linear transformations it represents, highlighting the directions in which the matrix can "operate" within the ambient space. Intuitively, the column space can be viewed as the set of all possible outputs (the "reach") of the transformation defined by the matrix when applied to input vectors from the domain. In contrast, the row space relates to the constraints on those inputs: it equals the column space of the matrix's transpose and, over the real numbers, consists of the directions in the domain orthogonal to the null space. For visualization, imagine a 2×2 matrix whose columns are two vectors in the plane: if the columns point in independent directions, their span fills the entire plane, representing a surjective transformation; if they align, the span reduces to a line, limiting the outputs to that line. These ideas trace their origins to 19th-century developments in solving systems of linear equations, with mathematicians like James Joseph Sylvester introducing the term "matrix" in 1850. The dimension of the row or column space, known as the rank, quantifies the extent of this linear independence.

Basic example

Consider the matrix A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}. The column space of A is the span of its columns, which are the vectors \begin{pmatrix} 1 \\ 3 \end{pmatrix} and \begin{pmatrix} 2 \\ 4 \end{pmatrix}. To determine if these vectors are linearly independent, suppose c_1 \begin{pmatrix} 1 \\ 3 \end{pmatrix} + c_2 \begin{pmatrix} 2 \\ 4 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, leading to the system c_1 + 2c_2 = 0 and 3c_1 + 4c_2 = 0. Solving yields c_1 = c_2 = 0, confirming independence. Since there are two independent vectors in \mathbb{R}^2, the column space is all of \mathbb{R}^2, geometrically the entire plane. The row space of A is the span of its rows, (1, 2) and (3, 4). Similarly, these are linearly independent because the determinant of A is 1 \cdot 4 - 2 \cdot 3 = -2 \neq 0, implying full rank, so the row space is also \mathbb{R}^2. In contrast, for the rank-deficient matrix B = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}, the columns are \begin{pmatrix} 1 \\ 2 \end{pmatrix} and \begin{pmatrix} 2 \\ 4 \end{pmatrix} = 2 \begin{pmatrix} 1 \\ 2 \end{pmatrix}, showing linear dependence. Thus, the column space is the span of \begin{pmatrix} 1 \\ 2 \end{pmatrix}, a line through the origin in \mathbb{R}^2. The rows (1, 2) and (2, 4) = 2(1, 2) are likewise dependent, so the row space is the same line in \mathbb{R}^2. In general, the equation A\mathbf{x} = \mathbf{b} is solvable if and only if \mathbf{b} lies in the span of the columns, that is, in the column space.
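
The ranks in this example are easy to check numerically. The following sketch uses NumPy (a tooling choice of this illustration; the text itself prescribes none) to confirm that A has rank 2 and B has rank 1, and that a vector on the line lies in C(B).

    import numpy as np

    A = np.array([[1, 2],
                  [3, 4]])
    B = np.array([[1, 2],
                  [2, 4]])

    # Rank equals the dimension of both the row space and the column space.
    print(np.linalg.matrix_rank(A))  # 2: columns span all of R^2
    print(np.linalg.matrix_rank(B))  # 1: columns span only a line

    # A vector b is in the column space of B iff Bx = b has a solution.
    b = np.array([3, 6])             # 3 * (1, 2), so b lies on the line
    x, residual, rank, _ = np.linalg.lstsq(B, b, rcond=None)
    print(np.allclose(B @ x, b))     # True: b is in C(B)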

Column space

Definition

The column space of a matrix is a fundamental concept in linear algebra, analogous to the row space but defined with respect to the columns. For an m \times n matrix A over a field \mathbb{F}, the columns of A are vectors in \mathbb{F}^m, denoted as \mathbf{c}_1, \mathbf{c}_2, \dots, \mathbf{c}_n, where each \mathbf{c}_j is a column vector. The column space of A, denoted C(A), is the subspace of \mathbb{F}^m spanned by these column vectors: C(A) = \operatorname{span}\{ \mathbf{c}_1, \mathbf{c}_2, \dots, \mathbf{c}_n \}. This subspace consists of all linear combinations of the columns of A. Equivalently, the column space can be expressed as the set of all vectors of the form A x where x \in \mathbb{F}^n: C(A) = \{ A x \mid x \in \mathbb{F}^n \}. This formulation emphasizes its role as the image or range of the linear transformation T: \mathbb{F}^n \to \mathbb{F}^m defined by T(x) = A x. Additionally, C(A) is the row space of the transpose A^T, denoted R(A^T).

Basis and spanning set

The columns of an m \times n matrix A form a spanning set for the column space C(A), which is the subspace of \mathbb{F}^m consisting of all linear combinations of these columns; however, the columns are typically linearly dependent unless n equals the rank of A. To extract a basis from this spanning set, perform Gaussian elimination on A to obtain its reduced row echelon form (RREF), denoted R; identify the pivot positions (leading 1s) in R, and select the corresponding columns from the original A (the pivot columns of A) as the basis vectors, since row operations preserve linear dependencies among the columns. The pivot columns of R are linearly independent and indicate which columns of the original matrix form a basis for C(A). For example, consider the matrix A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}. Row reduction yields the RREF R = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, where the pivot is in column 1, so the first column of the original A, (1, 2, 3)^T, serves as a basis for C(A). By definition, any basis for C(A) is a linearly independent subset of the column vectors that spans the entire column space.
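
This pivot-column procedure is straightforward to carry out with a computer algebra system; the sketch below uses SymPy (an assumed tool, not prescribed by the text) on the example matrix above.

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [3, 6, 9]])

    # rref() returns the reduced row echelon form and the pivot column indices.
    R, pivots = A.rref()
    print(pivots)                       # (0,): the only pivot is in column 1
    basis = [A.col(j) for j in pivots]  # pivot columns of the ORIGINAL matrix
    print(basis)                        # [Matrix([[1], [2], [3]])]

    # SymPy's built-in columnspace() applies the same idea, as a cross-check.
    print(A.columnspace())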

Dimension and rank

The dimension of the column space of an m \times n matrix A over a field \mathbb{F}, denoted \dim(C(A)), equals the number of linearly independent columns in A. This captures the maximal number of columns that span C(A) without redundancy. The rank of A, denoted \rank(A), is defined as \rank(A) = \dim(C(A)), and it equals the dimension of the row space \dim(R(A)). This equivalence holds for any matrix, providing a unified measure of linear dependence among rows and columns. To compute \dim(C(A)), reduce A to its reduced row echelon form (RREF); the number of pivot columns equals the dimension, matching the count of linearly independent columns in the original matrix. A matrix A has full column rank if \dim(C(A)) = n, meaning its columns are linearly independent (this requires n \leq m). In this case, the column space is an n-dimensional subspace of \mathbb{F}^m, and the linear transformation defined by A is injective.
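
A brief numerical illustration of full column rank (a sketch of my own, not from the text): a 3 \times 2 matrix with independent columns has rank n = 2 and a trivial null space, so the map it defines is injective.

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1, 0],
                  [0, 1],
                  [1, 1]])  # 3x2, columns independent

    n = A.shape[1]
    print(np.linalg.matrix_rank(A) == n)  # True: full column rank

    # Injectivity: Ax = 0 forces x = 0, so N(A) has no basis vectors.
    print(null_space(A).shape[1])         # 0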

Row space

Definition

The row space of a matrix is a fundamental concept in linear algebra, analogous to the column space but defined with respect to the rows. For an m \times n matrix A over a field \mathbb{F}, the rows of A are vectors in \mathbb{F}^n, denoted as \mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_m, where each \mathbf{r}_i is a row vector. The row space of A, denoted R(A), is the subspace of \mathbb{F}^n spanned by these row vectors: R(A) = \operatorname{span}\{ \mathbf{r}_1, \mathbf{r}_2, \dots, \mathbf{r}_m \}. This subspace consists of all linear combinations of the rows of A. Equivalently, the row space can be expressed as the set of all vectors of the form y^T A where y \in \mathbb{F}^m: R(A) = \{ y^T A \mid y \in \mathbb{F}^m \}. This formulation emphasizes the linear combinations directly, reinforcing that R(A) is generated by the rows. Additionally, R(A) is the column space of the transpose A^T, denoted C(A^T).

Basis and spanning set

The rows of an m \times n matrix A form a spanning set for the row space R(A), which is the subspace of \mathbb{F}^n consisting of all linear combinations of these rows; however, the rows are typically linearly dependent unless m equals the rank of A. To extract a basis from this spanning set, perform Gaussian elimination on A to obtain its reduced row echelon form (RREF), denoted R; the nonzero rows of R are linearly independent and span the same row space as A, thus forming a basis for R(A). The algorithm proceeds as follows: apply row operations to transform A into R, identify the pivot positions (leading 1s) in R, and take the nonzero (pivot) rows of R as the basis vectors; alternatively, a maximal linearly independent subset of the original rows of A also forms a basis, since row operations preserve the row space. For example, consider the matrix A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}. Row reduction yields the RREF R = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, where the single nonzero row (1, 2, 3) serves as a basis for R(A). By definition, any basis for R(A) is a linearly independent set of vectors in the row space that spans the entire row space.
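
The same SymPy session (again, an assumed tool) extracts a row space basis from the nonzero rows of the RREF.

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [3, 6, 9]])

    R, pivots = A.rref()
    # The nonzero rows of the RREF form a basis for the row space.
    basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]
    print(basis)            # [Matrix([[1, 2, 3]])]

    # Cross-check with SymPy's built-in rowspace().
    print(A.rowspace())     # [Matrix([[1, 2, 3]])]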

Dimension and rank

The dimension of the row space of an m \times n matrix A over a field \mathbb{F}, denoted \dim(R(A)), equals the number of linearly independent rows in A. This captures the maximal number of rows that span R(A) without redundancy. The rank of A, denoted \rank(A), is defined as \rank(A) = \dim(R(A)), and it equals the dimension of the column space \dim(C(A)). This equivalence holds for any matrix, providing a unified measure of linear dependence among rows and columns. To compute \dim(R(A)), reduce A to its reduced row echelon form (RREF); the number of nonzero rows equals the dimension, matching the count of linearly independent rows in the original matrix. A matrix A has full row rank if \dim(R(A)) = m, meaning its rows are linearly independent (this requires m \leq n). In this case, the row space is an m-dimensional subspace of \mathbb{F}^n, and the linear transformation defined by A is surjective, so Ax = b is solvable for every b \in \mathbb{F}^m.
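
Dually to the injectivity example above, a short NumPy check (my sketch, not the text's) shows that a wide matrix with full row rank maps onto its codomain.

    import numpy as np

    A = np.array([[1, 0, 1],
                  [0, 1, 1]])  # 2x3, rows independent

    m = A.shape[0]
    print(np.linalg.matrix_rank(A) == m)   # True: full row rank

    # Surjectivity: Ax = b is solvable for every b in R^2.
    rng = np.random.default_rng(0)
    b = rng.standard_normal(m)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(A @ x, b))           # True: b is attained exactly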

Properties and relations

Equality of dimensions

A fundamental result in linear algebra states that for any m \times n matrix A, the dimension of its row space equals the dimension of its column space: \dim(R(A)) = \dim(C(A)). This equality, known as the row rank-column rank theorem, holds over any field and forms the basis for defining the rank of A as this common dimension. To see why the dimensions are equal, consider Gaussian elimination, which transforms A into its reduced row echelon form (RREF), denoted R, via left multiplication by an invertible matrix P: PA = R. Row operations preserve the row space, so R(A) = R(R), and the dimension of R(R) is the number r of nonzero rows (pivot rows) in R. The pivot columns of A (corresponding to the leading 1s in R) form a basis for C(A), which thus also has dimension r. More generally, there exist invertible matrices P and Q such that PAQ is a block matrix whose only nonzero block is an r \times r identity, confirming the equality, since multiplication by invertible matrices alters the subspaces but not their dimensions. As a consequence, the rank of A is unambiguously defined as \rank(A) = \dim(R(A)) = \dim(C(A)), independent of whether it is computed from rows or columns, enabling consistent applications in solving systems and analyzing linear maps. This insight emerged as a key development in 19th-century invariant theory.
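
Numerically, the theorem says a matrix and its transpose always have the same rank, which is easy to spot-check with NumPy (an illustrative sketch, not part of the original text):

    import numpy as np

    rng = np.random.default_rng(42)
    A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))  # rank <= 3

    # Row rank equals column rank: rank(A) == rank(A^T).
    print(np.linalg.matrix_rank(A))    # 3
    print(np.linalg.matrix_rank(A.T))  # 3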

Connections to null spaces

The row space of a matrix A \in \mathbb{R}^{m \times n}, denoted R(A), is orthogonal to the null space N(A) = \{ x \in \mathbb{R}^n \mid A x = 0 \}. This orthogonality arises because any vector in N(A) satisfies A x = 0, implying that x is perpendicular to every row of A, and thus to every vector in R(A). Over the real numbers, R(A) and N(A) are orthogonal complements in \mathbb{R}^n, meaning their direct sum spans the entire space: \mathbb{R}^n = R(A) \oplus N(A). A key relation follows from the rank-nullity theorem applied to the row space: \dim R(A) + \dim N(A) = n. This equation quantifies the decomposition, where the dimension of the row space (equal to the rank of A) plus the nullity (dimension of N(A)) equals the number of columns. Similarly, the column space C(A) is orthogonal to the left null space N(A^T) = \{ y \in \mathbb{R}^m \mid A^T y = 0 \}. For any y \in N(A^T), the condition A^T y = 0 ensures that y is orthogonal to every column of A, and hence to all vectors in C(A). Over the real numbers, N(A^T) is the orthogonal complement of C(A) in \mathbb{R}^m, so \mathbb{R}^m = C(A) \oplus N(A^T). The corresponding dimension relation is \dim C(A) + \dim N(A^T) = m.
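
These orthogonality relations can be verified numerically. The sketch below (using SciPy's null_space, a tooling assumption) checks that every row of A annihilates the null space basis and that the dimensions add up to n.

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],
                  [3.0, 6.0, 9.0]])

    N = null_space(A)                    # orthonormal basis for N(A)

    # Every vector in N(A) is orthogonal to every row of A.
    print(np.allclose(A @ N, 0))         # True

    # dim R(A) + dim N(A) = n (rank-nullity for the row space).
    r = np.linalg.matrix_rank(A)
    print(r + N.shape[1] == A.shape[1])  # 1 + 2 == 3 -> True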

Fundamental theorem of linear algebra

The fundamental theorem of linear algebra identifies and relates the four fundamental subspaces associated with a linear transformation represented by an m \times n matrix A over a field F: the column space C(A) \subseteq F^m, the null space N(A) \subseteq F^n, the row space R(A) = C(A^T) \subseteq F^n, and the left null space N(A^T) \subseteq F^m. The dimensions of these subspaces are linked by the rank r of A: \dim C(A) = \dim R(A) = r, \dim N(A) = n - r, and \dim N(A^T) = m - r. The equality of dimensions between the column and row spaces follows from the fact that the rank of A equals the rank of A^T, while the nullity dimensions arise from the rank-nullity theorem applied to A and A^T, providing a complete accounting of the dimensions of the domain and codomain. Geometrically, C(A) is the image of the linear map defined by A, representing all possible outputs; N(A) is the kernel, the set of inputs mapped to zero; R(A) is the image of the transpose map; and N(A^T) is its kernel, whose dimension measures the "failure" of the image of A to span the codomain. When F = \mathbb{R} with the standard inner product, the theorem gains an additional layer: the null space N(A) is the orthogonal complement of the row space R(A) in \mathbb{R}^n, denoted N(A) = R(A)^\perp, and similarly N(A^T) = C(A)^\perp in \mathbb{R}^m. The spaces then decompose as orthogonal direct sums \mathbb{R}^n = R(A) \oplus N(A) and \mathbb{R}^m = C(A) \oplus N(A^T), so every vector in the domain or codomain can be uniquely expressed as the sum of a component in the row space (respectively column space) and a component in the corresponding null space. This perpendicular decomposition underpins applications in least squares and projections. (Over an arbitrary field, where no inner product is available, the dimension counts still hold, but the direct sum decompositions can fail, since a subspace and its would-be complement may intersect nontrivially.)
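
The orthogonal decomposition of the codomain can be seen numerically: project b onto C(A) and check that the residual lies in N(A^T). This NumPy sketch is illustrative, not from the original text.

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [1.0, 1.0],
                  [1.0, 2.0]])          # rank 2, C(A) is a plane in R^3
    b = np.array([1.0, 0.0, 2.0])

    # Least squares gives the projection p of b onto the column space.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = A @ x                            # component in C(A)
    e = b - p                            # component in N(A^T)

    print(np.allclose(A.T @ e, 0))       # True: e is in the left null space
    print(np.allclose(p + e, b))         # True: unique orthogonal split of b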

Advanced topics

Over general fields

The definitions of row and column spaces extend naturally to matrices over an arbitrary field F, where the row space of an m \times n matrix A is the subspace of F^n spanned by its row vectors, and the column space is the subspace of F^m spanned by its column vectors. Linear combinations are taken with coefficients in F, and the rank of A is defined as the common dimension of these two subspaces. This generalizes the standard case over \mathbb{R} or \mathbb{C}, with computations via Gaussian elimination valid over any field. Over a field F, the row and column spaces are vector spaces, which are free modules of rank equal to the matrix rank, ensuring they always admit bases. When extending to commutative rings R, the row space becomes the R-submodule of R^n generated by the rows of A, and similarly for the column space in R^m. Unlike over fields, these submodules may fail to be free over a general commutative ring, and the associated quotients can contain torsion; for instance, over R = \mathbb{Z}, the row module of a matrix like \begin{pmatrix} 2 \\ 0 \end{pmatrix} is 2\mathbb{Z} \subset \mathbb{Z}, which is free (with basis \{2\}), but the quotient \mathbb{Z}/2\mathbb{Z} is a torsion module, a phenomenon with no analog over fields. For principal ideal domains (PIDs) such as \mathbb{Z} or k[x] where k is a field, the Smith normal form provides an analog to the reduced row echelon form over fields. There exist unimodular matrices P and Q (invertible over the PID, with determinant a unit) such that P A Q = \begin{pmatrix} d_1 & 0 & \cdots & 0 \\ 0 & d_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{pmatrix}, where d_1 \mid d_2 \mid \cdots \mid d_r are the diagonal invariant factors, r is the rank, and off-diagonal entries are zero; this form is unique up to units in the PID. The invariant factors d_i characterize the structure of the row and column modules, revealing torsion invariants in the cokernel R^n / \langle \text{rows of } A \rangle \cong \bigoplus_{i=1}^r R / d_i R \oplus R^{n-r}.
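
SymPy can compute the Smith normal form over \mathbb{Z}; this sketch assumes the sympy.matrices.normalforms module available in recent versions (the diagonal is determined only up to units).

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    A = Matrix([[2, 4],
                [6, 8]])

    # Invariant factors d1 | d2 with d1*d2 = |det A| = 8, so d1 = 2, d2 = 4.
    print(smith_normal_form(A, domain=ZZ))
    # Expected: Matrix([[2, 0], [0, 4]]) up to units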

Applications in data analysis

In principal component analysis (PCA), a technique widely used in statistics and machine learning, the column space of the centered data matrix X \in \mathbb{R}^{n \times p} (with n samples and p features) represents the subspace capturing the principal directions of variance in the data. The singular value decomposition (SVD) provides a computational framework for PCA by factoring X = U \Sigma V^T, where the columns of U form an orthonormal basis for the column space of X, the diagonal entries of \Sigma are the singular values indicating variance magnitudes, and the columns of V give the principal components, an orthonormal basis for the row space of X. By selecting the top k singular values and corresponding singular vectors, PCA projects the data onto a lower-dimensional subspace spanned by the first k columns of U, retaining the most significant variance while reducing noise and redundancy in applications like feature extraction and dimensionality reduction.

In least squares regression, a foundational method for fitting linear models to data, the column space of the design matrix A \in \mathbb{R}^{m \times n} determines the solvability and quality of the approximation to the response vector b \in \mathbb{R}^m. If b lies in the column space C(A), an exact solution of Ax = b exists; otherwise, the least squares solution \hat{x} minimizes the residual norm \|Ax - b\|_2 by projecting b onto C(A) using the orthogonal projection matrix P = A(A^T A)^{-1} A^T (when A has full column rank), yielding the fitted values p = Pb in C(A) and an error e = b - p orthogonal to C(A). This projection ensures the minimal error in the overdetermined systems common in regression analysis, such as fitting models to observational datasets, and extends to generalized least squares for handling correlated errors.

Low-rank approximations via the truncated SVD enable efficient image compression by exploiting the inherent low-dimensional structure in visual data. For an image represented as a matrix A \in \mathbb{R}^{m \times n}, the full SVD A = U \Sigma V^T allows approximation by the rank-k truncation A_k = U_k \Sigma_k V_k^T, where U_k comprises the first k left singular vectors spanning the dominant part of the column space of A, \Sigma_k the top k singular values, and V_k the corresponding right singular vectors for the row space; this preserves essential image features while reducing storage from O(mn) to O(k(m + n)). Such approximations achieve high compression ratios, for instance retaining 90-95% of perceptual quality with k \approx 100-150 for typical images, by discarding minor singular components associated with noise or fine detail.

In machine learning applications since the 2000s, the row space and column space of the data matrix provide distinct interpretive lenses: in the standard orientation with rows as samples and columns as features, the row space (a subspace of the feature space) models relationships among features through principal directions and linear combinations, while the column space (a subspace of the sample space) captures similarities and projections among samples. This duality, revealed via the truncated SVD A \approx U_r \Sigma_r V_r^T in which the column space is spanned by the first r columns of U, underpins techniques such as contrastive clustering and kernel methods, enabling scalable analysis of high-dimensional datasets.
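
A minimal NumPy sketch of rank-k approximation follows; a synthetic low-rank matrix stands in for a real image, and all data here are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    # Synthetic "image": low-rank structure plus a little noise.
    A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 300))
    A += 0.01 * rng.standard_normal((200, 300))

    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 10
    A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k truncation

    rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
    print(f"relative Frobenius error at rank {k}: {rel_err:.4f}")

    # Storage: k*(m + n + 1) numbers instead of m*n.
    m, n = A.shape
    print(f"compression ratio: {(m * n) / (k * (m + n + 1)):.1f}x")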
