
Underdetermined system

An underdetermined system of linear equations is one in which the number of equations is fewer than the number of unknowns, typically resulting in either no solutions or infinitely many solutions. In linear algebra, such systems are commonly expressed in the matrix form \mathbf{Ax} = \mathbf{b}, where \mathbf{A} is an m \times n matrix with m < n, \mathbf{x} is the n \times 1 vector of unknowns, and \mathbf{b} is the m \times 1 constant vector. The existence of solutions depends on the rank of \mathbf{A} equaling the rank of the augmented matrix [\mathbf{A} \mid \mathbf{b}]; if the system is consistent, the solution set is an affine subspace of dimension at least n - m, parameterized by at least n - m free variables. To analyze underdetermined systems, Gaussian elimination or row reduction to reduced row echelon form (RREF) is employed, revealing the free variables and expressing the dependent variables in terms of them, such as x_1 = c_1 + d_1 r, where r is a free parameter that can take any real value. The general solution consists of a particular solution plus an arbitrary element of the null space of \mathbf{A}, which spans the homogeneous solutions. Geometrically, in \mathbb{R}^3, a consistent underdetermined system of two equations typically intersects along a line, representing a continuum of solutions. Underdetermined systems arise in various applications, including signal processing, optimization, and engineering modeling, where additional constraints or regularization techniques like the Moore-Penrose pseudoinverse may be used to find minimal-norm solutions. Numerical methods, such as those implemented in NumPy's linalg.pinv or SciPy's null_space routine, facilitate practical solving.
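As a minimal sketch of these numerical tools (the 2 \times 3 system below is an illustrative assumption, not drawn from any specific application), NumPy's linalg.pinv yields a minimum-norm particular solution and SciPy's null_space yields a basis for the homogeneous solutions:

```python
import numpy as np
from scipy.linalg import null_space

# Underdetermined system: 2 equations, 3 unknowns (m = 2 < n = 3).
A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([4.0, 2.0])

x_min = np.linalg.pinv(A) @ b   # minimum-norm particular solution
N = null_space(A)               # basis for null(A); shape (3, n - rank)

assert np.allclose(A @ x_min, b)

# Any particular solution plus a null-space vector is also a solution.
x_other = x_min + 5.0 * N[:, 0]
assert np.allclose(A @ x_other, b)
```

Adding any multiple of a null-space basis vector to the particular solution produces another solution, matching the affine solution-set structure described above.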

Fundamentals

Definition

An underdetermined system refers to a system of linear equations given by Ax = b, where A is an m \times n matrix with m < n, indicating fewer equations than unknowns, and x and b are vectors in \mathbb{R}^n and \mathbb{R}^m, respectively (or over the complex numbers \mathbb{C}). This structure typically results in either no solutions or infinitely many solutions, depending on the consistency of the system. In contrast, a square system has m = n, potentially yielding a unique solution if A is invertible, while an overdetermined system with m > n often lacks an exact solution unless b lies in the column space of A. The general form of such a system can be expressed as a set of m linear equations in n variables x_1, x_2, \dots, x_n, where the coefficient matrix A has \rank(A) \leq m < n. The matrix formulation of linear systems, including underdetermined ones, was advanced in the 19th century through developments in matrix theory, notably by James Joseph Sylvester, who introduced the term 'matrix' in 1850. A simple illustrative example is the equation x + y = 1, representing one equation in two variables, which admits infinitely many solutions of the form x = t, y = 1 - t for any scalar t.

Properties and Characteristics

An underdetermined system of linear equations, represented as Ax = b where A is an m \times n matrix with m < n, exhibits several key intrinsic properties stemming from its structure. A fundamental characteristic is the multiplicity of solutions: such systems have either no solution or infinitely many solutions, never a unique one. This arises because the number of equations is insufficient to uniquely determine all variables when the system is consistent. In contrast, overdetermined systems (with m > n) often lack solutions due to overconstraint, though least-squares approximations may be used. The consistency of the system depends on whether b lies in the column space of A; if it does, the system is consistent and admits solutions, whereas if b is outside the column space, no solutions exist. For consistent underdetermined systems, the solution is non-unique due to the presence of free variables. The number of free variables is n - \rank(A), which quantifies the dimensionality of the solution space and leads to infinitely many solutions parameterized by these free variables. The rank-nullity theorem provides a precise framework for understanding this non-uniqueness: the nullity of A, or dimension of the null space, equals n - \rank(A), and since m < n forces \rank(A) \leq m < n, the nullity is always positive, so a non-trivial null space contributes multiple solutions. Geometrically, the solution set of a consistent system forms an affine subspace of \mathbb{R}^n with dimension n - \rank(A), obtained by translating the null space of A by a particular solution. This affine structure highlights the system's inherent ambiguity, where solutions differ by vectors in the null space.
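The free-variable count n - \rank(A) can be checked numerically; the following sketch (with an arbitrary example matrix, assumed for illustration) confirms that m < n forces a positive nullity:

```python
import numpy as np

# Rank-nullity check for an underdetermined matrix (m = 2 < n = 4).
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 3.0]])
m, n = A.shape
rank = np.linalg.matrix_rank(A)
nullity = n - rank   # number of free variables, by rank-nullity

assert m < n
assert nullity > 0   # m < n guarantees a non-trivial null space
```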

Solutions

General Solution Structure

The general solution to a non-homogeneous underdetermined linear system Ax = b, where A is an m \times n matrix with m < n and the system is consistent, takes the form x = x_p + x_h, with x_p denoting a particular solution satisfying A x_p = b and x_h any vector in the null space of A (i.e., satisfying the homogeneous equation A x_h = 0). This expression highlights the affine structure of the solution set, which is a translation of the null space by the particular solution. A particular solution x_p can be found using Gaussian elimination on the augmented matrix [A \mid b], reducing it to row echelon form or reduced row echelon form (RREF). In the RREF, pivot columns correspond to basic (pivot) variables, while non-pivot columns indicate free variables. To obtain x_p, assign zero to each free variable and back-substitute to solve for the pivot variables. Alternatively, for a minimum Euclidean norm particular solution (when the rows of A are linearly independent), the Moore-Penrose pseudoinverse provides x_p = A^+ b, where A^+ = A^T (A A^T)^{-1}. The full parameterization of the solution uses the rank r of A, involving n - r free parameters t_1, \dots, t_{n-r}. Each component of x is then expressed as x_i = (x_p)_i + \sum_{k=1}^{n-r} c_{i k} t_k, where the coefficients c_{i k} derive from a basis for the null space, obtained via the free columns in the RREF of A. This yields an affine subspace of dimension n - r in \mathbb{R}^n. For illustration, consider the underdetermined system 2x + 3y = 5 in \mathbb{R}^2. Row reduction of the augmented matrix \begin{bmatrix} 2 & 3 & \mid & 5 \end{bmatrix} gives the RREF \begin{bmatrix} 1 & 3/2 & \mid & 5/2 \end{bmatrix}, with x as the pivot variable and y free. Setting y = 0 yields the particular solution x_p = (5/2, 0). The null space basis is found by solving 2x + 3y = 0, giving x = -\frac{3}{2} t, y = t for parameter t. Thus, the general solution is x = \frac{5}{2} - \frac{3}{2} t, \quad y = t, parameterizing a line in \mathbb{R}^2.
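The worked example can be verified directly; this short sketch checks that x_p plus any multiple of the null-space basis vector solves 2x + 3y = 5:

```python
import numpy as np

# The single-equation system 2x + 3y = 5 in two unknowns.
A = np.array([[2.0, 3.0]])
b = np.array([5.0])

x_p = np.array([5.0 / 2.0, 0.0])   # particular solution (free variable y = 0)
v = np.array([-3.0 / 2.0, 1.0])    # null-space basis vector from 2x + 3y = 0

# Every point x_p + t*v on the line satisfies the system.
for t in (-2.0, 0.0, 7.5):
    x = x_p + t * v
    assert np.allclose(A @ x, b)
```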

Homogeneous Systems

A homogeneous system is a special case of the linear equation Ax = b where the constant vector b = 0, resulting in the equation Ax = 0. Such systems are always consistent, as the zero vector x = 0 satisfies the equation, providing the trivial solution. The solution space of a homogeneous system Ax = 0 is the null space of the matrix A, denoted N(A), which consists of all vectors x \in \mathbb{R}^n such that Ax = 0. The null space forms a subspace of \mathbb{R}^n. By the rank-nullity theorem, the dimension of N(A), known as the nullity of A, equals n - \rank(A). In underdetermined systems where m < n, \rank(A) \leq m < n, so the nullity is at least n - m > 0, ensuring non-trivial solutions exist. To find a basis for the null space, perform Gaussian elimination (row reduction) on A to obtain its reduced row echelon form. The free variables corresponding to non-pivot columns generate the basis vectors: set each free variable to 1 in turn, the others to 0, and solve for the pivot variables. The null space is then spanned by these basis vectors v_1, v_2, \dots, v_k, where k = \nullity(A). The general solution is given by all linear combinations: x = c_1 v_1 + c_2 v_2 + \dots + c_k v_k, \quad c_i \in \mathbb{R}. The null space exhibits key subspace properties: it is closed under vector addition and scalar multiplication, contains the zero vector, and is non-trivial if its dimension exceeds 0, meaning solutions beyond the trivial one exist. For example, consider the matrix A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, which has \rank(A) = 1 and thus imposes effectively only one independent equation on two unknowns. Row reduction yields the same form, with the second column corresponding to a free variable. Setting it to 1 gives the basis vector \begin{pmatrix} 0 \\ 1 \end{pmatrix}, so N(A) = \operatorname{span} \left\{ \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}, and solutions are x = c \begin{pmatrix} 0 \\ 1 \end{pmatrix} for c \in \mathbb{R}.
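SciPy's null_space computes exactly such a basis (returned with orthonormal columns); a quick check on the rank-1 example above:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])   # rank 1, so nullity 2 - 1 = 1

N = null_space(A)            # orthonormal basis for N(A)

assert N.shape == (2, 1)               # one basis vector, as expected
assert np.allclose(A @ N, 0.0)         # every basis vector solves Ax = 0
# The basis vector is +/- (0, 1), matching the hand computation.
assert np.allclose(np.abs(N[:, 0]), [0.0, 1.0])
```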

Extensions

Polynomial Systems

A polynomial system is underdetermined if it consists of m equations in n variables where m < n, typically considered over the complex numbers \mathbb{C} or real numbers \mathbb{R}. Such systems arise naturally in algebraic geometry, where the equations define relations among variables without fully specifying unique solutions. The solution set of an underdetermined system forms an algebraic variety, the common zero locus of the defining polynomials. By dimension theory in algebraic geometry, this variety has Krull dimension at least n - m, assuming the polynomials generate an ideal of height at most m. Generically, for systems satisfying suitable independence conditions (such as being a complete intersection), the dimension is exactly n - m > 0, implying a positive-dimensional solution set with infinitely many solutions over algebraically closed fields like \mathbb{C}. This contrasts with the linear case, which is a special instance where solutions parameterize an affine subspace of dimension precisely n - m (when the equations are independent and consistent). Underdetermination in polynomial systems thus generally yields infinite solution sets organized into positive-dimensional components, such as curves or surfaces, rather than isolated points. To analyze or solve these systems, key concepts include ideal membership testing and Gröbner bases, which provide a canonical generating set for the ideal generated by the polynomials, enabling elimination of variables and determination of the variety's dimension. Gröbner bases facilitate computational approaches to finding solutions or verifying properties like consistency, even in underdetermined cases. For example, consider the underdetermined system of two equations in three variables: \begin{align*} x^2 + y^2 &= 1, \\ x^2 + y^2 + z^2 &= 1. \end{align*} Subtracting the equations yields z^2 = 0, so z = 0, and the first equation then describes a circle in the xy-plane; the solution set over \mathbb{R} is the one-dimensional variety given by the unit circle in the plane z = 0. This illustrates the typical positive-dimensional outcome, with dimension 3 - 2 = 1.
Unlike linear underdetermined systems, which (when consistent) always have infinitely many solutions forming a flat affine subspace over \mathbb{R} or \mathbb{C}, nonlinear polynomial systems may yield only finitely many isolated real solutions despite the underdetermination, due to the geometry of real algebraic varieties. For instance, the system \begin{align*} x^2 + y^2 &= 0, \\ x^2 + y^2 + z^2 &= 1 \end{align*} has complex dimension 1 (a union of lines), but over \mathbb{R}, the first equation forces x = y = 0, leading to z^2 = 1 and exactly two isolated points: (0, 0, 1) and (0, 0, -1). This phenomenon arises from the nonlinearity restricting the real points of the complex variety.
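Assuming the SymPy library is available, a Gröbner basis computation reproduces the elimination step in the first example: the lexicographic basis of the ideal contains z^2, which forces z = 0 on the variety:

```python
from sympy import symbols, groebner

x, y, z = symbols("x y z")

# The circle/sphere system from the example above, written as f = 0.
G = groebner([x**2 + y**2 - 1, x**2 + y**2 + z**2 - 1],
             x, y, z, order="lex")

polys = list(G.exprs)
# Elimination recovers z**2, i.e. z = 0 on the variety,
# together with the circle equation x**2 + y**2 - 1.
assert z**2 in polys
assert x**2 + y**2 - 1 in polys
```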

Systems with Additional Constraints

Underdetermined linear systems can be augmented with additional linear equality constraints, which effectively increase the number of independent equations and may alter the dimensionality of the solution set. For instance, if the original system has m < n equations where A \in \mathbb{R}^{m \times n} and b \in \mathbb{R}^m, adding k independent equality constraints Cx = d with C \in \mathbb{R}^{k \times n} and d \in \mathbb{R}^k results in an effective system with m + k equations. If m + k = n and the combined coefficient matrix has full rank, the system becomes determined, yielding a unique solution; if m + k > n, it may become overdetermined with potentially no solution or a unique one under consistency. Inequality constraints, such as nonnegativity x \geq 0, further restrict the solution set without introducing new equations. These constraints define half-spaces in \mathbb{R}^n, and their intersection with the affine subspace of solutions of the original equations forms a polyhedron, the bounded or unbounded set of feasible points. Additional linear inequalities Gx \leq h with G \in \mathbb{R}^{p \times n} and h \in \mathbb{R}^p can shrink the infinite solution set of an underdetermined system to a bounded polytope, or even a single point, or render it empty if inconsistent. The resulting polyhedral set is pointed if it contains no line, meaning its lineality space (the intersection of the null spaces of the constraint matrices) is trivial. Feasibility of such constrained systems, particularly with inequalities, is characterized by theorems of the alternative such as Farkas' lemma. For the system Ax = b, x \geq 0, it states that either there exists a feasible x \geq 0 satisfying the equations, or there exists y such that y^T A \geq 0 and y^T b < 0, but not both. This provides a certificate of infeasibility via a separating hyperplane when no nonnegative solution exists. For general inequalities Ax \leq b, feasibility holds if and only if there is no y \geq 0 with y^T A = 0 and y^T b < 0.
Consider the underdetermined system x_1 + x_2 = 1 in two variables with the nonnegativity constraints x_1 \geq 0, x_2 \geq 0. Without the inequalities, the solution set is the infinite line x_2 = 1 - x_1; with them, it becomes the bounded line segment from (1,0) to (0,1), a polyhedron with vertices at these endpoints. If the right-hand side were -1, the system x_1 + x_2 = -1, x_1 \geq 0, x_2 \geq 0 would be infeasible, as certified by the multiplier y = 1: y^T A = (1, 1) \geq (0, 0) while y^T b = -1 < 0. Thus, constraints can transform underdetermined systems into ones with bounded, finite, or empty solution sets, effectively making them determined in terms of bounded feasibility regions.
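The feasibility claims in this example can be reproduced with an off-the-shelf LP solver; the sketch below uses SciPy's linprog with a zero objective as a pure feasibility test:

```python
import numpy as np
from scipy.optimize import linprog

A_eq = np.array([[1.0, 1.0]])
nonneg = [(0, None), (0, None)]   # x1 >= 0, x2 >= 0

# x1 + x2 = 1, x >= 0: feasible (the segment from (1,0) to (0,1)).
res = linprog(c=[0.0, 0.0], A_eq=A_eq, b_eq=[1.0], bounds=nonneg)
assert res.success

# x1 + x2 = -1, x >= 0: infeasible, as Farkas' lemma certifies.
res2 = linprog(c=[0.0, 0.0], A_eq=A_eq, b_eq=[-1.0], bounds=nonneg)
assert not res2.success
```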

Applications

Optimization Contexts

In linear programming, underdetermined systems occur when the number of decision variables exceeds the number of equality constraints, resulting in feasible regions defined as polyhedra with infinitely many feasible solutions due to the positive dimension of the solution set. These polyhedra represent the set of points satisfying Ax \leq b (or Ax = b in equality form), where the underdetermined nature implies that the null space of A has dimension greater than zero, allowing for a continuum of solutions along recession directions. Optimization over such regions typically seeks to minimize a linear objective function, with optimal solutions attained at vertices of the polyhedron, though the presence of multiple variables often leads to alternative optima spanning edges or faces. A key application in underdetermined systems is the computation of minimum norm solutions, which select a unique point from the affine solution set by minimizing the Euclidean norm. For a consistent system Ax = b with m < n and full row rank, the solution x = A^+ b, where A^+ denotes the Moore-Penrose pseudoinverse, provides the unique minimizer of \|x\|_2 subject to Ax = b: \mathbf{x} = \mathbf{A}^+ \mathbf{b} = \arg\min_{\mathbf{x}} \|\mathbf{x}\|_2 \quad \text{subject to} \quad \mathbf{Ax} = \mathbf{b} This pseudoinverse is defined as A^+ = V \Sigma^+ U^T from the SVD A = U \Sigma V^T, where \Sigma^+ inverts the nonzero singular values, ensuring the minimum-norm property in underdetermined cases. Such solutions are particularly useful in regularization contexts, where the minimum-norm criterion acts as a stabilizer for ill-posed problems. To promote sparsity in underdetermined systems, optimization often shifts to L1-norm minimization, which favors solutions with few nonzero entries over the L2 norm. The basis pursuit problem formulates this as minimizing \|x\|_1 subject to Ax = b, and under conditions like the restricted isometry property (RIP) of order 2s on A, it recovers all exactly s-sparse solutions, where the sparsity level s satisfies s \lesssim m / \log(n/s).
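The full-row-rank formula A^+ = A^T (A A^T)^{-1} and the minimum-norm property can be verified numerically; the sketch below uses a small random system (sizes chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))   # full row rank with probability 1
b = rng.standard_normal(3)

# For full row rank, A^+ = A^T (A A^T)^{-1} matches the SVD pseudoinverse.
A_plus = A.T @ np.linalg.inv(A @ A.T)
assert np.allclose(A_plus, np.linalg.pinv(A))

x_min = A_plus @ b
assert np.allclose(A @ x_min, b)

# Perturbing the min-norm solution along a null-space direction
# keeps it a solution but strictly increases the Euclidean norm.
_, _, Vt = np.linalg.svd(A)
v_null = Vt[-1]
assert np.allclose(A @ v_null, 0.0, atol=1e-10)
assert np.linalg.norm(x_min) < np.linalg.norm(x_min + v_null)
```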
This approach links directly to compressed sensing, where underdetermined linear measurements (m \ll n) of sparse signals are inverted via L1 optimization to enable sub-Nyquist sampling and reconstruction. Basis pursuit exemplifies this: given measurements b = Ax with a sparse underlying signal x (support size k \ll n) and sensing matrix A of size m \times n (m < n), solving \min \|x\|_1 s.t. Ax = b yields the unique sparsest solution when A satisfies the null space property. This has high impact in applications such as MRI, where sparsity promotion via L1 minimization reduces data requirements while preserving signal fidelity.
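Basis pursuit itself reduces to a linear program via the standard split x = u - v with u, v \geq 0; the following sketch (toy sizes and a random Gaussian sensing matrix, chosen purely for illustration) recovers a 3-sparse signal in \mathbb{R}^{50} from 20 measurements:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 20, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix

x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true                                  # underdetermined measurements

# min ||x||_1 s.t. Ax = b, with x = u - v and u, v >= 0:
# minimize sum(u) + sum(v) subject to A u - A v = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

assert res.success
assert np.allclose(x_hat, x_true, atol=1e-5)   # exact sparse recovery
```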

Sparse Solution Techniques

In compressive sensing, underdetermined linear systems A \mathbf{x} = \mathbf{b} with m < n enable the exact recovery of sparse signals \mathbf{x} that have at most s nonzero entries, provided the measurement matrix A satisfies certain conditions; this exploits the fact that sparsity reduces the effective dimensionality, allowing unique recovery from fewer measurements than the ambient dimension. Seminal results show that minimizing the \ell_0-norm (counting nonzeros) or its convex \ell_1-norm proxy recovers the exact sparse solution with high probability for random matrices, assuming s is sufficiently small relative to m. Key algorithms for sparse recovery include greedy iterative methods and convex optimization approaches. Orthogonal matching pursuit (OMP) is a greedy algorithm that builds the support of \mathbf{x} iteratively by selecting the column of A most correlated with the current residual \mathbf{r} = \mathbf{b} - A \mathbf{x}_{\text{current}}; specifically, at each step, it chooses the index j = \arg\max_i | \langle \mathbf{a}_i, \mathbf{r} \rangle | / \| \mathbf{a}_i \|_2, where \mathbf{a}_i are the columns of A, then projects \mathbf{b} orthogonally onto the span of the selected columns. Basis pursuit, conversely, formulates recovery as the linear program \min \| \mathbf{x} \|_1 subject to A \mathbf{x} = \mathbf{b}, which promotes sparsity through the \ell_1-norm. For noisy measurements A \mathbf{x} + \mathbf{e} = \mathbf{b} with \| \mathbf{e} \|_2 \leq \epsilon, basis pursuit denoising extends this to \min \| \mathbf{x} \|_1 subject to \| A \mathbf{x} - \mathbf{b} \|_2 \leq \epsilon, ensuring stable recovery bounds for compressible signals. A sufficient condition for stable and exact recovery in these methods is the restricted isometry property (RIP) of order 2s, which requires that for all 2s-sparse \mathbf{h}, (1 - \delta) \| \mathbf{h} \|_2^2 \leq \| A \mathbf{h} \|_2^2 \leq (1 + \delta) \| \mathbf{h} \|_2^2 with \delta < \sqrt{2} - 1; random matrices such as Gaussian or partial Fourier matrices satisfy the RIP with high probability when m \gtrsim s \log(n/s).
This property ensures that \ell_1-minimization (and greedy methods under similar incoherence conditions) recovers the sparse solution exactly or approximately. A representative example is recovering a 10-sparse signal in \mathbb{R}^{1000} from 50 linear measurements, where the RIP guarantees exact reconstruction via \ell_1-minimization if the matrix is suitably random, demonstrating how underdetermination (50 equations for 1000 unknowns) leverages sparsity for stable inversion. Overall, underdetermination facilitates sparsity exploitation by allowing algorithms to identify minimal-support solutions among the infinitely many general solutions, a capability absent in overdetermined systems; \ell_1-norm optimization, as a convex special case, underpins many of these techniques.
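The correlation-and-projection steps of OMP described above can be sketched in a few lines (the problem sizes and test signal are illustrative assumptions):

```python
import numpy as np

def omp(A, b, s):
    """Orthogonal matching pursuit: greedily recover an s-sparse x with Ax = b."""
    m, n = A.shape
    support = []
    r = b.copy()
    for _ in range(s):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        # Project b orthogonally onto the selected columns (least squares),
        # then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # 40 equations, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 17, 60]] = [1.5, -2.0, 0.7]             # 3-sparse signal
b = A @ x_true

x_hat = omp(A, b, s=3)
assert np.allclose(x_hat, x_true, atol=1e-8)       # exact support and values
```

The columns of a Gaussian matrix scaled by 1/\sqrt{m} are approximately unit-norm, so the unnormalized correlation \arg\max_i |\langle \mathbf{a}_i, \mathbf{r} \rangle| used here closely matches the normalized selection rule in the text.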
