Basic feasible solution
In linear programming, a basic feasible solution (BFS) is a feasible solution to the constraints Ax = b, x \geq 0 (where A is an m \times n matrix with m < n) that is obtained by selecting a basis of m linearly independent columns of A, setting the corresponding n-m nonbasic variables to zero, and solving for the basic variables such that x_B = A_B^{-1}b \geq 0.[1] This solution corresponds to a vertex (or extreme point) of the feasible region polyhedron, meaning it cannot be expressed as a convex combination of two distinct other feasible solutions.[2] Basic feasible solutions play a central role in solving linear programs, particularly through the simplex algorithm, which starts from an initial BFS and iteratively pivots to adjacent vertices (other BFSs) to improve the objective function value until optimality is reached or unboundedness is detected.[3] A fundamental result in linear programming states that if a linear program has an optimal solution, then there exists an optimal basic feasible solution, ensuring that the simplex method will find an optimum at one of the polyhedron's vertices without needing to evaluate interior points.[2] This property holds for both minimization and maximization problems, provided the problem is feasible and its objective is bounded over the feasible region.[1] To initiate the simplex method, finding an initial BFS is often necessary; this can be done directly if the problem is in canonical form with nonnegative right-hand sides (e.g., using slack variables), or via Phase I of the two-phase simplex method, which introduces artificial variables to construct a starting BFS and then eliminates them.[3] BFSs are thus not only computationally efficient cornerstones of optimization algorithms but also embody the combinatorial structure of linear programs, linking continuous optimization to discrete vertex enumeration.[1]
Fundamentals
Linear programming preliminaries
A linear program in standard form is formulated as the minimization of a linear objective function \mathbf{c}^T \mathbf{x}, where \mathbf{c} \in \mathbb{R}^n is the cost vector and \mathbf{x} \in \mathbb{R}^n is the vector of decision variables, subject to the linear equality constraints A \mathbf{x} = \mathbf{b} and the non-negativity constraints \mathbf{x} \geq \mathbf{0}. Here, A is an m \times n constraint matrix with m < n, and \mathbf{b} \in \mathbb{R}^m is the right-hand-side vector. This equational form is typically obtained by converting general linear inequalities into equalities through the introduction of slack variables. For an inequality constraint \mathbf{a}_i^T \mathbf{x} \leq b_i (where \mathbf{a}_i^T is the i-th row of A), a non-negative slack variable s_i \geq 0 is added to yield the equality \mathbf{a}_i^T \mathbf{x} + s_i = b_i; the resulting augmented system maintains the standard form after incorporating all such transformations. For greater-than-or-equal inequalities, surplus variables are used analogously, often combined with artificial variables if needed for initial feasibility.[4] The formulation assumes that the m rows of A are linearly independent, implying that A has full row rank m; this ensures the constraints are non-redundant, since linearly dependent rows make the system either inconsistent or reducible by eliminating equations. If linear dependence exists among the rows, redundant constraints can be identified and removed to achieve this rank condition without altering the feasible region.[5][6] In this notation, m denotes the number of equality constraints, while n denotes the total number of variables, which include both the original decision variables and any added slack or surplus variables; these variables are categorized as basic variables or non-basic variables when applying solution procedures like the simplex method.[7]
Feasible solutions
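The slack-variable conversion described above can be sketched programmatically. This is a minimal illustration (the function name and the choice of exact rational arithmetic are assumptions for the example, not taken from the cited sources):

```python
from fractions import Fraction

def to_standard_form(A_ineq, b):
    """Convert A_ineq x <= b with x >= 0 into the equality form [A | I] x' = b
    by appending one nonnegative slack variable per inequality."""
    m = len(A_ineq)
    A_eq = [[Fraction(v) for v in row] +
            [Fraction(1) if j == i else Fraction(0) for j in range(m)]
            for i, row in enumerate(A_ineq)]
    return A_eq, [Fraction(v) for v in b]

# x + y <= 4 and 2x + y <= 5 become x + y + s1 = 4 and 2x + y + s2 = 5:
A_eq, b_eq = to_standard_form([[1, 1], [2, 1]], [4, 5])
```

Each inequality row gains an identity column, so the slack block of the augmented matrix is an m \times m identity, which also supplies an obvious starting basis when the right-hand side is nonnegative.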
In the standard form of a linear program, a feasible solution is a vector \mathbf{x} \in \mathbb{R}^n that satisfies the equality constraints A\mathbf{x} = \mathbf{b} and the non-negativity constraints \mathbf{x} \geq \mathbf{0}, where A is an m \times n matrix with m < n and \mathbf{b} \in \mathbb{R}^m.[1] This definition ensures that the solution adheres to all problem constraints without regard to the objective function value.[8] The collection of all feasible solutions constitutes the feasible region, which is a convex polyhedron given by P = \{ \mathbf{x} \in \mathbb{R}^n \mid A\mathbf{x} = \mathbf{b}, \mathbf{x} \geq \mathbf{0} \}.[9] When non-empty, this polyhedron is the solution space over which the linear program is solved. If the polyhedron is empty, the linear program is infeasible, meaning no solution satisfies all constraints simultaneously.[10] Additionally, the feasible region can be bounded, forming a polytope with finite extent, or unbounded, extending infinitely along certain rays while still satisfying the constraints.[10] Boundedness is equivalent to the absence of a nonzero recession direction \mathbf{d} \geq \mathbf{0} with A\mathbf{d} = \mathbf{0}, while emptiness arises when the constraints are contradictory, such as when \mathbf{b} lies outside the cone generated by the columns of A under non-negativity.[1] To illustrate feasibility checking for a small system, consider the two-variable equality x_1 + x_2 = 1 with x_1 \geq 0, x_2 \geq 0.
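Such pointwise checks can be carried out mechanically. The following minimal sketch (illustrative code, not from the cited sources) tests a candidate point against the equality and non-negativity constraints:

```python
from fractions import Fraction

def is_feasible(A, b, x):
    """Return True iff x satisfies Ax = b and x >= 0 (exact rational data)."""
    if any(xi < 0 for xi in x):
        return False
    return all(sum(aij * xj for aij, xj in zip(row, x)) == bi
               for row, bi in zip(A, b))

# The point (1/2, 1/2) satisfies x1 + x2 = 1 with x1, x2 >= 0:
ok = is_feasible([[1, 1]], [1], [Fraction(1, 2), Fraction(1, 2)])   # True
# No nonnegative point satisfies x1 + x2 = -1 (the left side is >= 0),
# so any candidate fails; note that checking one point does not by itself
# prove infeasibility, which here follows from the sign argument:
bad = is_feasible([[1, 1]], [-1], [Fraction(0), Fraction(0)])       # False
```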
Substituting values shows that points like (0.5, 0.5) satisfy both the equation and non-negativity, confirming a non-empty feasible region (the line segment from (1,0) to (0,1)), which is bounded.[11] In contrast, for the system x_1 + x_2 = -1 with x_1 \geq 0, x_2 \geq 0, the left side is at least 0 while the right side is negative, yielding no solutions and an empty feasible region, rendering the system infeasible.[12] Such checks for small systems can be performed algebraically or graphically, highlighting how constraint compatibility determines feasibility.[8]
Bases and basic solutions
In linear programming, a basis is defined as a selection of m linearly independent columns from the constraint matrix A \in \mathbb{R}^{m \times n}, where m \leq n and the rank of A is m. These columns form an invertible square matrix B \in \mathbb{R}^{m \times m}, known as the basis matrix, with the corresponding variables termed basic variables. The remaining n - m columns of A are associated with non-basic variables. This selection ensures that the basis spans the column space of A, providing a unique representation for solutions to the system A \mathbf{x} = \mathbf{b}.[13][14] A basic solution is obtained by setting the non-basic variables \mathbf{x}_N = \mathbf{0} and solving for the basic variables \mathbf{x}_B in the equation B \mathbf{x}_B = \mathbf{b}. The explicit solution is given by \mathbf{x}_B = B^{-1} \mathbf{b}, with the full vector \mathbf{x} = (\mathbf{x}_B, \mathbf{0}) representing a point in \mathbb{R}^n that satisfies the equality constraints. Each basis determines a unique basic solution; geometrically, basic solutions are the points where the affine subspace A \mathbf{x} = \mathbf{b} meets the n - m hyperplanes x_j = 0 of the non-basic variables. The basic components \mathbf{x}_B may be negative, in which case the solution is basic but not feasible.[13][14]
Basic feasible solutions
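The construction \mathbf{x}_B = B^{-1} \mathbf{b} can be sketched with exact arithmetic. In this illustrative Python sketch the helper names and the example matrix are assumptions chosen for demonstration, not from the cited sources:

```python
from fractions import Fraction

def solve_square(B, rhs):
    """Solve B x = rhs by Gauss-Jordan elimination with exact Fractions.
    Returns None if B is singular (the columns do not form a basis)."""
    n = len(B)
    M = [[Fraction(B[i][j]) for j in range(n)] + [Fraction(rhs[i])]
         for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * c for a, c in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def basic_solution(A, b, basis):
    """Set the non-basic variables to zero and solve B x_B = b for the
    variables indexed by `basis`; returns the full vector x, or None."""
    B = [[A[i][j] for j in basis] for i in range(len(A))]
    xB = solve_square(B, b)
    if xB is None:
        return None
    x = [Fraction(0)] * len(A[0])
    for j, v in zip(basis, xB):
        x[j] = v
    return x

# Example system A x = b with two equations and four variables:
A = [[1, 1, 1, 0], [2, 1, 0, 1]]
b = [4, 5]
x1 = basic_solution(A, b, [0, 1])   # basic and feasible: [1, 3, 0, 0]
x2 = basic_solution(A, b, [0, 3])   # basic but infeasible: [4, 0, 0, -3]
```

The second call illustrates the remark above: a basis always yields a unique basic solution, but that solution may have a negative basic component and thus fail feasibility.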
In linear programming, particularly for problems in standard form given by \min \{ c^T x \mid A x = b, x \geq 0 \} where A is an m \times n matrix with m < n and full row rank, a basic feasible solution arises from selecting a basis as described in the prior section on bases and basic solutions. Specifically, a basic feasible solution (BFS) is a basic solution where all variables are non-negative, ensuring it satisfies both the equality constraints and the non-negativity conditions.[2] Basic feasible solutions are equivalent to the extreme points, or vertices, of the feasible polyhedron \{ x \geq 0 \mid A x = b \}, meaning every BFS corresponds to a vertex of this convex set, and conversely, every vertex can be represented as a BFS for some basis.[15] This correspondence underscores their structural importance in the geometry of linear programs. For a fixed basis, the associated basic solution, and thus the BFS if feasible, is unique, provided the solution is non-degenerate, which occurs when all basic variables are strictly positive.[16] In degenerate cases, multiple bases may yield the same BFS, but non-degeneracy ensures a one-to-one mapping. BFSs serve as foundational starting points for optimization algorithms, such as the simplex method, which begins at a BFS and pivots to neighboring BFSs to improve the objective value until optimality is reached.[17] This approach leverages the finite number of BFSs to systematically explore the feasible region.
Properties
Key structural properties
In linear programming, a basic feasible solution (BFS) is intrinsically tied to the selection of a basis for the constraint matrix A \in \mathbb{R}^{m \times n} (with m \leq n and full row rank m), where the basis consists of m linearly independent columns. In the non-degenerate case, each BFS corresponds uniquely to such a basis, obtained by solving Bx = b for the basic variables (with B the basis submatrix) and setting nonbasic variables to zero, ensuring feasibility x \geq 0.[18] The total number of possible BFSs is thus bounded by the number of potential bases, which is at most \binom{n}{m}, reflecting the combinatorial choice of m columns from n. This finite count underscores the algebraic structure of linear programs, enabling algorithms like the simplex method to enumerate vertices exhaustively in principle.[18] A core structural property is that the feasible region P = \{x \geq 0 \mid Ax = b\} (assuming boundedness) is the convex hull of its BFSs. Consequently, any feasible solution x \in P can be expressed as a convex combination of BFSs: x = \sum_{i} \lambda_i x^i, where each x^i is a BFS, \lambda_i \geq 0, and \sum_i \lambda_i = 1. This representation, analogous to the Birkhoff-von Neumann theorem for doubly stochastic matrices, highlights how BFSs generate the entire polytope algebraically.[18]
Degeneracy and non-degeneracy
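The non-degeneracy caveat above can be made concrete: at a degenerate vertex, several bases produce one and the same BFS. The following sketch is purely illustrative (the system x + y \leq 1, x \leq 1 and all names are assumptions chosen for the demonstration); it enumerates all column pairs of a small standard-form system whose vertex (1, 0) has a zero basic variable:

```python
from itertools import combinations
from fractions import Fraction

# Standard form of {x + y <= 1, x <= 1, x, y >= 0} with slacks s1, s2:
#   x + y + s1 = 1
#   x      + s2 = 1
A = [[1, 1, 1, 0],
     [1, 0, 0, 1]]
b = [1, 1]

def basic_solution(cols):
    """Solve the 2x2 basis system by Cramer's rule; None if singular."""
    (a, c), (p, q) = [(A[0][j], A[1][j]) for j in cols]
    det = a * q - p * c
    if det == 0:
        return None
    xB = [Fraction(b[0] * q - p * b[1], det),
          Fraction(a * b[1] - c * b[0], det)]
    x = [Fraction(0)] * 4
    for j, v in zip(cols, xB):
        x[j] = v
    return x

# Three distinct bases all yield the degenerate BFS (x, y, s1, s2) = (1, 0, 0, 0):
target = [Fraction(1), Fraction(0), Fraction(0), Fraction(0)]
bases = [cols for cols in combinations(range(4), 2)
         if basic_solution(cols) == target]
```

Here three constraints (x + y \leq 1, x \leq 1, and y \geq 0) are tight at the vertex (1, 0) of a two-dimensional problem, so one basic variable is forced to zero and the basis-to-vertex map fails to be one-to-one.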
In linear programming, a basic feasible solution is degenerate if at least one basic variable equals zero, meaning that more than the required number of constraints hold with equality at that point.[19][20] This situation arises from redundant constraints or specific right-hand side values that cause a basic variable x_{B_j} to vanish in the solution.[21] A basic feasible solution is non-degenerate if all basic variables are strictly positive, ensuring that exactly m constraints (where m is the number of equations) are active and defining the solution uniquely. In this case, each vertex of the feasible region corresponds to a unique basis, providing a one-to-one mapping between bases and extreme points.[22] Degeneracy implies that a single feasible point may correspond to multiple bases, as additional constraints become tight without changing the solution values.[20] This can lead to cycling in the simplex method, where the algorithm revisits the same sequence of bases without improving the objective, potentially looping forever.[23][20] To resolve degeneracy, symbolic perturbation techniques add small positive values \epsilon_k (with \epsilon_1 > \epsilon_2 > \cdots > \epsilon_m > 0) to the right-hand side coefficients, ensuring all basic variables become positive while preserving the original optimal solution in the limit as \epsilon_k \to 0.[24] Alternatively, the lexicographic ordering method breaks ties in pivot selection by prioritizing ratios based on a lexicographic comparison of coefficient vectors, avoiding cycles without numerical perturbation.[24]
Examples
Two-dimensional example
To illustrate basic feasible solutions in a two-dimensional setting, consider the linear programming problem of minimizing 3x + 2y subject to x + y \leq 4, 2x + y \leq 5, and x \geq 0, y \geq 0.[25] This problem is converted to standard form by introducing nonnegative slack variables s_1 and s_2 to transform the inequalities into equalities: \begin{align} x + y + s_1 &= 4, \\ 2x + y + s_2 &= 5, \\ x, y, s_1, s_2 &\geq 0. \end{align} [25] Here, the system has two equations and four variables, so a basic solution is found by selecting two basic variables (corresponding to a nonsingular basis matrix) and setting the remaining two nonbasic variables to zero, then solving for the basic variables. The solution is feasible if all variables are nonnegative.[26] The feasible bases yield the following basic feasible solutions (BFS), which correspond to the vertices of the feasible region:
- Basis \{s_1, s_2\} (nonbasic: x = 0, y = 0): s_1 = 4, s_2 = 5, giving (x, y) = (0, 0). All components are nonnegative, confirming feasibility.[25]
- Basis \{y, s_2\} (nonbasic: x = 0, s_1 = 0): Solving yields y = 4, s_2 = 1, giving (x, y) = (0, 4). All components are nonnegative (s_2 = 5 - 0 - 4 = 1 \geq 0), confirming feasibility.[25]
- Basis \{x, s_1\} (nonbasic: y = 0, s_2 = 0): Solving yields x = 2.5, s_1 = 1.5, giving (x, y) = (2.5, 0). All components are nonnegative (s_1 = 4 - 2.5 - 0 = 1.5 \geq 0), confirming feasibility.[25]
- Basis \{x, y\} (nonbasic: s_1 = 0, s_2 = 0): Solving the system x + y = 4, 2x + y = 5 gives x = 1, y = 3, yielding (x, y) = (1, 3). All components are nonnegative, confirming feasibility.[25]
The remaining two bases, \{x, s_2\} and \{y, s_1\}, produce basic solutions with a negative component (s_2 = -3 and s_1 = -1, respectively) and are therefore basic but not feasible.
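The enumeration above can be verified programmatically. This sketch (illustrative code; the helper names are assumptions) tries all \binom{4}{2} = 6 column pairs of the standard-form system and keeps those whose basic solution is feasible:

```python
from itertools import combinations
from fractions import Fraction

# Standard-form data from the example: x + y + s1 = 4, 2x + y + s2 = 5
A = [[1, 1, 1, 0],
     [2, 1, 0, 1]]
b = [4, 5]

def basic_solution(cols):
    """Solve the 2x2 basis system by Cramer's rule; None if singular."""
    (a, c), (p, q) = [(A[0][j], A[1][j]) for j in cols]
    det = a * q - p * c
    if det == 0:
        return None
    xB = [Fraction(b[0] * q - p * b[1], det),
          Fraction(a * b[1] - c * b[0], det)]
    x = [Fraction(0)] * 4
    for j, v in zip(cols, xB):
        x[j] = v
    return x

solutions = {cols: basic_solution(cols) for cols in combinations(range(4), 2)}
# Of the 6 candidate bases, exactly 4 give basic feasible solutions
# (all components nonnegative); {x, s2} and {y, s1} are basic but infeasible.
feasible = {cols: x for cols, x in solutions.items()
            if x is not None and all(v >= 0 for v in x)}
```

The four entries of `feasible` reproduce the vertices (0, 0), (0, 4), (2.5, 0), and (1, 3) listed above, while the two excluded bases correspond to intersection points of the constraint lines that lie outside the feasible region.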