
Basic feasible solution

In linear programming, a basic feasible solution (BFS) is a feasible solution to the constraints Ax = b, x \geq 0 (where A is an m \times n matrix with m < n) that is obtained by selecting a basis of m linearly independent columns of A, setting the corresponding n-m nonbasic variables to zero, and solving for the basic variables such that x_B = A_B^{-1}b \geq 0. This solution corresponds to a vertex (or extreme point) of the feasible region polyhedron, meaning it cannot be expressed as a convex combination of two distinct other feasible solutions.

Basic feasible solutions play a central role in solving linear programs, particularly through the simplex algorithm, which starts from an initial BFS and iteratively pivots to adjacent vertices—other BFSs—to improve the objective function value until optimality is reached or infeasibility/unboundedness is detected. A fundamental result in linear programming states that if a linear program has an optimal solution, then there exists an optimal basic feasible solution, ensuring that the simplex method will find an optimum at one of the polyhedron's vertices without needing to evaluate interior points. This property holds for both minimization and maximization problems, provided the feasible region is nonempty and the objective is bounded over it.

To initiate the simplex method, finding an initial BFS is often necessary; this can be done directly if the problem is in canonical form with nonnegative right-hand sides (e.g., using slack variables), or via Phase I of the two-phase simplex method, which introduces artificial variables to construct a starting BFS and then eliminates them. BFSs are thus not only computationally efficient cornerstones of optimization algorithms but also embody the combinatorial structure of linear programs, linking continuous optimization to discrete vertex enumeration.

Fundamentals

Linear programming preliminaries

A linear program in standard form is formulated as the minimization of a linear objective function \mathbf{c}^T \mathbf{x}, where \mathbf{c} \in \mathbb{R}^n is the cost vector and \mathbf{x} \in \mathbb{R}^n is the vector of decision variables, subject to the linear equality constraints A \mathbf{x} = \mathbf{b} and the non-negativity constraints \mathbf{x} \geq \mathbf{0}. Here, A is an m \times n constraint matrix with m < n, and \mathbf{b} \in \mathbb{R}^m is the right-hand-side vector.

This equational form is typically obtained by converting general linear inequalities into equalities through the introduction of slack variables. For an inequality constraint \mathbf{a}_i^T \mathbf{x} \leq b_i (where \mathbf{a}_i^T is the i-th row of A), a non-negative slack variable s_i \geq 0 is added to yield the equality \mathbf{a}_i^T \mathbf{x} + s_i = b_i; the resulting augmented system maintains the standard form after incorporating all such transformations. For greater-than-or-equal inequalities, surplus variables are used analogously, often combined with artificial variables if needed for initial feasibility.

The formulation assumes that the m rows of A are linearly independent, implying that A has full row rank m; this ensures the constraints are non-redundant and avoids either infeasible systems or eliminable equations. If linear dependence exists among the rows, redundant constraints can be identified and removed to achieve this rank condition without altering the feasible region. In this notation, m denotes the number of equality constraints, while n denotes the total number of variables, which include both the original decision variables and any added slack or surplus variables; these variables are categorized as basic variables or non-basic variables when applying solution procedures like the simplex method.
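The slack-variable conversion is mechanical enough to automate. The following is a minimal sketch in Python with NumPy; the function name `to_standard_form` is illustrative, and the sample coefficients are borrowed from the two-dimensional example later in the article:

```python
import numpy as np

def to_standard_form(A_ub, b_ub, c):
    """Convert min c^T x s.t. A_ub x <= b_ub, x >= 0 into equality
    standard form by appending one slack variable per inequality row."""
    m, n = A_ub.shape
    A = np.hstack([A_ub, np.eye(m)])          # slack columns form an identity block
    c_std = np.concatenate([c, np.zeros(m)])  # slack variables carry zero cost
    return A, b_ub.copy(), c_std

# Example: x + y <= 4 and 2x + y <= 5 become equalities with slacks s1, s2.
A, b, c = to_standard_form(np.array([[1.0, 1.0], [2.0, 1.0]]),
                           np.array([4.0, 5.0]),
                           np.array([3.0, 2.0]))
print(A)  # [[1. 1. 1. 0.]
          #  [2. 1. 0. 1.]]
```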

Feasible solutions

In the standard form of a linear program, a feasible solution is a vector \mathbf{x} \in \mathbb{R}^n that satisfies the equality constraints A\mathbf{x} = \mathbf{b} and the non-negativity constraints \mathbf{x} \geq \mathbf{0}, where A is an m \times n matrix with m < n and \mathbf{b} \in \mathbb{R}^m. This definition ensures that the solution adheres to all problem constraints without regard to the objective function value. The collection of all feasible solutions constitutes the feasible region, which is a convex polyhedron given by P = \{ \mathbf{x} \in \mathbb{R}^n \mid A\mathbf{x} = \mathbf{b}, \mathbf{x} \geq \mathbf{0} \}. This polyhedron is the solution space over which the linear program is solved, assuming it is non-empty.

If the polyhedron is empty, the linear program is infeasible, meaning no solution satisfies all constraints simultaneously. The feasible region can be bounded, forming a polytope of finite extent, or unbounded, extending infinitely along certain rays while still satisfying the constraints. Boundedness is equivalent to the absence of recession directions, that is, nonzero vectors \mathbf{d} \geq \mathbf{0} with A\mathbf{d} = \mathbf{0}; emptiness arises when the constraints are contradictory, such as when \mathbf{b} lies outside the cone generated by the columns of A under non-negativity.

To illustrate feasibility checking for a small system, consider the two-variable equality x_1 + x_2 = 1 with x_1 \geq 0, x_2 \geq 0. Substituting values shows that points like (0.5, 0.5) satisfy both the equation and non-negativity, confirming a non-empty feasible region (the line segment from (1,0) to (0,1)), which is bounded. In contrast, for the system x_1 + x_2 = -1 with x_1 \geq 0, x_2 \geq 0, the left side is at least 0 while the right side is negative, yielding no solutions and an empty feasible region, rendering the system infeasible. Such checks for small systems can be performed algebraically or graphically, highlighting how constraint compatibility determines feasibility.
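Such feasibility checks are straightforward to mechanize. A minimal sketch, assuming NumPy; the helper name `is_feasible` and the tolerance are illustrative choices:

```python
import numpy as np

def is_feasible(A, b, x, tol=1e-9):
    """Check whether x satisfies Ax = b and x >= 0 up to a tolerance."""
    x = np.asarray(x, dtype=float)
    return (np.allclose(A @ x, b, atol=tol)   # equality constraints
            and np.all(x >= -tol))            # non-negativity

A = np.array([[1.0, 1.0]])                    # the constraint x1 + x2 = 1
b = np.array([1.0])
print(is_feasible(A, b, [0.5, 0.5]))   # True: lies on the feasible segment
print(is_feasible(A, b, [1.5, -0.5]))  # False: violates x2 >= 0
```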

Bases and basic solutions

In linear programming, a basis is defined as a selection of m linearly independent columns from the constraint matrix A \in \mathbb{R}^{m \times n}, where m \leq n and the rank of A is m. These columns form an invertible square matrix B \in \mathbb{R}^{m \times m}, known as the basis matrix, with the corresponding variables termed basic variables. The remaining n - m columns of A are associated with non-basic variables. This selection ensures that the basis spans the column space of A, providing a unique representation for solutions to the system A \mathbf{x} = \mathbf{b}.

A basic solution is obtained by setting the non-basic variables \mathbf{x}_N = \mathbf{0} and solving for the basic variables \mathbf{x}_B in the equation B \mathbf{x}_B = \mathbf{b}. The explicit solution is given by \mathbf{x}_B = B^{-1} \mathbf{b}, with the full vector \mathbf{x} = (\mathbf{x}_B, \mathbf{0}) representing a point in \mathbb{R}^n that satisfies the equality constraints. Each basis determines a unique basic solution, which corresponds to an intersection point of the constraint hyperplanes (the equalities A \mathbf{x} = \mathbf{b} together with the n - m hyperplanes x_j = 0 for the non-basic variables); such a point may include negative components in \mathbf{x}_B and hence need not be feasible.
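A minimal sketch of this construction, assuming NumPy; the helper `basic_solution` is hypothetical, and the sample data reuse the constraints of the two-dimensional example below, where the basis \{y, s_1\} yields a basic but infeasible solution:

```python
import numpy as np

def basic_solution(A, b, basis):
    """Set non-basic variables to zero and solve B x_B = b for the
    columns indexed by `basis`.  The result satisfies Ax = b but may
    contain negative entries (basic, yet not feasible)."""
    m, n = A.shape
    B = A[:, list(basis)]
    if np.linalg.matrix_rank(B) < m:
        raise ValueError("chosen columns are linearly dependent")
    x = np.zeros(n)
    x[list(basis)] = np.linalg.solve(B, b)  # x_B = B^{-1} b
    return x

A = np.array([[1.0, 1.0, 1.0, 0.0],   # columns: x, y, s1, s2
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
print(basic_solution(A, b, [2, 3]))  # slack basis: [0. 0. 4. 5.] (feasible)
print(basic_solution(A, b, [1, 2]))  # basis {y, s1}: [0. 5. -1. 0.] (infeasible)
```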

Basic feasible solutions

In linear programming, particularly for problems in standard form given by \min \{ c^T x \mid A x = b, x \geq 0 \} where A is an m \times n matrix with m < n and full row rank, a basic feasible solution arises from selecting a basis as described in the prior section on bases and basic solutions. Specifically, a basic feasible solution (BFS) is a basic solution in which all variables are non-negative, so that it satisfies both the equality constraints and the non-negativity conditions.

Basic feasible solutions are equivalent to the extreme points, or vertices, of the feasible polyhedron \{ x \geq 0 \mid A x = b \}: every BFS is a vertex of this convex set, and conversely, every vertex can be represented as a BFS for some basis. This correspondence underscores their structural importance in the geometry of linear programs.

For a fixed basis, the associated basic solution—and thus the BFS, if feasible—is uniquely determined. Conversely, a non-degenerate BFS, one whose basic variables are all strictly positive, determines its basis uniquely; in degenerate cases, multiple bases may yield the same BFS, while non-degeneracy ensures a one-to-one mapping.

BFSs serve as foundational starting points for optimization algorithms, such as the simplex method, which begins at a BFS and pivots to neighboring BFSs to improve the objective value until optimality is reached. This approach leverages the finite number of BFSs to systematically explore the feasible region.
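The vertex correspondence gives a direct computational test: a feasible x is a BFS exactly when the columns of A indexed by its positive entries are linearly independent. A minimal sketch, assuming NumPy (the helper name `is_bfs` is illustrative):

```python
import numpy as np

def is_bfs(A, b, x, tol=1e-9):
    """A feasible x is a BFS iff the columns of A on its support
    (indices with x_j > 0) are linearly independent."""
    x = np.asarray(x, dtype=float)
    feasible = np.allclose(A @ x, b) and np.all(x >= -tol)
    support = np.where(x > tol)[0]
    independent = np.linalg.matrix_rank(A[:, support]) == len(support)
    return feasible and independent

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
print(is_bfs(A, b, [1.0, 3.0, 0.0, 0.0]))  # True: the vertex (1, 3)
print(is_bfs(A, b, [0.5, 1.5, 2.0, 2.5]))  # False: an interior feasible point
```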

Properties

Key structural properties

In linear programming, a basic feasible solution (BFS) is intrinsically tied to the selection of a basis for the constraint matrix A \in \mathbb{R}^{m \times n} (with m \leq n and full row rank m), where the basis consists of m linearly independent columns. In the non-degenerate case, each BFS corresponds uniquely to such a basis, obtained by solving B \mathbf{x}_B = b for the basic variables (with B the basis submatrix) and setting nonbasic variables to zero, subject to feasibility x \geq 0.

The total number of possible BFS is thus bounded by the number of potential bases, which is at most \binom{n}{m}, reflecting the combinatorial choice of m columns from n. This finite count underscores the algebraic structure of linear programs, enabling algorithms like the simplex method to enumerate vertices exhaustively in principle.

A core structural property is that the feasible region P = \{x \geq 0 \mid Ax = b\} (assuming boundedness) is the convex hull of its BFS. Consequently, any feasible solution x \in P can be expressed as a convex combination of BFS: x = \sum_{i} \lambda_i x^i, where each x^i is a BFS, \lambda_i \geq 0, and \sum_i \lambda_i = 1. This representation, analogous to the Birkhoff–von Neumann decomposition for transportation-type polytopes but valid for general linear programs, highlights how BFS generate the entire polytope algebraically.
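Because every BFS arises from one of the at most \binom{n}{m} column subsets, small instances can be enumerated exhaustively. A minimal sketch, assuming NumPy; it also spot-checks the convex-hull property by averaging the vertices it finds:

```python
import numpy as np
from itertools import combinations
from math import comb

def enumerate_bfs(A, b, tol=1e-9):
    """Try all C(n, m) column subsets; each nonsingular subset whose
    basic solution is non-negative yields a BFS (under degeneracy the
    same point can recur for different bases)."""
    m, n = A.shape
    bfs = []
    for basis in combinations(range(n), m):
        B = A[:, list(basis)]
        if abs(np.linalg.det(B)) < tol:
            continue                      # singular subset: not a basis
        x = np.zeros(n)
        x[list(basis)] = np.linalg.solve(B, b)
        if np.all(x >= -tol):
            bfs.append(x)
    return bfs

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
vertices = enumerate_bfs(A, b)
print(f"{len(vertices)} BFS out of at most {comb(4, 2)} bases")  # 4 of 6

# Any convex combination of BFS stays feasible (the convex-hull property).
x = sum(v for v in vertices) / len(vertices)
print(np.allclose(A @ x, b), np.all(x >= 0))  # True True
```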

Degeneracy and non-degeneracy

In linear programming, a basic feasible solution is degenerate if at least one basic variable equals zero, meaning that more than the required number of constraints hold with equality at that point. This situation arises from redundant constraints or specific right-hand side values that cause a basic variable x_{B_j} to vanish in the solution. A basic feasible solution is non-degenerate if all basic variables are strictly positive, ensuring that exactly n - m non-negativity constraints are active alongside the m equality constraints and defining the solution uniquely; in this case, each vertex of the feasible region corresponds to a unique basis, providing a one-to-one mapping between bases and extreme points.

Degeneracy implies that a single feasible point may correspond to multiple bases, as additional constraints become tight without changing the solution values. This can lead to cycling in the simplex method, where the algorithm repeatedly revisits the same set of bases without improving the objective, potentially causing infinite loops. To resolve degeneracy, symbolic perturbation techniques add small positive values \epsilon_k (with \epsilon_1 > \epsilon_2 > \cdots > \epsilon_m > 0) to the right-hand side vector, ensuring all basic variables become positive while preserving the original optimal solution in the limit as \epsilon_k \to 0. Alternatively, the lexicographic ordering method breaks ties in pivot selection by prioritizing ratios based on a lexicographic comparison of vectors, avoiding cycles without explicit numerical perturbation.
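A small numerical illustration of how two distinct bases can collapse onto one degenerate point, assuming NumPy; the constraint data are contrived so that both constraints are tight at the same vertex:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],   #  x1 +  x2 + s1      = 4
              [2.0, 2.0, 0.0, 1.0]])  # 2x1 + 2x2      + s2 = 8
b = np.array([4.0, 8.0])

def solve_basis(basis):
    x = np.zeros(4)
    x[basis] = np.linalg.solve(A[:, basis], b)
    return x

# Two different bases produce the identical degenerate point (4, 0, 0, 0):
print(solve_basis([0, 2]))  # basis {x1, s1}: s1 is basic but equals 0
print(solve_basis([0, 3]))  # basis {x1, s2}: s2 is basic but equals 0
```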

Examples

Two-dimensional example

To illustrate basic feasible solutions in a two-dimensional setting, consider the problem of minimizing 3x + 2y subject to x + y \leq 4, 2x + y \leq 5, and x \geq 0, y \geq 0. This problem is converted to standard form by introducing nonnegative slack variables s_1 and s_2 to transform the inequalities into equalities:
\begin{align} x + y + s_1 &= 4, \\ 2x + y + s_2 &= 5, \\ x, y, s_1, s_2 &\geq 0. \end{align}
Here, the system has two equations and four variables, so a basic solution is found by selecting two basic variables (corresponding to a nonsingular basis submatrix) and setting the remaining two nonbasic variables to zero, then solving for the basic variables. The solution is feasible if all variables are nonnegative. The feasible bases yield the following basic feasible solutions (BFS), which correspond to the vertices of the feasible region:
  • Basis \{s_1, s_2\} (nonbasic: x = 0, y = 0): s_1 = 4, s_2 = 5, giving (x, y) = (0, 0). All components are nonnegative, confirming feasibility.
  • Basis \{y, s_2\} (nonbasic: x = 0, s_1 = 0): Solving yields y = 4, s_2 = 1, giving (x, y) = (0, 4). All components are nonnegative (s_2 = 5 - 0 - 4 = 1 \geq 0), confirming feasibility.
  • Basis \{x, s_1\} (nonbasic: y = 0, s_2 = 0): Solving yields x = 2.5, s_1 = 1.5, giving (x, y) = (2.5, 0). All components are nonnegative (s_1 = 4 - 2.5 - 0 = 1.5 \geq 0), confirming feasibility.
  • Basis \{x, y\} (nonbasic: s_1 = 0, s_2 = 0): Solving the system x + y = 4, 2x + y = 5 gives x = 1, y = 3, yielding (x, y) = (1, 3). All components are nonnegative, confirming feasibility.
Other potential bases, such as \{s_1, y\} with x = 0, s_2 = 0, lead to negative values (e.g., y = 5, s_1 = -1 < 0), rendering them infeasible. These four BFS fully characterize the extreme points of the feasible region for this problem.
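These calculations can be reproduced mechanically. A minimal sketch, assuming NumPy, that re-derives each basis listed above together with the infeasible \{s_1, y\} case:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],    # columns: x, y, s1, s2
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
names = ["x", "y", "s1", "s2"]

# The four feasible bases from the list above, then the infeasible {y, s1}.
for basis in [(2, 3), (1, 3), (0, 2), (0, 1), (1, 2)]:
    x = np.zeros(4)
    x[list(basis)] = np.linalg.solve(A[:, list(basis)], b)
    tag = "feasible" if np.all(x >= 0) else "infeasible"
    print([names[j] for j in basis], np.round(x, 3), tag)
```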

Network flow example

In network flow problems, particularly balanced transportation problems, basic feasible solutions (BFS) represent initial flow allocations that satisfy supply and demand constraints without cycles in the underlying bipartite graph structure. These problems model the shipment of goods from multiple sources to multiple destinations, formulated as linear programs where the constraint matrix A is the incidence matrix of the bipartite network, with rows corresponding to supply and demand nodes and columns to possible shipment routes (arcs). A BFS corresponds to a spanning tree of the network graph with m + n - 1 basic arcs for m sources and n destinations, ensuring the flows are feasible and the basis is of full rank (though degeneracy may occur if some basic flows are zero).

Consider a simple balanced transportation problem with two sources (factories A and B) and two destinations (warehouses X and Y). Factory A has a supply of 20 units, factory B has 30 units, warehouse X demands 25 units, and warehouse Y demands 25 units, ensuring total supply equals total demand at 50 units. The flow constraints are
x_{AX} + x_{AY} = 20, \quad x_{BX} + x_{BY} = 30, \quad x_{AX} + x_{BX} = 25, \quad x_{AY} + x_{BY} = 25,
with all x_{ij} \geq 0. The incidence matrix A for these equality constraints (one equation is redundant because supply balances demand) has full row rank 3 after the redundancy is removed, with one column per x_{ij}, capturing the network's node-arc relationships.

An initial BFS can be obtained using the North-West corner rule, which systematically allocates flows starting from the top-left cell of the transportation tableau and proceeds rightward and downward, exhausting a supply or a demand at each step. Applying the rule: allocate 20 units to x_{AX} (the minimum of A's supply 20 and X's demand 25), exhausting A's supply and leaving 5 units of demand at X. Next, allocate 5 units to x_{BX} (the minimum of B's supply 30 and X's remaining demand 5), exhausting X's demand and leaving 25 units at B. Finally, allocate 25 units to x_{BY} (the minimum of B's remaining supply 25 and Y's demand 25), exhausting both. This yields the BFS with flows x_{AX} = 20, x_{BX} = 5, x_{AY} = 0, x_{BY} = 25.

This solution is basic because it uses exactly three basic variables (x_{AX}, x_{BX}, x_{BY}), matching the rank of the constraint matrix, and feasible because all constraints are satisfied with non-negative flows. Graphically, the basic arcs A–X, B–X, and B–Y form a spanning tree connecting all four nodes without creating a cycle. Such tree-structured bases are pivotal in network simplex methods for efficiently exploring the feasible region.
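A minimal sketch of the North-West corner rule, assuming NumPy; the function name is illustrative, and ties between supply and demand are broken by moving down, which in general can leave a degenerate (zero) basic flow implicit:

```python
import numpy as np

def northwest_corner(supply, demand):
    """North-West corner rule: allocate greedily from the top-left cell,
    exhausting one row supply or column demand at each step.  The nonzero
    entries of the returned matrix are the basic flows."""
    supply, demand = list(supply), list(demand)
    X = np.zeros((len(supply), len(demand)))
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        X[i, j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:   # row exhausted: move down (also on ties)
            i += 1
        else:                # column exhausted: move right
            j += 1
    return X

print(northwest_corner([20, 30], [25, 25]))
# [[20.  0.]
#  [ 5. 25.]]
```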

Geometric Interpretation

Feasible region vertices

In linear programming, the feasible region defined by a problem in standard form—minimizing \mathbf{c}^\top \mathbf{x} subject to A\mathbf{x} = \mathbf{b}, \mathbf{x} \geq \mathbf{0}, where A is an m \times n matrix with m < n—forms a polyhedron in \mathbb{R}^n, potentially bounded or unbounded, consisting of all points satisfying the equality and non-negativity constraints. When bounded, this polyhedron is the convex hull of its extreme points, known as vertices, the "corner" points where boundary facets intersect in such a way that the point cannot be expressed as a nontrivial convex combination of other feasible points.

A fundamental result in linear programming establishes that the basic feasible solutions (BFS) of the polyhedron are precisely its vertices or extreme points. Specifically, a feasible solution \mathbf{x} is a BFS if and only if it corresponds to a basis of m linearly independent columns of A with the remaining n - m variables set to zero, so that \mathbf{x} lies at a vertex where at least n - m non-negativity constraints are binding alongside the m equalities. This equivalence holds because any vertex solution must satisfy at least n tight constraints (the m equalities plus at least n - m non-negativities at zero) to prevent movement in any direction while remaining feasible, aligning directly with the definition of a BFS.

To illustrate in two dimensions (n = 2, say m = 1), consider a simple linear program where the feasible region is bounded by the line a_1 x_1 + a_2 x_2 = b and the axes x_1 \geq 0, x_2 \geq 0. The vertices occur at the intersections of this line with the axes—for instance, (b/a_1, 0) if a_1 > 0 and b/a_1 \geq 0, or (0, b/a_2) if a_2 > 0 and b/a_2 \geq 0—each representing a BFS where one variable is zero and the other solves the equality constraint, corresponding to a binding non-negativity constraint alongside the equality. These points mark the endpoints of the feasible segment, emphasizing how BFS capture the extremal structure essential for optimization algorithms like the simplex method.

The proof of the BFS-vertex equivalence proceeds by showing that a BFS cannot be a nontrivial convex combination of distinct feasible points. Suppose \mathbf{x} is a BFS with basic variables \mathbf{x}_B solving B \mathbf{x}_B = \mathbf{b} and nonbasic variables zero, where B is invertible. If \mathbf{x} = \lambda \mathbf{y} + (1-\lambda) \mathbf{z} for 0 < \lambda < 1 with \mathbf{y} \neq \mathbf{z} in the polyhedron, then each nonbasic component of \mathbf{x} is zero and is a convex combination of the corresponding non-negative components of \mathbf{y} and \mathbf{z}, forcing those components of both to vanish; the equality constraints then give B \mathbf{y}_B = B \mathbf{z}_B = \mathbf{b}, implying \mathbf{y}_B = \mathbf{z}_B = \mathbf{x}_B and hence \mathbf{y} = \mathbf{z}, a contradiction. Thus, \mathbf{x} is extreme. The converse—that every vertex is a BFS—follows from the fact that any extreme point must satisfy at least n tight constraints, from which a basis of m linearly independent columns can be selected.

Higher-dimensional aspects

In higher dimensions, the feasible region of a linear program defined by Ax = b, x \geq 0 with A an m \times n matrix (typically m < n) forms a polyhedron embedded in \mathbb{R}^n, whose dimension is at most n - m (assuming A has full row rank m), arising as the intersection of the affine subspace defined by the equalities with the non-negative orthant. This polyhedron generalizes the bounded feasible sets encountered in lower dimensions, with its boundary consisting of faces of varying dimensions determined by the active constraints.

Basic feasible solutions (BFS) represent the 0-dimensional faces, or vertices, of this polyhedron, where n linearly independent constraints are tight, ensuring the solution is extreme and cannot be expressed as a nontrivial convex combination of other feasible points. Adjacent BFS are linked by 1-dimensional faces, or edges, which connect vertices differing by a single basis change—the entry of one variable into the basis and the exit of another—thereby forming the 1-skeleton graph of the polyhedron that underlies algorithms traversing the feasible region. Higher-dimensional faces, from 2-faces up to the facets of the polyhedron, encapsulate subsets where progressively fewer constraints are active, providing a hierarchical structure to the feasible set.

The number of BFS, equivalent to the number of vertices, directly influences the polyhedron's combinatorial complexity; for a polyhedron with m facets in n dimensions, this number can reach \Omega(n^{m/2}) in the worst case, reflecting the exponential growth that challenges enumeration-based methods in linear programming. This vertex count underscores the structural intricacy beyond low dimensions, where exhaustive listing becomes infeasible for large-scale problems.

Visualizing these high-dimensional polyhedra is inherently difficult due to perceptual limits in three or fewer dimensions, necessitating techniques like orthogonal projections onto 2D or 3D subspaces, cross-sections through hyperplanes, or radial projections to reveal vertex distributions, edge connectivities, and overall convexity without distorting key geometric properties. Such methods aid in understanding the polyhedron's topology and the distribution of vertices, though they may obscure interior structures or higher-face relations.

Dual Aspects

Primal-dual correspondence

In linear programming, the standard form of the primal problem is to minimize c^T x subject to Ax = b and x \geq 0, where A is an m \times n matrix, b \in \mathbb{R}^m, c \in \mathbb{R}^n, and x \in \mathbb{R}^n. The corresponding dual problem is to maximize b^T y subject to A^T y \leq c, where y \in \mathbb{R}^m is unrestricted in sign. The dual transposes the roles of variables and constraints while reversing the optimization direction.

The strong duality theorem asserts that if the primal problem has a finite optimal value, then the dual also attains the same optimal value; if either problem is unbounded, the other is infeasible. This equality ensures there is no duality gap in linear programs whose optima exist.

Complementary slackness provides the precise relationship between optimal primal and dual solutions: for optimal x^* and y^*, x_j^* (c_j - a_j^T y^*) = 0 for each j = 1, \dots, n, where a_j denotes the j-th column of A. This condition implies that if a variable x_j^* > 0 (a basic variable in a non-degenerate basic feasible solution), the corresponding dual constraint is tight (a_j^T y^* = c_j); conversely, if the dual slack satisfies c_j - a_j^T y^* > 0, then the primal variable x_j^* = 0. Thus, complementary slackness links positive primal variables to tight dual constraints and positive dual slacks to primal variables at zero, with optimality holding exactly when these products vanish.

The basis of a basic feasible solution in the primal corresponds to the dual solution through the transpose relation: given a primal basis matrix B (selected m columns of A), the optimal dual variables satisfy y^* = B^{-T} c_B, where c_B are the corresponding components of c. This computation uses the inverse of the transposed basis to solve the system of tight dual constraints associated with the primal basic variables, highlighting the structural symmetry via A^T.
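These relations can be checked numerically from a basis alone. A minimal sketch, assuming NumPy, on a small standard-form instance (a minimization variant of the example used later for primal-dual optimal bases; the data are otherwise arbitrary):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],    # columns: x1, x2, s1, s2
              [2.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 12.0])
c = np.array([-3.0, -5.0, 0.0, 0.0])   # min c^T x

basis = [1, 3]                          # columns of x2 and s2
B = A[:, basis]
x = np.zeros(4)
x[basis] = np.linalg.solve(B, b)        # primal basic solution
y = np.linalg.solve(B.T, c[basis])      # dual solution y = B^{-T} c_B

slack = c - A.T @ y                     # dual slacks c_j - a_j^T y
print(x, y)                             # x = [0 4 0 0],  y = [-5 0]
print(np.all(slack >= -1e-9))           # dual feasible: A^T y <= c
print(np.allclose(x * slack, 0))        # complementary slackness holds
print(np.isclose(c @ x, b @ y))         # objectives coincide (both -20)
```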

Basic feasible solutions in the dual

In the dual problem, which here takes the symmetric form of minimizing b^T y subject to A^T y \geq c and y \geq 0, where the primal is maximizing c^T x subject to A x \leq b and x \geq 0 with A an m \times n matrix, a basic feasible solution (BFS) is characterized analogously to the primal case as a vertex of the feasible region. Specifically, a BFS y is a nonnegative vector satisfying the constraints such that exactly m of the inequalities (from the n constraints A^T y \geq c and the m nonnegativity constraints y \geq 0) hold with equality, and the corresponding constraint vectors are linearly independent. This ensures y lies at an extreme point of the dual polyhedron, providing a foundational structure for algorithms like the dual simplex method.

The basis underlying such a BFS consists of a selection of m rows of the matrix A^T (which has n rows, one per primal variable), corresponding to the tight inequalities (A^T y)_j = c_j for the selected indices j, supplemented by any necessary tight nonnegativity constraints to reach exactly m equalities. In the nondegenerate case where all components of y are positive, the basis comprises precisely these m selected rows of A^T, ensuring the solution is uniquely determined by solving the resulting square system. This selection mirrors the primal basis, where columns of A are chosen, but transposed to reflect the dual structure.

A key relation between primal and dual BFS arises through the associated basic solutions defined by a common basis. For a primal feasible basis B (an m \times m invertible submatrix of A), the corresponding dual basic solution is y = B^{-T} c_B, which satisfies equality in the dual constraints associated with the primal basic variables (i.e., y^T A_B = c_B^T). This y forms a dual BFS if it is nonnegative and the remaining dual constraints hold (i.e., y^T A_N \geq c_N^T, where N indexes nonbasic primal variables), a condition equivalent to nonpositive reduced costs in the primal simplex tableau. Thus, a primal feasible basis yields a dual feasible solution precisely when these complementary conditions on the reduced costs are met, ensuring dual feasibility without violating the inequality directions.

For illustration, consider a primal BFS whose basic variables correspond to a specific set of m indices; the associated y will be dual feasible under the condition that the reduced costs of the primal nonbasic variables are nonpositive, linking dual feasibility directly to the primal polyhedron's boundary structure. In such cases, the tight dual constraints align with the support of the primal basic variables via the basis inverse, demonstrating how feasibility propagates to dual BFS when the tableau satisfies the requisite sign conditions.

Optimization Methods

Simplex algorithm for BFS

The simplex algorithm, developed by George Dantzig in 1947, solves linear programming problems by iteratively improving an initial basic feasible solution (BFS) until optimality is reached. It begins with a BFS, which corresponds to a vertex of the polyhedron defined by the linear constraints, and proceeds by selecting a non-basic variable to enter the basis (the entering variable) and a basic variable to leave the basis (the leaving variable), thereby pivoting to an adjacent BFS. This pivot operation updates the basis while maintaining feasibility, and in non-degenerate cases the objective function value strictly improves (increasing, for maximization problems) at each step until no further improvement is possible.

If an obvious initial BFS is not available (for instance, when equality or greater-than-or-equal constraints leave no feasible all-slack starting basis), Phase I of the two-phase method is used to find one. In this approach, an auxiliary linear program is constructed by introducing artificial variables for each constraint and minimizing their sum, starting from an artificial BFS where these variables are basic and non-negative. The simplex iterations are applied to this auxiliary problem until the artificial variables are driven to zero, yielding a feasible BFS for the original problem if the minimum is zero; otherwise, the original problem is infeasible. This phase ensures the algorithm can handle general constraints without relying on slack variables alone.

Pivoting rules dictate the selection of entering and leaving variables to guide the algorithm efficiently. The entering variable is typically chosen as the one with the most negative reduced cost in the tableau (for maximization), promising the largest objective improvement per unit increase, while the leaving variable is selected to maintain feasibility by taking the minimum ratio of basic variable values to the entering variable's column entries. In degenerate cases, where some ratios are zero, cycling—repeating bases without progress—can occur, but Bland's rule prevents this by always selecting the lowest-index eligible variable for both the entering and leaving positions. Introduced in 1977, this smallest-index rule guarantees finite termination without cycling, though it may increase iteration counts compared to other rules in practice.

The algorithm terminates when no entering variable exists, meaning no reduced cost signals an improving direction, at which point the current BFS is optimal. This optimality condition holds because the simplex method moves between vertices along improving edges, and the polyhedral structure of the feasible region ensures that any better solution would require violating feasibility or the basis structure. In finitely many steps, bounded by the number of possible bases, it converges to a global optimum for bounded problems.
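For exposition, the following is a minimal sketch of the method in Python with NumPy, not a robust solver: it assumes a known starting feasible basis (so no Phase I), computes reduced costs as c_j - y^T a_j (so positive values signal improvement for maximization), and applies Bland's smallest-index rule for both the entering and leaving choices:

```python
import numpy as np

def simplex(A, b, c, basis, max_iter=1000):
    """Sketch of the simplex method for max c^T x s.t. Ax = b, x >= 0,
    started from a feasible basis; Bland's rule prevents cycling."""
    m, n = A.shape
    basis = list(basis)
    for _ in range(max_iter):
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)
        y = np.linalg.solve(B.T, c[basis])       # simplex multipliers
        reduced = c - A.T @ y                    # > 0 means improving (max)
        entering = next((j for j in range(n)
                         if j not in basis and reduced[j] > 1e-9), None)
        if entering is None:                     # optimal BFS reached
            x = np.zeros(n)
            x[basis] = x_B
            return x, basis
        d = np.linalg.solve(B, A[:, entering])   # change in basic variables
        ratios = [(x_B[i] / d[i], basis[i], i)
                  for i in range(m) if d[i] > 1e-9]
        if not ratios:
            raise ValueError("problem is unbounded")
        t = min(r for r, _, _ in ratios)
        # Bland: among minimum ratios, leave with the smallest variable index.
        _, leave = min((v, i) for r, v, i in ratios if r <= t + 1e-12)
        basis[leave] = entering
    raise RuntimeError("iteration limit reached")

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([3.0, 2.0, 0.0, 0.0])               # max 3x + 2y
x, basis = simplex(A, b, c, basis=[2, 3])        # start from the slack basis
print(x, c @ x)                                  # vertex (1, 3), value 9
```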

Recovering BFS from solutions

In linear programming, an optimal solution obtained via interior-point methods is often an interior point of the optimal face of the feasible polyhedron, featuring more than m positive variables (where m is the number of equality constraints in standard form). Such solutions are feasible and optimal but not basic, as basic feasible solutions (BFS) have at most m positive variables corresponding to linearly independent columns of the constraint matrix A. Degenerate cases may also yield non-vertex representations even when a vertex attains the same objective value. To recover an equivalent BFS—one with the same objective value—a crossover procedure is employed to shift the solution to a vertex while remaining within the optimal face, preserving both feasibility and optimality.

A standard procedure for this recovery involves moving along directions in the null space of A that also lie in the null space of the objective row c, ensuring no change in the objective value. Specifically, given an optimal x > 0 satisfying Ax = b and minimizing (or maximizing) c^T x, solve for a nonzero direction y such that
Ay = 0, \quad c^T y = 0.
This system has m + 1 equations in n variables (n > m), and nontrivial solutions exist whenever x has more than m positive components: the corresponding columns of A are then linearly dependent, and at an optimum the objective row adds no independent restriction on the support. To reduce the support, select the sign of y such that at least one component y_i is negative where x_i > 0 (flipping the sign of y if necessary). Compute the maximum step size
t = \min\left\{ -\frac{x_i}{y_i} \;\middle|\; y_i < 0 \right\},
ensuring x + t y \geq 0. Update x \leftarrow x + t y and set any components that reach exactly zero to zero. This step maintains A(x + t y) = b and c^T (x + t y) = c^T x, thus preserving feasibility and the objective value. Repeat until at most m positive variables remain, at which point the support columns can be extended to a basis, yielding a BFS. The process terminates in at most n - m iterations, each solvable in polynomial time via linear algebra (e.g., using Gaussian elimination or the SVD for the null space).

An alternative greedy approach leverages reduced costs to guide basis selection. First, obtain dual multipliers \pi satisfying the optimality conditions, so that the reduced costs \bar{c}_j = c_j - \pi^T A_{\cdot j} vanish for all j with x_j > 0 (exactly in theory, approximately in practice for interior solutions). Identify candidate variables for elimination among those with zero reduced costs, as adjusting them does not violate optimality. Iteratively select a variable j with small x_j > 0 and zero \bar{c}_j, set it to zero, and resolve dependencies by solving the system over the remaining positive variables to restore feasibility—effectively finding an adjustment \delta with A \delta = 0 (restricted to the support) and \delta_j = -x_j. If no such \delta exists without altering the objective (i.e., if c^T \delta \neq 0), backtrack and select another variable. This selection builds a basis incrementally from the support, ensuring all selected columns have zero reduced costs and are linearly independent, resulting in an optimal BFS with the original objective value. Computational implementations, such as those linking interior-point solvers to simplex codes, demonstrate efficiency, often converging in few iterations for practical problems.
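A minimal sketch of the null-space crossover procedure described above, assuming NumPy and SciPy (`scipy.linalg.null_space`); a zero objective is used in the demonstration so that every feasible point is optimal and the purification is guaranteed to apply:

```python
import numpy as np
from scipy.linalg import null_space

def purify_to_bfs(A, c, x, tol=1e-9):
    """Crossover sketch: from an optimal x, step along directions in the
    null space of both A and c (restricted to the support of x) until at
    most m variables stay positive; Ax = b and c^T x are preserved."""
    x = np.asarray(x, dtype=float).copy()
    m = A.shape[0]
    while True:
        support = np.where(x > tol)[0]
        if len(support) <= m:
            return x
        M = np.vstack([A[:, support], c[support]])  # stack the objective row
        N = null_space(M)
        if N.shape[1] == 0:
            return x    # no objective-preserving direction remains
        y = np.zeros_like(x)
        y[support] = N[:, 0]
        if not np.any(y[support] < -tol):
            y = -y                       # ensure some component decreases
        t = min(-x[j] / y[j] for j in support if y[j] < -tol)
        x = np.maximum(x + t * y, 0.0)   # step to the boundary; clamp roundoff
        x[x < tol] = 0.0

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c = np.zeros(4)                        # zero objective: all feasible x optimal
x0 = np.array([1.0, 2.0, 1.0, 1.0])    # feasible with four positive entries
x = purify_to_bfs(A, c, x0)
print(x, np.allclose(A @ x, [4.0, 5.0]))  # a BFS; Ax = b preserved
```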

Primal-dual optimal bases

In linear programming, a primal-dual optimal basis refers to a basis B of the constraint matrix A that simultaneously yields feasible and optimal basic solutions for both the primal and dual problems. For the standard problem \max \{ c^T x \mid A x = b, x \geq 0 \}, the basis B is primal feasible if x_B = B^{-1} b \geq 0. The associated dual solution is y = c_B^T B^{-1}, and the basis is dual feasible (and hence optimal, by weak duality) if y A \geq c^T, or equivalently, the reduced costs \bar{c}_N = c_N^T - y A_N \leq 0 for the nonbasic columns A_N. Such a basis exists whenever the linear program has a finite optimum, as guaranteed by the theory underlying the simplex method.

Interior-point methods, which approximate solutions along the central path, often produce near-optimal primal and dual solutions that are not basic. To extract a primal-dual optimal basis from these approximations, specialized algorithms can be applied, such as those involving a limited number of pivots (at most n, in strongly polynomial time) to recover an exact basis satisfying both feasibility conditions. This extraction is crucial because interior-point methods excel at finding high-accuracy solutions but may require post-processing to obtain the basic representations needed for further analysis.

Primal-dual optimal bases enable key applications in optimization practice. In sensitivity analysis, the basis provides shadow prices y for constraints and allowable ranges for coefficients before optimality is lost, facilitating "what-if" scenarios in decision-making. For warm starts in reoptimization—such as when right-hand sides or objectives change slightly—the basis serves as an initial point for the simplex method, reducing computational effort compared to starting from scratch. These bases also support hybrid algorithms combining interior-point efficiency with the simplex method's exactness.

To illustrate, consider the primal \max \{ 3x_1 + 5x_2 \mid x_1 + x_2 \leq 4, 2x_1 + 3x_2 \leq 12, x \geq 0 \}. Introduce slack variables s_1, s_2 \geq 0 to obtain the standard form \max \{ 3x_1 + 5x_2 \mid x_1 + x_2 + s_1 = 4, 2x_1 + 3x_2 + s_2 = 12, x, s \geq 0 \}. The optimal vertex is x = (0, 4), with objective value 20. An optimal basis consists of the columns for x_2 and s_2:
B = \begin{pmatrix} 1 & 0 \\ 3 & 1 \end{pmatrix}, \quad B^{-1} = \begin{pmatrix} 1 & 0 \\ -3 & 1 \end{pmatrix}.
The basic solution is
x_B = B^{-1} b = \begin{pmatrix} 1 & 0 \\ -3 & 1 \end{pmatrix} \begin{pmatrix} 4 \\ 12 \end{pmatrix} = \begin{pmatrix} 4 \\ 0 \end{pmatrix},
so x_2 = 4, s_2 = 0 (with x_1 = 0, s_1 = 0), which is primal feasible. The dual solution is y = c_B^T B^{-1} with c_B = (5, 0)^T, so
y = (5, 0) \begin{pmatrix} 1 & 0 \\ -3 & 1 \end{pmatrix} = (5, 0).
The reduced costs are non-positive: for x_1, \bar{c}_1 = 3 - y (1, 2)^T = 3 - 5 = -2 \leq 0; for s_1, \bar{c}_{s_1} = 0 - y (1, 0)^T = -5 \leq 0. Thus, the basis is primal-dual optimal.
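The arithmetic of this example can be verified directly. A minimal sketch, assuming NumPy, that rebuilds the standard form, solves for the primal and dual basic solutions from the basis \{x_2, s_2\}, and checks both feasibility conditions:

```python
import numpy as np

# Standard form of the example: max 3x1 + 5x2; columns x1, x2, s1, s2.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 12.0])
c = np.array([3.0, 5.0, 0.0, 0.0])

basis = [1, 3]                           # {x2, s2}
B = A[:, basis]
x = np.zeros(4)
x[basis] = np.linalg.solve(B, b)         # primal: x2 = 4, s2 = 0
y = np.linalg.solve(B.T, c[basis])       # dual:   y = (5, 0)

print(np.all(x >= 0))                    # primal feasible
print(np.all(c - A.T @ y <= 1e-9))       # dual feasible: reduced costs <= 0
print(c @ x, b @ y)                      # both objectives equal 20.0
```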

    We show that if there exists a strongly polynomial time algorithm that finds a basis which is optimal for both the primal and the dual problems, given an ...Missing: correspondence | Show results with:correspondence