Reduced cost
In linear programming, the reduced cost of a decision variable, often denoted \bar{c}_j, measures the change in the objective function value per unit increase in that variable from its current bound (typically zero for non-basic variables), while keeping all other variables fixed at their optimal values.[1] It is computed as \bar{c}_j = c_j - \sum_{i=1}^m y_i a_{ij}, where c_j is the original objective coefficient of variable x_j, the y_i are the shadow prices (dual variables) of the constraints, and the a_{ij} are the constraint coefficients.[1] For basic variables in the optimal solution, the reduced cost is always zero, as they are already in the basis at positive levels contributing to optimality.[2]

Reduced costs play a central role in sensitivity analysis within the simplex method, helping analysts assess how perturbations of the objective coefficients affect the optimal solution.[3] For a non-basic variable in a maximization problem, the magnitude of a negative reduced cost is the amount by which that variable's objective coefficient must increase before the variable can enter the basis and improve the objective; conversely, in minimization problems, a positive reduced cost gives the deterioration in the objective if the variable is forced into the solution.[2] The reduced cost also equals the shadow price of the variable's non-negativity constraint, providing insight into the marginal cost of relaxing bounds on decision variables.[1]

In practice, reduced costs are reported by solvers such as Excel's Solver and specialized optimization software, aiding decision-makers in resource allocation, production planning, and other applications by quantifying the opportunity cost of excluding a variable from the active solution.[3] For instance, if a product's reduced cost is -2 in a profit-maximization model, its selling price must rise by at least $2 per unit to justify production.[1] A zero reduced cost on a non-basic variable signals the existence of multiple optimal solutions, allowing flexibility in implementation without changing the objective value.[2]
Fundamentals
Definition
In linear programming, the reduced cost of a variable is the rate of change of the objective function value per unit increase in that variable from its current non-basic level (usually zero). Equivalently, it quantifies the amount by which the objective coefficient of a non-basic variable would have to improve before that variable could become basic and enter the solution at a positive value, thereby improving the objective function. The measure arises within the framework of the simplex method, where variables are classified as basic (included in the current solution) or non-basic (held at zero).

The interpretation of a reduced cost depends on whether the problem is a maximization or a minimization, and on the sign convention used. In maximization problems, a positive reduced cost for a non-basic variable indicates the potential increase in the objective value (e.g., profit) per unit if that variable were allowed to enter the basis, signaling that the current solution is suboptimal. Conversely, in minimization problems, a negative reduced cost signifies the potential decrease in the objective value (e.g., cost) upon introducing the variable, again pointing to suboptimality if such a value exists.[1]

For basic variables in an optimal solution, the reduced cost is always zero, as these variables are already at levels that permit no further improvement in the objective function without violating constraints. Non-basic variables, however, may have non-zero reduced costs that guide the optimization process toward optimality.

Consider a simple maximization problem with objective function z = 3x_1 + 2x_2, subject to x_1 + x_2 \leq 4 and non-negativity. The basic feasible solution x_1 = 4, x_2 = 0 yields z = 12, and the shadow price of the constraint is y = 3. The reduced cost of x_2 is therefore \bar{c}_2 = 2 - 3 \cdot 1 = -1: introducing x_2 would decrease z by 1 unit per unit of x_2, confirming optimality, since no better solution can be reached by pivoting x_2 into the basis.[1]
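This small example can be checked numerically. The following is a minimal sketch (not solver output), restating the problem in standard form with a slack variable s_1 and applying the reduced-cost formula directly:

```python
import numpy as np

# Objective: maximize z = 3*x1 + 2*x2 subject to x1 + x2 <= 4.
# Standard form with slack s1: columns are [x1, x2, s1].
c = np.array([3.0, 2.0, 0.0])      # objective coefficients
A = np.array([[1.0, 1.0, 1.0]])    # single constraint row

# At the optimum the basis is {x1}: B = [[1]], c_B = [3].
B = A[:, [0]]
c_B = c[[0]]

# Shadow prices: y^T = c_B^T B^{-1}  ->  y = 3 here.
y = np.linalg.solve(B.T, c_B)

# Reduced costs for every column: c_bar = c - A^T y.
c_bar = c - A.T @ y

print(y)      # [3.]
print(c_bar)  # [ 0. -1. -3.] -> x2 has reduced cost -1, as in the text
```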
Mathematical Formulation
In linear programming, the reduced cost is formulated within the standard primal problem of minimizing \mathbf{c}^T \mathbf{x} subject to the equality constraints \mathbf{Ax} = \mathbf{b} and non-negativity \mathbf{x} \geq \mathbf{0}, where \mathbf{A} \in \mathbb{R}^{m \times n} is the coefficient matrix, \mathbf{b} \in \mathbb{R}^m is the right-hand-side vector, \mathbf{c} \in \mathbb{R}^n is the cost vector, and \mathbf{x} \in \mathbb{R}^n collects the decision variables.[4] The associated dual problem is to maximize \mathbf{b}^T \mathbf{y} subject to \mathbf{A}^T \mathbf{y} \leq \mathbf{c}, where \mathbf{y} \in \mathbb{R}^m holds the dual variables, or shadow prices.[4]

The reduced cost of the j-th variable x_j at an optimal solution is defined as \bar{c}_j = c_j - \mathbf{a}_j^T \mathbf{y}, where \mathbf{a}_j is the j-th column of \mathbf{A} and \mathbf{y} is the optimal dual solution; in vector notation, the full reduced cost vector is \bar{\mathbf{c}} = \mathbf{c} - \mathbf{A}^T \mathbf{y}.[4] By dual feasibility, \mathbf{A}^T \mathbf{y} \leq \mathbf{c} at optimality, so \bar{c}_j \geq 0 for every j in the minimization problem, ensuring that no entering variable can further improve the objective.[4]

Within the simplex method, the reduced costs are derived from the current basic feasible solution: the basis matrix \mathbf{B} consists of the columns of \mathbf{A} belonging to the basic variables, with corresponding costs \mathbf{c}_B. The simplex multipliers are \mathbf{y}^T = \mathbf{c}_B^T \mathbf{B}^{-1}, so \bar{c}_j = c_j - \mathbf{c}_B^T \mathbf{B}^{-1} \mathbf{a}_j for each nonbasic j.[5] In the simplex tableau, these reduced costs appear as the coefficients in the objective row (row 0) for the nonbasic variables, written c_j - z_j with z_j = \mathbf{c}_B^T \mathbf{B}^{-1} \mathbf{a}_j.[5]

For maximization problems, the convention is analogous with the objective sense reversed: a positive reduced cost for a nonbasic variable indicates that increasing it would increase the objective value, signaling a direction for basis improvement.[6] The optimality conditions thus tie directly to dual feasibility, \mathbf{A}^T \mathbf{y} \leq \mathbf{c} for minimization or \mathbf{A}^T \mathbf{y} \geq \mathbf{c} for maximization.[4]
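The basis-driven computation \bar{c}_j = c_j - \mathbf{c}_B^T \mathbf{B}^{-1} \mathbf{a}_j translates directly into code. Below is a compact sketch under the assumption that the basis index set `basic` is already known; the helper name and example data are illustrative, not from any particular solver:

```python
import numpy as np

def reduced_costs(A, c, basic):
    """Reduced costs c_bar = c - A^T y, with y^T = c_B^T B^{-1}.

    A     : (m, n) constraint matrix of the standard-form problem
    c     : (n,)   cost vector
    basic : list of m column indices forming the basis B
    """
    B = A[:, basic]                     # basis matrix
    y = np.linalg.solve(B.T, c[basic])  # simplex multipliers / shadow prices
    return c - A.T @ y                  # zero on basic columns by construction

# Example: min c^T x, Ax = b, with basis {x1, x4} (columns 0 and 3).
A = np.array([[1.0, 2.0, 1.0, 0.0],
              [3.0, 1.0, 0.0, 1.0]])
c = np.array([2.0, 3.0, 0.0, 0.0])
print(reduced_costs(A, c, [0, 3]))  # [ 0. -1. -2.  0.]
```

Solving with \mathbf{B}^T rather than inverting \mathbf{B} is the standard numerical choice, since \mathbf{y}^T = \mathbf{c}_B^T \mathbf{B}^{-1} is equivalent to \mathbf{B}^T \mathbf{y} = \mathbf{c}_B.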
Role in the Simplex Method
Entering Variable Selection
In the simplex method applied to maximization problems, the entering variable is selected from the nonbasic variables by identifying the one with the most positive reduced cost, as this offers the largest increase in the objective function value per unit increase of that variable.[7] The reduced costs of the nonbasic variables are computed from the current basis via \bar{c}_j = c_j - \pi^T A_j, where \pi are the simplex multipliers (dual variables) and A_j is the column for variable j; depending on the tableau convention, either the reduced costs or their negatives appear in the objective row.[8]

The classical pivot strategy, known as Dantzig's rule, prices out all reduced costs and chooses the most positive one to enter the basis, which maximizes the rate of improvement per unit of the entering variable, though not necessarily the number of iterations to optimality.[7] This full pricing step becomes computationally expensive in large problems with many variables, as it requires updating and evaluating reduced costs for potentially thousands of nonbasic columns, driving up the per-iteration cost. In practice, approximations mitigate this expense, such as partial pricing (scanning nonbasics in a fixed order until a positive reduced cost is found) or more sophisticated heuristics like the Devex rule, which favors variables expected to yield steep improvement based on the history of previous pivots.[9]

Consider a simple maximization example (a sketch of the selection rule follows the tableau): maximize Z = 40x_1 + 30x_2 subject to x_1 + x_2 \leq 12, 2x_1 + x_2 \leq 16, x_1, x_2 \geq 0. Introducing slacks s_1, s_2 \geq 0, the initial tableau is:

| Basic | x_1 | x_2 | s_1 | s_2 | RHS |
|---|---|---|---|---|---|
| s_1 | 1 | 1 | 1 | 0 | 12 |
| s_2 | 2 | 1 | 0 | 1 | 16 |
| Z | 40 | 30 | 0 | 0 | 0 |

Here the Z row lists the reduced costs directly: both are positive, and x_1, with the most positive reduced cost (40 versus 30), is chosen by Dantzig's rule as the entering variable.
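The following sketch applies Dantzig's rule to the reduced costs of this tableau; the helper name is hypothetical and not taken from any solver API:

```python
import numpy as np

def entering_variable(reduced, basic, tol=1e-9):
    """Dantzig's rule for a maximization tableau.

    reduced : reduced costs for all columns (zero on basic columns)
    basic   : indices of the current basic variables
    Returns the index of the entering column, or None if already optimal.
    """
    candidates = [j for j in range(len(reduced))
                  if j not in basic and reduced[j] > tol]
    if not candidates:
        return None                      # all reduced costs <= 0: optimal
    return max(candidates, key=lambda j: reduced[j])

# Z-row of the example tableau: columns x1, x2, s1, s2.
reduced = np.array([40.0, 30.0, 0.0, 0.0])
print(entering_variable(reduced, basic={2, 3}))  # -> 0, i.e. x1 enters
```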
Optimality Conditions
In the simplex method applied to maximization problems in linear programming, a basic feasible solution is optimal when every reduced cost of a non-basic variable in the final tableau is less than or equal to zero. This condition means that increasing any non-basic variable could not improve (and might worsen) the objective value, since \bar{c}_j = c_j - \pi^T A_j \leq 0 for each non-basic column j, where \pi are the simplex multipliers (dual variables) and A_j is the constraint column for variable j. For minimization problems the criterion reverses: all reduced costs must be greater than or equal to zero to ensure that no further decrease in the objective is possible.[13]

A reduced cost of exactly zero for a non-basic variable signals the presence of multiple optimal solutions, as bringing that variable into the basis at a positive level yields an alternative basic feasible solution with the same objective value. This situation is known as dual degeneracy: the corresponding dual constraint is tight, so the optimal basis is non-unique even though the optimum value is unchanged. In contrast, strictly negative reduced costs for every non-basic variable (in maximization) confirm a unique optimum, as any deviation would degrade the solution.[14]

Reduced costs confirm overall optimality by ensuring dual feasibility alongside primal feasibility, with complementary slackness providing the linkage. The non-negativity of the quantities \pi^T A_j - c_j (in the maximization convention) satisfies the dual constraints, while complementary slackness holds because basic variables (at positive values) have zero reduced costs and non-basic variables (at zero) may carry non-zero reduced costs without violation. This pairwise condition, that a primal variable can be positive only if its corresponding reduced cost is zero, guarantees that the primal and dual solutions align at the optimum.[15]

To illustrate the transition to optimality, consider a maximization problem solved via the simplex method. In an intermediate, suboptimal tableau, the objective row (showing the reduced costs) might appear as:

| Basis | x1 | x2 | x3 | s1 | s2 | RHS |
|---|---|---|---|---|---|---|
| s1 | 1 | 1 | 0 | 1 | 0 | 4 |
| s2 | 2 | 1 | 1 | 0 | 1 | 5 |
| z | 3 | 1 | 4 | 0 | 0 | 0 |
After subsequent pivots bring x_3 into the basis, an optimal tableau is reached:

| Basis | x1 | x2 | x3 | s1 | s2 | RHS |
|---|---|---|---|---|---|---|
| x3 | 1/2 | 1/2 | 1 | 1/2 | 0 | 2.5 |
| s2 | 3/2 | 1/2 | 0 | -1/2 | 1 | 1.5 |
| z | -1 | -2 | 0 | -1 | 0 | 10 |

Every reduced cost in the z row is now non-positive, so no entering variable can improve the objective and the solution is optimal with z = 10. Since the reduced costs of the non-basic variables x_1, x_2, and s_1 are all strictly negative, this optimum is unique.
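A small sketch of this optimality test follows; the helper name is illustrative, `reduced` holds the z-row entries for all columns, and `basic` holds the current basis indices:

```python
import numpy as np

def check_optimality(reduced, basic, tol=1e-9):
    """Optimality test for a maximization tableau.

    Returns (is_optimal, has_alternative_optima): the solution is optimal
    when all non-basic reduced costs are <= 0, and alternative optima
    exist when some non-basic reduced cost is exactly zero.
    """
    nonbasic = [j for j in range(len(reduced)) if j not in basic]
    is_optimal = all(reduced[j] <= tol for j in nonbasic)
    has_alternatives = is_optimal and any(abs(reduced[j]) <= tol
                                          for j in nonbasic)
    return is_optimal, has_alternatives

# z-row of the final tableau: columns x1, x2, x3, s1, s2; basis {x3, s2}.
reduced = np.array([-1.0, -2.0, 0.0, -1.0, 0.0])
print(check_optimality(reduced, basic={2, 4}))  # -> (True, False)
```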