Reduced cost

In linear programming, the reduced cost of a decision variable, often denoted \bar{c}_j, measures the change in the objective function value per unit increase in that variable from its current bound (typically zero for non-basic variables), while keeping all other variables fixed at their optimal values. It is computed as \bar{c}_j = c_j - \sum_{i=1}^m y_i a_{ij}, where c_j is the original objective coefficient for variable x_j, y_i are the shadow prices (dual variables) for the constraints, and a_{ij} are the constraint coefficients. For basic variables in the optimal solution, the reduced cost is always zero, as they are already at positive levels contributing to optimality.

Reduced costs play a central role in sensitivity analysis within the simplex method, helping analysts assess how perturbations in objective coefficients affect the optimal solution. Specifically, for a non-basic variable in a maximization problem, a negative reduced cost indicates the amount by which the variable's objective coefficient must increase to make the variable enter the basis and potentially improve the objective; conversely, in minimization problems, a positive reduced cost shows the deterioration in the objective if the variable is forced into the solution. The concept also equates to the shadow price of the variable's nonnegativity constraint, providing insight into the value of relaxing bounds on decision variables.

In practice, reduced costs are output by solvers like Excel's Solver or specialized optimization software, aiding decision-makers in production planning, resource allocation, and other applications by quantifying the opportunity cost of excluding a variable from the active solution. For instance, if a product's reduced cost is -2 in a profit-maximization model, its selling price must rise by at least $2 per unit to justify production. Zero reduced costs for non-basic variables signal multiple optimal solutions, allowing flexibility in implementation without changing the objective value.

Fundamentals

Definition

In linear programming, the reduced cost of a variable represents the change in the objective function value per unit increase in that variable from its current non-basic level. Specifically, it quantifies the amount by which the objective coefficient of a non-basic variable must be altered for that variable to become basic and enter the solution at a positive value, thereby improving the objective function. This measure is computed within the framework of the simplex method, where variables are classified as basic (included in the current solution) or non-basic (set to zero).

The interpretation of a reduced cost depends on whether the problem is a maximization or a minimization, and on the sign convention used. In maximization problems, a positive reduced cost for a non-basic variable indicates the potential increase in the objective value (e.g., profit) if that variable were allowed to enter the basis, signaling that the current solution is suboptimal. Conversely, in minimization problems, a negative reduced cost signifies the potential decrease in the objective value (e.g., cost) upon introducing the variable, again pointing to suboptimality if such a value exists. For basic variables in an optimal solution, the reduced cost is always zero, as these variables are already at levels that do not allow further improvement in the objective function without violating constraints. Non-basic variables, however, may have non-zero reduced costs that guide the optimization process toward feasibility and optimality.

Consider a simple maximization problem with objective function z = 3x_1 + 2x_2, subject to constraints like x_1 + x_2 \leq 4 and non-negativity. Suppose the current solution has x_1 = 4, x_2 = 0, yielding z = 12. If the reduced cost for x_2 is -1, introducing x_2 would decrease z by 1 unit per unit of x_2, confirming optimality, as no better solution can be obtained by pivoting x_2 into the basis.
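This value can be checked numerically. The following minimal sketch (numpy-based; the slack variable s is added to turn the inequality into an equation) prices out the columns for the basis {x_1} and recovers the reduced cost of -1 for x_2:

```python
import numpy as np

# Example from above: maximize 3*x1 + 2*x2 subject to x1 + x2 <= 4.
# Adding the slack s gives the equality x1 + x2 + s = 4.
c = np.array([3.0, 2.0, 0.0])        # coefficients of x1, x2, s
A = np.array([[1.0, 1.0, 1.0]])      # single constraint row

basic = [0]                          # current basis: {x1}
B = A[:, basic]                      # 1x1 basis matrix
y = np.linalg.solve(B.T, c[basic])   # simplex multiplier: y = 3

print(c - A.T @ y)                   # -> [ 0. -1. -3.]; reduced cost of x2 is -1
```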

Mathematical Formulation

In linear programming, the reduced cost is formulated within the standard primal problem of minimizing the objective \mathbf{c}^T \mathbf{x} subject to the equality constraints \mathbf{Ax} = \mathbf{b} and non-negativity \mathbf{x} \geq \mathbf{0}, where \mathbf{A} \in \mathbb{R}^{m \times n} is the coefficient matrix, \mathbf{b} \in \mathbb{R}^m is the right-hand-side vector, \mathbf{c} \in \mathbb{R}^n is the cost vector, and \mathbf{x} \in \mathbb{R}^n are the decision variables. The associated dual problem is to maximize \mathbf{b}^T \mathbf{y} subject to \mathbf{A}^T \mathbf{y} \leq \mathbf{c}, where \mathbf{y} \in \mathbb{R}^m represents the dual variables or shadow prices. The reduced cost for the j-th variable x_j at an optimal solution is defined as \bar{c}_j = c_j - \mathbf{a}_j^T \mathbf{y}, where \mathbf{a}_j is the j-th column of \mathbf{A} and \mathbf{y} is the optimal dual solution. In vector notation, the full reduced cost vector is \bar{\mathbf{c}} = \mathbf{c} - \mathbf{A}^T \mathbf{y}. By the optimality conditions of the simplex method, \bar{c}_j \geq 0 for all j corresponding to nonbasic variables at optimality in the minimization problem, ensuring no further improvement is possible.

Within the simplex method, the reduced costs are derived from the current basic feasible solution, where the basis matrix \mathbf{B} consists of the columns of \mathbf{A} for the basic variables, and \mathbf{c}_B are the corresponding costs. The shadow prices are \mathbf{y}^T = \mathbf{c}_B^T \mathbf{B}^{-1}, so \bar{c}_j = c_j - \mathbf{c}_B^T \mathbf{B}^{-1} \mathbf{a}_j for nonbasic j. In the simplex tableau, these reduced costs appear as the coefficients in the objective row (row 0) for the nonbasic variables, specifically as c_j - z_j where z_j = \mathbf{c}_B^T \mathbf{B}^{-1} \mathbf{a}_j. For maximization problems, the convention aligns similarly but with the objective sense reversed: a positive reduced cost for a nonbasic variable indicates that increasing it would improve (increase) the objective value, signaling a direction for a basis exchange. This formulation ensures that the optimality conditions tie directly to dual feasibility, \mathbf{A}^T \mathbf{y} \leq \mathbf{c} (for minimization) or \mathbf{A}^T \mathbf{y} \geq \mathbf{c} (for maximization).
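These formulas translate directly into code: form the basis matrix \mathbf{B}, solve \mathbf{B}^T \mathbf{y} = \mathbf{c}_B for the multipliers, and price out every column. A short numpy sketch follows; the function name price_out and the small minimization example are illustrative choices, not a standard API (note that pricing does not require \mathbf{b}):

```python
import numpy as np

def price_out(c, A, basic):
    """Return (reduced costs, multipliers) for the basis given by `basic`,
    a list of column indices of A. Solves B^T y = c_B, then c_bar = c - A^T y."""
    B = A[:, basic]
    y = np.linalg.solve(B.T, c[basic])
    return c - A.T @ y, y

# Minimization example: min 2*x1 + 3*x2 with slacks x3, x4.
c = np.array([2.0, 3.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
c_bar, y = price_out(c, A, basic=[0, 1])   # basis {x1, x2}
print(c_bar)   # -> [ 0.  0. -4.  1.]; the -4 on nonbasic x3 means not optimal
print(y)       # -> [ 4. -1.], the multipliers c_B^T B^{-1}
```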

Role in the Simplex Method

Entering Variable Selection

In the simplex method applied to maximization problems, the entering variable is selected from the nonbasic variables by identifying one with a positive reduced cost, typically the most positive, as this indicates the largest potential increase in the objective function value per unit increase in that variable. The reduced costs are computed for all nonbasic variables using the current basis, typically via the formula \bar{c}_j = c_j - \pi^T A_j, where \pi are the simplex multipliers (dual variables) and A_j is the constraint column for variable j; in tableau form, the reduced costs (or their negatives, depending on convention) appear in the objective row.

The classical pivot strategy, known as Dantzig's rule, involves calculating all reduced costs and choosing the variable with the most positive one to enter the basis, which greedily maximizes the improvement per unit increase of the entering variable. However, this full pricing step becomes computationally expensive in large problems with many variables, as it requires updating and evaluating reduced costs for potentially thousands of nonbasic columns, leading to high per-iteration cost. In practice, approximations mitigate this expense, such as partial pricing (scanning nonbasic variables in a fixed order until a positive reduced cost is found) or advanced pricing heuristics such as the Devex and steepest-edge rules, which prioritize variables likely to yield steep improvement directions based on information accumulated over previous pivots.

Consider a simple maximization example: maximize Z = 40x_1 + 30x_2 subject to x_1 + x_2 \leq 12, 2x_1 + x_2 \leq 16, x_1, x_2 \geq 0. Introducing slacks s_1, s_2 \geq 0, the initial tableau is:
Basic | x_1 | x_2 | s_1 | s_2 | RHS
s_1   |  1  |  1  |  1  |  0  |  12
s_2   |  2  |  1  |  0  |  1  |  16
Z     | 40  | 30  |  0  |  0  |   0
The reduced costs appear in the Z row: 40 for x_1 and 30 for x_2. The most positive is 40, so x_1 is selected as the entering variable. The minimum-ratio test on the pivot column (quotients 12/1 = 12 and 16/2 = 8) identifies s_2 as leaving, with the pivot element 2. After pivoting, the new tableau has x_1 basic, updated reduced costs (now 10 for x_2), and Z = 320.

To avoid cycling in degenerate linear programs, where the method might loop indefinitely without progress, anti-cycling rules incorporate reduced costs into the selection. Bland's rule, for instance, chooses the entering variable as the eligible nonbasic variable (one with positive reduced cost) having the smallest index, and breaks minimum-ratio ties for the leaving variable the same way, ensuring the method terminates in a finite number of steps without cycling. This rule maintains computational simplicity while guaranteeing non-cycling behavior, though it may require more iterations than aggressive strategies like Dantzig's.
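Returning to the numerical example, one Dantzig pricing-and-ratio step can be sketched in a few lines of numpy (illustrative only; a production simplex code would maintain a factorization of B rather than re-solving):

```python
import numpy as np

# Data for: maximize 40*x1 + 30*x2
# s.t. x1 + x2 <= 12, 2*x1 + x2 <= 16 (slacks s1, s2 appended).
c = np.array([40.0, 30.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([12.0, 16.0])

basic = [2, 3]                        # start from the slack basis {s1, s2}
B = A[:, basic]
y = np.linalg.solve(B.T, c[basic])    # multipliers are zero initially
rc = c - A.T @ y                      # reduced costs: [40, 30, 0, 0]

entering = int(np.argmax(rc))         # Dantzig's rule: most positive -> x1
col = np.linalg.solve(B, A[:, entering])
rhs = np.linalg.solve(B, b)
ratios = np.where(col > 1e-12, rhs / col, np.inf)
leaving = int(np.argmin(ratios))      # min-ratio test: 12/1 vs 16/2 -> row of s2
print(entering, leaving, ratios)      # -> 0 1 [12.  8.]
```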

Optimality Conditions

In the simplex method applied to maximization problems in linear programming, a basic feasible solution is deemed optimal when all reduced costs for non-basic variables in the final tableau are less than or equal to zero. This condition indicates that increasing any non-basic variable would not improve (and might worsen) the objective function value, as the reduced cost \bar{c}_j = c_j - \pi^T A_j \leq 0 for each non-basic column j, where \pi are the simplex multipliers (dual variables) and A_j is the constraint column for variable j. For minimization problems, the optimality criterion reverses: all reduced costs must be greater than or equal to zero to ensure no further decrease in the objective is possible.

A reduced cost of exactly zero for a non-basic variable signals the presence of multiple optimal solutions, as introducing that variable into the basis at a positive level would yield an alternative solution with the same objective value. This occurs because the corresponding dual constraint is tight, so the dual solution lies on a face of the dual feasible region that permits non-unique optimal bases without changing the optimum. In contrast, strictly negative reduced costs for all non-basic variables (in a maximization tableau) indicate a unique optimum, as any deviation would degrade the solution.

Reduced costs play a key role in confirming overall optimality by ensuring dual feasibility alongside primal feasibility, with complementary slackness providing the linkage. Specifically, the non-negativity of the quantities \pi^T A_j - c_j \geq 0 (the maximization convention) satisfies the dual constraints, while complementary slackness holds because basic variables (with positive values) have zero reduced costs, and non-basic variables (at zero) permit non-zero reduced costs without violation. This pairwise condition (a primal variable is positive only if its corresponding reduced cost is zero, and vice versa) guarantees that the primal and dual solutions align at the optimum.

To illustrate the transition to optimality, consider a maximization problem solved via the simplex method. In an initial or intermediate suboptimal tableau, the objective row (showing reduced costs) might appear as:
Basis | x_1 | x_2 | x_3 | s_1 | s_2 | RHS
s_1   |  1  |  1  |  0  |  1  |  0  |  4
s_2   |  2  |  1  |  1  |  0  |  1  |  5
z     |  3  |  1  |  4  |  0  |  0  |  0
Here, positive reduced costs (3 for x_1, 1 for x_2, 4 for x_3) indicate the solution is not optimal, prompting selection of an entering variable (e.g., x_3, which has the most positive reduced cost). The ratio test admits only the second row, so x_3 enters and s_2 leaves; after pivoting, the final optimal tableau is:

Basis | x_1 | x_2 | x_3 | s_1 | s_2 | RHS
s_1   |  1  |  1  |  0  |  1  |  0  |  4
x_3   |  2  |  1  |  1  |  0  |  1  |  5
z     | -5  | -3  |  0  |  0  | -4  | 20

Now all reduced costs for non-basic variables (-5 for x_1, -3 for x_2, -4 for s_2) are non-positive, confirming optimality with z = 20. If a non-basic reduced cost were zero here, pivoting on it would reveal an alternate optimum.
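A sketch of this optimality test, run on the final basis {s_1, x_3} of the example, confirms the non-positive reduced costs and the absence of alternate optima (variable names are illustrative):

```python
import numpy as np

c = np.array([3.0, 1.0, 4.0, 0.0, 0.0])           # x1, x2, x3, s1, s2
A = np.array([[1.0, 1.0, 0.0, 1.0, 0.0],
              [2.0, 1.0, 1.0, 0.0, 1.0]])

basic = [3, 2]                                    # final basis {s1, x3}
y = np.linalg.solve(A[:, basic].T, c[basic])
rc = c - A.T @ y                                  # -> [-5, -3, 0, 0, -4]

nonbasic = [j for j in range(len(c)) if j not in basic]
optimal = all(rc[j] <= 1e-9 for j in nonbasic)    # max problem: all <= 0
alternates = [j for j in nonbasic if abs(rc[j]) <= 1e-9]  # zeros signal ties
print(optimal, alternates)                        # -> True [] (unique optimum)
```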

Interpretations

Economic Interpretation

In economic terms, the reduced cost of a non-basic variable represents the marginal value associated with its objective coefficient, specifically indicating how much that coefficient must improve (for a maximization problem) before the variable becomes economically viable to introduce into the optimal solution. For instance, a negative reduced cost signifies that increasing the variable by one unit would decrease the objective value by the absolute amount of the reduced cost, reflecting the net penalty or shortfall in contribution relative to resource costs. This interpretation arises from the reduced cost's role as the shadow price on the non-negativity constraint for that variable, quantifying the imputed cost of forcing the activity into production.

In the context of resource allocation, particularly for non-basic variables held at zero in the optimal solution, the reduced cost measures the minimum improvement in an activity's contribution required to justify allocating resources to it. It embodies the break-even point at which the marginal benefit of the activity equals the opportunity cost of the resources it consumes, ensuring that only activities with non-positive reduced costs (in maximization) are excluded without regret. This guides decision-makers in evaluating whether scarce resources, such as labor or materials, should be diverted to an underutilized option, preventing inefficient expansions.

A practical example occurs in production planning, where reduced costs determine the threshold for introducing a new product line. Consider a furniture manufacturer optimizing table and bench production subject to wood and labor constraints; if the reduced cost for benches is -$0.30, the profit per bench must rise by at least $0.30 to make production worthwhile, as this would offset the imputed resource cost and let benches enter the optimal mix without reducing overall profit. Such analysis helps firms assess market-driven changes, like price adjustments, to expand product offerings efficiently.

Non-zero reduced costs directly connect to opportunity costs by quantifying the net loss in objective value that would result from forcing the variable away from zero. In economic terms, this represents the implicit penalty for inclusion, derived from the difference between the variable's direct contribution and the shadow-priced value of its resource demands, thereby highlighting trade-offs in constrained environments.
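The bench calculation can be reproduced with hypothetical data; the profit figure, resource usages, and shadow prices below are invented solely to yield the -$0.30 reduced cost mentioned above:

```python
# Hypothetical numbers: a bench contributes $4.70 of profit but consumes
# 2 board-feet of wood and 2 labor-hours, whose shadow prices (from the
# optimal dual solution of the assumed LP) are $1.50 and $1.00.
bench_profit = 4.70
resource_use = {"wood": 2.0, "labor": 2.0}      # a_ij for the bench column
shadow_price = {"wood": 1.50, "labor": 1.00}    # y_i from the solved LP

imputed_cost = sum(resource_use[r] * shadow_price[r] for r in resource_use)
reduced_cost = bench_profit - imputed_cost      # 4.70 - 5.00 = -0.30

# The bench's profit must rise by at least |reduced_cost| to enter the mix.
print(round(reduced_cost, 2), round(-reduced_cost, 2))   # -> -0.3 0.3
```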

Sensitivity Analysis

In linear programming sensitivity analysis, reduced costs quantify the robustness of an optimal solution to perturbations in the objective function coefficients, particularly by indicating the extent to which a non-basic variable's coefficient can change before that variable becomes eligible to enter the basis. Specifically, for a maximization problem, a non-basic variable x_j remains non-basic as long as the change in its coefficient c_j does not raise its reduced cost \bar{c}_j above zero, preserving the optimality of the current basis. This leverages the economic interpretation of reduced costs as the marginal improvement needed for a variable to contribute positively to the objective.

The allowable range for the coefficient c_j of a non-basic variable follows directly from its reduced cost: in a maximization problem, c_j can increase by up to -\bar{c}_j (where \bar{c}_j < 0) without altering the basis, while decreases are typically unbounded, as they further deter the variable from entering the basis. For instance, if \bar{c}_j = -1.5, the coefficient can be increased by at most 1.5 units before x_j enters the basis upon re-optimization. These ranges are computed by ensuring that the updated reduced costs of all non-basic variables remain non-positive after the perturbation.

Solvers like Excel's Solver provide sensitivity reports that illustrate these ranges numerically; consider a product mix problem maximizing profit 4x_1 + 6x_2 subject to constraints, where the optimal basis has x_1 basic and x_2 non-basic with \bar{c}_2 = -2. The report would show that c_2 (the coefficient of x_2) has an allowable increase of 2 (to 8) and an unlimited allowable decrease, meaning the basis remains optimal for c_2 \leq 8.

A key limitation is that reduced costs inform sensitivity only for non-basic variables' objective coefficients; for basic variables, changes affect the dual prices and thus the reduced costs of multiple variables, often requiring a full re-optimization or tableau adjustment to determine ranges rather than a simple bound.
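A similar report can be reproduced with an off-the-shelf solver. The sketch below uses scipy's linprog with the HiGHS backend; the single resource constraint x_1 + 2x_2 \leq 10 is invented to produce \bar{c}_2 = -2, and the lower.marginals field (available in recent scipy releases, and reported in the minimization sign convention) is assumed to hold the reduced costs:

```python
from scipy.optimize import linprog

# Product mix from the text: maximize 4*x1 + 6*x2, here subject to the
# hypothetical constraint x1 + 2*x2 <= 10, which leaves x2 non-basic.
res = linprog(c=[-4, -6],                 # linprog minimizes, so negate
              A_ub=[[1, 2]], b_ub=[10],
              bounds=[(0, None), (0, None)],
              method="highs")

print(res.x, res.fun)                     # -> [10.  0.] -40.0
# HiGHS reports reduced costs as marginals of the variable lower bounds;
# for the negated objective they carry the opposite sign: +2 here.
print(res.lower.marginals)                # -> [0. 2.], i.e. c_bar_2 = -2
# Allowable increase for c_2 before the basis changes: |c_bar_2| = 2.
```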

Advanced Applications

In Duality Theory

In linear programming, reduced costs are integral to duality theory. Consider the primal problem formulated as a maximization: \max \mathbf{c}^T \mathbf{x} subject to A \mathbf{x} \leq \mathbf{b}, \mathbf{x} \geq \mathbf{0}. The associated dual is \min \mathbf{y}^T \mathbf{b} subject to \mathbf{y}^T A \geq \mathbf{c}^T, \mathbf{y} \geq \mathbf{0}, where \mathbf{y} is the vector of dual variables (shadow prices). The reduced cost for the j-th primal variable, consistent with the standard definition \bar{c}_j = c_j - \mathbf{y}^T A_j, must satisfy \bar{c}_j \leq 0 for all j at primal optimality in a maximization problem. This condition is equivalent to the dual feasibility requirement \mathbf{y}^T A_j \geq c_j, as the dual constraint slacks are -\bar{c}_j \geq 0.

Complementary slackness links the optimal solutions of the primal and the dual, as per the strong duality theorem. For optimal \mathbf{x}^* and \mathbf{y}^*, x_j^* (\mathbf{y}^{*T} A_j - c_j) = 0 for each j, meaning \bar{c}_j = 0 whenever x_j^* > 0 (since \bar{c}_j \leq 0). Thus, positive primal variables correspond to tight dual constraints (zero reduced cost), while non-zero reduced costs (negative in maximization) imply the primal variable is at its bound (zero). This ensures equality of the primal and dual objectives: \mathbf{c}^T \mathbf{x}^* = \mathbf{y}^{*T} \mathbf{b}.

From the dual viewpoint, the negated reduced costs -\bar{c}_j quantify the slack in the dual constraints, indicating by how much \mathbf{y}^T A_j exceeds c_j. A strictly negative \bar{c}_j reflects excess in the dual constraint for that variable, highlighting the margin of dual feasibility. This symmetry underscores the connection between primal optimality (non-positive reduced costs) and dual feasibility.

To illustrate, consider the primal: \max 6x_1 + 14x_2 + 13x_3 subject to \frac{1}{2}x_1 + 2x_2 + x_3 \leq 24, x_1 + 2x_2 + 4x_3 \leq 60, x_1, x_2, x_3 \geq 0. The optimal solution is x_1^* = 36, x_2^* = 0, x_3^* = 6, with objective value 294. The dual is \min 24y_1 + 60y_2 subject to \frac{1}{2}y_1 + y_2 \geq 6, 2y_1 + 2y_2 \geq 14, y_1 + 4y_2 \geq 13, y_1, y_2 \geq 0, with optimal y_1^* = 11, y_2^* = 0.5 and objective 294. The reduced costs are \bar{c}_1 = 6 - (11 \cdot 0.5 + 0.5 \cdot 1) = 0, \bar{c}_2 = 14 - (11 \cdot 2 + 0.5 \cdot 2) = -9, \bar{c}_3 = 13 - (11 \cdot 1 + 0.5 \cdot 4) = 0. Here, \bar{c}_2 < 0 corresponds to x_2^* = 0, and zero reduced costs align with the positive values of x_1^* and x_3^*, confirming complementary slackness. Both primal constraints are binding, consistent with the strictly positive dual variables y_1^* and y_2^*.
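The arithmetic of this example is easy to verify numerically; the following sketch checks the reduced costs, strong duality, and complementary slackness:

```python
import numpy as np

# Worked example from the text: max 6x1 + 14x2 + 13x3.
c = np.array([6.0, 14.0, 13.0])
A = np.array([[0.5, 2.0, 1.0],
              [1.0, 2.0, 4.0]])
b = np.array([24.0, 60.0])

x_star = np.array([36.0, 0.0, 6.0])    # optimal primal solution
y_star = np.array([11.0, 0.5])         # optimal dual solution

rc = c - A.T @ y_star                  # reduced costs
print(rc)                              # -> [ 0. -9.  0.]
print(c @ x_star, b @ y_star)          # -> 294.0 294.0 (strong duality)
# Complementary slackness: x_j * c_bar_j == 0 for every j.
print(x_star * rc)                     # -> [0. 0. 0.]
```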

Extensions to Other Optimization Problems

In integer programming, reduced costs from the linear programming relaxation are used in branch-and-bound algorithms for variable fixing and bound tightening. Reduced cost fixing leverages complementary slackness from optimal relaxation solutions to eliminate variables that cannot improve the objective beyond the current incumbent bound, strengthening the relaxation and reducing the search space. Modern mixed-integer programming solvers like CPLEX incorporate enhanced reduced cost fixing techniques, often through presolving, to improve efficiency on benchmarks like MIPLIB.

In network flow problems, reduced costs are adapted using node potentials in algorithms like the successive shortest path method for minimum cost flows. Node potentials \pi(v) define reduced edge costs as \tilde{c}(u \to v) = c(u \to v) + \pi(u) - \pi(v), which preserve shortest path distances while ensuring non-negativity, so that Dijkstra's algorithm can be used in place of Bellman-Ford (see the sketch after this section's final paragraph). Potentials are updated after each flow augmentation using distances from the latest shortest-path computation, maintaining non-negative reduced costs and yielding a time complexity of O(B E \log V), where B is the total flow value. This extends the economic interpretation of reduced costs to capacitated networks, aiding optimality verification without negative cycles.

In interior-point methods for linear programming, primal-dual frameworks produce analogs to reduced costs via dual multipliers and barrier terms, which measure deviations from optimality and guide steps toward the optimum. Primal-dual interior-point algorithms compute search directions incorporating scaled dual variables, similar to reduced costs, to diminish the duality gap and follow the central path parameterized by the barrier parameter \mu. These quantities provide sensitivity insights akin to simplex reduced costs, assessing perturbation impacts across barrier subproblems. In contrast to the simplex method, interior-point approaches trace smooth paths through the interior of the feasible region, with the barrier enforcing strict feasibility and self-concordance enabling polynomial-time convergence.
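Returning to the network-flow adaptation, the node-potential reweighting is compact enough to sketch directly; the toy graph and potentials below are illustrative:

```python
# Johnson-style reweighting used in successive shortest paths: the reduced
# cost of edge (u, v) is c(u, v) + pi(u) - pi(v). With potentials taken from
# valid shortest-path distances, every reduced cost is >= 0, so Dijkstra can
# replace Bellman-Ford on the residual network.

def reduced_edge_costs(edges, pi):
    """edges: dict mapping (u, v) -> cost; pi: dict mapping node -> potential."""
    return {(u, v): c + pi[u] - pi[v] for (u, v), c in edges.items()}

# Toy network: shortest-path distances from source 'a' serve as potentials.
edges = {("a", "b"): 2, ("b", "c"): -1, ("a", "c"): 3}
pi = {"a": 0, "b": 2, "c": 1}
print(reduced_edge_costs(edges, pi))
# -> {('a','b'): 0, ('b','c'): 0, ('a','c'): 2} -- all non-negative
```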
