Dual linear program
In linear programming, the dual linear program (or simply the dual) is a canonical optimization problem derived from a given primal linear program, in which the roles of the objective function and constraints are interchanged. For a primal problem formulated as maximizing c^T x subject to A x \leq b and x \geq 0, the dual is the minimization problem \min b^T y subject to A^T y \geq c and y \geq 0, with one dual variable y_i corresponding to each primal inequality constraint and one dual constraint per primal variable.[1][2] This construction ensures that any feasible dual solution yields an upper bound on the primal's optimal objective value.[1] The foundational weak duality theorem states that if both the primal and dual problems are feasible, then the optimal value of the primal maximization is less than or equal to the optimal value of the dual minimization, reflecting the bounding relationship between their objective functions.[2] Under the strong duality theorem, if either problem is feasible and bounded, then both attain optimal solutions with equal objective values, establishing an exact equivalence between the two formulations.[1][2] Additionally, the dual of the dual recovers the original primal, underscoring the symmetry of the duality framework.[2] Duality in linear programming extends beyond theoretical symmetry to practical interpretations: dual variables represent shadow prices, or marginal values, of the primal's resources, aiding sensitivity analysis in applications such as resource allocation and economic modeling.[1] The complementary slackness theorem further connects optimal primal and dual solutions, stipulating that for each constraint, either it holds with equality or the corresponding variable in the other problem is zero, which facilitates verification of optimality and supports the economic interpretations.[1] These principles, rooted in mid-20th-century developments in optimization theory, underpin efficient algorithms such as the simplex method and enable dual formulations to simplify computation when the primal has many constraints.[1][2]
Primal and Dual Problem Formulations
Primal Linear Program
The primal linear program in standard inequality form is defined as the optimization problem of maximizing a linear objective function subject to linear inequality constraints and non-negativity conditions on the variables.[3] Formally, it is expressed as \max \, c^\top x \quad \text{subject to} \quad Ax \leq b, \quad x \geq 0, where x \in \mathbb{R}^n represents the vector of decision variables, c \in \mathbb{R}^n is the vector of coefficients for the objective function, A \in \mathbb{R}^{m \times n} is the matrix of constraint coefficients, and b \in \mathbb{R}^m is the vector specifying the right-hand side values of the constraints.[3][4] In this formulation, the decision variables x quantify the levels of activities or allocations being optimized, while the objective coefficients c reflect the contribution or value of each unit of those activities toward the goal of maximization.[3] The inequality constraints Ax \leq b model limitations such as available resources or capacities, with each row of A and corresponding entry in b defining a single resource bound.[3] Linear programming, and thus the primal problem, originated as a tool for solving optimization challenges in contexts like resource allocation, where finite supplies must be distributed across competing demands to achieve maximal efficiency or profit.[3] The formulation assumes a finite-dimensional setting with all coefficients and variables being real numbers, ensuring the problem is well-defined over Euclidean space.[4]
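The standard form can be written down directly in code. The following is a minimal sketch, not drawn from the cited sources, using SciPy's linprog on the small instance that reappears in the worked example at the end of this article; linprog minimizes, so the objective vector is negated to maximize:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for  max c^T x  s.t.  A x <= b, x >= 0.
c = np.array([3.0, 4.0])              # objective coefficients
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])            # resource usage per activity
b = np.array([5.0, 6.0])              # resource limits

# linprog minimizes, so maximize c^T x by minimizing -c^T x;
# the default variable bounds (0, None) already enforce x >= 0.
res = linprog(-c, A_ub=A, b_ub=b)
print("optimal x:", res.x)            # expected: [0. 5.]
print("optimal value:", -res.fun)     # expected: 20.0
```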
Dual Linear Program
The dual linear program is derived from a given primal linear program in standard form, which seeks to maximize \mathbf{c}^T \mathbf{x} subject to A \mathbf{x} \leq \mathbf{b} and \mathbf{x} \geq \mathbf{0}, where A is an m \times n matrix, \mathbf{b} \in \mathbb{R}^m, and \mathbf{c} \in \mathbb{R}^n.[2] The standard dual formulation then takes the form of a minimization problem: minimize \mathbf{b}^T \mathbf{y} subject to A^T \mathbf{y} \geq \mathbf{c} and \mathbf{y} \geq \mathbf{0}, where \mathbf{y} \in \mathbb{R}^m is the vector of dual variables and A^T denotes the transpose of the primal constraint matrix A.[1] This structure makes the dual constraints correspond to the primal variables, with the transposed matrix A^T linking the two problems while preserving the non-negativity requirements.[2] The dual variables \mathbf{y} serve as shadow prices or Lagrange multipliers associated with the primal constraints, quantifying the marginal value of relaxing each resource limit in the original problem.[1] In this role, each component y_i gives the increase in the optimal primal objective value per unit increase in the right-hand side b_i of the corresponding primal constraint.[2] This interpretation highlights the dual's focus on resource valuation: \mathbf{y} forms a non-negative combination of the primal inequalities that bounds the primal objective from above.[1] In terms of optimization, the dual problem is equivalent to the primal in the sense that it recasts the maximization as a minimization while maintaining the same optimal value under feasibility and boundedness conditions.[2] Specifically, the dual objective \mathbf{b}^T \mathbf{y} provides an upper bound on the primal objective \mathbf{c}^T \mathbf{x} for any pair of feasible solutions, reflecting the inherent symmetry between the two formulations.[1] This min-max relationship underscores the dual's utility in verifying primal solutions and exploring alternative optimization perspectives.[2]
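Because the dual is obtained purely by rearranging the primal data into (b, A^T, c), it can be constructed mechanically. Below is a minimal sketch with a hypothetical dual_of helper, assuming NumPy/SciPy; since linprog accepts only \leq rows, the dual's \geq constraints are supplied in negated form:

```python
import numpy as np
from scipy.optimize import linprog

def dual_of(c, A, b):
    """Dual of  max c^T x  s.t. A x <= b, x >= 0  is
       min b^T y  s.t. A^T y >= c, y >= 0."""
    return b, A.T, c  # dual objective, dual constraint matrix, dual RHS

c = np.array([3.0, 4.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([5.0, 6.0])

obj, A_ge, rhs = dual_of(c, A, b)
# A^T y >= c  is equivalent to  -A^T y <= -c  in linprog's <= form.
res = linprog(obj, A_ub=-A_ge, b_ub=-rhs)
print("optimal y:", res.x)        # expected: [4. 0.]
print("dual optimum:", res.fun)   # expected: 20.0
```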
Interpretation of Dual Variables
In linear programming, the dual variables y_i, corresponding to the inequality constraints in the primal problem, are interpreted as shadow prices: each quantifies the marginal increase in the optimal value of the primal objective function resulting from a unit increase in the right-hand side coefficient of the i-th primal constraint.[1] This interpretation stems from the economic perspective where these variables represent the imputed value, or opportunity cost, of the scarce resources limited by the primal constraints.[1] For instance, in a profit-maximization primal problem subject to resource limitations, a positive shadow price y_i indicates the additional profit attainable if one more unit of the i-th resource were available, guiding decisions on resource acquisition or expansion.[1] Geometrically, the vector of optimal dual variables y^* defines the normal of a supporting hyperplane that touches the primal feasible set (a convex polyhedron) at an optimal primal point x^* without cutting into the set.[5] This hyperplane separates the feasible region from the points where the primal objective would exceed the optimum, ensuring that the dual objective b^T y^* provides a tight upper bound that equals the primal optimum under strong duality.[5] The components y_i^* weight the contributions of individual constraints to this supporting structure, reflecting how tightly each constraint binds at optimality; y_i^* = 0 corresponds to a non-binding constraint whose boundary the hyperplane does not touch.[5] In resource allocation contexts for maximization primal problems, the shadow prices y_i embody the value per unit of each constrained resource, so the total imputed value of all resources equals the optimal primal objective value.[1] Nonzero shadow prices signal fully utilized resources, while their magnitudes inform trade-offs, such as whether relaxing a constraint justifies additional costs.[1] This interpretation facilitates sensitivity analysis, revealing how perturbations in resource availability affect overall optimality without re-solving the entire problem.[1] Under non-degeneracy of the primal optimal solution, meaning exactly as many constraints are tight at the optimal vertex as there are variables, the optimal dual variables and associated shadow prices are unique, providing a precise marginal valuation for each constraint.[6] Non-degeneracy ensures that a single supporting hyperplane aligns exactly with the binding constraints at x^*, avoiding ambiguity in the geometric or economic interpretations.[6] Degeneracy, in contrast, can yield multiple dual optima; the standard interpretation assumes the non-degenerate case for clarity and applicability in most theoretical analyses.[6]
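This marginal interpretation can be checked numerically by re-solving the primal with a perturbed right-hand side. A minimal sketch assuming SciPy; the one-unit bump reproduces the shadow price exactly here only because the optimal basis happens not to change over that range:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 4.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([5.0, 6.0])

def primal_optimum(rhs):
    # Solve max c^T x s.t. A x <= rhs, x >= 0 (linprog minimizes).
    return -linprog(-c, A_ub=A, b_ub=rhs).fun

base = primal_optimum(b)
for i in range(len(b)):
    bumped = b.copy()
    bumped[i] += 1.0  # relax resource i by one unit
    print(f"marginal value of resource {i}:", primal_optimum(bumped) - base)
# Expected: 4.0 for the binding first constraint, 0.0 for the slack second.
```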
Constructing the Dual Program
Rules for Standard Forms
The standard form of a primal linear program is formulated as maximizing the objective function c^T x subject to inequality constraints Ax \leq b and non-negativity constraints x \geq 0, where A is the coefficient matrix, b is the right-hand-side vector, c is the objective coefficient vector, x is the variable vector, and all components are appropriately dimensioned.[1][2] The corresponding dual linear program is then constructed by systematically transforming these elements according to established primal-dual correspondences.[7] The construction follows three primary rules for standard inequality forms, summarized in the table and the code sketch below:
- The primal maximization objective becomes a dual minimization objective, and the primal inequality constraints (Ax \leq b) give rise to non-negative dual variables (y \geq 0), with the dual objective coefficients taken from the primal right-hand-side vector b. Thus, the dual objective is \min b^T y.[1][2]
- Each primal variable x_j \geq 0 corresponds to a dual inequality constraint, where the dual constraint coefficients are derived from the transpose of the primal coefficient matrix A^T, the right-hand side is the primal objective coefficient c_j, and the inequality direction is \geq. Thus, the dual constraints are A^T y \geq c.[1][2]
- The number of dual variables equals the number of primal inequality constraints (excluding the non-negativity conditions), and the number of dual constraints equals the number of primal variables. The dual variables y_i are restricted to be non-negative (y \geq 0), matching the direction of the primal's \leq constraints.[7][1]
| Primal Element | Dual Element |
|---|---|
| Maximization objective c^T x | Minimization objective b^T y |
| Inequality constraints Ax \leq b | Non-negative variables y \geq 0 |
| Non-negative variables x_j \geq 0 | Inequality constraints A^T y \geq c |
| Right-hand side b | Objective coefficients in b^T y |
| Objective coefficients c | Right-hand side in A^T y \geq c |
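One consequence of these correspondences, noted in the lead, is that applying them twice returns the original program. If each minimization is re-expressed as a maximization by negating its data, the whole table collapses to a single transformation, and dualizing twice is the identity. A minimal sketch with a hypothetical dualize helper, assuming NumPy:

```python
import numpy as np

def dualize(c, A, b):
    """Dual of  max c^T x  s.t. A x <= b, x >= 0  is
       min b^T y  s.t. A^T y >= c, y >= 0, which equals
       max (-b)^T y  s.t. (-A^T) y <= (-c), y >= 0."""
    return -b, -A.T, -c

c = np.array([3.0, 4.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([5.0, 6.0])

c2, A2, b2 = dualize(*dualize(c, A, b))  # the dual of the dual
assert np.allclose(c2, c) and np.allclose(A2, A) and np.allclose(b2, b)
print("dual of the dual recovers the primal")
```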
Handling General Constraints
In linear programming, primal problems often include equality constraints, free (unrestricted) variables, or variables with sign restrictions other than non-negativity, requiring extensions to the standard dual construction rules.[1][8] To handle an equality constraint in the primal, such as \sum_j a_{ij} x_j = b_i, one approach is to split it into two inequalities: \sum_j a_{ij} x_j \leq b_i and \sum_j a_{ij} x_j \geq b_i. This introduces two non-negative dual variables, one for each inequality, whose difference acts as an effectively unrestricted dual variable y_i for the original equality.[1] Alternatively, the dual can be formed directly by associating an unrestricted dual variable y_i \in \mathbb{R} with the equality constraint, without splitting, since the Lagrange multiplier of an equality carries no sign restriction.[8][9] Free variables in the primal, where x_j \in \mathbb{R} without sign bounds, correspond to equality constraints in the dual formulation. Specifically, for a free primal variable x_j, the associated dual constraint becomes \sum_i a_{ij} y_i = c_j, reflecting the absence of a sign restriction on x_j that would otherwise fix the inequality's direction.[1][8] For primal variables that are non-positive (x_j \leq 0) rather than non-negative, the dual constraint's direction flips relative to the standard case: in a maximization primal, a non-positive x_j leads to a dual constraint of the form \sum_i a_{ij} y_i \leq c_j, whereas a non-negative x_j yields \sum_i a_{ij} y_i \geq c_j.[1][8] Similarly, non-positive dual variables arise from primal \geq constraints in a maximization problem, with signs adjusted to maintain duality consistency. To construct the dual for such general primal forms, a canonical transformation procedure first standardizes the problem before applying the base rules: (1) replace each equality with a pair of opposing inequalities, and rewrite any \geq constraint as a \leq constraint by negating both sides; (2) substitute each free variable with a difference of non-negative variables (x_j = x_j^+ - x_j^-, with x_j^+, x_j^- \geq 0); (3) flip the sign of each non-positive variable by setting x_j = -z_j with z_j \geq 0 and adjusting the coefficients; and (4) convert the objective if necessary (e.g., minimization to maximization via negation). Once the problem is in standard form, with all constraints as \leq inequalities and all variables \geq 0, the standard dual rules apply directly; the correspondences are summarized in the following table, and a code sketch follows it.[1][8]
| Primal Constraint Type | Dual Variable Sign | Primal Variable Type | Dual Constraint Type |
|---|---|---|---|
| \sum_j a_{ij} x_j \leq b_i (max primal) | y_i \geq 0 | x_j \geq 0 | \sum_i a_{ij} y_i \geq c_j |
| \sum_j a_{ij} x_j = b_i | y_i unrestricted | x_j free | \sum_i a_{ij} y_i = c_j |
| \sum_j a_{ij} x_j \geq b_i (max primal) | y_i \leq 0 | x_j \leq 0 | \sum_i a_{ij} y_i \leq c_j |
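The transformation procedure can be sketched in code. The helper below, a hypothetical to_standard_form (assuming NumPy), carries out steps (1) and (2) for an equality-constrained maximization with some free variables; once its output is in standard inequality form, the base dualization rules apply:

```python
import numpy as np

def to_standard_form(c, A_eq, b_eq, free_mask):
    """Rewrite  max c^T x  s.t. A_eq x = b_eq, where x_j is free when
       free_mask[j] is True (otherwise x_j >= 0), as
       max c'^T z  s.t. A' z <= b', z >= 0."""
    # Step 1: each equality row becomes a pair of opposing inequalities.
    A_ub = np.vstack([A_eq, -A_eq])
    b_ub = np.concatenate([b_eq, -b_eq])
    # Step 2: split each free variable as x_j = x_j^+ - x_j^-; the new
    # x_j^- columns carry negated coefficients in A and c.
    A_std = np.hstack([A_ub, -A_ub[:, free_mask]])
    c_std = np.concatenate([c, -c[free_mask]])
    return c_std, A_std, b_ub

# Toy instance:  max x1 + 2 x2  s.t.  x1 + x2 = 3, x1 >= 0, x2 free.
c_std, A_std, b_std = to_standard_form(
    np.array([1.0, 2.0]),
    np.array([[1.0, 1.0]]),
    np.array([3.0]),
    np.array([False, True]),
)
print(A_std.shape)  # (2, 3): split equality; columns x1, x2^+, x2^-
```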
Duality Theorems
Weak Duality Theorem
The weak duality theorem establishes a fundamental inequality between the objective values of a primal linear program and its dual for any pair of feasible solutions. Consider the standard primal maximization problem: maximize \mathbf{c}^T \mathbf{x} subject to A \mathbf{x} \leq \mathbf{b} and \mathbf{x} \geq \mathbf{0}, with feasible set \mathcal{P}, and its corresponding dual minimization problem: minimize \mathbf{b}^T \mathbf{y} subject to A^T \mathbf{y} \geq \mathbf{c} and \mathbf{y} \geq \mathbf{0}, with feasible set \mathcal{D}. For any \mathbf{x} \in \mathcal{P} and \mathbf{y} \in \mathcal{D}, the theorem states that \mathbf{c}^T \mathbf{x} \leq \mathbf{b}^T \mathbf{y}.[2][10] The proof relies only on the non-negativity of the variables and the constraints of both problems. From the dual constraint A^T \mathbf{y} \geq \mathbf{c} and the non-negativity of \mathbf{x}, it follows that \mathbf{c}^T \mathbf{x} \leq (A^T \mathbf{y})^T \mathbf{x} = \mathbf{y}^T (A \mathbf{x}). Since \mathbf{x} \in \mathcal{P} implies A \mathbf{x} \leq \mathbf{b} and \mathbf{y} \geq \mathbf{0}, it follows that \mathbf{y}^T (A \mathbf{x}) \leq \mathbf{y}^T \mathbf{b}. Combining the two inequalities yields \mathbf{c}^T \mathbf{x} \leq \mathbf{b}^T \mathbf{y}.[2][11] This inequality implies that any feasible dual solution provides an upper bound on the objective value of every feasible primal solution, and hence on the primal optimum; conversely, feasibility of one problem rules out unboundedness of the other. This makes weak duality a practical tool for bounding optimization problems without solving them fully. No assumptions of finiteness, attainment of optima, or additional duality conditions are required for the theorem to hold.[10][11]
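The inequality is easy to observe empirically: every feasible dual objective value sits above every feasible primal objective value. A minimal sketch assuming NumPy, which rejection-samples feasible points of both problems for the small instance used throughout this article:

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([3.0, 4.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([5.0, 6.0])

# Rejection-sample feasible points of the primal and the dual.
xs = rng.uniform(0, 6, size=(20000, 2))
xs = xs[np.all(xs @ A.T <= b, axis=1)]  # keep x with A x <= b (x >= 0 by construction)
ys = rng.uniform(0, 10, size=(20000, 2))
ys = ys[np.all(ys @ A >= c, axis=1)]    # keep y with A^T y >= c (y >= 0 likewise)

# Weak duality: largest sampled primal value <= smallest sampled dual value.
assert (xs @ c).max() <= (ys @ b).min() + 1e-9
print("max sampled primal objective:", (xs @ c).max())
print("min sampled dual objective:  ", (ys @ b).min())
```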
Strong Duality Theorem
The strong duality theorem in linear programming asserts that if the primal problem attains a finite optimal value, then the dual problem also attains a finite optimal value, and these optimal values are equal.[1][12] For the standard primal maximization problem \max c^\top x subject to Ax \leq b and x \geq 0, and its corresponding dual minimization problem \min b^\top y subject to A^\top y \geq c and y \geq 0, the theorem guarantees p^* = d^*, where p^* and d^* denote the respective optimal objective values, provided the primal is feasible and bounded.[13] This equality holds without additional constraint qualifications such as Slater's condition: linear programs are polyhedral, and strong duality follows directly from feasibility and boundedness of one problem, which imply the same for the other.[12] The conditions for strong duality in linear programming therefore require only that at least one of the problems (primal or dual) is feasible and has a finite optimum; the other then inherits these properties with equal objective value.[1] In the polyhedral setting, this rules out the case where one problem is solvable while the other is infeasible or unbounded.[13] Building on the weak duality inequality p^* \leq d^*, strong duality closes the gap to equality under these feasibility assumptions.[12] A proof sketch via the simplex method shows how equality is attained: at an optimal basic feasible solution of the primal, the simplex multipliers (shadow prices) form a feasible dual solution, and the optimality condition on the reduced costs forces the primal and dual objectives to coincide.[1] Alternatively, a separating hyperplane argument on the convex feasible set confirms that no duality gap exists when the optima are finite, as any strictly separating hyperplane would contradict boundedness or feasibility.[13] Thus the duality gap, defined as d^* - p^*, is zero whenever strong duality applies.[12]
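A minimal numerical check of the zero duality gap, assuming SciPy's linprog (whose default variable bounds already enforce non-negativity), on the same small instance:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 4.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([5.0, 6.0])

primal = linprog(-c, A_ub=A, b_ub=b)    # max c^T x  (negated for linprog)
dual = linprog(b, A_ub=-A.T, b_ub=-c)   # min b^T y  with A^T y >= c negated
p_star, d_star = -primal.fun, dual.fun

assert abs(p_star - d_star) < 1e-9      # duality gap d* - p* is zero
print("p* = d* =", p_star)              # expected: 20.0
```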
Complementary Slackness
Complementary slackness conditions provide a precise characterization of the optimal solutions to a primal-dual pair of linear programs. Specifically, if x^* is an optimal solution to the primal problem \max \{ c^T x \mid A x \leq b, x \geq 0 \} and y^* is an optimal solution to the dual problem \min \{ b^T y \mid A^T y \geq c, y \geq 0 \}, then the following hold for all indices i = 1, \dots, m and j = 1, \dots, n: y_i^* (b_i - (A x^*)_i) = 0, \quad x_j^* ((A^T y^*)_j - c_j) = 0. These conditions state that the products of corresponding primal and dual slacks and variables vanish at optimality.[14] The interpretation is that, at the optimum, for each primal constraint i, either the dual variable y_i^* is zero or the primal slack b_i - (A x^*)_i is zero (the constraint is tight). Symmetrically, for each dual constraint j, either the primal variable x_j^* is zero or the dual slack (A^T y^*)_j - c_j is zero. This mutual exclusivity shows how optimal solutions "complement" each other, with a variable taking a positive value only when the opposing slack is absent, reflecting the economic intuition that shadow prices (dual variables) are positive only for binding resource constraints in the primal.[14] The proof of complementary slackness follows from the strong duality theorem, which establishes that the primal and dual optimal values are equal, combined with the weak duality inequality. For feasible x, y, the difference between the dual and primal objectives decomposes as b^T y - c^T x = \sum_i y_i (b_i - (A x)_i) + \sum_j x_j ((A^T y)_j - c_j) \geq 0, a sum of non-negative terms. At optimality this difference is zero by strong duality, so each term must individually vanish, yielding the slackness products.[14] These conditions play a crucial role in verifying optimality: if feasible solutions x and y to the primal and dual, respectively, satisfy the complementary slackness equalities, then both are optimal, since the weak duality gap closes to zero. Conversely, every optimal pair satisfies them, providing a practical test of solution correctness without re-solving the full program.[14]
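Given optimal solutions from any LP solver, the slackness products can be verified directly; a minimal sketch assuming SciPy:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 4.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([5.0, 6.0])

x = linprog(-c, A_ub=A, b_ub=b).x      # optimal primal solution
y = linprog(b, A_ub=-A.T, b_ub=-c).x   # optimal dual solution

primal_slack = b - A @ x               # b_i - (A x)_i
dual_slack = A.T @ y - c               # (A^T y)_j - c_j
print("y * primal slack:", y * primal_slack)  # expected: [0. 0.]
print("x * dual slack:  ", x * dual_slack)    # expected: [0. 0.]
```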
Mathematical Formulations
Vector Representations
The vector representation of a linear program provides a compact formulation that emphasizes the inner product structure of the objective and constraints, facilitating analysis in finite-dimensional Euclidean spaces. The primal problem in standard inequality form is expressed as maximizing the dot product \mathbf{c} \cdot \mathbf{x}, where \mathbf{c} \in \mathbb{R}^n is the coefficient vector and \mathbf{x} \in \mathbb{R}^n is the decision variable vector, subject to the inequality constraints A \mathbf{x} \leq \mathbf{b}, with A \in \mathbb{R}^{m \times n} the constraint matrix and \mathbf{b} \in \mathbb{R}^m the right-hand side vector, along with non-negativity \mathbf{x} \geq \mathbf{0}.[1] This notation presents the primal as an optimization over a convex set defined by linear inequalities in a vector space. The corresponding dual problem is formulated as minimizing the dot product \mathbf{b} \cdot \mathbf{y}, where \mathbf{y} \in \mathbb{R}^m is the dual variable vector, subject to A^T \mathbf{y} \geq \mathbf{c} and \mathbf{y} \geq \mathbf{0}.[1] Here, the transpose A^T ensures that the dual constraints mirror the primal's structure through the adjoint relationship, with the dual objective representing a weighted sum of resource values. This vector form underscores the symmetry between primal and dual, where the roles of objectives and constraints are interchanged via transposition. Geometrically, the feasible set of the primal problem forms a polyhedron in \mathbb{R}^n, defined as the intersection of the half-spaces given by A \mathbf{x} \leq \mathbf{b} and \mathbf{x} \geq \mathbf{0}.[15] This polyhedral region may be bounded or unbounded depending on the constraints, and when the optimum is finite it is attained at a vertex where tight constraints intersect. Similarly, the dual feasible set is a polyhedron in \mathbb{R}^m, providing a dual geometric perspective on the primal's resource allocation. The vector notation offers significant advantages for theoretical analysis, particularly in proving duality results. For instance, Farkas' lemma, which certifies the infeasibility of a system A \mathbf{x} \leq \mathbf{b}, \mathbf{x} \geq \mathbf{0} via a non-negative vector \boldsymbol{\lambda} satisfying A^T \boldsymbol{\lambda} \geq \mathbf{0} and \mathbf{b} \cdot \boldsymbol{\lambda} < 0, directly leverages this inner product structure and underlies the duality theorems of linear programming.[16] Such formulations enable elegant derivations of separation theorems and support extensions to convex optimization in vector spaces.
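Farkas certificates are straightforward to verify in this vector notation. A minimal sketch assuming NumPy, with a hand-picked (hypothetical) certificate for a deliberately infeasible two-constraint system:

```python
import numpy as np

# Infeasible system: x <= 1 and -x <= -2 (i.e. x >= 2), with x >= 0.
A = np.array([[1.0], [-1.0]])
b = np.array([1.0, -2.0])

lam = np.array([1.0, 1.0])        # candidate Farkas certificate
assert np.all(lam >= 0)           # lambda >= 0
assert np.all(A.T @ lam >= 0)     # A^T lambda >= 0
assert b @ lam < 0                # b . lambda < 0, so no feasible x exists
print("certificate value b . lambda =", b @ lam)  # -1.0
```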
Matrix and Eigenvector Perspectives
The standard matrix formulation of a linear program (primal problem) is to maximize \mathbf{c}^T \mathbf{x} subject to \mathbf{A} \mathbf{x} = \mathbf{b} and \mathbf{x} \geq \mathbf{0}, where \mathbf{A} is an m \times n constraint matrix, \mathbf{b} \in \mathbb{R}^m is the right-hand side vector, \mathbf{c} \in \mathbb{R}^n is the objective coefficient vector, and \mathbf{x} \in \mathbb{R}^n is the decision variable vector. This equality-constrained form incorporates inequality constraints via slack variables, giving a compact representation that highlights the linear structure and facilitates duality analysis. The corresponding dual problem minimizes \mathbf{b}^T \mathbf{y} subject to \mathbf{A}^T \mathbf{y} \geq \mathbf{c}, where \mathbf{y} \in \mathbb{R}^m are the dual variables, unrestricted in sign because they correspond to equality constraints. From a spectral perspective, solutions to certain linear programs can be interpreted through generalized eigenvalue problems, particularly in economic equilibrium models where duality arises naturally. In von Neumann's 1945 model of general economic equilibrium, optimal activity levels \mathbf{z} \geq \mathbf{0} and equilibrium prices \mathbf{p} > \mathbf{0} satisfy inequalities akin to dual constraints, with the maximal growth rate \rho (corresponding to the optimal value) serving as an eigenvalue. Specifically, the conditions \mathbf{p}^T \mathbf{A} \geq \rho \mathbf{p}^T and \mathbf{A} \mathbf{z} \leq \rho \mathbf{z} hold, with equality on the support of the optima, framing the solution as a generalized eigenvector pair of the matrix pencil (\mathbf{A}, \mathbf{I}).[17] This viewpoint is insightful for theoretical analysis in specific models such as von Neumann's, revealing connections between linear programming duality and eigenvalue problems in constrained optimization. However, it is not typically employed in computational algorithms for standard linear programs, since direct methods such as the simplex or interior-point approaches solve the underlying matrix systems more efficiently.
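As a minimal numerical illustration of this spectral view (a toy computation under strong assumptions, not von Neumann's full model): for a strictly positive matrix, the Perron-Frobenius eigenpair supplies \rho, \mathbf{z}, and \mathbf{p} satisfying the stated relations with equality:

```python
import numpy as np

A = np.array([[0.5, 0.3],
              [0.2, 0.6]])                  # hypothetical positive matrix

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
rho = vals.real[k]                          # Perron root, here 0.8
z = np.abs(vecs[:, k].real)                 # right eigenvector, z > 0

vals_l, vecs_l = np.linalg.eig(A.T)         # left eigenvectors of A
k = np.argmax(vals_l.real)
p = np.abs(vecs_l[:, k].real)               # left eigenvector, p > 0

assert np.allclose(A @ z, rho * z)          # A z <= rho z, with equality
assert np.allclose(p @ A, rho * p)          # p^T A >= rho p^T, with equality
print("growth factor rho =", rho)
```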
Examples
Basic Numerical Example
Consider the primal linear program in standard form: \begin{align*} \max &\quad 3x_1 + 4x_2 \\ \text{s.t.} &\quad x_1 + x_2 \leq 5, \\ &\quad 2x_1 + x_2 \leq 6, \\ &\quad x_1, x_2 \geq 0. \end{align*} The dual program is constructed by transposing the coefficient matrix of the constraints and switching the objective direction, yielding: \begin{align*} \min &\quad 5y_1 + 6y_2 \\ \text{s.t.} &\quad y_1 + 2y_2 \geq 3, \\ &\quad y_1 + y_2 \geq 4, \\ &\quad y_1, y_2 \geq 0. \end{align*} [1] The feasible region for the primal is a polygon with vertices at (0,0), (0,5), (1,4), and (3,0). Evaluating the objective at these points gives values of 0, 20, 19, and 9, respectively, so the optimal primal solution is x_1 = 0, x_2 = 5, with objective value 20. For the dual, the feasible region has vertices at (4,0) and (0,4), with objective values of 20 and 24, respectively, so the optimal dual solution is y_1 = 4, y_2 = 0, with objective value 20.[1] Weak duality holds, as for any feasible primal x and dual y, the primal objective satisfies 3x_1 + 4x_2 \leq 5y_1 + 6y_2. Strong duality is verified by the equal optimal values of 20. Complementary slackness conditions are satisfied: in the primal, the first constraint is tight (x_1 + x_2 = 5), corresponding to y_1 = 4 > 0, while the second has positive slack (2x_1 + x_2 = 5 < 6), corresponding to y_2 = 0; in the dual, the second constraint is tight (y_1 + y_2 = 4), corresponding to x_2 = 5 > 0, while the first has positive slack (y_1 + 2y_2 = 4 > 3), corresponding to x_1 = 0.[1] The following table summarizes the optimal solutions and related values:
| Aspect | Primal Value | Dual Value |
|---|---|---|
| x_1 | 0 | - |
| x_2 | 5 | - |
| y_1 | - | 4 |
| y_2 | - | 0 |
| Objective | 20 | 20 |
| Slack in 1st constraint | 0 | 1 |
| Slack in 2nd constraint | 1 | 0 |
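The full example, including the table's slack values, can be reproduced in a few lines; a sketch assuming SciPy (linprog minimizes, so the primal objective is negated):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 4.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([5.0, 6.0])

primal = linprog(-c, A_ub=A, b_ub=b)     # max 3 x1 + 4 x2
dual = linprog(b, A_ub=-A.T, b_ub=-c)    # min 5 y1 + 6 y2
x, y = primal.x, dual.x

print("x* =", x, "objective =", -primal.fun)  # [0. 5.], 20.0
print("y* =", y, "objective =", dual.fun)     # [4. 0.], 20.0
print("primal slacks:", b - A @ x)            # [0. 1.]
print("dual slacks:  ", A.T @ y - c)          # [1. 0.]
```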