
Linear programming relaxation

Linear programming relaxation is a fundamental technique in combinatorial optimization that involves relaxing the integrality constraints of an integer linear program (ILP)—where variables must take integer values—by replacing them with continuous constraints, typically allowing variables to range over the non-negative reals or the interval [0,1] for binary variables, thereby transforming the problem into a solvable linear program (LP). This relaxation enlarges the discrete feasible set to a convex polyhedron, enabling polynomial-time algorithms to find an optimal fractional solution that provides a lower bound on the original ILP's optimum (for minimization; an upper bound for maximization). The primary purpose of LP relaxation is to approximate NP-hard problems, such as those reducible to ILPs, by leveraging the efficiency of LP solvers to obtain bounds and guide the construction of near-optimal integer solutions through rounding techniques. For instance, in the vertex cover problem, the LP relaxation yields a 2-approximation by rounding fractional values greater than or equal to 1/2 up to 1, ensuring the solution's cost is at most twice the optimal integer value. Similarly, for the set cover problem, randomized rounding of the LP solution achieves an O(log n)-approximation, where n is the universe size, bounding the expected cost by (4 ln n + 2) times the ILP optimum with high probability. A key measure of an LP relaxation's quality is the integrality gap, defined as the supremum ratio of the ILP optimum to the LP optimum over all instances, which quantifies the approximation limitation; for the standard vertex cover relaxation, this gap is asymptotically 2. LP relaxations also facilitate advanced methods such as branch and bound, which uses LP bounds to prune the search, and cutting-plane methods, which derive valid inequalities that tighten the formulation. These properties make LP relaxation a cornerstone of approximation algorithms in theoretical computer science and operations research, widely applied in areas including network design, scheduling, and facility location.

Fundamentals of Integer Programming

Linear Programming Basics

Linear programming (LP) is a method in mathematical optimization for maximizing or minimizing a linear objective function subject to a finite number of linear equality and inequality constraints. The variables in LP are typically required to be non-negative, representing quantities such as resources or production levels in applications like resource allocation and scheduling. The standard form of an LP problem is to minimize (or equivalently, maximize by negating the objective) \mathbf{c}^T \mathbf{x} subject to A \mathbf{x} \leq \mathbf{b} and \mathbf{x} \geq \mathbf{0}, where \mathbf{c} \in \mathbb{R}^n is the coefficient vector for the objective, A \in \mathbb{R}^{m \times n} is the constraint matrix, \mathbf{b} \in \mathbb{R}^m is the right-hand side vector, and \mathbf{x} \in \mathbb{R}^n is the vector of decision variables. LP problems possess key properties including convexity, as both the objective function and the feasible region defined by the constraints are convex, ensuring that local optima are global optima. LP is solvable in polynomial time; in practice it is solved using algorithms such as the simplex method, introduced by George Dantzig in 1947 (efficient in practice though exponential in the worst case), or interior-point methods, pioneered by Narendra Karmarkar in 1984, which run in polynomial time. Geometrically, the feasible region of an LP is a convex polyhedron (or polytope if bounded), formed as the intersection of half-spaces defined by the linear constraints. Due to the linearity of the objective function, the optimal solution, if it exists, occurs at a vertex (extreme point) of this polyhedron. Every LP has a dual formulation: for the primal problem of maximizing \mathbf{c}^T \mathbf{x} subject to A \mathbf{x} \leq \mathbf{b} and \mathbf{x} \geq \mathbf{0}, the dual is to minimize \mathbf{b}^T \mathbf{y} subject to A^T \mathbf{y} \geq \mathbf{c} and \mathbf{y} \geq \mathbf{0}. The strong duality theorem states that if both the primal and dual problems are feasible, then both attain the same optimal objective value.
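The vertex property above can be checked directly on a tiny instance: since the optimum of an LP lies at a vertex of the feasible polyhedron, a two-variable LP can be solved by enumerating intersections of constraint pairs. The following sketch uses an assumed toy instance (the data is illustrative, not from the article):

```python
from itertools import combinations

# Illustrative instance (assumed data):
# maximize 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18,  x, y >= 0.
# Non-negativity is encoded as -x <= 0 and -y <= 0.
A = [(1, 0), (0, 2), (3, 2), (-1, 0), (0, -1)]
b = [4, 12, 18, 0, 0]
c = (3, 5)

best_val, best_pt = None, None
for i, j in combinations(range(len(A)), 2):
    # Intersect constraints i and j (a 2x2 linear system, Cramer's rule).
    det = A[i][0] * A[j][1] - A[i][1] * A[j][0]
    if det == 0:
        continue  # parallel constraints: no unique intersection
    x = (b[i] * A[j][1] - A[i][1] * b[j]) / det
    y = (A[i][0] * b[j] - b[i] * A[j][0]) / det
    # Keep the point only if it satisfies every constraint (i.e. is a vertex).
    if all(ai * x + aj * y <= bk + 1e-9 for (ai, aj), bk in zip(A, b)):
        val = c[0] * x + c[1] * y
        if best_val is None or val > best_val:
            best_val, best_pt = val, (x, y)

print(best_val, best_pt)  # 36.0 (2.0, 6.0)
```

This brute-force vertex enumeration is exponential in the number of constraints; the simplex method walks between adjacent vertices instead, and interior-point methods avoid vertices until the end.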

Integer Linear Programming Formulation

Integer linear programming (ILP) extends the framework of linear programming by imposing integrality constraints on some or all decision variables, requiring them to take integer values rather than arbitrary real numbers. This discreteness captures problems where solutions must be whole units, such as selecting indivisible items or routes. In the special case of 0-1 ILP, variables are restricted to binary values (0 or 1), which models selection decisions like including or excluding an option. The standard formulation of an ILP is to minimize the objective function \mathbf{c}^T \mathbf{x} subject to linear constraints A \mathbf{x} \leq \mathbf{b}, non-negativity \mathbf{x} \geq \mathbf{0}, and integrality \mathbf{x} \in \mathbb{Z}^n, where \mathbf{c} \in \mathbb{R}^n, A \in \mathbb{R}^{m \times n}, \mathbf{b} \in \mathbb{R}^m, and n and m denote the number of variables and constraints, respectively. This form allows for mixed-integer variants where only a subset of variables are integer-valued. Solving general ILPs is NP-hard, as established by Karp in 1972, who demonstrated the NP-completeness of 0-1 integer programming through reductions from known NP-complete problems like satisfiability. ILPs arise in numerous applications. For instance, the 0-1 knapsack problem involves selecting a subset of items, each with a weight and value, to maximize total value without exceeding a knapsack capacity; binary variables indicate item inclusion, with a constraint on total weight and the objective maximizing summed values. Similarly, the traveling salesman problem (TSP) requires finding a minimum-cost tour visiting each city exactly once and returning to the start; it is modeled using binary variables for edges in a graph, with degree constraints ensuring a single entry and exit per city and subtour elimination constraints enforcing a single connected tour. The feasible region of an ILP comprises the integer points lying within the polyhedron defined by the continuous linear constraints, contrasting with the full polyhedron of standard linear programming.
This sparsity of feasible solutions underscores the computational intractability of ILPs compared to their continuous counterparts.
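The discrete search space can be made concrete with a brute-force 0-1 ILP solver that enumerates all 2^n binary vectors; this is a minimal sketch over an assumed small instance (two constraints, invented data), exactly the exhaustive approach that NP-hardness makes impractical at scale:

```python
from itertools import product

def solve_01_ilp(c, A, b):
    """Maximize c.x over x in {0,1}^n subject to A x <= b, by enumeration.
    Exponential in n -- viable only for tiny instances."""
    best = (None, None)
    for x in product((0, 1), repeat=len(c)):
        # Check every linear constraint row against this 0-1 assignment.
        if all(sum(a * xi for a, xi in zip(row, x)) <= bi
               for row, bi in zip(A, b)):
            val = sum(ci * xi for ci, xi in zip(c, x))
            if best[0] is None or val > best[0]:
                best = (val, x)
    return best

# Small assumed instance: max 5x1 + 4x2 + 3x3
# s.t. 2x1 + 3x2 + x3 <= 5 and 4x1 + x2 + 2x3 <= 11.
val, x = solve_01_ilp([5, 4, 3], [[2, 3, 1], [4, 1, 2]], [5, 11])
print(val, x)  # 9 (1, 1, 0)
```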

The Relaxation Process

Defining LP Relaxation

In integer linear programming (ILP), the linear programming (LP) relaxation is constructed by eliminating the integrality requirements on the decision variables while preserving all other constraints, resulting in a surrogate problem that is computationally tractable via standard LP solvers. This transformation expands the feasible region to include fractional solutions, allowing the application of efficient algorithms such as the simplex method or polynomial-time interior-point methods to obtain bounds on the original ILP's optimal value. Formally, consider a minimization ILP given by \min \{ c^T x \mid A x \leq b, \, x \geq 0, \, x \in \mathbb{Z}^n \}, where c \in \mathbb{R}^n, A \in \mathbb{R}^{m \times n}, and b \in \mathbb{R}^m. The corresponding LP relaxation is \min \{ c^T x \mid A x \leq b, \, x \geq 0 \}, which simply removes the integer constraint x \in \mathbb{Z}^n. The LP relaxation yields a lower bound on the optimal value of the minimization ILP (and an upper bound for maximization problems) because the feasible set of the ILP is a subset of the LP feasible set, while the objective function remains unchanged. To see this, let z_{ILP}^* and z_{LP}^* denote the optimal values; any feasible ILP solution x_{ILP} is also feasible for the LP, so c^T x_{ILP} \geq z_{LP}^*, implying z_{ILP}^* \geq z_{LP}^*. This relationship embodies a weak-duality-style bound between the ILP and its relaxation: the expanded feasible region ensures the LP optimum is no larger than the ILP optimum for minimization. A notable special case arises when the constraint matrix A is totally unimodular—a matrix in which every square submatrix has determinant 0, 1, or -1—and the right-hand side b is integer-valued. In this case, all vertices of the LP feasible polyhedron are integer points, so the optimal LP solution is integral and thus solves the ILP exactly.
For instance, the vertex-edge incidence matrix of a bipartite graph—where rows correspond to vertices and columns to edges, with a 1 if the vertex is incident to the edge—is totally unimodular, enabling exact solution of problems like bipartite matching via LP relaxation.
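The determinant condition can be verified exhaustively on small matrices; the following sketch checks every square submatrix (feasible only for tiny inputs) and contrasts the incidence matrix of the bipartite graph K_{2,2}, an assumed example, with that of a triangle, whose odd cycle breaks total unimodularity:

```python
from itertools import combinations

def det(M):
    """Determinant by cofactor expansion along the first row (tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def is_totally_unimodular(M):
    """Check that every square submatrix has determinant in {-1, 0, 1}."""
    m, n = len(M), len(M[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                sub = [[M[i][j] for j in cols] for i in rows]
                if det(sub) not in (-1, 0, 1):
                    return False
    return True

# Incidence matrix of bipartite K_{2,2}: rows u1,u2,v1,v2, one column per edge.
K22 = [[1, 1, 0, 0],
       [0, 0, 1, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1]]
print(is_totally_unimodular(K22))       # True

# Incidence matrix of a triangle (odd cycle, non-bipartite): has a 3x3
# submatrix with determinant -2, so it is not totally unimodular.
triangle = [[1, 1, 0],
            [1, 0, 1],
            [0, 1, 1]]
print(is_totally_unimodular(triangle))  # False
```

The triangle example shows why general (non-bipartite) matching needs more than the basic LP relaxation, whereas bipartite matching is solved exactly by it.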

Construction of the Relaxed Problem

To construct the linear programming (LP) relaxation of an integer linear program (ILP), the process begins by identifying the integer-constrained variables in the original formulation. An ILP is typically expressed as minimizing \mathbf{c}^T \mathbf{x} subject to A \mathbf{x} \leq \mathbf{b}, \mathbf{x} \geq 0, \mathbf{x} \in \mathbb{Z}^n, where \mathbf{x} includes variables required to take integer values. The integrality constraints, such as x_j \in \mathbb{Z} for each variable x_j, are the elements targeted for relaxation. The next step replaces these integrality requirements with continuous bounds that encompass the possible integer values. For binary variables, where x_i \in \{0, 1\}, the constraint is relaxed to 0 \leq x_i \leq 1. For general integer variables bounded by l_j \leq x_j \leq u_j, the relaxation keeps the bounds l_j \leq x_j \leq u_j but lets x_j vary continuously. This substitution transforms the feasible region from a discrete set of points to a polyhedron, preserving the linear objective and constraints while expanding the solution space. The resulting problem is a standard LP: minimize \mathbf{c}^T \mathbf{x} subject to A \mathbf{x} \leq \mathbf{b}, l \leq \mathbf{x} \leq u. In mixed-integer linear programs (MIPs), which include both integer and continuous variables, only the integer variables are relaxed to continuous bounds, while the originally continuous variables retain their constraints. This selective relaxation ensures the MIP's relaxed version remains a polyhedral feasible set, solvable via efficient LP algorithms. For instance, in a MIP with binary decisions and continuous flows, the binaries become [0, 1]-bounded reals, and the flows stay unbounded or bounded as specified. This construction yields equivalent reformulations such as bounding box relaxations, where the polyhedron is simply the box product of individual variable intervals intersected with the linear inequalities, or broader continuous extensions that embed the integer hull into a larger continuous domain.
These views highlight the relaxation as a natural continuous analog of the discrete problem, facilitating analysis without altering the linear structure. The relaxed LP is solved using specialized solvers like IBM CPLEX or Gurobi, which employ interior-point methods or revised simplex algorithms and often scale to millions of variables thanks to convexity. A common pitfall arises if the original formulation includes nonlinearities, potentially yielding a nonconvex relaxation, though standard ILP assumes linearity, ensuring the output remains a tractable LP.
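The construction itself is a purely mechanical transformation of the problem data. A minimal sketch, assuming a simple dictionary-based problem representation (the representation and field names are invented for illustration, not a solver API):

```python
def lp_relaxation(ilp):
    """Construct the LP relaxation of a (mixed-)integer program given as a
    dict: keep objective and constraints, drop integrality markers.
    Binary variables get [0, 1] bounds; general integers keep their numeric
    bounds but become continuous; continuous variables are untouched."""
    relaxed = dict(ilp)
    relaxed["bounds"] = [(0.0, 1.0) if kind == "binary" else bounds
                         for kind, bounds in zip(ilp["vartypes"], ilp["bounds"])]
    relaxed["vartypes"] = ["continuous"] * len(ilp["vartypes"])
    return relaxed

# The three-item knapsack instance used later in the article.
ilp = {
    "objective": [300, 180, 170],   # maximize c^T x
    "A": [[3, 2, 2]], "b": [4],     # A x <= b
    "vartypes": ["binary", "binary", "binary"],
    "bounds": [(0, 1)] * 3,
}
lp = lp_relaxation(ilp)
print(lp["vartypes"])  # ['continuous', 'continuous', 'continuous']
print(lp["bounds"])    # [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]
```

The objective and constraint data pass through untouched, which is exactly the point: only the variable domains change.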

Illustrative Example

Problem Setup

To illustrate the application of linear programming relaxation, consider the 0-1 knapsack problem, a canonical example of an integer linear programming (ILP) formulation where decision variables represent binary choices. In this problem, there are n items available, each characterized by a positive weight w_i > 0 and a positive value v_i > 0 for i = 1, \dots, n, along with a knapsack of fixed capacity W > 0. The goal is to select a subset of these items that maximizes the total value while ensuring the total weight does not exceed the capacity. The ILP formulation of the 0-1 knapsack problem is \max \sum_{i=1}^n v_i x_i subject to \sum_{i=1}^n w_i x_i \leq W, x_i \in \{0, 1\} \quad \forall i = 1, \dots, n, where the binary variable x_i indicates whether item i is included in the knapsack (x_i = 1) or not (x_i = 0). Exact solution of this ILP is computationally challenging because brute force requires evaluating up to 2^n possible subsets of items, a number that grows exponentially with n and makes enumeration impractical for even moderate-sized instances. The 0-1 knapsack problem is NP-hard, underscoring the need for specialized optimization techniques. For clarity, examine a small instance with three items and capacity W = 4: Item 1 has weight w_1 = 3 and value v_1 = 300; Item 2 has w_2 = 2, v_2 = 180; Item 3 has w_3 = 2, v_3 = 170. The feasible solutions, which satisfy the weight constraint, are the empty subset (total weight 0, total value 0), the singletons {1} (weight 3, value 300), {2} (2, 180), and {3} (2, 170), and the pair {2, 3} (4, 350). All other subsets violate the capacity.

Solving the Original and Relaxed Versions

To solve the original integer linear programming (ILP) formulation of the illustrative 0-1 knapsack problem, enumeration of all feasible binary solutions is straightforward given the small number of variables. The feasible subsets are the empty set (value 0), item 1 alone (weight 3 ≤ 4, value 300), item 2 alone (weight 2 ≤ 4, value 180), item 3 alone (weight 2 ≤ 4, value 170), and items 2 and 3 together (weight 4 ≤ 4, value 350); combinations including item 1 with either 2 or 3 exceed the capacity. Thus, the optimal integer solution is x_1 = 0, x_2 = 1, x_3 = 1, with objective value 350. The LP relaxation replaces the binary constraints x_i \in \{0,1\} with continuous bounds 0 \leq x_i \leq 1, yielding the problem: \begin{align*} \max &\quad 300x_1 + 180x_2 + 170x_3 \\ \text{s.t.} &\quad 3x_1 + 2x_2 + 2x_3 \leq 4, \\ &\quad 0 \leq x_1, x_2, x_3 \leq 1. \end{align*} This can be solved by the greedy fractional knapsack rule, which prioritizes items by value-to-weight ratio (100 for item 1, 90 for item 2, 85 for item 3): assign x_1 = 1 (using 3 units of capacity, value 300), then fill the remaining 1 unit with x_2 = 0.5 (value 90), setting x_3 = 0, for a total of 390. Alternatively, the simplex method confirms this optimum by pivoting among the vertices of the feasible region until the objective can no longer improve. The LP solution (1, 0.5, 0) is infeasible for the original ILP due to the fractional value of x_2, yet it establishes an upper bound of 390 on the integer optimum, illustrating the relaxation's bounding property. The feasible region of the LP relaxation forms a polyhedron in \mathbb{R}^3, bounded by the knapsack and box constraints, which contains a continuum of points including the five integer-feasible points identified earlier. In contrast, the ILP restricts attention to those discrete points within the polyhedron, highlighting how the relaxation enlarges the solution space to provide a solvable continuous surrogate.
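The greedy fractional rule, which solves the knapsack LP relaxation exactly, can be sketched in a few lines and checked against the values worked out above:

```python
def knapsack_lp_greedy(values, weights, W):
    """Solve the LP relaxation of 0-1 knapsack exactly via the greedy
    fractional rule: fill capacity by decreasing value-to-weight ratio,
    splitting at most one item."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    x = [0.0] * len(values)
    cap = float(W)
    for i in order:
        # Take as much of item i as fits (at most the full item).
        take = min(1.0, cap / weights[i]) if cap > 0 else 0.0
        x[i] = take
        cap -= take * weights[i]
    obj = sum(v * xi for v, xi in zip(values, x))
    return obj, x

obj, x = knapsack_lp_greedy([300, 180, 170], [3, 2, 2], 4)
print(obj, x)  # 390.0 [1.0, 0.5, 0.0]
```

The output matches the hand computation: fractional optimum 390 at (1, 0.5, 0), an upper bound on the integer optimum 350.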

Theoretical Properties

Optimality and Feasibility Bounds

In linear programming relaxation of an integer linear program (ILP), the optimal value of the relaxed linear program (LP) provides a bound on the optimal value of the original ILP. For a maximization problem, the LP optimum z^{LP} satisfies z^{LP} \geq z^{ILP}, where z^{ILP} is the ILP optimum. For a minimization problem, the inequality reverses: z^{LP} \leq z^{ILP}. This follows from the inclusion of feasible sets: the ILP feasible set S^{ILP}, consisting of points satisfying the linear constraints and integrality requirements, is a subset of the LP feasible set S^{LP}, which drops the integrality constraints. Since the objective is linear, the optimum over the larger set provides an upper bound for maximization or a lower bound for minimization compared to the smaller set. The proof relies solely on this feasible-set containment and the linearity of the objective, holding regardless of whether the LP solution is integer-valued. Regarding feasibility, every solution feasible for the ILP is also feasible for its LP relaxation, as the relaxation imposes the same linear constraints but omits the requirement that variables be integer. The converse does not hold: the LP may admit fractional solutions outside the ILP feasible set. The collection of ILP feasible points defines the integer hull, which is the convex hull of these points, P_I = \mathrm{conv}\{ x \in \mathbb{Z}^n \mid Ax \leq b \}. This captures the structure of integer solutions and serves as the target for exact reformulations, though computing it fully is generally intractable. When the LP solution is fractional, it can be used to derive valid inequalities, known as cuts, that tighten the relaxation by excluding the fractional point while preserving all integer feasible solutions. These inequalities are valid for the integer hull and are generated directly from the LP solution, such as through the simplex tableau. For instance, Gomory cuts, introduced in the seminal work on cutting planes, provide a systematic way to obtain such inequalities from the current LP basis.
In algorithmic contexts like branch and bound, the LP relaxation plays a central role in providing bounds to guide the search. For maximization problems, the LP optimum at each node of the search tree yields an upper bound on the best possible ILP value in the corresponding subproblem, enabling the pruning (fathoming) of suboptimal branches when this bound falls below the value of a known feasible solution, the incumbent. For minimization problems, the LP optimum analogously provides a lower bound used for pruning. This bounding mechanism, combined with branching on fractional variables, efficiently explores the search space. In the illustrative knapsack example, the LP relaxation similarly furnishes an upper bound (390) that exceeds the integer optimum (350), demonstrating the bound's role in assessing solution quality.

Duality in Relaxations

In the context of linear programming (LP) relaxation for integer linear programming (ILP), the dual formulation provides a complementary perspective for bounding and analyzing the problem. Consider the standard ILP primal formulated as maximizing \mathbf{c}^T \mathbf{x} subject to A \mathbf{x} \leq \mathbf{b}, \mathbf{x} \geq \mathbf{0}, and \mathbf{x} integer-valued (for minimization, the roles and inequalities adjust accordingly). The LP relaxation drops the integrality constraint, yielding the same objective and constraints with \mathbf{x} \geq \mathbf{0} continuous. The dual of this LP relaxation is then minimizing \mathbf{b}^T \mathbf{y} subject to A^T \mathbf{y} \geq \mathbf{c}, \mathbf{y} \geq \mathbf{0}. By strong duality in LP, the optimal value of the primal LP relaxation equals the optimal value of its dual, assuming both attain optima. This equality holds because the LP feasible sets are polyhedral and bounded appropriately for the problem instance. For the original ILP, weak duality applies: the optimal dual value (a minimum) provides an upper bound on the ILP optimal value (a maximum), since any feasible ILP solution \mathbf{x} satisfies \mathbf{c}^T \mathbf{x} \leq \mathbf{b}^T \mathbf{y} for any dual feasible \mathbf{y}. For minimization ILPs, the dual provides a lower bound. Thus, solving the dual LP relaxation yields the same bound as the primal but from the minimization side, often computationally equivalent. The optimal dual variables \mathbf{y}^* serve as shadow prices, representing the marginal increase in the LP relaxation's optimal value per unit increase in the right-hand-side vector \mathbf{b}. In ILP contexts, these shadow prices offer insights into sensitivity, helping identify which constraints most impact the integer solution's quality, even though the prices are derived from the relaxation rather than the ILP directly.
For instance, a high shadow price on a constraint signals that tightening it would significantly degrade the bound, guiding practical adjustments in problem formulation. To illustrate, consider a small 0-1 knapsack instance with profits c_1 = 10, c_2 = 6, weights a_1 = 4, a_2 = 3, and capacity b = 5. The LP relaxation optimal value is 12, achieved at x_1 = 1, x_2 = 1/3. The dual is to minimize 5y + z_1 + z_2 subject to 4y + z_1 \geq 10, 3y + z_2 \geq 6, y, z_1, z_2 \geq 0, where z_1 and z_2 are the dual variables of the bounds x_1 \leq 1 and x_2 \leq 1; its optimal solution y = 2, z_1 = 2, z_2 = 0 yields the same value 12. Here, the shadow price y = 2 indicates that increasing the capacity by one unit raises the LP bound by 2, while the ILP optimum of 10 (selecting only the first item) respects the weak duality bound 12 \geq 10.
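The primal-dual pair above can be verified numerically; this is a minimal check that hardcodes the claimed optima (rather than solving either LP) and confirms feasibility, strong duality between the two LPs, and weak duality against the ILP:

```python
# Primal LP relaxation: max 10*x1 + 6*x2  s.t.  4*x1 + 3*x2 <= 5, 0 <= x <= 1.
x1, x2 = 1.0, 1 / 3                      # claimed primal optimum
assert 4 * x1 + 3 * x2 <= 5 + 1e-9       # primal feasible
primal = 10 * x1 + 6 * x2                # = 12 (up to float rounding)

# Dual: min 5*y + z1 + z2  s.t.  4*y + z1 >= 10, 3*y + z2 >= 6, all >= 0.
y, z1, z2 = 2.0, 2.0, 0.0                # claimed dual optimum
assert 4 * y + z1 >= 10 and 3 * y + z2 >= 6  # dual feasible
dual = 5 * y + z1 + z2                   # = 12

print(round(primal, 6), dual)  # 12.0 12.0 -- strong duality for the LP pair

# ILP optimum by enumeration over x in {0,1}^2 with 4*x1 + 3*x2 <= 5.
ilp = max(10 * a + 6 * b for a in (0, 1) for b in (0, 1) if 4 * a + 3 * b <= 5)
print(ilp)  # 10 -- weak duality: dual bound 12 >= ILP value 10
```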

Quality Measures

Approximation Guarantees

Linear programming relaxations play a central role in developing approximation algorithms for NP-hard integer linear programs, particularly by providing a tractable fractional bound that guides the construction of near-optimal solutions. A key measure of an algorithm's performance is its approximation ratio ρ: for a minimization problem, the algorithm guarantees a solution with cost at most ρ times the optimal cost, with ρ ≥ 1; for maximization problems, the ratio is defined analogously with the solution value at least 1/ρ times the optimum. This ratio quantifies the worst-case deviation from optimality, enabling rigorous performance analysis even when exact solutions are computationally infeasible. Rounding techniques transform the optimal fractional solution of an LP relaxation into a feasible integer solution, often with provable guarantees. In randomized rounding, variables are independently set to 1 with probability equal to their fractional values, while deterministic rounding applies thresholds or other rules to ensure feasibility. For the minimum vertex cover problem, a simple deterministic rounding selects all vertices with LP value at least 1/2, producing a cover whose size is at most twice the optimum, yielding a 2-approximation; this bound is tight, as the LP integrality gap approaches 2 on certain graphs. Similarly, for the set cover problem, LP-based analysis combined with greedy selection achieves an approximation ratio of ln n + 1, as established by Lovász in his analysis of the ratio between optimal integral and fractional covers. The quality of these approximations is closely tied to the integrality gap of the LP relaxation, which bounds the possible loss in optimality. When the integrality gap is constant or polylogarithmic, rounding procedures can leverage this to deliver matching approximation guarantees in polynomial time, often via primal-dual or dual-fitting analyses that align the fractional and integral objectives.
For example, bounded gaps enable constant-factor approximations for problems like facility location, where the LP solution directly informs scalable heuristics. Despite these successes, LP relaxations have inherent limitations for certain NP-hard problems, where large integrality gaps preclude constant-factor approximations unless P = NP. In such cases, the gap grows with input size, as in set cover, where no o(ln n)-approximation exists under standard complexity assumptions, or in MAX-3SAT, where approximating beyond the constant factor 7/8 is NP-hard. These hardness results underscore that while LP rounding provides robust guarantees when gaps are controlled, strengthening the relaxation or hybrid methods may be necessary for tighter bounds.
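The threshold-rounding argument for vertex cover is short enough to demonstrate directly. A minimal sketch: the graph is a 5-cycle, and the fractional solution x_v = 1/2 for all v is supplied by hand (it is the known LP optimum for an odd cycle, stated here as an assumption rather than computed); rounding at 1/2 must produce a cover because every edge constraint x_u + x_v ≥ 1 forces at least one endpoint to 1/2 or more:

```python
def round_vertex_cover(frac, edges, threshold=0.5):
    """Deterministic LP rounding for vertex cover: pick every vertex whose
    fractional value is at least the threshold (1/2 for the 2-approximation)."""
    cover = {v for v, xv in frac.items() if xv >= threshold}
    # x_u + x_v >= 1 implies max(x_u, x_v) >= 1/2, so this is always a cover.
    assert all(u in cover or v in cover for u, v in edges), "not a cover"
    return cover

# 5-cycle; x_v = 1/2 for all v is an optimal fractional cover (LP value 2.5).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
frac = {v: 0.5 for v in range(5)}

cover = round_vertex_cover(frac, edges)
lp_value = sum(frac.values())
print(len(cover), lp_value)  # 5 2.5 -> rounded cost 5 <= 2 * LP value
```

Since each selected vertex has x_v ≥ 1/2, the cover's size is at most twice the LP value, and hence at most twice the integer optimum (here 3), matching the 2-approximation guarantee.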

Integrality Gap Analysis

The integrality gap serves as a fundamental measure of the quality of a linear programming (LP) relaxation for an integer linear program (ILP), quantifying the worst-case discrepancy between their optimal solutions. For a minimization ILP, it is defined as the supremum, over all feasible instances, of the ratio of the optimal ILP value to the optimal LP value: \sup \frac{\mathrm{OPT_{ILP}}}{\mathrm{OPT_{LP}}}, where \mathrm{OPT_{ILP}} \geq \mathrm{OPT_{LP}} since the LP relaxation enlarges the feasible region by dropping integrality constraints. For a maximization ILP, it is the supremum of the ratio of the optimal LP value to the optimal ILP value: \sup \frac{\mathrm{OPT_{LP}}}{\mathrm{OPT_{ILP}}}, where \mathrm{OPT_{LP}} \geq \mathrm{OPT_{ILP}}. In both cases this ratio, always at least 1, highlights the inherent limit imposed by the relaxation; an integrality gap of \rho implies that no algorithm relying solely on this LP bound can guarantee better than a \rho-approximation without additional techniques. Computing the integrality gap for specific problems often involves constructing adversarial instances that maximize the ratio. For the vertex cover problem, the standard LP relaxation exhibits an integrality gap of 2, approached in the limit by complete graphs K_n: the fractional solution assigning 1/2 to every vertex has value n/2, while any integer cover requires n - 1 vertices, giving a ratio of 2 - 2/n. In contrast, some ILPs suffer from unbounded integrality gaps, meaning the ratio can grow arbitrarily large. For instance, the natural LP relaxation for the knapsack median problem (a variant combining facility location with knapsack constraints) has an unbounded gap, even when facility costs are bounded, as instances with exponentially increasing client distances force the fractional solution to spread costs inefficiently while the integer solution clusters tightly. The magnitude of the integrality gap depends on structural properties of the problem, including the constraint matrix and coefficients.
Sparse or totally unimodular matrices tend to yield small gaps (a gap of exactly 1 when the relaxation has integral vertices, ensuring exactness), whereas dense or irregular structures amplify discrepancies by allowing fractional solutions to exploit non-integer combinations unavailable to integer ones. Objective coefficients that emphasize high-variance terms can further widen the gap by favoring fractional averaging over integer selections. Despite these challenges, certain problem classes admit constant-factor integrality gaps, enabling strong approximation guarantees. For the metric traveling salesman problem (TSP), Christofides' 1976 algorithm combines a minimum spanning tree with a minimum-weight perfect matching to achieve a 1.5-approximation, which matches the long-standing upper bound on the integrality gap of the subtour elimination LP relaxation in metric instances (recently improved to slightly below 1.5). More recent analyses confirm constant gaps for restricted metrics, such as unit-weight graphical TSP, where advanced LP hierarchies bound the ratio below 1.6.
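The K_n family for vertex cover can be checked computationally for small n; a sketch that computes the exact integer optimum by enumeration and uses the all-1/2 fractional solution (taken as the known LP optimum for K_n, an assumption stated rather than derived here):

```python
from itertools import combinations

def min_vertex_cover_size(n, edges):
    """Exact minimum vertex cover by enumerating subsets (tiny n only)."""
    for k in range(n + 1):
        for S in combinations(range(n), k):
            s = set(S)
            if all(u in s or v in s for u, v in edges):
                return k  # smallest k found first

ratios = []
for n in (3, 4, 5, 6):
    edges = list(combinations(range(n), 2))  # complete graph K_n
    ilp = min_vertex_cover_size(n, edges)    # equals n - 1 for K_n
    lp = n / 2   # value of the all-1/2 solution, known to be LP-optimal here
    ratios.append(ilp / lp)
    print(n, ilp, lp, round(ilp / lp, 3))
# The ratio (n - 1) / (n / 2) = 2 - 2/n approaches 2 as n grows.
```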

Algorithmic Applications

Branch and Bound Integration

Branch and bound is a recursive algorithmic framework for solving integer linear programming (ILP) problems exactly by systematically partitioning the domain of the integer variables and using bounding mechanisms to prune unproductive search paths. Introduced by Land and Doig in 1960, the method constructs a search tree in which each node represents a subproblem with additional constraints from branching decisions, and relaxations provide bounds to evaluate the potential of unexplored branches. In the context of ILP, linear programming (LP) relaxations play a central role by furnishing lower or upper bounds at each node of the branch and bound tree, depending on whether the problem is a minimization or maximization. At the root node, the LP relaxation of the original ILP is solved to obtain an initial bound; if the solution is integer-feasible, it provides an incumbent solution; otherwise, a fractional variable is selected for branching. Branching typically involves fixing a fractional binary variable x_i to 0 or 1, creating two child nodes with tightened constraints, and the LP relaxation is re-solved at each child to compute updated bounds. LP-based bounding enables efficient pruning: if the bound at a node is no better than the current best integer solution (the incumbent), the entire subtree rooted at that node can be discarded, as it cannot yield a better feasible solution. Re-optimization after branching exploits the structure of the previous LP solution, often using warm starts to reduce computational effort, and the resulting bounds reflect the tightened feasible region. This process continues depth-first or via node-selection heuristics until all nodes are processed or pruned, guaranteeing optimality upon termination. The efficiency of LP-based branch and bound stems from the fast solvability of the LP relaxations; for structured problems such as the knapsack, the number of nodes explored often remains manageable in practice.
Modern implementations, such as the SCIP optimization suite, integrate advanced LP solvers like SoPlex for rapid relaxations and incorporate presolving, cutting planes, and heuristics to enhance bounding and reduce tree size in practice. SCIP has demonstrated scalability on large-scale mixed-integer programs, solving instances with thousands of variables through effective LP re-optimization across the search tree. A representative extension to the 0-1 knapsack problem illustrates this integration: consider maximizing 8x_1 + 11x_2 + 6x_3 + 4x_4 subject to 5x_1 + 7x_2 + 4x_3 + 3x_4 \leq 14 and x_i \in \{0,1\}. The root LP relaxation yields a bound of 22 with fractional solution x = (1, 1, 0.5, 0); branching on the fractional variable x_3 then explores a shallow tree in which the integer solution (0, 1, 1, 1) of value 21 is found and becomes the incumbent, and the remaining branches are pruned once their LP bounds drop to 21 or below, confirming the optimal value of 21 at x = (0, 1, 1, 1). This example highlights how LP bounds guide the search, pruning infeasible or suboptimal branches early.
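The interplay of LP bounds and branching can be sketched end-to-end for the knapsack case, using the greedy fractional relaxation as the bound at each node. This is a minimal illustration (fixed branching order, no warm starts or heuristics), not a model of how SCIP or other production solvers work:

```python
def lp_bound(values, weights, W, fixed):
    """Greedy fractional knapsack bound with some variables fixed to 0/1.
    Returns None if the items fixed to 1 already exceed capacity."""
    cap = W - sum(w for i, w in enumerate(weights) if fixed.get(i) == 1)
    if cap < 0:
        return None
    bound = sum(v for i, v in enumerate(values) if fixed.get(i) == 1)
    free = sorted((i for i in range(len(values)) if i not in fixed),
                  key=lambda i: values[i] / weights[i], reverse=True)
    for i in free:
        take = min(1.0, cap / weights[i]) if cap > 0 else 0.0
        bound += take * values[i]
        cap -= take * weights[i]
    return bound

def branch_and_bound(values, weights, W):
    best = [0]  # incumbent objective value

    def visit(fixed):
        b = lp_bound(values, weights, W, fixed)
        if b is None or b <= best[0]:
            return                      # prune: infeasible or dominated
        if len(fixed) == len(values):   # leaf: fully integer assignment
            best[0] = max(best[0], int(round(b)))
            return
        i = len(fixed)                  # branch on the next variable
        visit({**fixed, i: 1})
        visit({**fixed, i: 0})

    visit({})
    return best[0]

print(branch_and_bound([8, 11, 6, 4], [5, 7, 4, 3], 14))  # 21
```

Because the fractional bound is valid for every completion of the fixed variables, pruning never discards the optimum, and the search confirms the value 21 from the worked example.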

Cutting Plane Methods

Cutting plane methods enhance the linear programming (LP) relaxation of integer linear programs by iteratively adding valid inequalities, known as cuts, that eliminate fractional solutions while preserving all integer feasible points. These cuts tighten the relaxation, reducing the integrality gap and improving the bound on the optimal integer value. The process begins by solving the current relaxation; if the solution is fractional, a cut is derived from the simplex tableau and added to the formulation, with the procedure repeating until an integer solution is obtained or the method is combined with other techniques. Gomory's cutting plane method, introduced in 1958, provides a systematic way to generate such cuts for pure integer programs from the optimal simplex tableau. Consider a basic feasible solution in which a tableau row takes the form \sum_j \bar{a}_j x_j = \bar{b}, with \bar{b} fractional; write \bar{b} = \lfloor \bar{b} \rfloor + f_0 where 0 < f_0 < 1, and \bar{a}_j = \lfloor \bar{a}_j \rfloor + f_j with 0 \leq f_j < 1. The Gomory fractional cut, taken over the nonbasic variables j in the row, is then \sum_j f_j x_j \geq f_0. For the mixed-integer case, where some variables are continuous, the Gomory mixed-integer cut (GMIC) extends this by deriving valid inequalities that account for continuous variables, using disjunctive arguments to adjust coefficients based on fractional parts relative to f_0, ensuring the cut is violated by the current fractional solution but valid for the integer hull. The Chvátal-Gomory procedure, developed by Chvátal in 1973, generalizes this idea to generate cuts through rounding for pure integer programs. Starting from the LP relaxation inequalities \sum_j a_{ij} x_j \leq b_i, non-negative multipliers \lambda_i \geq 0 are chosen, and both the aggregated coefficients and the right-hand side are rounded down, yielding \sum_j \lfloor \sum_i \lambda_i a_{ij} \rfloor x_j \leq \lfloor \sum_i \lambda_i b_i \rfloor, which is valid for all non-negative integer solutions. This produces a family of valid cuts, with repeated application defining the Chvátal-Gomory closure, which strengthens the relaxation iteratively.
The procedure converges to the integer hull in finitely many steps for rational data. Modern variants build on these foundations to generate stronger cuts efficiently. Mixed-integer rounding (MIR) cuts, popularized in the late 1990s, derive from aggregating constraints and applying rounding to mixed-integer rows, often yielding tighter inequalities than basic Gomory cuts; for instance, from a simple mixed row \sum_j a_j x_j + \sum_k d_k y_k \leq \delta, the MIR cut is \sum_{j: f(a_j) \geq f(\delta)} \frac{f(a_j)}{1 - f(\delta)} x_j + \sum_{k: f(d_k) < f(\delta)} \frac{f(d_k)}{f(\delta)} y_k \geq 1, where f denotes the fractional part, the x_j are integer variables, and the y_k continuous. Lift-and-project methods, originating from Balas's 1979 disjunctive programming framework, exploit the disjunctions induced by integer constraints (e.g., x_j \in \{0,1\} implies x_j \leq 0 or x_j \geq 1), lifting inequalities to a higher-dimensional space before projecting back to obtain valid cuts for the relaxation. These cuts can be computed via linear programming and often close more of the gap in fewer iterations. For rational input data, Gomory's cutting plane method converges in finitely many steps to an optimal integer solution, as each cut removes the current fractional optimum. In practice, cuts are combined with branch-and-bound in branch-and-cut algorithms, where they are added at nodes to refine relaxations throughout the search tree, significantly enhancing performance on large-scale problems.
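The Chvátal-Gomory rounding step can be sketched directly; this minimal example uses an assumed toy system (a single constraint 2x_1 + 2x_2 ≤ 3 over non-negative integers, chosen so the rounding genuinely cuts something off) and exact `Fraction` arithmetic to avoid floating-point floors:

```python
from math import floor
from fractions import Fraction

def chvatal_gomory_cut(rows, rhs, multipliers):
    """Aggregate <=-inequalities with non-negative multipliers, then round
    the coefficients and right-hand side down. The result is valid for all
    non-negative integer solutions of the original system."""
    n = len(rows[0])
    agg = [sum(l * row[j] for l, row in zip(multipliers, rows))
           for j in range(n)]
    agg_rhs = sum(l * bi for l, bi in zip(multipliers, rhs))
    return [floor(a) for a in agg], floor(agg_rhs)

# Toy system: 2x1 + 2x2 <= 3, x >= 0 integer. Its LP relaxation has the
# fractional vertex (3/2, 0). Multiplier 1/2 gives the cut x1 + x2 <= 1.
coeffs, cut_rhs = chvatal_gomory_cut([[2, 2]], [3], [Fraction(1, 2)])
print(coeffs, cut_rhs)  # [1, 1] 1

# The cut removes the fractional vertex but keeps every integer point.
assert 1 * Fraction(3, 2) + 1 * 0 > cut_rhs
```

Validity follows the argument in the text: for integer x ≥ 0, the aggregated inequality implies x_1 + x_2 ≤ 3/2, and since the left side is integer it must be at most 1.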

  11. [11]
  12. [12]
    [PDF] 1 Linear programming relaxation - Cornell: Computer Science
    It's often both interesting and useful to consider what happens when we relax the constraint mxy ∈ {0,1} to mxy ≥. 0, allowing the variables to take any non- ...
  13. [13]
    [PDF] 1 Totally Unimodular Matrices - Stanford CS Theory
    Definition 1 (Totally Unimodular Matrix) A matrix A is totally unimodular if every square submatrix has determinant 0, +1, or −1.
  14. [14]
    [PDF] Lecture 6: Totally Unimodular Matrices - NC State ISE
    The optimal bases of LP form a unimodular matrix. Page 4. Examples of unimodular matrix. • Unimodular matrices form a group under matrix.
  15. [15]
    [PDF] Applications of Totally Unimodular Matrices 11.1 Bipartite Graphs
    Feb 25, 2025 · A matrix A is totally unimodular (TU) if every square submatrix of A has determinant 0 or 1 or −1. Lemma 3. If the constraint matrix A is TU and ...
  16. [16]
    [PDF] 1 Bipartite matching and vertex covers - Princeton University
    Feb 25, 2016 · A matrix A is totally unimodular (TUM) if the determinant of any of its square submatrices belongs to {0,−1,+1}. Definition 3. • Given a convex ...
  17. [17]
    Integer and Combinatorial Optimization | Wiley Online Books
    Integer and Combinatorial Optimization ; Author(s):. George Nemhauser, Laurence Wolsey, ; First published:16 June 1988 ; Print ISBN:9780471828198 |Online ...
  18. [18]
    [PDF] LP Relaxation and Rounding 1 Linear programs and linear integer ...
    Oct 12, 2006 · The basic idea of LP-relaxation and rounding is quite simple. We first formulate an optimization problem as an integer program (IP), which is a ...
  19. [19]
    [PDF] Duality for Mixed Integer Linear Programs
    Apr 5, 2007 · The LP obtained from a given MILP by removing the integrality requirements on the variables, i.e., setting I = ∅, is referred to as the ...
  20. [20]
    [PDF] Integer Programming Formulation 1
    When the integer constraints are removed from an IP formulation, we obtain an LP formulation. This. LP formulation is called the LP relaxation. Since we have ...<|separator|>
  21. [21]
    [PDF] Parallel Solvers for Mixed Integer Linear Optimization
    May 4, 2017 · The barrier algorithm for solving the initial LP relaxation in the root node, always one of the first steps in any algorithm for solving an ...
  22. [22]
    [PDF] The Knapsack Problem
    The knapsack problem is to fill a knapsack with items of different weights and values to maximize total value, given limited capacity.
  23. [23]
    0-1 Knapsack Problem - an overview | ScienceDirect Topics
    "... Mathematically, the 0–1 knapsack problem can be formulated as Maximize ∑ i n v i ⋅ x i with x i ∈ {0, 1} Subject to ∑ i n w i ⋅ x i ≤ W with x i ∈ {0, 1} ...
  24. [24]
    [PDF] 1. Introduction The 0/1 knapsack problem has been studied ... - NJIT
    An instance of the 0/1 knapsack problem will be denoted by the ordered pair. (Un, Kn). The 0/1 knapsack problem is known to be NP-hard [3], so it is unlikely.
  25. [25]
    [PDF] Integer Linear Programming - MAT3007 Optimization
    Example Continued. Solving the LP relaxation (S1), maximize. 8x1 + 11x2 + 6x3 + 4x4 subject to 5x1 + 7x2 + 4x3 + 3x4 ≤ 14 x3 = 1, x1,x2,x4 ∈ [0, 1]. we obtain ...
  26. [26]
    [PDF] Integer programming techniques 1: branch and bound
    Apr 2, 2013 · In a branch and bound tree, the nodes represent integer programs. ... We get bounds for each integer program by solving the LP relaxations.
  27. [27]
    Cutting planes in integer and mixed integer programming
    Nov 15, 2002 · This survey is devoted to cutting planes that are useful or potentially useful in solving mixed integer programs.
  28. [28]
    [PDF] Integer Programming Duality
    The LP obtained from a given MILP by removing the integrality requirements on the variables, i.e., setting r = 0, is referred to as the associated LP relaxation ...Missing: ILP | Show results with:ILP
  29. [29]
    [PDF] Integer Programming Duality - John Hooker
    In the example, the dual attains its optimal value of 60 when 1 < u ≤ 20/17. This is better than the bound of 69 provided by the linear programming relaxation.
  30. [30]
    [PDF] Integer programming and pricing revisited
    It is well known that the dual solution of an LP problem enables one to obtain shadow prices for the resources. 203. O Oxford Uniroiity Prai 1997 at ...
  31. [31]
    How to derive the dual problem of Knapsack problem
    Jan 8, 2016 · If you know that the dual of the dual is the primal problem itself, you don't need to rewrite the max problem into the min one.Verifying optimality for given primal and dual solutions to a linear ...LP relaxation of bounded knapsack problem with constraints on the ...More results from math.stackexchange.com
  32. [32]
    [PDF] Lecture 12: Knapsack Problems - University of Connecticut
    A knapsack problem involves filling a knapsack of size V with objects of different profits and weights to maximize profit, without exceeding the total weight V.
  33. [33]
    [PDF] Approximation Algorithms - Cornell: Computer Science
    approximation ratio, as usual, we define an appropriate loop invariant. Let OPT denote any vertex cover of minimum cardinality. Let Si denote the contents ...
  34. [34]
    [PDF] Using Linear Programming in the Design and Analysis of ...
    Building on results for the set covering problem (due to Johnson [19], Lovász ... relaxation, we obtain a 4-approximation algorithm. Page 12. Once again, we ...
  35. [35]
    [PDF] LP RELAXATIONS AND INTEGRALITY GAP
    Sep 9, 2025 · where the integrality gap is bounded, the LP relaxation can often be used to obtain an approximation of the optimal solution of the ILP ...
  36. [36]
    [PDF] Approximation algorithms based on LP relaxation 1 Linear programs ...
    Mar 10, 2005 · We first formulate an optimization problem as an integer program (IP), which is like a linear program (LP) with integer variables. Then, we ...
  37. [37]
    cc.complexity theory - The importance of Integrality Gap
    Aug 22, 2010 · Integrality gaps essentially represent the inherent limits of a particular linear or convex relaxation in approximating an integer program.
  38. [38]
    [PDF] Constant Factor Approximation Algorithm for the Knapsack Median ...
    LP relaxation has unbounded integrality gap. This happens even when all facility costs are at most B (the natural LP relaxation for the knapsack problem also ...<|control11|><|separator|>
  39. [39]
    [PDF] Worst-Case Analysis of a New Heuristic for the Travelling Salesman ...
    An 0(n ) heuristic algorithm Is described for solving n-city travelling salesmen problems (TSP) whose cost matrix satisfies the triangularity condition.
  40. [40]
    [PDF] Improving Christofides' Algorithm for the st Path TSP - arXiv
    Nov 2, 2011 · ... integrality gap of the LP relaxation used. Secondly, we study the unit-weight graphical metric s-t path TSP to present a 1.5780-approximation.
  41. [41]
    An Automatic Method of Solving Discrete Programming Problems
    10 This is an upper bound to the branch value of y which has been obtained by ... 516 A. H. LAND A. G. DOIG. A similar method is used to determine the value ...
  42. [42]
    Branch-and-bound algorithms: A survey of recent advances in ...
    This survey presents a description of recent research advances in the design of B&B algorithms, particularly with regards to these three components.
  43. [43]
    How to add branching rules - SCIP Doxygen Documentation
    Branching rules are used to split the problem at the current node into smaller subproblems. Branching rules can be called at three different occasions.
  44. [44]
    LP Solver Interface - SCIP Doxygen Documentation
    SCIP uses external tools to solve LP relaxations. The communication is ... This naturally occurs in a branch-and-bound process, where the objective ...
  45. [45]
    SCIP: FAQ - Zuse-Institut Berlin
    The methods SCIPcolGetLb() and SCIPcolGetUb() give you the locally valid bounds of the LP relaxation in the current branch-and-bound-node. If you are ...
  46. [46]
    [PDF] Relaxations and Bounds: Applications to Knapsack Problems
    Any problem formulated as an ILP has a natural relaxation: the LP relaxation. ... Its LP relaxation (P1) gives also an upper bound z1 of the MKP.<|separator|>
  47. [47]
    The SCIP Optimization Suite 9.0 - arXiv
    Feb 26, 2024 · The LPs that need to be solved as relaxations in the branch-and-bound process are by default solved with SoPlex. Interfaces to most actively ...<|control11|><|separator|>
  48. [48]
    [PDF] A Branch and Bound Algorithm for the Knapsack Problem
    A branch and bound algorithm for solution of the "knapsack problem," max E vzix where E wixi < W and xi = 0, 1, is presented which can obtain.
  49. [49]
    Edmonds polytopes and a hierarchy of combinatorial problems
    May 28, 2006 · Gomory describes a way of generating new constraints, called cuts, that are satisfied by every choice of integers x 1 , x 2 , … , x n satisfying ...
  50. [50]
    Aggregation and Mixed Integer Rounding to Solve MIPs - PubsOnLine
    In this paper, we discuss the use of mixed integer rounding (MIR) inequalities to solve mixed integer programs. MIR inequalities are essentially Gomory ...