
Discrete optimization

Discrete optimization is a subfield of mathematical optimization that focuses on finding the optimal solution to problems where the decision variables are restricted to a discrete set, such as integers or elements from finite combinatorial structures, typically formulated as minimizing or maximizing an objective function subject to constraints over this discrete domain. Unlike continuous optimization, which deals with real-valued variables over infinite domains often solvable via gradient methods, discrete optimization confronts finite but potentially exponentially large search spaces, rendering many problems NP-hard and necessitating specialized algorithms for tractability. At its core, discrete optimization encompasses integer programming, where variables are integers, and combinatorial optimization, which involves selecting optimal subsets or arrangements from discrete structures like graphs and networks. Key problem classes include the traveling salesman problem (TSP), shortest path problems, minimum spanning trees, maximum matchings, and the knapsack problem, each modeled on underlying discrete entities such as nodes, edges, or binary choices. Solution methods range from exact approaches like branch-and-bound for integer programs and dynamic programming for knapsack-like problems to approximation algorithms and heuristics for intractable cases, often leveraging polyhedral theory, total unimodularity, and network flow techniques to ensure integrality or efficiency. Notable theoretical foundations include the max-flow min-cut theorem for network flows and complexity classifications distinguishing polynomial-time solvable problems (e.g., maximum matching in general graphs via Edmonds' blossom algorithm) from NP-hard ones like general TSP. The field holds significant importance in operations research and computer science, intersecting graph theory, algorithm design, scheduling, and logistics, with real-world applications in transportation routing, supply chain management, telecommunications network design, and financial portfolio selection under discrete constraints.
Advances continue to emphasize structural insights, such as the integer hull of polytopes, to bridge linear relaxations with discrete solutions, enabling scalable solvers for industrial-scale problems despite computational challenges.

Introduction and Overview

Definition and Scope

Discrete optimization refers to a class of problems in which the decision variables are constrained to take values from a discrete set, such as integers or elements from finite subsets, rather than from the continuum of real numbers as in continuous optimization. In these problems, the objective is to find the optimal solution—typically minimizing or maximizing a given objective function—among a finite or countably infinite number of feasible alternatives, often requiring computationally efficient methods due to the combinatorial explosion of possibilities. This contrasts with continuous optimization, where variables can assume any real value and local optimality is often assessed via neighborhoods and differentiability, whereas discrete settings rely on predefined neighborhood structures for local searches. The scope of discrete optimization broadly encompasses challenges rooted in combinatorial optimization, integer programming, and graph theory, where the feasible set consists of discrete points within a larger ambient space. While the domain can theoretically include infinite discrete sets, practical focus lies on finite cases that are solvable through algorithmic means, excluding undecidable or purely theoretical infinite scenarios. Key characteristics include inherent non-convexity of the feasible set, which precludes the use of standard convexity guarantees, and frequent NP-hardness, rendering exact solutions computationally intractable for large instances without specialized techniques. Additionally, the absence of differentiability in discrete domains necessitates tailored algorithms that exploit combinatorial structure rather than gradient-based methods. A classic example is the traveling salesman problem (TSP), where the goal is to find the shortest tour visiting each city exactly once, starting and ending at the origin, illustrating the permutation-based discrete choices central to the field.
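The permutation-based search space of the TSP can be made concrete with a minimal brute-force sketch; the four-city distance matrix below is hypothetical, and exhaustive enumeration is only viable for a handful of cities, since the number of tours grows factorially:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustively search all tours starting and ending at city 0.

    dist is a symmetric distance matrix; the search space has
    (n-1)! candidate tours, illustrating combinatorial explosion.
    """
    n = len(dist)
    best_tour, best_cost = None, float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_tour, best_cost = tour, cost
    return best_tour, best_cost

# Hypothetical symmetric distances between four cities.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
tour, cost = tsp_brute_force(dist)
```

Even this tiny instance already has 3! = 6 tours; at 20 cities the count exceeds 10^17, which is why the exact and heuristic methods discussed later exist.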

Historical Development

The roots of discrete optimization trace back to the 18th and 19th centuries, where early graph theory problems laid conceptual groundwork for combinatorial challenges. In 1736, Leonhard Euler proved that no path exists crossing each of Königsberg's seven bridges exactly once, introducing degree-based conditions for traversability that prefigured modern graph optimization. Euler further examined the knight's tour problem in 1759, seeking closed paths for a knight to visit every chessboard square once, which highlighted constraints on cycle structures in discrete spaces. By the mid-19th century, William Rowan Hamilton advanced these ideas through his 1857 Icosian game, a puzzle requiring Hamiltonian cycles along the edges of a dodecahedron, emphasizing path coverage in polyhedral graphs. The mid-20th century marked the field's formal emergence, catalyzed by efforts during and after World War II to optimize logistics and resource allocation. In 1947, George Dantzig formulated linear programming and the simplex method, enabling efficient solutions to resource-allocation problems and setting the stage for discrete extensions like integer programming, where variables are restricted to integers. This work addressed practical needs in scheduling and allocation, bridging mathematical theory with computational practice. Key algorithmic breakthroughs followed in the late 1950s and early 1960s. Ralph Gomory's 1958 cutting plane method generated valid inequalities to refine linear programming relaxations, ensuring convergence to integer solutions for pure integer programs. In 1960, Ailsa Land and Alison Doig introduced branch-and-bound, a divide-and-conquer approach that partitions the search space and prunes infeasible or suboptimal branches using bounds from relaxations. The 1960s and 1970s witnessed explosive growth in combinatorial optimization, integrating graph algorithms and complexity theory. Jack Edmonds' 1965 algorithm computed maximum matchings in non-bipartite graphs in polynomial time, pioneering polyhedral descriptions of matching polytopes.
Stephen Cook's 1971 demonstration of NP-completeness for satisfiability problems revealed the computational intractability of many discrete optimization tasks, shifting focus toward approximation and exact methods for hard cases. Since the 1980s, discrete optimization has intertwined with computer science, leveraging rising computational capabilities for hybrid solvers and theoretical advances. Expansions of branch-and-bound, combined with cutting planes and presolving techniques, enabled tackling large-scale instances in commercial software. The enduring influence of post-WWII operations research sustained this evolution, promoting interdisciplinary tools for real-world discrete problems across industries.

Mathematical Formulation

Core Components and Models

Discrete optimization problems are formulated using decision variables that belong to discrete sets, typically integers from the set \mathbb{Z} or binary values from \{0,1\}, reflecting choices that cannot be fractionated, such as selecting items or assigning resources indivisibly. These variables are subject to constraints expressed as equalities or inequalities involving linear or nonlinear combinations, ensuring feasibility within practical or logical bounds, such as capacity limits or precedence requirements. The objective in discrete optimization seeks to minimize or maximize a scalar value f(\mathbf{x}), where \mathbf{x} is the vector of discrete decision variables; this is often linear, as in f(\mathbf{x}) = \mathbf{c}^T \mathbf{x}, but can be nonlinear to capture complex costs or benefits. Constraints couple these variables, forming a feasible region that is a subset of the ambient space, often nonconvex due to the integrality requirements. Standard models for discrete optimization include integer linear programming (ILP), which optimizes a linear objective over integer variables subject to linear constraints. The canonical ILP formulation is: \begin{align*} \min &\quad \mathbf{c}^T \mathbf{x} \\ \text{s.t.} &\quad A \mathbf{x} \leq \mathbf{b} \\ &\quad \mathbf{x} \in \mathbb{Z}^n, \end{align*} where \mathbf{c} \in \mathbb{R}^n, A \in \mathbb{R}^{m \times n}, and \mathbf{b} \in \mathbb{R}^m. A variant uses equality constraints: \begin{align*} \min &\quad \mathbf{c}^T \mathbf{x} \\ \text{s.t.} &\quad A \mathbf{x} = \mathbf{b} \\ &\quad \mathbf{x} \geq \mathbf{0} \\ &\quad \mathbf{x} \in \mathbb{Z}^n. \end{align*} This model captures many practical decisions where linearity simplifies analysis while enforcing discreteness.
Mixed-integer programming (MIP) extends ILP by allowing some variables to be continuous, typically non-negative reals, while others remain integer; the formulation mirrors ILP but partitions the variables into integer and continuous components, enabling hybrid models for scenarios like production planning with batch sizes (integer) and flow rates (continuous). MIP formulations maintain the linear structure for tractability, with the general form: \begin{align*} \min &\quad \mathbf{c}^T \mathbf{x} + \mathbf{d}^T \mathbf{y} \\ \text{s.t.} &\quad A \mathbf{x} + B \mathbf{y} \leq \mathbf{b} \\ &\quad \mathbf{x} \in \mathbb{Z}^{n_1}, \ \mathbf{y} \in \mathbb{R}^{n_2}_{\geq 0}, \end{align*} where \mathbf{x} are integers and \mathbf{y} are continuous. To assess solution quality or derive bounds, relaxation techniques replace the integrality constraints with continuous ones, yielding the linear programming (LP) relaxation; for an ILP, this involves solving \min \mathbf{c}^T \mathbf{x} subject to A \mathbf{x} \leq \mathbf{b}, \mathbf{x} \geq \mathbf{0}, providing a lower bound on the optimal integer value for minimization problems (and, symmetrically, an upper bound for maximization). This relaxation exploits efficient LP solvers and informs the gap between continuous and discrete optima, guiding further refinement.
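The bounding role of the LP relaxation can be illustrated on a tiny maximization instance. The sketch below uses a hypothetical three-item knapsack: brute force gives the integer optimum, while the greedy fractional fill (which solves the continuous knapsack exactly) gives the relaxation value that bounds it from above:

```python
from itertools import product

values = [10, 13, 8]   # hypothetical item values
weights = [4, 6, 3]    # hypothetical item weights
capacity = 9

# Integer optimum by brute force over all 0/1 assignments.
best_int = max(
    sum(v * x for v, x in zip(values, xs))
    for xs in product([0, 1], repeat=len(values))
    if sum(w * x for w, x in zip(weights, xs)) <= capacity
)

# LP-relaxation optimum: greedy by value density with at most one
# fractional item solves the continuous knapsack exactly.
order = sorted(range(len(values)),
               key=lambda i: values[i] / weights[i], reverse=True)
remaining, lp_bound = capacity, 0.0
for i in order:
    take = min(1.0, remaining / weights[i])
    lp_bound += take * values[i]
    remaining -= take * weights[i]
    if remaining <= 0:
        break

# The relaxation value bounds the integer optimum from above.
assert lp_bound >= best_int
```

The gap between `lp_bound` and `best_int` is exactly the integrality gap that cutting planes and branch-and-bound work to close.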

Key Problem Types

Discrete optimization is characterized by several major classes of problems, each defined by distinct structures over discrete domains. The primary categories include combinatorial and graph-based problems, which together capture a wide range of scenarios involving finite choices. Combinatorial optimization addresses problems over finite sets, seeking optimal subsets or arrangements that satisfy specific constraints. For instance, the set covering problem requires selecting a minimum-cost collection of subsets to cover every element in a given universe at least once. Similarly, partitioning problems involve dividing a set into disjoint subsets to achieve a balanced or cost-minimizing configuration, such as equitable workload distribution among groups. These problems emphasize the combinatorial explosion of possibilities, where the feasible solutions form a discrete, often exponentially large, collection. Graph-based problems extend this framework to network structures, optimizing paths, trees, or flows within weighted graphs. The shortest path problem identifies the minimum-weight route from a source to a target vertex, accounting for edge costs that represent distances or times. Minimum spanning trees connect all vertices with the least total weight, forming an acyclic subgraph that spans the graph. Maximum flow problems, in contrast, maximize the throughput from a source to a sink under limits on edge capacities, modeling networks like transportation systems. Among the classic exemplars, the 0-1 knapsack problem maximizes the total value of a subset of items, each with a value and weight, without exceeding a fixed knapsack capacity; items are indivisibly included or excluded. The assignment problem minimizes the total cost of pairing elements from two sets, such as tasks to workers, ensuring a perfect matching in a bipartite graph. Problems exhibiting optimal substructure and overlapping subproblems are well-suited to dynamic programming approaches, where solutions to smaller instances recursively build the global optimum. The knapsack problem exemplifies this, as partial fillings inform larger capacities. Sequence alignment, used to compare biological strings like DNA or proteins, also relies on this property to find the highest-scoring pairwise matching under substitution and gap costs.
In distinction from continuous optimization, discrete problems focus on enumeration or exhaustive search over countable alternatives, precluding gradient-based or derivative methods that assume smoothness over real-valued spaces. Many such problems can be formulated as integer linear programs, linking them to broader modeling paradigms.
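The optimal-substructure property of the 0-1 knapsack problem noted above can be sketched as a standard dynamic program over capacities; the item data in the usage line is hypothetical:

```python
def knapsack_01(values, weights, capacity):
    """0-1 knapsack by dynamic programming.

    dp[c] holds the best value achievable with capacity c using the
    items processed so far, so each entry builds on optimal solutions
    to smaller subproblems (optimal substructure).
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

best = knapsack_01([10, 13, 8], [4, 6, 3], 9)
```

The table has (number of items) x (capacity + 1) cells, so the runtime is pseudo-polynomial: polynomial in the numeric capacity but exponential in its bit length, a distinction revisited in the complexity section.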

Solution Methods

Exact Algorithms

Exact algorithms for discrete optimization problems guarantee the discovery of globally optimal solutions by systematically exploring the feasible region, often using mathematical relaxations to bound and prune the search space efficiently. Unlike heuristic approaches, these methods ensure optimality but can be computationally intensive for large instances due to the inherent complexity of many discrete problems, which are often NP-hard. Key techniques include dynamic programming for structured problems with overlapping substructures, and branch-and-bound with enhancements like cutting planes for general formulations. Dynamic programming, introduced by Richard Bellman in 1957, addresses discrete optimization problems that exhibit optimal substructure, where the optimal solution to a problem can be constructed from optimal solutions to its subproblems. The method decomposes the problem into stages, solving smaller subproblems recursively and storing intermediate results to avoid recomputation, a process known as memoization. This approach is particularly suited to sequential decision-making problems, such as shortest paths in graphs or inventory management, where decisions at each stage affect future costs. The foundational Bellman equation formalizes the recursive relationship for a minimization problem over n stages: V_n(s) = \min_{i} \left\{ c(s, i) + V_{n-1}(s') \right\} where V_n(s) represents the minimum cost-to-go from state s with n stages remaining, c(s, i) is the immediate cost of decision i in state s, s' is the state that decision leads to, and the minimization is over feasible decisions i, with base case V_0(s) = 0. Bellman's optimality principle underpins this, stating that an optimal policy for the entire problem induces optimal policies for subproblems starting from any intermediate state. For more general discrete optimization, particularly integer linear programs (ILPs), the branch-and-bound algorithm provides a systematic enumeration framework.
Developed by Ailsa Land and Alison Doig in 1960, it constructs a search tree where each node represents a subproblem obtained by branching on a fractional variable, fixing it to integer values like 0 or 1 for binary variables. At each node, a relaxation—typically the linear programming (LP) relaxation where integrality constraints are dropped—is solved to obtain bounds: an upper bound for maximization problems or a lower bound for minimization. If the node's bound does not improve upon the current best integer solution (the incumbent), the subtree is pruned, avoiding full exploration of infeasible or suboptimal branches. LP relaxations are central, as their solutions provide tight bounds and guide variable selection for branching, often using criteria like pseudocost or strong branching to prioritize promising variables. To strengthen these LP relaxations and improve bounding, cutting plane methods add valid inequalities that eliminate fractional solutions without excluding integer feasible points. Ralph Gomory introduced the seminal cutting plane algorithm in 1958, deriving inequalities from the simplex tableau of an LP relaxation. For a basic feasible solution whose row equation is \sum_j \bar{a}_j x_j = \bar{b} with \bar{b} fractional, a Gomory fractional cut is generated as \sum_j f_j x_j \geq f_0, where f_j = \bar{a}_j - \lfloor \bar{a}_j \rfloor is the fractional part of each non-basic coefficient and f_0 = \bar{b} - \lfloor \bar{b} \rfloor is the fractional part of the right-hand side; this cut is valid for the integer hull and tightens the formulation. Modern implementations generate families of such cuts, including mixed-integer Gomory cuts for problems with continuous variables, iteratively adding them until the relaxation yields an integer solution or proves infeasibility. Branch-and-cut algorithms hybridize branch-and-bound with cutting planes, applying cuts dynamically at nodes of the search tree to refine relaxations throughout the process.
This framework was formalized by Manfred Padberg and Giovanni Rinaldi in 1991 for the traveling salesman problem, where cuts specific to the problem structure (e.g., subtour elimination inequalities) are separated and added on-the-fly. In general ILPs, generic cuts like Gomory's are combined with problem-specific ones, with separation routines invoked after solving the node's LP to check for violated inequalities. Commercial solvers such as CPLEX implement branch-and-cut as their core engine for mixed-integer programming, managing cut pools, node presolving, and heuristics to accelerate convergence. These exact algorithms are highly effective for moderate-sized instances, routinely solving ILPs with thousands of variables and constraints in reasonable time on modern hardware, thanks to advances in LP solvers and strategies for cut selection. However, for very large or highly unstructured problems, their worst-case exponential complexity limits scalability, motivating hybrid or approximate methods in practice.
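The bound-and-prune pattern of branch-and-bound can be sketched on the 0-1 knapsack, using the fractional (LP) relaxation of the remaining items as the optimistic bound; the instance in the test is hypothetical, and real solvers add far more machinery (presolve, cuts, sophisticated branching):

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """Depth-first branch-and-bound for the 0-1 knapsack.

    Items are considered in value-density order; the fractional
    relaxation of the unfixed items supplies an upper bound, and a
    subtree is pruned when that bound cannot beat the incumbent.
    """
    items = sorted(zip(values, weights),
                   key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0

    def bound(i, cap):
        # Optimistic value: fill remaining capacity fractionally.
        total = 0.0
        for v, w in items[i:]:
            if w <= cap:
                total, cap = total + v, cap - w
            else:
                return total + v * cap / w
        return total

    def dfs(i, cap, value):
        nonlocal best
        best = max(best, value)  # every prefix of decisions is feasible
        if i == len(items) or value + bound(i, cap) <= best:
            return  # prune: relaxation cannot improve on the incumbent
        v, w = items[i]
        if w <= cap:
            dfs(i + 1, cap - w, value + v)  # branch: include item i
        dfs(i + 1, cap, value)              # branch: exclude item i

    dfs(0, capacity, 0)
    return best
```

The tighter the relaxation bound, the earlier subtrees are pruned, which is exactly why cutting planes that strengthen the LP relaxation pay off inside branch-and-cut.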

Approximation and Heuristic Techniques

In discrete optimization, approximation and heuristic techniques provide efficient methods for finding near-optimal solutions to NP-hard problems where exact algorithms are computationally prohibitive. These approaches trade optimality for efficiency, often yielding solutions within a bounded factor of the optimum or empirically high-quality results in reasonable time. Approximation algorithms offer provable guarantees on solution quality, while heuristics prioritize speed and practicality, exploring the solution space through iterative improvements or probabilistic mechanisms. Approximation algorithms are polynomial-time procedures that deliver solutions with a guaranteed performance bound relative to the optimal value. For a minimization problem, an algorithm is a \rho-approximation if its solution cost is at most \rho times the optimal cost, where \rho \geq 1 is the approximation ratio. A classic example is the 2-approximation for the minimum vertex cover problem, achieved by computing a maximal matching and including both endpoints of each matched edge in the cover; since the matched edges are vertex-disjoint, any optimal cover must contain at least one endpoint of each, so the output of twice the matching size is at most twice the optimum. Greedy algorithms form a foundational class of approximation methods, making locally optimal choices at each step to construct a solution incrementally. In the set cover problem, the greedy algorithm repeatedly selects the set that covers the most uncovered elements, achieving an approximation ratio of \ln n + 1, where n is the universe size; this logarithmic bound arises from comparing the algorithm's cost to the harmonic series bounding the optimal cover's efficiency. Such algorithms are simple, fast, and widely applicable, though their guarantees weaken for larger instances. Local search techniques iteratively refine a feasible solution by exploring its neighborhood—sets of solutions differing by small modifications—until no improvement is possible.
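The matching-based 2-approximation for vertex cover described above fits in a few lines; the edge list in the example is a hypothetical graph:

```python
def vertex_cover_2approx(edges):
    """2-approximate vertex cover via an implicit maximal matching.

    Each edge whose endpoints are both uncovered is 'matched' and both
    endpoints enter the cover; matched edges are vertex-disjoint, and
    any optimal cover needs one endpoint per matched edge, so the
    output is at most twice the optimum.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Hypothetical graph: optimum cover here is {0, 3} of size 2.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
cover = vertex_cover_2approx(edges)
```

On this instance the algorithm returns a cover of size 4 against an optimum of 2, exactly meeting the factor-2 guarantee.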
Hill-climbing, a basic form, greedily moves to the best neighboring solution with lower cost, converging to a local optimum but risking stagnation in poor regions. To mitigate this, tabu search enhances local search by maintaining a short-term memory of recent moves (tabu list) to forbid immediate reversals, promoting diversification and escaping local optima; introduced by Glover in 1986, it has proven effective for problems like the traveling salesman, often yielding solutions within 1-5% of optimality in practice. Metaheuristics extend local search principles with higher-level strategies to navigate rugged search landscapes. Genetic algorithms, pioneered by John Holland in 1975, evolve a population of solutions through selection, crossover, and mutation operations inspired by natural evolution, favoring fitter individuals to converge toward global optima over generations; they excel in multimodal discrete problems like scheduling, with empirical speedups of orders of magnitude over exact methods for large-scale instances. Simulated annealing, developed by Kirkpatrick et al. in 1983, mimics the metallurgical annealing process by probabilistically accepting worse solutions with probability e^{-\Delta / T}, where \Delta is the cost increase and T is a temperature parameter lowered according to a cooling schedule; this allows escaping local minima early and freezing into good solutions, achieving near-optimal results for graph partitioning with ratios often below 10% deviation. Performance in these techniques is evaluated via the approximation ratio for guaranteed methods and empirical metrics like solution quality and runtime for heuristics. For NP-hard problems such as the traveling salesman, metaheuristics like tabu search and simulated annealing can solve instances with thousands of cities in seconds, delivering tours within 2% of optimal—far surpassing exact solvers' feasibility on such scales—while greedy algorithms and local search provide baselines with ratios up to logarithmic factors.
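The acceptance rule e^{-\Delta / T} and geometric cooling can be sketched for the TSP with 2-opt style segment reversals; the four-city instance, iteration count, and cooling parameters below are illustrative choices, not tuned values:

```python
import math
import random

def simulated_annealing_tsp(dist, iters=20000, t0=10.0, alpha=0.9995, seed=0):
    """Simulated annealing over tours using segment reversals (2-opt moves).

    Worse tours are accepted with probability exp(-delta / T), letting the
    search escape local minima while T is high; multiplying T by alpha each
    step implements a geometric cooling schedule that gradually freezes
    the tour.
    """
    rng = random.Random(seed)
    n = len(dist)

    def cost(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    tour = list(range(n))
    cur = cost(tour)
    best, best_cost = tour[:], cur
    temp = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = cost(cand) - cur
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            tour, cur = cand, cur + delta
            if cur < best_cost:
                best, best_cost = tour[:], cur
        temp *= alpha
    return best, best_cost

# Hypothetical symmetric four-city instance (optimal tour cost is 18).
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
tour, cost = simulated_annealing_tsp(dist)
```

Production annealers recompute only the two changed edges per 2-opt move instead of the full tour cost; the full recomputation here keeps the sketch short.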

Applications and Branches

Industrial and Operational Applications

Discrete optimization plays a central role in logistics and supply chain management by addressing complex routing and placement decisions. The vehicle routing problem (VRP), a classic combinatorial challenge, optimizes delivery routes for fleets to minimize total distance traveled or costs while meeting time windows and capacity constraints, as applied in urban distribution networks by companies seeking efficiency gains. For warehouse location, facility location models determine optimal sites from discrete candidate sets to reduce transportation expenses and balance supply with demand, incorporating fixed costs and service coverage in global logistics planning. In manufacturing, discrete optimization enhances production workflows through scheduling and line-balancing techniques. Job shop scheduling assigns operations to machines in a non-preemptive manner to minimize makespan—the completion time of all jobs—in environments with multiple job routes and resource conflicts, supporting flexible manufacturing systems. Assembly line balancing partitions tasks into workstations to equalize cycle times and reduce bottlenecks, formulated as a partitioning problem that optimizes throughput in high-volume processes. Financial applications leverage discrete models for asset selection and valuation under uncertainty. Portfolio optimization selects discrete subsets of assets to maximize expected returns subject to risk constraints, often using mixed-integer programming to handle indivisible investments like stocks or bonds in institutional fund management. For option pricing, the binomial tree model discretizes asset price paths into up or down movements over finite time steps, enabling backward induction to derive fair values for American and European options in discrete-time settings. Telecommunications networks employ discrete optimization for robust infrastructure design. Minimum cost spanning trees connect communication nodes with the lowest total edge weights, ensuring full connectivity in backbone networks while minimizing cabling or transmission costs.
The frequency assignment problem allocates discrete channels to transmitters to minimize interference, modeled as a graph coloring problem to support reliable signal propagation in cellular and radio systems. A prominent real-world implementation is airline crew scheduling, where mixed-integer programming (MIP) generates pairings and rosters that comply with regulations and minimize costs, as demonstrated in one major carrier's operations yielding annual savings of $16–18 million against a $600 million crew payroll.
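The binomial tree valuation mentioned above can be sketched with the Cox-Ross-Rubinstein parameterization for a European call; the market parameters in the usage line are hypothetical, and this is an illustrative sketch rather than a production pricer:

```python
import math

def binomial_call(s0, k, r, sigma, t, steps):
    """Cox-Ross-Rubinstein binomial tree for a European call option.

    The continuous price path is discretized into `steps` up/down moves;
    backward induction discounts risk-neutral expectations from the
    terminal payoffs back to today's fair value.
    """
    dt = t / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1 / u                             # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs at each of the steps+1 leaves.
    values = [max(s0 * u**j * d**(steps - j) - k, 0.0)
              for j in range(steps + 1)]
    # Roll back one level of the tree per iteration.
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

# Hypothetical at-the-money call: spot 100, strike 100, r = 5%,
# volatility 20%, one year to expiry, 200 time steps.
price = binomial_call(100, 100, 0.05, 0.2, 1.0, 200)
```

American options use the same tree but compare the continuation value against immediate exercise at each node, which is where the discrete formulation earns its keep.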

Specialized Subfields

Stochastic discrete optimization extends traditional discrete problems by incorporating uncertainty in parameters, such as random demands or costs, to derive decisions that perform well across possible scenarios. A foundational approach is two-stage stochastic programming with recourse, where first-stage decisions are made before uncertainty is revealed, followed by second-stage corrective actions to minimize expected costs or maximize expected profits. This has been analyzed for approximation algorithms, showing that for many combinatorial problems like the stochastic knapsack, constant-factor approximations exist relative to the expected optimum, with techniques like randomized rounding adapted to handle scenario-based expectations. Seminal work highlights the value of recourse in mitigating uncertainty, as seen in applications to inventory management where the expected recourse cost can be bounded within a factor of the deterministic counterpart. Robust optimization in discrete settings addresses worst-case uncertainty by seeking solutions that remain feasible and near-optimal even under adversarial parameter perturbations, differing from stochastic methods by avoiding probability distributions. For instance, in the robust knapsack problem, item weights or values may vary within uncertainty sets, and the goal is to select a subset that maximizes the minimum total value over all possible realizations without exceeding capacity in the worst case. This is often formulated using budgeted uncertainty models, where the number of deviations from nominal values is limited, leading to tractable mixed-integer programs solvable via column-and-constraint generation. Research demonstrates that robust solutions incur a price of robustness, trading off nominal performance for guaranteed resilience, with exact methods achieving optimality for moderate instance sizes in knapsack variants. Multi-objective discrete optimization arises when multiple conflicting criteria, such as cost and reliability, must be balanced, aiming to identify the Pareto front—a set of non-dominated solutions where no objective can improve without degrading another.
Common methods include scalarization via weighted sums, transforming the vector problem into a single-objective one by optimizing \sum w_i f_i(x) for weights w_i > 0, or evolutionary algorithms that approximate the entire front. In discrete contexts like the multi-objective traveling salesman problem, the Pareto set can be exponentially large, but branch-and-bound techniques with dominance checks efficiently enumerate supported solutions. Overviews of five decades of development emphasize the role of scalarization methods in generating the front for mixed-integer programs, ensuring convexity in objective space for linear cases. Constraint programming integrates constraint satisfaction with optimization, declaring variables, domains, and logical constraints to prune search spaces efficiently, particularly for scheduling problems with temporal and resource dependencies. In optimization variants, it extends Boolean satisfiability (SAT) solvers to handle objectives like minimizing makespan in job shop scheduling, where constraints enforce precedence and no-overlap conditions. Global constraints, such as cumulative for resource limits, propagate bounds to reduce the feasible set, outperforming pure MILP for highly combinatorial structures. Tutorials on constraint-based scheduling illustrate how hybridization with local search yields practical solutions, as in personnel rostering where logical implications ensure feasibility before optimizing soft constraints. Quantum-inspired approaches to discrete optimization draw from quantum mechanical principles, such as superposition and tunneling, to tackle NP-hard problems encoded as Ising models with discrete spin variables minimizing \sum_{i,j} J_{ij} s_i s_j + \sum_i h_i s_i, where s_i \in \{-1, 1\}. Quantum annealing, an early method implemented on devices like D-Wave systems, simulates adiabatic evolution to find ground states, offering potential speedups over classical heuristics for sparse graphs.
Initial explorations on benchmark problems, including the quadratic unconstrained binary optimization (QUBO) reformulation of discrete tasks, report empirical advantages in solution quality for certain random instances, though classical heuristics often match or exceed performance on dense graphs. These techniques inspire software implementations that mimic quantum effects without quantum hardware, focusing on encoding constraints in the Ising Hamiltonian.
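The Ising encoding above can be made concrete with max-cut: choosing couplings J_ij equal to edge weights makes the minimum-energy spin assignment the maximum cut, since cut edges contribute -J_ij and uncut edges +J_ij. The brute-force sketch below stands in for an annealer on a tiny hypothetical triangle graph:

```python
from itertools import product

def maxcut_ising_brute_force(J):
    """Solve max-cut via its Ising encoding by exhaustive search.

    J is an upper-triangular coupling matrix (J[i][j] for i < j is the
    edge weight).  The energy sum over J_ij * s_i * s_j is minimized
    when as much edge weight as possible lies between the +1 and -1
    spin groups, i.e. when the cut is maximum.
    """
    n = len(J)
    best_spins, best_energy = None, float("inf")
    for spins in product([-1, 1], repeat=n):
        energy = sum(J[i][j] * spins[i] * spins[j]
                     for i in range(n) for j in range(i + 1, n))
        if energy < best_energy:
            best_spins, best_energy = spins, energy
    return best_spins, best_energy

# Hypothetical unweighted triangle: any 2-vs-1 split cuts 2 of 3 edges.
J = [[0, 1, 1],
     [0, 0, 1],
     [0, 0, 0]]
spins, energy = maxcut_ising_brute_force(J)
```

The cut weight recovers from the energy as (total edge weight - energy) / 2; here (3 - (-1)) / 2 = 2 cut edges, which is optimal for a triangle.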

Challenges and Future Directions

Computational Complexity

Discrete optimization problems often fall within the complexity class NP, where solutions can be verified in polynomial time, but finding them may require exponential effort. A cornerstone of this landscape is the P versus NP question, which asks whether every problem in NP can be solved in polynomial time. Many seminal discrete optimization problems, such as the traveling salesman problem (TSP) and the Boolean satisfiability problem (SAT), are NP-complete, meaning they are among the hardest problems in NP and serve as benchmarks for the class. SAT was the first problem proven NP-complete via Cook's theorem in 1971, establishing that it encapsulates the difficulty of all NP problems through polynomial-time reductions. TSP, a classic routing optimization challenge, was subsequently shown NP-complete by Karp in 1972, implying that no known polynomial-time exact algorithm exists for it unless P = NP. This hardness underscores the theoretical limits of exact computation for these problems, as resolving P = NP remains one of the Millennium Prize Problems. NP-hard problems in discrete optimization extend beyond NP-complete decision versions to include optimization tasks like integer programming, where finding an optimal integer solution to a linear program is NP-hard. Garey and Johnson formalized this in their 1979 compendium, proving that even the decision version of integer programming is NP-complete. A key distinction within NP-hardness is between weakly and strongly NP-hard problems. Weakly NP-hard problems, such as the 0-1 knapsack problem, are NP-hard under binary encoding of inputs but admit pseudo-polynomial time algorithms, solvable in time polynomial in the numeric values but exponential in their bit lengths. In contrast, strongly NP-hard problems, like TSP, remain NP-hard even under unary encoding of inputs, resisting pseudo-polynomial solutions and highlighting inherent combinatorial explosion regardless of number sizes. 
Despite pervasive hardness, certain discrete optimization problems reside in P and admit efficient exact algorithms. The maximum cardinality matching problem in general graphs, for instance, can be solved in polynomial time using Edmonds' blossom algorithm, which runs in O(n^3) time via efficient implementations that handle augmenting paths and blossom contractions. Likewise, the single-source shortest path problem in graphs with non-negative edge weights is solvable in O((V + E) \log V) time using Dijkstra's algorithm with a binary heap priority queue, enabling practical optimization in network routing. These tractable cases demonstrate that while many discrete problems are intractable, structural properties like bipartiteness or acyclicity can yield polynomial solvability. The theory of NP-completeness relies on polynomial-time reductions to interconnect problems, revealing shared hardness. Karp's influential 1972 paper demonstrated this by reducing SAT to 21 combinatorial problems, including the Hamiltonian cycle problem underlying TSP and the clique problem, thereby establishing their NP-completeness. A notable example is the reduction from 3-SAT to the clique problem: for a 3-CNF formula with m clauses, construct a graph with one vertex per literal occurrence, connecting vertices if they appear in different clauses without variable conflicts, such that the formula is satisfiable if and only if the graph has a clique of size m. This chain of reductions illustrates how solving one NP-complete discrete optimization problem in polynomial time would collapse P and NP. The established NP-hardness of core discrete optimization problems has profound implications, justifying the shift to inexact methods for real-world applications where exact solutions are infeasible for large instances. Without a polynomial-time breakthrough—contingent on P ≠ NP—practitioners rely on approximations and heuristics to achieve near-optimal solutions efficiently, as exact solvers scale poorly beyond modest problem sizes.
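The tractable case of shortest paths can be sketched with the heap-based Dijkstra variant cited above; the adjacency list in the example is a hypothetical four-vertex graph:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths for non-negative edge weights.

    A binary-heap priority queue yields the O((V + E) log V) bound;
    adj maps each vertex to a list of (neighbor, weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already settled with a shorter path
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical directed graph; the shortest 0 -> 3 route is 0-2-1-3
# with total weight 4, beating the direct-looking 0-2-3 route of weight 6.
adj = {0: [(1, 4), (2, 1)], 2: [(1, 2), (3, 5)], 1: [(3, 1)]}
```

The non-negativity assumption is what lets popped vertices be settled permanently; with negative edges the greedy invariant breaks and Bellman-Ford is needed instead.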
One prominent emerging trend in discrete optimization is the integration of machine learning techniques to enhance traditional exact algorithms, particularly in mixed-integer linear programming (MILP) solvers. Neural networks have been employed to improve bounding mechanisms within branch-and-bound frameworks by learning effective cutting planes, which tighten relaxations and reduce the search space. For instance, since the 2010s, learning-based approaches have been developed to predict the strength of cuts, leading to significant speedups in solving large-scale MILPs; such methods can improve solver runtime by around 16% on synthetic MILPs, as shown in recent studies. More recent advancements, such as machine learning-augmented branch-and-bound, use graph neural networks to guide node selection, demonstrating improved performance. Hybrid AI methods are further advancing heuristic guidance and problem-solving in combinatorial optimization. Reinforcement learning (RL) has emerged as a powerful tool for dynamically selecting variables or cuts during the branch-and-bound process, treating the solver's decisions as an RL policy to maximize cumulative rewards based on solution quality and time efficiency. For example, RL-based approaches, such as learned feasibility pumps, have been shown to significantly reduce steps to feasible solutions compared to traditional methods on small to medium instances. Complementing this, graph neural networks (GNNs) have gained traction post-2018 for directly tackling combinatorial problems like the traveling salesman or maximum independent set, where they encode graph structures to approximate optimal solutions or inform exact solvers; surveys indicate GNNs can yield near-optimal results faster than classical methods on various graph instances. To address scalability in the era of big data, distributed solvers are being developed for massive problems, leveraging cloud and high-performance computing infrastructures to parallelize branch-and-bound across clusters.
Such distributed approaches decompose a MILP into subproblems solved concurrently, coordinated via shared bounds, enabling instances with millions of variables that exceed single-machine limits. Recent distributed algorithms based on allocating right-hand sides across processors, for example, have demonstrated faster solve times than centralized solvers like Gurobi on large-scale MILPs. Parallel distributed-memory frameworks such as PIPS-SBB solve stochastic mixed-integer programs by spreading the branch-and-bound search over many processors.

In sustainability applications, discrete optimization is increasingly applied to power grids, particularly through the unit commitment problem, which schedules generator on/off states to minimize cost while integrating variable renewables such as wind and solar. Formulations model discrete decisions such as unit startups and ramping constraints alongside renewable-generation forecasts, using MILP to balance grid stability and emissions. Stochastic multiarea unit commitment models with transmission constraints have shown significant cost reductions under high wind penetration, and advanced graph-computing methods for security-constrained unit commitment handle ultra-large-scale grids with thousands of buses, optimizing discrete commitments in near real time for renewable-dominated systems.

Progress in quantum computing is opening new avenues for discrete optimization via D-Wave's quantum annealing systems, which natively solve quadratic unconstrained binary optimization (QUBO) formulations, a canonical representation for many discrete problems such as graph partitioning and knapsack variants. In the 2020s, demonstrations have applied quantum annealing to sparse Ising models derived from QUBOs, and recent benchmarks on D-Wave's Advantage system show competitive performance on combinatorial problems like MaxCut, though challenges such as embedding efficiency persist for dense instances.
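As a small worked example of a QUBO formulation, the following sketch encodes MaxCut on a 4-cycle as a QUBO matrix and minimizes it by brute force; an annealer would take the same matrix Q as input. The helper names are hypothetical, and the exhaustive search merely stands in for annealing hardware on this toy instance.

```python
from itertools import product

def maxcut_qubo(n, edges):
    """QUBO matrix Q for MaxCut: each edge (i, j) contributes the cut
    indicator x_i + x_j - 2*x_i*x_j, negated so that minimizing x^T Q x
    maximizes the cut (quadratic terms kept in the upper triangle)."""
    Q = [[0] * n for _ in range(n)]
    for i, j in edges:
        Q[i][i] -= 1
        Q[j][j] -= 1
        Q[i][j] += 2
    return Q

def brute_force_min(Q):
    """Exhaustive stand-in for an annealer: minimize x^T Q x over {0,1}^n."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in product([0, 1], repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# 4-cycle graph: the maximum cut alternates the two sides and cuts all 4 edges.
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
x, energy = brute_force_min(maxcut_qubo(4, edges))
print(x, -energy)  # optimal assignment alternates sides; cut value is 4
```

The same pattern, penalty terms folded into a single quadratic matrix over binary variables, is how graph partitioning and knapsack variants are mapped onto annealing hardware.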

    Sep 4, 2025 · As quantum hardware, we use the D-Wave Advantage quantum annealer (McGeoch and Farré, 2020) , an analog device operating with superconducting ...Missing: 2020s | Show results with:2020s