
Combinatorial optimization

Combinatorial optimization is a branch of mathematical optimization that focuses on finding an optimal solution—typically a minimum or maximum value of an objective function—from a finite, discrete set of feasible configurations, often involving combinatorial structures like subsets, permutations, or graphs. These problems are characterized by a ground set of elements and a collection of feasible subsets, where the goal is to select a feasible subset that minimizes the total cost, subject to constraints defining feasibility. The field integrates techniques from combinatorics, linear programming, and the theory of algorithms to address both tractable problems solvable in polynomial time and NP-hard challenges requiring approximation or heuristic approaches. Notable exact methods include branch-and-bound, introduced by Land and Doig in 1960 for systematic enumeration, and cutting plane algorithms developed by Gomory in 1958 to strengthen linear relaxations of integer programs. For intractable cases, approximation algorithms guarantee solutions within a bounded factor of optimality, while metaheuristics like simulated annealing explore large search spaces efficiently. Combinatorial optimization problems arise ubiquitously in applications such as logistics (e.g., vehicle routing), manufacturing (e.g., scheduling), and network design (e.g., minimum spanning trees), often modeled as binary integer linear programs where variables indicate inclusion in a solution set. Classic examples include the traveling salesman problem, which minimizes the tour length visiting all cities exactly once, and the knapsack problem, which maximizes value packed into a limited capacity without exceeding weight limits; both exemplify the blend of theoretical depth and practical impact in the discipline. Structural insights, such as min-max theorems (e.g., max-flow min-cut), enable polynomial-time solutions for many network-related problems via polyhedral combinatorics.

Fundamentals

Definition and Scope

Combinatorial optimization is a branch of mathematical optimization focused on finding an optimal solution from a finite or countable set of feasible alternatives, typically by minimizing or maximizing an objective function defined over combinatorial structures. Formally, such a problem P = (F, Q) consists of a feasible set F, representing possible solutions like subsets, permutations, or sequences, and a real-valued objective function Q: F \to \mathbb{R}, where the aim is to compute \mathrm{opt}(P) = \min_{x \in F} Q(x) or \max_{x \in F} Q(x). This framework captures the essence of decision-making under constraints, where optimality is measured against a clear criterion. Unlike continuous optimization, which operates over infinite, connected feasible regions in Euclidean space with real-valued variables and often leverages smooth functions for tractable solutions like gradient-based methods, combinatorial optimization emphasizes inherently discrete variables such as integers, binary choices, or arrangements. This discreteness restricts the applicability of continuous techniques and frequently results in exponentially large search spaces, even for modestly sized instances. The field's roots lie in discrete mathematics, where solutions must be enumerated or structured around finite configurations rather than approximated via real-number relaxations. The scope of combinatorial optimization broadly includes problems involving graphs (e.g., paths or trees), sets (e.g., covers or partitions), and sequences (e.g., orderings or alignments), encompassing both minimization forms, like cost reduction, and maximization forms, like resource allocation. A representative example is the shortest path problem in a weighted graph, which seeks the minimum-total-weight route from a source to a target among all feasible paths, illustrating how graph-based decisions align with the field's paradigm. Many such problems exhibit NP-hardness, underscoring their computational intractability for exact solutions in the worst case.

Key Concepts and Terminology

In combinatorial optimization, the feasible set refers to the collection of all valid solutions that satisfy the problem's constraints, such as integrality requirements or resource limits. This set is typically finite and discrete, distinguishing combinatorial problems from continuous optimization, where solutions may form an infinite space. The objective function is a mathematical expression designed to measure the quality of a candidate solution, often representing quantities like cost, time, or profit that need to be minimized or maximized. A general combinatorial optimization problem can thus be formulated as finding a solution x that minimizes (or maximizes) f(x) subject to x belonging to the feasible set S, expressed as \min \{ f(x) \mid x \in S \}, where f maps solutions to real numbers and S encodes the constraints. Integer programming serves as a primary framework for modeling many combinatorial optimization problems, requiring variables to take integer values. It encompasses linear integer programs, where the objective and constraints are linear, and nonlinear variants; pure integer programs restrict all variables to integers, while mixed-integer programs allow some to be continuous. The search space comprises all possible solutions, including the feasible set and potentially infeasible ones, over which optimization algorithms explore to identify high-quality outcomes. Solution quality is assessed relative to the objective: an optimal solution achieves the best possible value of f(x) within S; a feasible solution satisfies all constraints but may not be optimal; and a suboptimal solution is feasible yet yields a worse objective value than the optimum. The field of combinatorial optimization emerged from earlier foundations in operations research during the mid-20th century.
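As a concrete illustration of the abstract form \min \{ f(x) \mid x \in S \} (here in its maximization variant), the following sketch enumerates the finite search space of a tiny 0-1 knapsack instance by brute force; the item values, weights, and capacity are invented for illustration:

```python
from itertools import product

# Hypothetical instance: three items with values, weights, and a capacity.
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

def feasible(x):
    # S: binary vectors whose total weight respects the capacity.
    return sum(w * xi for w, xi in zip(weights, x)) <= capacity

def objective(x):
    # f: total value of the selected items (to be maximized).
    return sum(v * xi for v, xi in zip(values, x))

# Exhaustive search over the finite search space {0,1}^n.
best = max((x for x in product((0, 1), repeat=len(values)) if feasible(x)),
           key=objective)
print(best, objective(best))  # -> (0, 1, 1) 220
```

Brute force works here only because the search space has 2^3 = 8 points; its exponential growth is exactly what motivates the exact and heuristic methods discussed later.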

Applications

In Operations Research and Logistics

Combinatorial optimization plays a pivotal role in operations research and logistics by addressing complex decision-making problems that involve discrete choices under resource constraints. One of the most prominent applications is the Vehicle Routing Problem (VRP), which seeks to determine optimal routes for a fleet of vehicles to serve a set of customers while minimizing total travel distance, time, or cost. Introduced as the truck dispatching problem, the VRP models scenarios such as gasoline delivery from a bulk terminal to service stations, where vehicles must return to the depot after servicing locations with specified demands. Variants like the Capacitated VRP (CVRP) incorporate vehicle capacity limits, ensuring that the total demand at assigned stops does not exceed a truck's load, which is essential for real-world distribution in retail and e-commerce logistics. In supply chain management, combinatorial methods tackle inventory management and facility location to enhance efficiency across multi-echelon networks. Inventory management problems use optimization to balance holding costs, ordering frequencies, and stockouts by determining replenishment policies for warehouses and retailers, often modeled as multi-period lot-sizing problems that minimize total costs while meeting demand. Facility location problems, meanwhile, decide the optimal placement of distribution centers to minimize transportation and setup costs, integrating factors like customer proximity and supply capacities in global networks. These approaches enable resilient supply chains by coordinating upstream suppliers with downstream fulfillment, reducing excess inventory and improving service levels in industries like manufacturing and retail. Network flow techniques, particularly maximum flow algorithms, are applied to resource allocation in transportation networks, modeling logistics as directed graphs where edges represent roads or pipelines with capacity limits.
The maximum flow value gives the highest possible throughput from supply sources to demand sinks, such as optimizing cargo movement in rail or port systems to avoid bottlenecks during peak periods. This is crucial for logistics planning, where it supports evacuation routing or freight consolidation, ensuring maximal utilization of infrastructure while respecting constraints like vehicle capacities and time windows. The economic impact of these optimizations is substantial, as demonstrated by applications in shipping that yield multimillion-dollar savings. For instance, UPS's ORION system, a large-scale route optimization tool based on advanced heuristics, has reduced annual mileage by 100 million miles and fuel consumption by 10 million gallons, translating to $300–$400 million in yearly cost savings while cutting emissions. A notable application is airline crew scheduling, where combinatorial optimization assigns pilots and cabin crew to flight legs while minimizing costs and satisfying regulations on duty times and rest periods. This large-scale set partitioning problem, solved via decomposition algorithms, covers thousands of flights monthly for major carriers, significantly reducing operational expenses through efficient pairing generation.
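To make the maximum-flow computation concrete, here is a minimal Edmonds-Karp sketch (augmenting along shortest paths found by BFS) on a hypothetical four-node shipping network; the capacities and node roles are invented for illustration:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly augment along shortest augmenting paths."""
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual network.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:          # no augmenting path left: done
            return flow
        # Find the bottleneck capacity along the path, then push flow.
        v, bottleneck = sink, float("inf")
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Hypothetical network: node 0 = depot (source), node 3 = hub (sink).
cap = [[0, 10, 8, 0],
       [0, 0, 5, 6],
       [0, 0, 0, 10],
       [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # -> 16
```

By the max-flow min-cut theorem, the returned value 16 matches the capacity of the cut separating the sink (6 + 10), certifying optimality.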

In Computer Science and Engineering

In very-large-scale integration (VLSI) design, combinatorial optimization plays a crucial role in circuit layout problems, where the goal is to arrange components to minimize wire length, area, or power consumption while satisfying connectivity constraints. For instance, placement and routing tasks are formulated as quadratic assignment or Steiner tree problems, often solved using integer linear programming or graph partitioning techniques to achieve near-optimal layouts. A seminal survey highlights how these methods address timing closure and layout challenges in modern chip design, enabling efficient handling of millions of transistors. Compiler optimization leverages combinatorial optimization for tasks such as register allocation, which maps program variables to a limited set of processor registers to minimize spills to memory, and instruction scheduling, which reorders operations to reduce latency and exploit parallelism. These problems are modeled as graph coloring for allocation and as list scheduling or integer programming for sequencing, improving code performance in just-in-time compilers and embedded systems. Research demonstrates that a unified combinatorial framework can integrate both tasks, yielding better code quality than traditional heuristics in practice. In machine learning, combinatorial optimization underpins feature selection, where subsets of input variables are chosen to enhance model accuracy and reduce dimensionality, often framed as a set cover or subset selection problem to balance relevance and redundancy. Similarly, hyperparameter tuning in discrete spaces, such as selecting network architectures or tree depths, treats configurations as combinatorial objects optimized via Bayesian methods or evolutionary algorithms. These approaches have shown improvements in predictive performance, with quantum-inspired solvers applied to feature selection yielding scalable solutions for high-dimensional datasets. Bioinformatics employs combinatorial optimization for sequence alignment, aligning DNA, RNA, or protein sequences to identify similarities via dynamic programming on edit distances, minimizing gaps and mismatches in pairwise or multiple alignments.
Protein folding approximations model the search for low-energy conformations as a lattice-based combinatorial optimization problem, using branch-and-bound or heuristic search to navigate the vast conformational space. These methods facilitate evolutionary analysis and structure prediction, with hierarchical algorithms assembling substructures to predict complex folds efficiently. Post-2020 advances have integrated machine learning with combinatorial optimization in hybrid frameworks for autonomous systems, combining neural networks for learning heuristics with exact solvers for decision-making in real-time scenarios like vehicle routing or motion planning in robotics. For example, quantum-enhanced optimization aids autonomous vehicle control by balancing safety, efficiency, and energy use through AI-guided search in discrete state spaces. Such integrations improve scalability and adaptability, as seen in large language model-assisted solvers for dynamic environments.
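The dynamic-programming idea behind pairwise alignment can be sketched via the closely related Levenshtein edit distance; this is a simplified stand-in for full alignment scoring schemes (which add gap penalties and substitution matrices), not a production aligner:

```python
def edit_distance(a, b):
    """Levenshtein distance via DP: minimum number of insertions,
    deletions, and substitutions turning string a into string b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j          # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # match/mismatch
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # -> 3
```

Each cell dp[i][j] depends only on its three neighbors, so the table fills in O(mn) time, the same structure exploited by Needleman-Wunsch-style alignment algorithms.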

Solution Methods

Exact Methods

Exact methods in combinatorial optimization aim to find optimal solutions by exhaustively exploring the solution space while leveraging computational techniques to prune infeasible or suboptimal branches, ensuring guarantees of optimality for finite problems. These approaches are particularly suited for problems where the search space is manageable or structured to allow efficient bounding and pruning. Unlike heuristic methods, exact methods provide provable optimality, though they often face exponential growth in computation time for large instances of NP-hard problems. Branch and bound is a foundational exact method that systematically enumerates candidate solutions by partitioning the feasible region into subproblems, using upper and lower bounds to eliminate branches that cannot contain the optimum. Introduced in 1960 by Ailsa Land and Alison Doig for solving mixed-integer linear programs, the algorithm relaxes integer constraints to obtain bounds, typically via linear programming relaxations, and prunes subtrees where the bound exceeds the current best solution. This pruning mechanism significantly reduces the effective search space, as demonstrated in early applications to discrete programming problems where it efficiently handled enumerations that would otherwise be prohibitive. For instance, in the traveling salesman problem, branch and bound with LP-based bounds has solved instances up to 40 cities by exploring only a fraction of the total permutations. Dynamic programming solves combinatorial optimization problems by breaking them into overlapping subproblems and storing solutions to avoid recomputation, exploiting the principle of optimality where an optimal solution to the overall problem incorporates optimal solutions to subproblems. Developed by Richard Bellman in the 1950s, this recursive approach is ideal for problems with optimal substructure, such as the 0-1 knapsack problem, where the maximum value achievable with a subset of items and remaining capacity is computed via a recurrence.
In the knapsack case, a bottom-up tabular implementation fills a two-dimensional array of size proportional to the number of items and capacity, yielding an exact solution in O(nW) time, where n is the number of items and W is the capacity. This method has been extended to other problems like shortest paths in acyclic graphs and sequence alignment, providing exact optima when the state space is not excessively large. Integer linear programming (ILP) solvers extend the simplex method for linear programs to handle integrality constraints, combining relaxation, branching, and cutting techniques to converge to integer-optimal solutions. The simplex method, originated by George Dantzig in 1947, iteratively pivots along edges of the feasible polyhedron to reach an optimum in polynomial average-case time but exponential worst-case time. For ILPs, cutting planes, introduced by Ralph Gomory in 1958, strengthen the linear relaxation by adding valid inequalities (cuts) that eliminate fractional solutions without excluding integer feasibles, such as the Gomory mixed-integer cut derived from simplex tableaux. Branch-and-cut algorithms integrate these by alternating branching on fractional variables with cut generation, as formalized in modern solvers, enabling exact solutions to large-scale ILPs through repeated refinement of the polyhedral approximation. The time complexity of exact methods is exponential in the worst case for NP-hard combinatorial problems, as they must potentially explore an exponential number of subproblems to verify optimality, with bounds like O(2^n) for subset-based enumerations. However, for specially structured problems such as network flows, exact methods achieve polynomial time complexity; for example, minimum-cost flow can be solved in O(m \log U (m + n \log n)) time using capacity scaling, where n is the number of nodes, m the number of arcs, and U the maximum capacity.
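A minimal sketch of the knapsack recurrence described above, using the common space-saving variant that keeps only a one-dimensional table indexed by capacity (the instance data is invented for illustration):

```python
def knapsack(values, weights, W):
    """Bottom-up 0-1 knapsack DP in O(nW) time and O(W) space.
    dp[c] holds the best value achievable within capacity c."""
    dp = [0] * (W + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(W, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

Note that O(nW) is pseudo-polynomial: the running time depends on the numeric value of W, not just the input length, which is consistent with the problem remaining NP-hard.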
Prominent software tools for exact solutions include IBM ILOG CPLEX and Gurobi, both commercial solvers that implement advanced branch-and-cut frameworks with presolving, heuristics for warm starts, and parallel search to tackle ILPs with millions of variables. CPLEX, first released in 1988, pioneered barrier methods for large-scale LPs and has solved real-world instances like airline scheduling with over 10,000 integer variables. Gurobi, introduced in 2009, emphasizes multicore and cluster scalability, often outperforming competitors on mixed-integer quadratic programs through tuned cut selection and node selection strategies. These tools guarantee optimality certificates and are widely used in industry for their robustness on structured combinatorial models.

Approximation and Heuristic Methods

Combinatorial optimization problems often involve vast search spaces that render exact methods computationally infeasible for large instances, necessitating approximation and heuristic approaches that trade optimality for efficiency. These methods aim to produce solutions that are "good enough" in reasonable time, with approximation algorithms offering provable guarantees on solution quality relative to the optimum, while heuristics prioritize speed and practicality without such bounds. While exact methods suffice for small instances, these techniques provide scalable alternatives whose value shines in high-dimensional or real-time applications. Greedy algorithms form a foundational class of heuristics, iteratively selecting the locally optimal choice at each step to construct a solution rapidly. For instance, in the traveling salesman problem, the nearest neighbor algorithm starts at an arbitrary city and repeatedly visits the closest unvisited city, yielding a tour in O(n^2) time but potentially far from optimal. This approach excels in problems like minimum spanning trees, where Kruskal's or Prim's algorithms guarantee optimality due to the greedy choice property, but in general, it risks entrapment in local optima. Local search methods enhance greedy strategies by exploring neighborhoods around an initial solution, iteratively improving it until no better neighbor exists. Techniques like hill-climbing evaluate adjacent solutions—such as swapping elements in a permutation—and move to the first improvement, converging quickly but susceptible to local maxima. More robust variants, including 2-opt for path optimization, systematically probe larger neighborhoods to escape poor local optima, balancing computational cost with quality in problems like scheduling. Approximation algorithms provide rigorous performance assurances, measuring how closely a polynomial-time solution approaches the optimum via ratios.
A ρ-approximation algorithm guarantees a solution within factor ρ of the optimal value; for example, a simple 2-approximation algorithm for vertex cover greedily constructs a maximal matching and includes both endpoints of each matched edge in the cover; the size is at most twice the optimum since each matched edge requires at least one vertex in any cover. Seminal results include Christofides' 1.5-approximation for metric TSP, which combines minimum spanning trees with minimum-weight perfect matchings, demonstrating how structural insights yield tight bounds for specific problem classes. These ratios establish worst-case reliability, guiding practitioners on expected suboptimality. Metaheuristics extend local search by incorporating mechanisms to escape local optima and explore the global landscape more broadly. Simulated annealing, inspired by annealing in metallurgy, probabilistically accepts worse solutions with probability exp(-ΔE/T) to mimic cooling processes, where ΔE is the cost increase and T the temperature, enabling diverse exploration before refinement. Genetic algorithms evolve populations of solutions through selection, crossover, and mutation, drawing from natural evolution to maintain diversity; John Holland's foundational work in the 1970s formalized this for optimization. Tabu search, proposed by Glover in 1986, uses short-term memory to forbid recent moves (tabu lists) while allowing aspiration criteria for promising exceptions, preventing cycles and promoting intensification. These methods, while lacking guarantees, often outperform pure local search on complex landscapes like vehicle routing. Recent hybrid approaches integrate heuristics with machine learning to automate tuning and enhance adaptability, particularly since 2015 amid advances in deep learning. For example, reinforcement learning can train policies to select neighborhood operators dynamically, as in graph neural network-based solvers for routing problems, improving over static metaheuristics by learning instance-specific behaviors.
Neuro-symbolic hybrids combine neural networks for initial solutions with traditional local search for refinement, achieving state-of-the-art results on benchmarks like the traveling salesman problem. These developments leverage data-driven insights to reduce manual parameter adjustment, scaling heuristics to massive datasets.
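The two heuristic styles discussed above can be sketched together: greedy nearest-neighbor construction followed by 2-opt local search (segment reversal) on a small, invented set of city coordinates. This is an illustrative toy, not a tuned TSP solver:

```python
import math

# Hypothetical city coordinates for a small symmetric TSP instance.
cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 0)]

def dist(i, j):
    (x1, y1), (x2, y2) = cities[i], cities[j]
    return math.hypot(x1 - x2, y1 - y2)

def tour_length(tour):
    # Closed tour: include the edge back to the start.
    return sum(dist(tour[k], tour[(k + 1) % len(tour)])
               for k in range(len(tour)))

def nearest_neighbor(start=0):
    """Greedy construction: always visit the closest unvisited city."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(tour[-1], j))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour):
    """Local search: reverse segments as long as doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_length(candidate) < tour_length(tour) - 1e-9:
                    tour, improved = candidate, True
    return tour

tour = two_opt(nearest_neighbor())
print(tour, round(tour_length(tour), 2))
```

The construction runs in O(n^2) time and the 2-opt phase terminates at a local optimum with respect to the reversal neighborhood; neither step carries an approximation guarantee, matching the heuristic-versus-approximation distinction above.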

Computational Complexity

Decision vs. Optimization Versions

In combinatorial optimization, problems are often categorized into decision, search, and optimization variants, each addressing different aspects of the underlying combinatorial structure. A decision problem poses a yes/no question about the existence of a feasible solution satisfying a specific condition, such as whether a given bound can be met. For instance, the decision version of the traveling salesman problem (TSP) asks: given a graph with edge weights and a threshold k, does there exist a Hamiltonian cycle with total weight at most k? This formulation allows for binary outcomes, making it suitable for complexity classifications like NP-completeness. The search problem extends the decision variant by requiring the explicit construction of a feasible solution when one exists, without necessarily optimizing any objective. In the context of TSP, the search version would demand finding any Hamiltonian cycle if the decision answer is yes, serving as a bridge to more constructive tasks in combinatorial settings. Search problems are analyzed within the class FNP, where solutions can be verified in polynomial time if a corresponding decision problem is in NP. An optimization problem, in contrast, seeks the best feasible solution according to a minimization or maximization objective, such as finding the cycle of minimum total weight in TSP. This variant captures the core goal of combinatorial optimization but introduces greater computational demands, as it requires not only feasibility but also extremal quality. Optimization problems are typically NP-hard when their decision counterparts are NP-complete, reflecting the inherent difficulty of pinpointing global optima in discrete spaces. The relationships among these variants are governed by polynomial-time reductions that highlight their interconnected complexities. Solving an optimization problem immediately yields solutions to its decision and search versions: compute the optimal value or solution, then compare to the threshold for decision, or output the solution directly for search.
Thus, the decision problem reduces to the optimization problem in polynomial time. The converse holds via self-reducibility in many combinatorial cases: the optimal value can be found by binary search over possible thresholds (with polynomially many decision queries, given bounded solution spaces), and the solution constructed by iteratively querying modified instances. However, direct polynomial reductions from optimization to decision do not always exist without multiple oracle calls, underscoring that optimization encompasses stricter requirements. Formally, these problems are defined within computational models like Turing machines to analyze their complexity. A decision problem is a language L \subseteq \{0,1\}^* where a Turing machine decides membership in polynomial time for P, or membership is verified via nondeterministic polynomial-time computation for NP. Search problems correspond to relations R \subseteq \{0,1\}^* \times \{0,1\}^* where, for each instance x, any witness y with (x,y) \in R is short (of length polynomial in |x|) and verifiable deterministically in polynomial time. Optimization problems generalize this to functions mapping instances to optimal values or solutions, often with the objective computable in polynomial time given a candidate solution. These definitions enable precise classifications, such as TSP's decision version being NP-complete via Cook's theorem and Karp reductions.
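The self-reducibility argument — recovering the optimal value from polynomially many decision queries — can be demonstrated on a toy knapsack instance. The brute-force oracle and the instance data below are purely illustrative; in a real reduction the oracle would be the (hypothetically efficient) decision procedure:

```python
from itertools import combinations

# Toy instance (hypothetical data): values, weights, and a capacity.
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

def decide(k):
    """Decision oracle: does a feasible solution of value >= k exist?
    (Implemented by brute force here, just to play the oracle role.)"""
    for r in range(len(values) + 1):
        for subset in combinations(range(len(values)), r):
            if (sum(weights[i] for i in subset) <= capacity
                    and sum(values[i] for i in subset) >= k):
                return True
    return False

# Binary search over the bounded range of objective values: each step
# halves the interval, so only O(log(sum of values)) oracle calls are needed.
lo, hi = 0, sum(values)
while lo < hi:
    mid = (lo + hi + 1) // 2
    if decide(mid):
        lo = mid
    else:
        hi = mid - 1
print(lo)  # -> 220
```

Since the objective values are integers bounded by the instance encoding, the number of oracle calls is polynomial in the input length, which is exactly why the optimization and decision versions share the same polynomial-time status.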

Hardness and Intractability

Combinatorial optimization problems whose decision versions belong to the class NP are known as NP-optimization problems. These are typically search problems where, given an instance, one seeks to find a feasible solution that optimizes an objective function, and verifying the feasibility and objective value of a proposed solution can be done in polynomial time. The intractability of many NP-optimization problems stems from their NP-hardness, established via polynomial-time many-one reductions from NP-complete decision problems. A canonical example is the traveling salesman problem (TSP), which is NP-hard through a reduction from the Hamiltonian cycle problem, implying that no polynomial-time algorithm exists for TSP unless P = NP. This hardness extends to numerous other combinatorial problems, underscoring the theoretical barriers to exact solutions. The unresolved P versus NP question amplifies these challenges: if P = NP, polynomial-time algorithms would exist for finding optimal solutions to NP-optimization problems; conversely, the consensus in theoretical computer science holds that P ≠ NP, making exact optimization infeasible for large-scale instances due to superpolynomial worst-case time requirements. This assumption drives the practical focus on heuristics and approximations, as exhaustive search becomes computationally prohibitive beyond modest problem sizes. Fixed-parameter tractability provides a nuanced perspective on hardness, classifying problems as solvable in time f(k) \cdot n^{O(1)}, where n is the input size, k is a small parameter, and f is an arbitrary computable function. In combinatorial optimization on graphs, parameters such as treewidth—measuring how "tree-like" a graph is—enable FPT algorithms for problems like independent set or vertex cover when treewidth is bounded, allowing efficient dynamic programming over tree decompositions. Advances in quantum computing offer potential mitigations, though not full resolutions, to this intractability.
Grover's algorithm achieves a quadratic speedup over classical unstructured search, reducing the time to find optimal solutions in NP-optimization problems from O(2^n) to O(2^{n/2}) for n-bit search spaces. As of 2025, developments in variational quantum algorithms, such as the quantum approximate optimization algorithm (QAOA), continue to demonstrate empirical advantages on small instances of problems like Max-Cut, with recent extensions including multi-objective formulations and dynamic adaptive variants showing improved performance on specific combinatorial tasks; however, scalability remains constrained by noise and qubit limitations in near-term devices.

Notable Problems

Graph Optimization Problems

Graph optimization problems form a cornerstone of combinatorial optimization, focusing on finding optimal structures or paths within graph representations of networks, such as transportation systems or communication grids. These problems typically involve minimizing costs, weights, or lengths while satisfying connectivity or coverage constraints, and they underpin many real-world applications in routing and network design. Seminal challenges in this domain include finding minimum-weight spanning trees, shortest paths, maximum matchings, and tours visiting all vertices exactly once, each with well-defined mathematical formulations that highlight their NP-hardness or polynomial-time solvability. The Traveling Salesman Problem (TSP) seeks a minimum-weight Hamiltonian cycle in a weighted graph, where the cycle visits each vertex exactly once and returns to the starting point. In the symmetric variant, edge weights satisfy c_{ij} = c_{ji} and edges are undirected, while the asymmetric TSP (ATSP) allows directed edges with potentially different weights in each direction. The problem's origins trace back to the 1930s, when Karl Menger posed it informally as a mathematical challenge. A classic linear programming formulation for the TSP, known as the Miller-Tucker-Zemlin (MTZ) model, uses binary variables x_{ij} indicating traversal from city i to city j and auxiliary variables u_i representing the position of city i in the tour. The objective is to minimize \sum_{i \neq j} c_{ij} x_{ij}, subject to degree constraints \sum_{j \neq i} x_{ij} = 1 and \sum_{j \neq i} x_{ji} = 1 for all i, and subtour elimination via u_i - u_j + n x_{ij} \leq n-1 for i, j = 2, \dots, n with 2 \leq u_i \leq n. The minimum spanning tree (MST) problem aims to find a subset of edges in an undirected, connected, edge-weighted graph that connects all vertices with minimum total weight and no cycles. This optimization ensures efficient network connectivity, such as in electrical grids or clustering. Kruskal's algorithm solves the MST problem by greedily adding the smallest-weight edge that does not form a cycle, using union-find for efficiency, and runs in O(E \log V) time for a graph with V vertices and E edges.
Prim's algorithm, alternatively, grows the tree from an arbitrary starting vertex by repeatedly adding the minimum-weight edge connecting a tree vertex to a non-tree vertex, also achieving O(E \log V) time with priority queues. Shortest path problems optimize the minimum-weight path from a source vertex to all others (or a specific target) in a weighted graph, crucial for navigation and routing. For graphs with non-negative edge weights, Dijkstra's algorithm efficiently computes single-source shortest paths by maintaining a priority queue of vertices and relaxing edges from the lowest-distance vertex, with time complexity O(E + V \log V) using Fibonacci heaps. In graphs allowing negative weights (but no negative cycles), the Bellman-Ford algorithm relaxes all edges |V|-1 times iteratively, detecting negative cycles if further relaxation is possible, and operates in O(VE) time. Maximum matching problems seek the largest set of edges without shared vertices, optimizing pairings in bipartite or general graphs, such as in job assignment or kidney exchange. In bipartite graphs, the size of the maximum matching equals the size of the minimum vertex cover, as established by König's theorem. For bipartite graphs, polynomial-time algorithms like Hopcroft-Karp achieve O(E \sqrt{V}) time. In general (non-bipartite) graphs, maximum cardinality matching remains polynomial-time solvable via Edmonds' blossom algorithm, though weighted variants add complexity.
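A compact sketch of Kruskal's algorithm with a union-find structure, run on an invented five-vertex weighted graph:

```python
def kruskal(n, edges):
    """Kruskal's MST: scan edges in increasing weight order, adding each
    edge that joins two distinct components (tracked via union-find)."""
    parent = list(range(n))

    def find(x):
        # Path halving keeps the union-find trees shallow.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:              # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, w))
            total += w
    return tree, total

# Hypothetical 5-vertex graph; edges as (weight, u, v).
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 4), (7, 3, 4)]
tree, total = kruskal(5, edges)
print(total)  # -> 11
```

The resulting tree always has V - 1 edges for a connected graph, and sorting dominates the running time, giving the O(E \log V) bound quoted above.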

Resource Allocation Problems

Resource allocation problems in combinatorial optimization involve assigning limited resources to a set of entities to optimize an objective, such as maximizing profit or minimizing cost, under constraints. These problems model scenarios where items, tasks, or sets must be selected or packed without exceeding available resources, often leading to NP-hard challenges that require efficient algorithms for practical solutions. Key examples include knapsack packing, set covering, and scheduling tasks, where the goal is to achieve near-optimal allocations despite combinatorial explosion in the number of possibilities. The 0-1 knapsack problem is a foundational problem where a set of n items, each with weight w_i > 0 and value v_i > 0, must be selected to maximize total value without exceeding a knapsack capacity W. The objective is to solve \max \sum_{i=1}^n v_i x_i subject to \sum_{i=1}^n w_i x_i \leq W and x_i \in \{0,1\} for all i, where x_i = 1 indicates item i is included. This formulation captures binary decisions on resource usage, such as selecting projects under budget limits. A dynamic programming approach solves it exactly in O(nW) time, building a table dp[j] representing the maximum value achievable with capacity j, updated via dp[j] = \max(dp[j], dp[j - w_i] + v_i) for each item i and each j \geq w_i. This method, rooted in early dynamic programming techniques, enables optimal solutions for moderate W. The bin packing problem seeks to pack n items of sizes s_i \in (0,1] into the minimum number of bins of unit capacity to minimize resource waste. Formally, it minimizes the number of bins m such that items can be assigned to bins without exceeding capacity 1 per bin, often approximated since exact solutions are NP-hard. This problem arises in logistics for container loading or memory allocation in computing. Seminal column generation methods generate feasible packings as columns in a linear program, solving the dual to find tight lower bounds. The set cover problem requires selecting a minimum number of subsets from a collection \mathcal{S} = \{S_1, \dots, S_m\} to cover a universe U with |U| = n, minimizing |\mathcal{S}'| such that \bigcup_{S \in \mathcal{S}'} S = U.
It models allocation decisions for covering demands with overlapping services, like facility location. The greedy algorithm iteratively picks the set covering the most uncovered elements, achieving an approximation ratio of \ln n + 1, where the solution size is at most (\ln n + 1) times optimal. This bound, derived from harmonic series analysis, remains tight up to constants. Job scheduling on parallel identical machines to minimize makespan, denoted P||C_{\max}, allocates n jobs with processing times p_j > 0 to m machines to minimize the completion time C_{\max} = \max_k \sum_{j \in M_k} p_j, where M_k is the set of jobs on machine k. This equates to partitioning jobs into m subsets with balanced loads, akin to multiprocessor scheduling in computing. The longest processing time (LPT) heuristic sorts jobs decreasingly and assigns each to the current least-loaded machine, yielding a (4/3 - 1/(3m))-approximation. Exact dynamic programming solves small instances but scales poorly. Variants like the multidimensional knapsack problem extend the 0-1 case to multiple constraints, maximizing \sum v_i x_i subject to \sum w_{i,d} x_i \leq W_d for dimensions d=1,\dots,D and x_i \in \{0,1\}. In finance, it models portfolio selection by choosing assets to maximize return under constraints on risk, liquidity, and sectors. Lagrangian relaxation or branch-and-bound techniques adapt the single-dimensional DP for these cases, balancing multiple resource limits.
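The LPT heuristic is short enough to sketch directly with a min-heap of machine loads. The job durations below are invented; the instance is a classic one where LPT returns makespan 11 while the optimum is 9, landing exactly on the (4/3 - 1/(3m)) bound for m = 3:

```python
import heapq

def lpt_makespan(jobs, m):
    """Longest Processing Time heuristic for P||Cmax: sort jobs in
    decreasing order, assign each to the currently least-loaded machine."""
    loads = [0] * m           # min-heap of machine loads
    heapq.heapify(loads)
    for p in sorted(jobs, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + p)
    return max(loads)

# Hypothetical instance: 7 jobs on 3 identical machines.
print(lpt_makespan([5, 5, 4, 4, 3, 3, 3], 3))  # -> 11 (optimal is 9)
```

Here 11/9 equals 4/3 - 1/(3·3) exactly, showing the worst-case guarantee is tight for this family of instances.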

    This paper uses quadratic and integer programming methods (dual ascent, branch-and-bound) to solve portfolio selection problems involving risk (variance), ...