
Knapsack problem

The knapsack problem is a classic problem in combinatorial optimization, in which one seeks to maximize the total value of a subset of items—each characterized by a weight and a value—that can be included in a container (the "knapsack") without exceeding its fixed weight capacity. Formally, for a set of n items with weights w_j and profits p_j (j = 1 to n), and capacity C, the objective is to solve \max \sum_{j=1}^n p_j x_j \quad \text{subject to} \quad \sum_{j=1}^n w_j x_j \leq C, where x_j \in \{0,1\} in the standard 0–1 variant. Systematic study of the knapsack problem began in the 1950s within operations research and computer science, and it evolved into a benchmark for algorithmic techniques due to its NP-hard nature, which implies that no known polynomial-time algorithm exists for exact solutions in the worst case unless P = NP. Despite this, pseudo-polynomial time algorithms, such as dynamic programming, solve it exactly in O(nC) time, making it tractable for moderate capacities, while approximation algorithms like greedy heuristics provide near-optimal solutions efficiently. Parallel implementations on GPUs can further accelerate solutions for large instances with up to 100,000 variables, reducing computation time by factors of up to 26. The problem has numerous variants, including the unbounded knapsack (allowing multiple copies of items), the multidimensional knapsack (multiple capacity constraints), and quadratic or multiobjective extensions, all of which remain NP-hard and extend the core challenge to more complex scenarios. Applications span diverse fields, such as cargo loading in transportation, user allocation in communication networks, planning for resource-constrained experiments, and cryptographic protocols like zero-knowledge proofs.

Introduction and Definition

Informal Description

The knapsack problem derives its name from the classic scenario of a traveler or hiker packing a limited-capacity knapsack with valuable items while adhering to weight constraints, evoking the practical challenge of selecting what to carry on a journey. The term is commonly attributed to mathematician Tobias Dantzig (1884–1956), who used it in the early 20th century to capture this intuitive packing dilemma. At its core, the problem presents a collection of items, each with an associated weight and value, alongside a knapsack of fixed capacity. The task is to choose a subset of items that maximizes the overall value without surpassing the weight limit, highlighting essential trade-offs between high-value but heavy options and combinations of lighter, lower-value alternatives. This setup mirrors real-world decisions under resource constraints, such as budgeting or inventory management. Consider a straightforward example with a knapsack capacity of 7 units and three items: item A (weight 5, value 10), item B (weight 4, value 7), and item C (weight 3, value 5). Selecting item A alone yields a value of 10 but leaves unused capacity, while combining items B and C achieves a total weight of 7 and value of 12, demonstrating how forgoing a single heavy item can enable a more rewarding mix of lighter ones; a short enumeration confirming this appears below. This formulation positions the knapsack problem as a quintessential model for constrained optimization, where the goal is to balance competing priorities to extract the greatest possible benefit from finite resources.
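For instances this small, the trade-off can be verified by brute force. The following illustrative Python snippet (using the item data from the example above) enumerates every subset:

from itertools import combinations

# items from the example above: (name, weight, value)
items = [("A", 5, 10), ("B", 4, 7), ("C", 3, 5)]
capacity = 7

best_value, best_subset = 0, ()
for r in range(len(items) + 1):
    for subset in combinations(items, r):
        weight = sum(w for _, w, _ in subset)
        value = sum(v for _, _, v in subset)
        if weight <= capacity and value > best_value:
            best_value, best_subset = value, subset

print(best_value, [name for name, _, _ in best_subset])  # prints: 12 ['B', 'C']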

Mathematical Formulation

The 0-1 knapsack problem, the standard variant, is defined as follows. Given a set of n items, where each item i (for i = 1, \dots, n) has a positive integer weight w_i > 0 and a positive integer value v_i > 0, along with a knapsack capacity W > 0 (also a positive integer), the goal is to select a subset of items that maximizes the total value without exceeding the capacity. This is expressed mathematically as the integer program: \begin{align*} \max &\quad \sum_{i=1}^n v_i x_i \\ \text{s.t.} &\quad \sum_{i=1}^n w_i x_i \leq W, \\ &\quad x_i \in \{0, 1\} \quad \forall i = 1, \dots, n, \end{align*} where x_i = 1 if item i is included in the knapsack and x_i = 0 otherwise. The unbounded knapsack problem relaxes the binary restriction on x_i, allowing multiple instances of each item. Here, the formulation modifies the decision variables to non-negative integers: \begin{align*} \max &\quad \sum_{i=1}^n v_i x_i \\ \text{s.t.} &\quad \sum_{i=1}^n w_i x_i \leq W, \\ &\quad x_i \in \mathbb{Z}_{\geq 0} \quad \forall i = 1, \dots, n, \end{align*} with the same assumptions on positive integer weights, values, and capacity. This variant models scenarios where items can be taken in unlimited quantities. In contrast, the fractional knapsack problem permits partial inclusion of items, treating them as divisible. The decision variables are now continuous in [0, 1]: \begin{align*} \max &\quad \sum_{i=1}^n v_i x_i \\ \text{s.t.} &\quad \sum_{i=1}^n w_i x_i \leq W, \\ &\quad 0 \leq x_i \leq 1 \quad \forall i = 1, \dots, n, \end{align*} again assuming positive integer parameters unless otherwise specified. This continuous relaxation is solvable via a greedy algorithm that sorts items by value density v_i / w_i, as sketched below. The decision version of the 0-1 knapsack problem, used in complexity theory, asks whether there exists a feasible selection achieving at least a target value V > 0: Does there exist x \in \{0, 1\}^n such that \sum_{i=1}^n v_i x_i \geq V and \sum_{i=1}^n w_i x_i \leq W? This binary question inherits the same parameter assumptions.
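To make the greedy rule for the fractional variant concrete, here is a minimal Python sketch; the function name and signature are illustrative, not from a specific library:

def fractional_knapsack(weights, values, W):
    # sort item indices by value density v_i / w_i, highest first
    order = sorted(range(len(weights)), key=lambda i: values[i] / weights[i], reverse=True)
    total, remaining = 0.0, W
    for i in order:
        if remaining <= 0:
            break
        take = min(weights[i], remaining)  # whole item if it fits, else a fraction
        total += values[i] * (take / weights[i])
        remaining -= take
    return total

At most one item is taken fractionally (the first one that no longer fits), which is exactly the structure exploited by the LP relaxation discussed in later sections.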

Applications

Resource Allocation and Logistics

The knapsack problem finds extensive application in resource allocation and logistics, where decision-makers must select subsets of items to maximize value under constraints like weight, volume, or budget. In operations research, these problems emerged prominently after World War II, as researchers developed techniques to address military logistics and industrial efficiency challenges, building on foundations laid by figures like George Dantzig. This historical context underscored the knapsack model's utility in modeling limited-resource scenarios, influencing postwar advancements in mathematical programming. In cargo loading for shipping and airlines, the knapsack problem models the selection of payloads to maximize revenue or value while respecting limits on weight and volume. For instance, air-cargo revenue management models treat cargo shipments as items with associated profits (e.g., freight rates) and weights, formulating the acceptance of bookings as a 0-1 knapsack to avoid overloading while filling available space. This approach ensures efficient utilization of hold space, particularly in booking systems where dynamic decisions balance spot-market demands against contract allotments. Investment portfolio selection leverages the knapsack framework to allocate a fixed budget across assets, treating securities as indivisible items with returns as values and costs (e.g., share prices) as weights. Under uncertainty, models incorporate interval-valued parameters for expected returns and budgets, using parametric linear programming to determine optimal integer share allocations while enforcing cardinality limits on the number of assets. This formulation addresses the limitations of fractional solutions in classical mean-variance optimization, prioritizing high-return portfolios without exceeding capital constraints. In manufacturing, cutting stock problems—such as slicing rolls of paper, metal, or fabric—rely on knapsack subproblems to generate efficient cutting patterns that minimize waste. Seminal work by Gilmore and Gomory formulated the one-dimensional case using column generation, where each new pattern is derived by solving a knapsack maximization to fit demanded widths within the stock length. For two-dimensional variants, sequential knapsacks first allocate lengths to strips and then widths to combine them, reducing material overuse in production lines. A representative example is truck loading, where a truck with a capacity of 730 units must select from parcels (e.g., 15 items with weights from 50 to 120 units and values based on priority) to maximize total delivery value. The optimal selection—A, B, C, D, H, I, N, O—fills the capacity exactly at 730 units for a value of 1418, illustrating how selection decisions balance load without overflow. Such instances highlight the model's role in daily logistics operations, often extending to multidimensional constraints like volume alongside weight.

Cryptography and Coding Theory

The subset sum problem, a special case of the knapsack problem where the value of each item equals its weight (i.e., v_i = w_i for all i), forms the basis for early knapsack-based cryptosystems. In these systems, the hardness of solving the subset sum problem is leveraged to ensure security, as finding a subset of weights that sums to a target value is computationally infeasible for general instances. The seminal Merkle-Hellman knapsack cryptosystem, proposed in 1978, uses a "trapdoor" mechanism to transform a hard subset sum instance into an easy one for the key holder, enabling efficient encryption and decryption while appearing intractable to outsiders. A key feature of the Merkle-Hellman scheme is the use of superincreasing sequences, where each weight satisfies w_i > \sum_{j=1}^{i-1} w_j, allowing straightforward decoding for the legitimate recipient after applying a modular transformation with a secret multiplier and modulus. For example, encryption involves computing a ciphertext as the sum of the selected public weights corresponding to message bits, and decryption reverses the modular transformation to recover the superincreasing instance, which can be solved in linear time. This approach was extended to digital signatures, where the trapdoor enables signing without revealing the private key, and to zero-knowledge proofs, such as interactive protocols demonstrating knowledge of a solution to a knapsack instance without disclosing it. In coding theory, knapsack problems arise in the decoding of certain linear error-correcting codes, where finding a low-weight error vector corresponds to solving a subset sum instance over the code's parity-check matrix. Harald Niederreiter's 1986 framework formalized knapsack-type cryptosystems using algebraic coding theory, proposing schemes where encryption hides messages in syndromes, and decoding relies on efficient knapsack solvers for structured codes. These ideas influenced code-based public-key systems, though practical implementations often shift to more robust decoding algorithms to avoid direct knapsack vulnerabilities. Despite initial promise, knapsack cryptosystems were largely abandoned in the 1980s following lattice-based attacks, such as those using the Lenstra-Lenstra-Lovász (LLL) basis reduction algorithm introduced in 1982, which efficiently recover the trapdoor from the public key by finding short vectors in low-dimensional lattices. Adi Shamir's 1982 attack and subsequent improvements, including those by Lagarias and Odlyzko, demonstrated that even variants built on superincreasing sequences succumb to these methods, rendering pure knapsack-based protocols insecure against modern computational resources. As a result, contemporary cryptography avoids reliance on knapsack hardness, favoring lattice-based primitives that resist such attacks or code-based systems with hidden structure.
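The linear-time decoding enabled by a superincreasing sequence can be sketched as follows; the example sequence at the end is made up for illustration:

def solve_superincreasing(weights, target):
    # assumes weights are superincreasing: w_i > w_1 + ... + w_{i-1}
    bits = [0] * len(weights)
    for i in reversed(range(len(weights))):
        if weights[i] <= target:
            # the bit must be 1: all remaining smaller weights together
            # sum to less than weights[i], so they cannot reach the target
            bits[i] = 1
            target -= weights[i]
    return bits if target == 0 else None  # None: no subset sums to target

# e.g., solve_superincreasing([2, 3, 7, 14, 30], 24) returns [0, 1, 1, 1, 0]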

Bioinformatics and Other Domains

In bioinformatics, the knapsack problem arises in DNA sequencing, particularly in selective approaches where the goal is to choose a subset of DNA fragments (reads or clones) to maximize coverage while adhering to limited sequencing capacity, such as budget or time constraints. This formulation treats each DNA fragment as an item with a "weight" corresponding to its sequencing cost (e.g., expense or time) and a "value" based on the genomic regions it covers, aiming to minimize redundancies and gaps in assembly. A hybrid algorithm combining metaheuristics, dynamic programming, and the corridor method has been proposed for DNA sequencing modeled as a selection problem incorporating knapsack constraints, outperforming prior algorithms on benchmark instances. The problem also appears in drug design, where selecting combinations of molecular features or partial atomic charges for candidate compounds involves balancing efficacy (value) against potential side effects or synthesis costs (weights) within regulatory or resource limits. For instance, assigning partial atomic charges in drug-like molecules can be modeled as a multiple-choice knapsack problem, allowing selection from groups of charge options per atom to optimize molecular properties like binding affinity while ensuring consistency across simulations. This approach, solved via dynamic programming and approximation techniques, has been shown to reduce computational errors in quantitative structure-activity relationship (QSAR) models, enabling faster screening of large chemical libraries. Constraint logic programming has been applied to protein structure prediction on lattice models, optimizing energy minimization under constraints like secondary structure and compactness, outperforming exhaustive search for small proteins and providing insights into misfolding diseases like Alzheimer's. Beyond bioinformatics, the knapsack problem is prominent in VLSI design for placing circuit components, where modules or cells are allocated to chip areas to maximize functionality (e.g., performance metrics) without exceeding silicon real estate or power budgets. Dynamic programming solutions to the 0-1 knapsack variant have been integrated into placement heuristics, selecting subsets of components to resolve overcrowding while preserving circuit connectivity, as seen in multilevel partitioning algorithms that achieve up to 20% better wirelength reduction in industrial benchmarks. Recent applications in energy management, post-2020, leverage the knapsack problem for optimizing charge scheduling in renewable-energy systems, selecting charging intervals or power levels to maximize savings or revenue under capacity and time-of-use pricing constraints. For example, a knapsack-based formulation integrated with genetic algorithms has been proposed for power usage scheduling in smart homes with battery energy storage systems (BESS), distributing loads to off-peak hours and balancing supply-demand fluctuations, resulting in up to 15% cost savings in simulated solar-integrated homes. Quadratic extensions, accounting for interaction costs like battery degradation rates, have also been explored but remain computationally intensive for real-time use.

Computational Complexity

Decision and Optimization Versions

The Knapsack problem is formulated in two primary variants: the decision version and the optimization version. In the decision version, given an integer n representing the number of items, each with a positive integer weight w_i and value v_i for i = 1, \dots, n, along with positive integers W (the knapsack capacity) and V (the target value), the question is whether there exists a subset S of \{1, \dots, n\} such that \sum_{i \in S} w_i \leq W and \sum_{i \in S} v_i \geq V. This yes/no formulation captures the core combinatorial challenge of selecting items without exceeding the weight limit while meeting or surpassing a value threshold. The optimization version extends this by seeking the maximum achievable value, or equivalently the subset S that maximizes \sum_{i \in S} v_i subject to \sum_{i \in S} w_i \leq W. The two versions are closely related through self-reducibility: the optimization problem can be solved using an oracle for the decision problem by performing a binary search over possible target values V in the range from 0 to the sum of all v_i, which requires O(\log(\sum v_i)) decision queries, each polynomial in the input size. This reduction demonstrates that polynomial-time solvability of the decision version implies the same for the optimization version, highlighting their equivalent computational difficulty in the complexity hierarchy. The decision version belongs to NP because a proposed subset serves as a polynomial-length certificate that can be verified in linear time O(n) by computing the total weight and value sums and checking them against W and V, respectively. Historically, Richard Karp identified the Knapsack decision problem as one of 21 NP-complete problems in his seminal 1972 paper, establishing its foundational role in computational complexity theory by showing polynomial-time reductions from other hard problems.
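The self-reduction can be stated compactly in code. In this sketch, decide is a stand-in for any oracle answering the decision question for a given target value:

def max_value_via_decision(decide, values):
    # binary search over target values, using O(log sum(v_i)) oracle calls;
    # decide(V) returns True iff some feasible subset has value >= V
    lo, hi = 0, sum(values)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if decide(mid):
            lo = mid   # a feasible subset reaches value mid; search higher
        else:
            hi = mid - 1
    return lo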

NP-Hardness Proofs

The decision version of the 0-1 Knapsack problem—determining whether there exists a subset of items with total weight at most W and total value at least V—is NP-complete. This follows from a polynomial-time many-one reduction from the Partition problem, which is itself NP-complete. Given a Partition instance consisting of positive integers a_1, \dots, a_n with even total sum S = \sum a_i, construct a Knapsack instance by setting the weight and value of each item i to w_i = v_i = a_i, the capacity to W = S/2, and the target value to V = S/2. This reduction runs in linear time relative to the input size. A "yes" instance of Partition corresponds to a subset summing exactly to S/2, which satisfies the Knapsack constraints with equality; conversely, because weights equal values, any feasible Knapsack solution must sum to exactly S/2 and thus yields a valid Partition. The unbounded Knapsack problem, allowing multiple copies of each item, is likewise NP-complete in its decision version. Strong NP-hardness, by contrast, arises in generalizations such as the multiple and multidimensional knapsack problems, via pseudo-polynomial reductions from the 3-Partition problem, which is strongly NP-complete: each element of the 3-Partition instance (a set of 3m integers bounded by a polynomial in m) is mapped to an item with weight and value equal to the element's size, and each of the m knapsacks receives capacity and target value equal to the per-bin sum, so feasibility requires exactly three items per bin without overflow, mirroring the partition constraint. Because the numbers remain polynomially bounded, this establishes strong NP-hardness and rules out pseudo-polynomial solvability for those generalizations, even under unary encoding. These hardness results imply that no polynomial-time algorithm exists for solving the (decision or optimization) Knapsack problem unless P = NP. However, the existence of a pseudo-polynomial-time dynamic programming algorithm running in O(nW) time provides a practical exact solution when the capacity W is not too large, highlighting the weakly NP-hard nature of the 0-1 and unbounded variants. In parameterized complexity theory, the Knapsack problem is fixed-parameter tractable when parameterized by the number of items n, solvable in O(2^n n) time via exhaustive search or dynamic programming over subsets. It becomes W[1]-hard when parameterized by the number of dimensions in multidimensional generalizations, though the single-dimensional case remains tractable when parameterized by the capacity, thanks to the pseudo-polynomial dynamic program. Recent extensions confirm NP-hardness in generalized settings, such as the Knapsack Problem with Group Fairness, where items belong to fairness groups and solutions must satisfy proportionality constraints across groups; this variant remains NP-hard even with two groups.
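The Partition reduction is short enough to state as code; given positive integers with even total sum, it emits the equivalent Knapsack decision instance:

def partition_to_knapsack(a):
    # maps a Partition instance to an equivalent 0-1 Knapsack decision instance
    S = sum(a)
    assert S % 2 == 0, "Partition instances are assumed to have even total sum"
    weights = list(a)  # w_i = a_i
    values = list(a)   # v_i = a_i
    W = V = S // 2     # capacity and target value are both S/2
    return weights, values, W, V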

Unit-Cost and Real-Number Models

In the unit-cost (RAM) model, arithmetic operations and comparisons on integers fitting within machine words are assumed to take constant time, regardless of the word size needed to represent the capacity W. Under this model, the dynamic programming algorithm for the 0-1 knapsack problem computes the optimal solution in O(nW) time, where n is the number of items, by filling a table of size O(nW) with constant-time updates per entry. This running time is pseudo-polynomial, as it is polynomial in the magnitude of W but exponential in the input size \Theta(n \log W + n \log V), where V is the total value bound, since hard instances can have W exponentially large in n. The unit-cost model highlights the dependence of the knapsack problem's solvability on the numeric scale of the inputs, contrasting with the standard multi-tape Turing machine model, where bit operations dominate and the same dynamic programming approach remains only pseudo-polynomial relative to the full input length. In the log-cost model, where the time for arithmetic is proportional to the bit lengths of the operands (e.g., O((\log W)^2) per multiplication), the dynamic programming runtime becomes super-polynomial for instances with large W, reinforcing the hardness established via reductions in the Turing machine model. When weights and values are real numbers, the real-number RAM model assumes constant-time operations on reals, including comparisons and arithmetic. In this setting, the fractional knapsack problem—which permits taking fractional portions of items—is solvable exactly via a greedy algorithm that sorts items by decreasing value-to-weight ratio and fills the knapsack sequentially, achieving O(n \log n) time dominated by sorting. (The greedy approach for the fractional variant is only referenced briefly here; full details appear under approximation methods.) The availability of a fully polynomial-time approximation scheme (FPTAS) for the 0-1 knapsack in the standard model enables (1 - \epsilon)-approximations in O(n^2 / \epsilon) time, independent of W and V, by scaling values and applying dynamic programming on bounded targets. Although theoretically insightful for analyzing scale dependence, the unit-cost model is unrealistic for practical large-W instances, as real machines incur costs scaling with bit lengths for big integers, aligning more closely with log-cost assumptions; nonetheless, it underscores why pseudo-polynomial algorithms suffice for many applications where capacities remain manageable.

Exact Solving Methods

Dynamic Programming Approaches

The dynamic programming solution for the 0-1 knapsack problem computes the maximum value achievable by considering subsets of items under a weight constraint. It employs a two-dimensional table where dp[i][w] represents the maximum value using the first i items with a knapsack of capacity w. The recurrence is defined as: dp[i][w] = \begin{cases} \max(dp[i-1][w], \, dp[i-1][w - w_i] + v_i) & \text{if } w \geq w_i \\ dp[i-1][w] & \text{otherwise} \end{cases} with base cases dp[0][w] = 0 for all w and dp[i][0] = 0 for all i, where w_i and v_i are the weight and value of item i. This approach fills the table iteratively, achieving a time complexity of O(nW) and a space complexity of O(nW), where n is the number of items and W is the knapsack capacity. Space can be optimized to O(W) by using a one-dimensional array, updating values in reverse order from W down to each item's weight to avoid overwriting dependencies. The following Python implementation illustrates the optimized 1D version for the 0-1 knapsack:
def zero_one_knapsack(weights, values, W):
    # dp[w] holds the best value achievable with capacity w so far
    dp = [0] * (W + 1)
    for wi, vi in zip(weights, values):
        # iterate capacities downward so each item is used at most once
        for w in range(W, wi - 1, -1):
            dp[w] = max(dp[w], vi + dp[w - wi])
    return dp[W]
To demonstrate, consider an example with n=3 items and W=5: item 1 (weight 2, value 3), item 2 (weight 3, value 4), item 3 (weight 4, value 5). The 1D DP array evolves as follows:
  • Initialize: dp = [0, 0, 0, 0, 0, 0]
  • After item 1: dp = [0, 0, 3, 3, 3, 3] (updates for w ≥ 2)
  • After item 2: dp = [0, 0, 3, 4, 4, 7] (updates for w ≥ 3, e.g., dp[3] = max(dp[3], 4 + dp[0]) = 4; dp[4] = max(dp[4], 4 + dp[1]) = 4; dp[5] = max(dp[5], 4 + dp[2]) = 7)
  • After item 3: dp = [0, 0, 3, 4, 5, 7] (updates for w ≥ 4, e.g., dp[4] = max(dp[4], 5 + dp[0]) = 5; dp[5] = max(dp[5], 5 + dp[1]) = 7)
The final maximum value is 7, achieved by selecting items 1 and 2. For the unbounded knapsack problem, where multiple instances of each item are allowed, the dynamic programming formulation modifies the recurrence to permit repeated use. A 1D array dp stores the maximum value for each capacity w, updated by considering every item at every capacity. The relation is dp[w] = \max(dp[w], v_i + dp[w - w_i]) for each item i with w \geq w_i. This yields the same O(nW) time and O(W) space complexities. A Python implementation of the unbounded variant:
def unbounded_knapsack(weights, values, W):
    # iterating capacities upward lets each item be reused any number of times
    dp = [0] * (W + 1)
    for w in range(1, W + 1):
        for wi, vi in zip(weights, values):
            if wi <= w:
                dp[w] = max(dp[w], vi + dp[w - wi])
    return dp[W]
Iterating capacities in ascending order ensures that multiples of an item can accumulate without restriction. These dynamic programming methods provide exact solutions but are pseudo-polynomial, with running time polynomial in the magnitude of W rather than its bit length, rendering them impractical for instances where W exceeds typical time and memory budgets, such as W > 10^6.

Meet-in-the-Middle Algorithm

The meet-in-the-middle algorithm provides an exact solution to the 0-1 knapsack problem by employing a divide-and-conquer strategy that splits the items into two halves, making it particularly suitable for instances where the number of items n is up to about 40 and the knapsack capacity W is large. Introduced by Horowitz and Sahni in 1974, this approach enumerates all subset sums for each half separately and then combines them efficiently to identify the optimal solution without exploring the full 2^n search space. The algorithm begins by partitioning the n items into two subsets A and B, each of size approximately n/2. For subset A, all 2^{n/2} possible subsets are generated, recording pairs (w_A, v_A) where w_A is the total weight and v_A is the total value of the subset. The same process is applied to subset B, yielding pairs (w_B, v_B). One of these lists, say for A, is sorted by weight w_A in ascending order. To find the maximum value, the algorithm iterates over each subset in B. For a given (w_B, v_B), it performs a binary search on the sorted list for A to locate the largest v_A such that w_A \leq W - w_B. The value v_A + v_B is then computed, and the global maximum across all such pairs is retained as the solution. This ensures all feasible combinations are considered while avoiding redundant computations. The time complexity is O(2^{n/2} n), arising from the O(2^{n/2}) subset generations per half, the O(2^{n/2} \log 2^{n/2}) sorting step, and the O(2^{n/2} \log 2^{n/2}) binary searches. The space complexity is O(2^{n/2}), required to store the pairs for both halves. An optimization replaces sorting and binary search with a hash table that maps each possible w_A to the maximum v_A, enabling average O(1) lookups and tightening the running time in practice. For illustration, consider n = 20 items with a large capacity, say W = 10^6. The dynamic programming baseline requires O(20 \times 10^6) = 2 \times 10^7 operations, which may be feasible but grows linearly with W. In contrast, the meet-in-the-middle approach generates 2^{10} = 1024 subsets per half; sorting one half and performing 1024 binary searches each take O(1024 \log 1024) \approx 10^4 operations, yielding roughly 2 \times 10^4 operations in total—significantly faster when W \gg 2^{n/2}. This demonstrates its advantage over dynamic programming for large W. Subsequent extensions have refined the technique for even harder instances. For example, Howgrave-Graham and Joux (2010) developed an advanced variant using the representation technique and optimized storage of subset sums, achieving a running time of \tilde{O}(2^{0.337n}) for knapsacks of density near 1, with space \tilde{O}(2^{0.256n}). These post-2010 improvements build on the core idea but incorporate more sophisticated data structures for denser problem instances. The meet-in-the-middle algorithm excels when n \leq 40, as 2^{20} \approx 10^6 pairs fit in modern memory and computation budgets, and it outperforms dynamic programming precisely when W is too large for the O(nW) pseudo-polynomial bound to be practical.
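A compact Python sketch of the Horowitz-Sahni scheme follows, using sorting plus prefix maxima on one half and binary search from the other; names are illustrative:

from bisect import bisect_right

def meet_in_the_middle(weights, values, W):
    def subset_sums(items):
        # enumerate (weight, value) for all subsets by doubling the list
        sums = [(0, 0)]
        for w, v in items:
            sums += [(sw + w, sv + v) for sw, sv in sums]
        return sums

    n = len(weights)
    items = list(zip(weights, values))
    half_a = subset_sums(items[:n // 2])
    half_b = subset_sums(items[n // 2:])

    # sort half A by weight and record prefix maxima of value, so best[i]
    # is the highest value among A-subsets no heavier than the i-th entry
    half_a.sort()
    a_weights = [w for w, _ in half_a]
    best, running = [], 0
    for _, v in half_a:
        running = max(running, v)
        best.append(running)

    answer = 0
    for w_b, v_b in half_b:
        if w_b > W:
            continue
        idx = bisect_right(a_weights, W - w_b) - 1  # heaviest A-subset that still fits
        answer = max(answer, v_b + best[idx])
    return answer

Since the empty subset (0, 0) always appears in half A, the binary search returns a valid index whenever w_B \leq W.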

Branch-and-Bound and Dominance Techniques

The branch-and-bound (B&B) algorithm represents a foundational exact method for solving the 0-1 knapsack problem, employing a systematic tree search to explore decision variables while pruning unpromising branches to ensure computational efficiency. Introduced by Kolesar in 1967, the approach constructs a search tree where each node corresponds to a partial assignment of items, typically using depth-first traversal by sequentially deciding whether to include (x_i = 1) or exclude (x_i = 0) the next item i. At each node, an upper bound on the maximum achievable value is computed for the remaining subproblem; if this bound is less than or equal to the value of the best feasible solution found so far, the branch is pruned, avoiding exhaustive enumeration of infeasible or suboptimal paths. Upper bounds in B&B for the 0-1 knapsack are often derived from linear programming (LP) relaxations, where binary constraints are replaced by continuous variables in [0,1], or from greedy heuristics that order items by decreasing value-to-weight ratio (v_i / w_i) and fill the remaining capacity fractionally. Martello and Toth (1977) proposed an enhanced upper bound that dominates the standard LP relaxation by incorporating surrogate relaxations and reduction techniques, enabling tighter pruning and faster convergence in the search tree. These bounds are evaluated at each node to guide the search, with depth-first ordering typically prioritizing the exclusion branch (x_i = 0) to quickly generate feasible solutions and update the incumbent value. Dominance techniques further refine the B&B process by eliminating redundant partial solutions or states within the search. Representing a partial solution by a triple (i, w, v), recording the number of items decided, the total weight used, and the value accumulated, one state is dominated by another with i' \geq i, w' \leq w, and v' \geq v; the dominated state cannot lead to a better overall solution and can be discarded. This criterion, as applied in early B&B implementations, reduces the effective search space by avoiding symmetric or inferior branches. Implementations often integrate dynamic programming (DP) elements to compute bounds more efficiently, such as solving a relaxed subproblem over the remaining items to obtain precise upper estimates, while applying the same dominance rule to prune states in the auxiliary DP table. This hybrid approach minimizes the computational overhead of bound calculations, turning B&B into a practical method for medium-sized instances. Consider a small 0-1 knapsack instance with W = 10 and three items: item 1 (w_1 = 6, v_1 = 30), item 2 (w_2 = 5, v_2 = 20), item 3 (w_3 = 3, v_3 = 15). The tree starts at the root (no items decided, capacity 10, value 0), whose LP bound is 30 + 15 + (1/5) \cdot 20 = 49 (items 1 and 3 fully, item 2 fractionally). Branching on item 1 yields an include branch (node A: capacity 4, value 30, bound 49) and an exclude branch (node B: capacity 10, value 0, bound 20 + 15 = 35, since items 2 and 3 fit together). From node A, item 2 cannot be included (its weight 5 exceeds the remaining capacity 4), so the search proceeds to item 3; including it produces a feasible solution of value 45 (weight 9), which becomes the incumbent. Node B's bound of 35 now falls below the incumbent 45, so its entire subtree is pruned, and the optimum of 45 (items 1 and 3) is verified after exploring only a fraction of the 2^3 = 8 possible leaves.
Empirically, B&B with dominance achieves significant speedups on hard instances, such as strongly correlated problems where values and weights are closely related; Kolesar's implementation solved 50-item instances in under 0.1 minutes on 1960s hardware, generating a median of 300-500 nodes. These techniques form the core of commercial integer programming solvers, which embed knapsack-style B&B for cutting-plane generation and branching in mixed-integer programs, handling instances with thousands of variables in practice.
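A minimal depth-first B&B with the fractional (LP-relaxation) bound can be sketched as follows; this is an illustrative skeleton, not a reconstruction of any particular published implementation:

def branch_and_bound_knapsack(weights, values, W):
    # sort by value density, as the fractional bound assumes
    order = sorted(range(len(weights)), key=lambda i: values[i] / weights[i], reverse=True)
    ws = [weights[i] for i in order]
    vs = [values[i] for i in order]
    n, best = len(ws), 0

    def bound(i, cap, val):
        # LP-relaxation bound: fill greedily, splitting the first overflowing item
        for j in range(i, n):
            if ws[j] <= cap:
                cap -= ws[j]
                val += vs[j]
            else:
                return val + vs[j] * cap / ws[j]
        return val

    def dfs(i, cap, val):
        nonlocal best
        best = max(best, val)                     # record the incumbent
        if i == n or bound(i, cap, val) <= best:
            return                                # prune: bound cannot beat incumbent
        if ws[i] <= cap:
            dfs(i + 1, cap - ws[i], val + vs[i])  # include item i
        dfs(i + 1, cap, val)                      # exclude item i

    dfs(0, W, 0)
    return best

On the three-item example above, the fractional bound prunes the exclude-item-1 subtree exactly as described.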

Approximation and Heuristic Methods

Greedy Algorithms

Greedy algorithms for the knapsack problem provide fast heuristic solutions by prioritizing items based on their value density, defined as the ratio v_i / w_i for each item i. These methods are particularly effective for the fractional knapsack problem, where items can be divided, and offer reasonable approximations for the more challenging 0-1 knapsack problem, where items must be taken entirely or not at all. In the fractional knapsack problem, the greedy algorithm sorts the items in non-increasing order of value density and greedily adds as much of each item as possible until the knapsack capacity is reached. This approach yields the optimal solution because the optimal selection corresponds to filling the knapsack with the highest-density items first, maximizing total value without exceeding the weight limit. The time complexity is O(n \log n), dominated by the sorting step. For the 0-1 knapsack problem, the greedy algorithm follows the same sorting by value density but only includes an item if its full weight fits within the remaining capacity. To enhance performance, the algorithm returns the maximum of this greedy selection and the value of the single highest-value item that fits. This yields a \frac{1}{2}-approximation, meaning the solution value is at least half the optimal. The proof relies on the linear programming (LP) relaxation of the 0-1 knapsack: the LP optimum upper-bounds the integer optimum and equals the greedy value plus a fraction of the first item that does not fit; that fractional contribution is at most the value of the best single item, so the larger of the two quantities is at least half the LP optimum and hence at least half the true optimum. The \frac{1}{2}-approximation ratio is tight, as demonstrated by instances where the solution achieves barely half the optimal value. Consider a knapsack of capacity 2w and items: (value w + 4\epsilon, weight w + 2\epsilon), (value w + 2\epsilon, weight w + \epsilon), (value w - \epsilon, weight w - \epsilon), for small \epsilon > 0. All densities are close to 1, but the greedy algorithm selects the first item (value w + 4\epsilon) and cannot add more, while the optimum selects the second and third items (total value 2w + \epsilon). As \epsilon \to 0, the ratio approaches \frac{1}{2}. Density-based variants extend the basic greedy rule by incorporating dynamic programming for subsets of small-weight items to refine selections. For instance, items with weights below a threshold can be solved exactly via dynamic programming, while larger items are handled greedily by density; this hybrid improves practical performance on instances with varied item sizes, though it increases the running time beyond O(n \log n). Despite the worst-case \frac{1}{2}-approximation for the 0-1 knapsack, greedy algorithms perform better on average, often achieving values much closer to optimal under typical distributions of item values and weights. Average-case analysis shows that the expected ratio of the decreasing-density greedy variant exceeds \frac{1}{2} significantly. For stronger guarantees with tunable error, fully polynomial-time approximation schemes build on these ideas but require more sophisticated scaling.
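A sketch of the greedy \frac{1}{2}-approximation for the 0-1 variant, returning the better of the density-greedy selection and the best single fitting item:

def greedy_half_approx(weights, values, W):
    order = sorted(range(len(weights)), key=lambda i: values[i] / weights[i], reverse=True)
    total, remaining = 0, W
    for i in order:
        if weights[i] <= remaining:  # take the whole item or skip it
            total += values[i]
            remaining -= weights[i]
    # compare against the single most valuable item that fits on its own
    best_single = max((values[i] for i in range(len(weights)) if weights[i] <= W), default=0)
    return max(total, best_single)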

Fully Polynomial-Time Approximation Schemes

The fully polynomial-time approximation scheme (FPTAS) for the 0-1 knapsack problem constructs a (1 - \epsilon)-approximate solution for any \epsilon > 0 by scaling and rounding the item values, followed by exact dynamic programming on the reduced instance. Specifically, let OPT denote the optimal solution value. Define \delta = \epsilon \cdot OPT / n and set the scaled value for each item i as v_i' = \lfloor v_i / \delta \rfloor. This rounding bounds the total scaled value by a polynomial in n and 1/\epsilon, so the dynamic programming table over achievable scaled values has polynomial size. The exact solution on the scaled instance is computed using dynamic programming in time polynomial in n and 1/\epsilon, typically O(n^2 / \epsilon) for the basic implementation after scaling. The approximation guarantee holds because the optimal set S^* in the original instance satisfies \sum_{i \in S^*} \delta v_i' \geq \sum_{i \in S^*} (v_i - \delta) \geq OPT - n\delta = (1 - \epsilon) OPT. Let S' be the optimal set for the scaled instance; then \sum_{i \in S'} v_i \geq \delta \sum_{i \in S'} v_i' \geq \delta \sum_{i \in S^*} v_i' \geq (1 - \epsilon) OPT. Since OPT is unknown in advance, a lower bound L \leq OPT (for example, the largest value of a single item that fits, which satisfies OPT \leq nL) is used in its place, yielding \delta = \epsilon L / n; the total rounding error then remains at most n\delta = \epsilon L \leq \epsilon \cdot OPT, so the relative guarantee is preserved. The proof sketch thus reduces to bounding the total rounding error across at most n items by n\delta \leq \epsilon \cdot OPT, ensuring the scaled optimum captures at least (1 - \epsilon) of the original OPT. This FPTAS extends naturally to the unbounded knapsack problem, where multiple copies of items are allowed, using similar scaling and dynamic programming, with running times as low as O(n + 1/\epsilon^3) in seminal improvements. For the multiple knapsack problem, involving several knapsacks with individual capacities, related techniques lead to polynomial-time approximation schemes (PTAS) achieving (1 - \epsilon)-optimality, though no FPTAS exists unless P = NP. In practice, values such as \epsilon = 0.01 are commonly used to balance accuracy and computation time, often outperforming exact methods for large n because the running time does not depend on the magnitudes of W or the item values.
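A sketch of the scaling FPTAS in Python, using the largest fitting item value L as the lower bound on OPT discussed above; for brevity it returns a certified (1 - \epsilon)-approximate value rather than the item set:

def knapsack_fptas(weights, values, W, eps):
    items = [(w, v) for w, v in zip(weights, values) if w <= W]
    if not items:
        return 0.0
    n = len(items)
    L = max(v for _, v in items)       # L <= OPT <= n * L
    delta = eps * L / n
    scaled = [int(v // delta) for _, v in items]
    total = sum(scaled)
    INF = float("inf")
    # min_weight[s] = lightest way to reach scaled value exactly s
    min_weight = [0] + [INF] * total
    for (w, _), sv in zip(items, scaled):
        for s in range(total, sv - 1, -1):
            if min_weight[s - sv] + w < min_weight[s]:
                min_weight[s] = min_weight[s - sv] + w
    best = max(s for s in range(total + 1) if min_weight[s] <= W)
    return delta * best                # >= (1 - eps) * OPT by the argument above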

Quantum and Machine Learning-Based Approximations

The Quantum Approximate Optimization Algorithm (QAOA) has been adapted for approximate solutions to the 0-1 knapsack problem by encoding the objective as a variational quantum circuit, often reducing it to related combinatorial problems like max-cut for constraint handling. In one approach using a quadratic penalty term to enforce capacity constraints, QAOA with circuit depths p = 1 to 5 yields approximation ratios improving from approximately 0.3 to 0.8 on benchmark instances, demonstrating progressive enhancement in solution quality as depth increases. These advances leverage hybrid quantum-classical optimization to navigate the problem's solution space more effectively than shallow circuits alone. Recent quantum algorithms further exploit Grover's search for speedups in 0-1 knapsack approximations, applying amplitude amplification to sub-problems that generate feasible subsets before refining the remainder. A 2025 method, the Quantum Tree Generator (QTG), uses successive Grover iterations to identify high-value solutions with a query complexity of O(\sqrt{2^n}) for practical instances, achieving success probabilities of at least 40% on hard cases (gap ≥ 7) and 80% on easier ones (gap ≤ 6) after around 700 + n²/16 iterations. This provides a quadratic speedup over classical unstructured search while requiring only O(n) qubits, outperforming classical solvers like COMBO on instances with ≥ 100 variables in terms of space efficiency. Reinforcement learning (RL) offers machine learning-based approximations by framing the knapsack as a sequential decision process, where an agent learns policies to select items under capacity limits. A 2025 RL framework defines states as (i, w)—the current item index i and remaining capacity w—with binary actions to include or exclude each item, using a deep Q-learning variant (Dueling Deep Q-Network) trained on diverse instances for generalization. This approach produces near-optimal policies, outperforming heuristics by up to 5% on n = 100 instances while being over 9000 times faster than dynamic programming. Fairness-aware approximations extend these methods to incorporate group fairness constraints, ensuring equitable allocation across item classes. A 2025 integer linear programming (ILP) formulation for the knapsack problem with group fairness bounds resource consumption per class, solvable via dynamic programming reformulation as a multiple-choice knapsack with pseudopolynomial complexity O(n c h + \ell c^2), where h is the resource bound range and \ell the number of classes. This ILP-based approach achieves optimality on 98.66% of tested instances, including hard cases, while enforcing fairness without excessive solution degradation compared to unconstrained baselines.

Variations

Bounded and Unbounded Knapsacks

The bounded knapsack problem extends the standard 0-1 knapsack by allowing multiple copies of each item type, up to a specified bound b_i for item i, where the decision variable x_i satisfies 0 \leq x_i \leq b_i and is integer. This formulation generalizes the 0-1 case, which corresponds to b_i = 1 for all i, enabling modeling of scenarios with limited availability of identical items. In contrast, the unbounded knapsack problem removes any upper limit on the number of copies, setting b_i = \infty for all i, so x_i \geq 0 is an unbounded integer. This variant is suitable for situations involving unlimited supplies of item types, such as selecting from an effectively infinite stock of materials. Dynamic programming solves the unbounded knapsack in pseudo-polynomial time O(nW), where n is the number of item types and W is the knapsack capacity, by iterating capacities in ascending order so that items can be reused without tracking counts. For the bounded case, the approach extends the 0-1 dynamic programming by either treating each possible copy as a distinct item or using a modified recurrence that loops up to b_i times per item, yielding time complexity O(W \sum_i b_i); binary splitting of the multiplicities reduces this to O(nW \log \max_i b_i), as sketched below. Applications of the bounded knapsack include inventory management where stock levels limit selections, such as a retailer choosing limited quantities of products to maximize profit under weight constraints for shipping. The unbounded variant applies to problems like cutting stock optimization, where material is available in unlimited lengths and pieces are cut to maximize utilization without quantity restrictions. Both problems remain NP-hard, the bounded case as a direct generalization of the 0-1 case and the unbounded case via related reductions, but only weakly so, since the pseudo-polynomial dynamic programs above solve them exactly. The unbounded knapsack admits a particularly simple fully polynomial-time approximation scheme (FPTAS) due to its structure allowing greedy-like scaling in the approximation, whereas bounded variants require more involved adjustments to achieve similar guarantees.
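The binary-splitting trick mentioned above can be sketched as follows: each multiplicity b_i is decomposed into powers of two, reducing the bounded problem to a 0-1 instance with O(\sum_i \log b_i) items:

def bounded_knapsack(weights, values, bounds, W):
    # split each multiplicity into chunks of size 1, 2, 4, ... so every
    # count from 0 to b_i is representable by a subset of the chunks
    pieces = []
    for w, v, b in zip(weights, values, bounds):
        k = 1
        while b > 0:
            take = min(k, b)
            pieces.append((w * take, v * take))
            b -= take
            k *= 2
    dp = [0] * (W + 1)
    for w, v in pieces:               # ordinary 0-1 DP over the chunks
        for c in range(W, w - 1, -1):
            dp[c] = max(dp[c], v + dp[c - w])
    return dp[W]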

Multiple and Multidimensional Knapsacks

The multiple knapsack problem extends the classical 0-1 knapsack by considering m knapsacks, each with its own capacity W_j for j = 1, \dots, m. Given n items, each with profit p_i > 0 and weight w_i > 0, the goal is to assign each item to at most one knapsack such that the total weight in knapsack j does not exceed W_j, while maximizing the total assigned profit. This problem is strongly NP-complete, includes the single knapsack problem as a special case when m = 1, and lacks a fully polynomial-time approximation scheme (FPTAS) even for m = 2. Approximation algorithms achieve strong performance through techniques like item grouping and rounding, which partition items into a logarithmic number of classes to enable a polynomial-time approximation scheme (PTAS) guaranteeing a (1 - \epsilon)-approximation for any \epsilon > 0. The multidimensional knapsack problem generalizes further by imposing constraints across d > 1 dimensions, such as weight, volume, and cost. Each item i has a profit p_i > 0 and a weight vector \mathbf{w}_i = (w_{i,1}, \dots, w_{i,d}) with w_{i,k} > 0 for dimension k = 1, \dots, d, while the knapsack has capacity vector \mathbf{W} = (W_1, \dots, W_d) with W_k > 0. The objective is to select a subset of items maximizing total profit subject to \sum_i x_i w_{i,k} \leq W_k for each k = 1, \dots, d, where x_i \in \{0,1\}. For fixed d, dynamic programming yields a pseudo-polynomial-time algorithm, and scaling techniques extend single-dimensional ideas to approximation schemes whose running time is polynomial in n for fixed d and \epsilon, though with exponential dependence on d. However, when d is part of the input, the problem is strongly NP-hard with no pseudo-polynomial algorithm unless P = NP. The multiple knapsack problem relates closely to bin packing, which can be viewed as its minimization dual: while multiple knapsack maximizes profit (or packed weight) using a fixed number of bins with given capacities, bin packing minimizes the number of uniform-capacity bins needed to pack all items without exceeding capacities per bin. A practical example of the two-dimensional case (d = 2) arises in cargo loading, where items must respect both weight limits (W_1) and volume limits (W_2) to optimize shipment value within a container, as sketched below.
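For fixed d, the pseudo-polynomial dynamic program simply carries one capacity index per dimension. A d = 2 sketch (weight plus volume), with illustrative names:

def knapsack_2d(weights, volumes, values, W, V):
    # dp[a][b] = best value using weight budget a and volume budget b
    dp = [[0] * (V + 1) for _ in range(W + 1)]
    for w, u, p in zip(weights, volumes, values):
        for a in range(W, w - 1, -1):        # descending: each item used once
            for b in range(V, u - 1, -1):
                dp[a][b] = max(dp[a][b], p + dp[a - w][b - u])
    return dp[W][V]

The O(nWV) time and O(WV) space scale multiplicatively with each added dimension, which is why this approach is confined to small fixed d.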

Quadratic, Geometric, and Online Variants

The quadratic knapsack problem (QKP) extends the classical 0-1 knapsack by incorporating pairwise interactions between items in the objective function, modeling scenarios such as facility location, where selecting one item affects the value of others. Formally, given items with weights w_i > 0, linear profits p_i \in \mathbb{R}, and quadratic coefficients q_{ij} \in \mathbb{R} for i, j = 1, \dots, n, the goal is to solve \max \sum_{i=1}^n p_i x_i + \sum_{i=1}^n \sum_{j=1}^n q_{ij} x_i x_j \quad \text{subject to} \quad \sum_{i=1}^n w_i x_i \leq c, \quad x_i \in \{0,1\}, where c > 0 is the knapsack capacity and x_i indicates selection of item i. This formulation was introduced by Gallo et al. in 1980, who provided initial exact and heuristic methods. The QKP is strongly NP-hard, as it generalizes the maximum clique problem via a reduction where weights are set to 1 and quadratic terms encode edges. Exact solution approaches rely on branch-and-bound techniques enhanced with cutting planes and Lagrangian relaxations; for instance, a cut-and-branch framework generates valid inequalities from conic relaxations and solves subproblems via dynamic programming, achieving optimality on instances up to 100 items. Heuristics, such as reactive tabu search, further support practical solving for larger instances by iteratively improving feasible solutions. In the geometric knapsack problem, items are represented as geometric shapes (typically axis-aligned rectangles or hyperrectangles in d dimensions) with associated values, and the knapsack is a fixed container (e.g., a unit square in 2D); the objective is to select and non-overlappingly pack a subset of items to maximize total value, without rotations unless specified. This variant arises in applications like ad placement or 3D printing, where spatial constraints prevent overlap, distinguishing it from scalar-weight models. The problem is NP-hard even in 2D; approximation algorithms often use shelf packing, in which items are grouped into horizontal "shelves" of equal height, packed left-to-right within each shelf, with shelves stacked vertically, yielding constant-factor approximations for arbitrary rectangles. Recent advances include near-linear time algorithms for d-dimensional cases, achieving (1 + \epsilon)-approximations for hypercubes via dynamic programming over rounded sizes. The online knapsack problem requires irrevocable decisions on items arriving sequentially, without knowledge of future arrivals, extending the classical problem to streaming settings like resource allocation. In the fractional variant, partial acceptance is allowed, and a greedy policy—accepting the maximum feasible fraction of each arriving item—has been reported to achieve a competitive ratio of 1/2, meaning the algorithm's value is at least half the offline optimum in the worst case. For the 0-1 integer version with arbitrary value-to-weight ratios, no constant competitive ratio exists, as adversaries can force arbitrarily poor performance by presenting small high-value items after filling with low-value ones; constant ratios require assumptions like bounded densities. Recent developments incorporate machine learning predictions to mitigate online limitations; for example, in the online fractional knapsack with input predictions (e.g., future item densities), algorithms achieve consistency (near-optimal under accurate predictions) and robustness (bounded degradation under errors), with a 2024 scheme providing O(1)-competitive ratios when predictions err by at most \epsilon.
Such prediction-augmented methods, analyzed under the "predictions with advice" framework, bridge online and offline performance for unit-profit or stochastic arrivals.

Knapsacks with Additional Constraints

The knapsack problem with additional constraints extends the classical formulation by incorporating relational, fairness, or robustness requirements that reflect real-world complexities, such as incompatibilities between items or equitable allocation across groups. These variants maintain the core objective of maximizing value under capacity limits but introduce interdependencies that increase computational difficulty, often requiring specialized exact or approximate algorithms. Common applications arise in scenarios where items cannot be selected independently, such as project selection with mutual exclusions or fair division of limited resources among stakeholders. In the knapsack problem with conflicts, items are represented as nodes in an undirected conflict graph, where edges denote incompatibilities preventing simultaneous selection, effectively combining the maximum independent set problem with knapsack optimization. This variant, known as the knapsack problem with conflict graph (KPCG), ensures that no two adjacent nodes are chosen, preserving the knapsack capacity constraint while maximizing total value. The problem is NP-hard, as even dense conflict graphs can lead to exponentially many feasible independent sets, complicating enumeration. Recent advancements include branch-and-bound algorithms that exploit graph structure for bounding and pruning, achieving exact solutions for instances with up to 100 items and moderate conflict densities. For hard instances with high conflict density (e.g., 0.9), a 2025 MILP approach using the CP-SAT solver solves benchmark instances (up to 1000 items, capacity 2000) in 1.3 seconds on average, achieving 100% optimality and outperforming prior methods in runtime. Fairness constraints in the knapsack problem address group-based equity, such as ensuring proportional value allocation per demographic or category to avoid bias in resource distribution. In the group fairness knapsack problem, items are partitioned into groups, with constraints requiring that the total value or weight from each group meets lower and upper bounds, often aiming for allocations in which each group receives value roughly proportional to its size. This formulation is NP-hard, but 2025 integer linear programming (ILP) methods provide the first exact branch-and-cut frameworks, incorporating valid inequalities based on group cardinalities to solve instances with 500 items and 10 groups within 1,000 seconds using commercial solvers like Gurobi. These ILP approaches dominate for exact solutions by tightening bounds through lifted cover inequalities tailored to fairness parameters. Robust and stochastic variants handle uncertainty in item weights or values, using scenario-based optimization to ensure solutions perform well across possible realizations. In the robust knapsack problem, weights belong to intervals, and the goal is to maximize the worst-case value over all feasible scenarios within a budgeted set, often modeled via Bertsimas-Sim-style Γ-uncertainty. Exact solutions employ mixed-integer programming reformulations, solvable in pseudo-polynomial time for budgeted uncertainty but NP-hard for general polyhedral uncertainty sets. Stochastic extensions generate scenarios via simulation to approximate expected-value maximization, with scenario-based ILPs reducing variance through sample average approximation for problems with 100 items and 1,000 scenarios. These methods guarantee robustness by penalizing high-variance solutions, achieving near-optimal performance in applications like inventory management under demand fluctuations. Precedence constraints impose ordering on item selection, where certain items can only be included if their predecessors are chosen, modeled via a directed acyclic graph (DAG) of dependencies.
The precedence constrained knapsack problem (PCKP) integrates these arcs into the formulation, adjusting item values to account for forced inclusions and using clique-based facets for polyhedral strengthening in branch-and-cut solvers. This adds structural complexity, remaining NP-hard even for tree-structured precedences, with exact algorithms solving instances of 200 items via separation of precedence cover inequalities in under 100 seconds. Applications include project scheduling, where components must follow sequential prerequisites. These additional constraints elevate the problem's complexity beyond the standard NP-hardness of the 0-1 knapsack, with some formulations (e.g., those with dynamic precedences or extensive conflicts) reported to reach PSPACE-completeness due to the need to track state-dependent selections over exponential paths. Approximation algorithms, such as Lagrangian relaxation, decompose the problem by dualizing conflict or fairness constraints, providing (1 - \epsilon)-approximations in O(n^2 / \epsilon) time for conflict graphs with bounded degree by relaxing incompatibilities into penalized objective terms. For instance, in pharmaceutical applications, a conflict graph models incompatible drugs (e.g., edges between interacting compounds like certain antibiotics and anticoagulants), ensuring safe combinations while maximizing therapeutic value under dosage limits, as in the sketch below.
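As a simple illustration of the conflict-graph variant, the following exhaustive sketch (exponential in n, illustrative only) maximizes value while rejecting any selection that contains a conflicting pair; exact solvers replace this search with the branch-and-bound and MILP machinery described above:

def knapsack_with_conflicts(weights, values, W, conflicts):
    # conflicts: a set of pairs (i, j) of item indices that cannot coexist
    n, best = len(weights), 0

    def dfs(i, cap, val, chosen):
        nonlocal best
        best = max(best, val)
        if i == n:
            return
        compatible = all((i, j) not in conflicts and (j, i) not in conflicts
                         for j in chosen)
        if weights[i] <= cap and compatible:
            dfs(i + 1, cap - weights[i], val + values[i], chosen | {i})
        dfs(i + 1, cap, val, chosen)  # always explore skipping item i

    dfs(0, W, 0, frozenset())
    return best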

References

  1. [1]
    Knapsack Problem - an overview | ScienceDirect Topics
    Most results on knapsack problems were developed by the combinatorial optimization community, to which the authors of monographs Martello and Toth (1990) and ...Missing: primary | Show results with:primary
  2. [2]
  3. [3]
  4. [4]
    An overview of recent advances. Part I: Single knapsack problems
    Most results on knapsack problems were developed by the combinatorial optimization community, to which the authors of monographs Martello and Toth (1990) and ...Missing: history primary
  5. [5]
    None
    ### Informal Description of the Knapsack Problem
  6. [6]
    [PDF] cs475 Knapsack Problem and Dynamic Programming
    Does Greedy work? Capacity M = 7, Number of objects n = 3 w = [5, 4, 3] v = [10, 7, 5] (ordered by v i. / w i ratio). Page 9. Recursion for Knapsack Problem.
  7. [7]
    The Knapsack Problem - ResearchGate
    Several authors [21,38,39,40] focused on formulating the problem of optimising best controls selection where they applied the basic knapsack problem [19] . Even ...
  8. [8]
    Prescriptive Learning for Air-Cargo Revenue Management
    **Summary of Knapsack Problem in Air Cargo Revenue Management:**
  9. [9]
    [PDF] Cargo Capacity Management with Allotments and Spot Market ...
    An airline manages cargo capacity by selling it through allotment contracts or the spot market, aiming to maximize profit from both.
  10. [10]
  11. [11]
    [PDF] Cutting stock problems and solution procedures
    Abstract: This paper discusses some of the basic formulation issues and solution procedures for solving one- and two- dimensional cutting stock problems.Missing: manufacturing | Show results with:manufacturing
  12. [12]
    19. Knapsack, Bin Packing and Plant Location problems - GOL
    Consider, as an example, loading a group of trucks, taking into account both the weight as well as the volume required and available. Then, for each truck ...
  13. [13]
    [PDF] Hiding Information and Signatures in Trap'door Knapsacks
    This knapsack is easily solved for x, which is also the solution to the apparently difficult, but trapdoor knapsack problem S= u*x. To help make these ideas ...
  14. [14]
    [PDF] The Rise and Fall of Knapsack Cryptosystems
    The general knapsack problem is known to be NP-complete [13], and so it is thought to be quite hard. Being NP-complete means that if a polynomial time algorithm ...
  15. [15]
    [PDF] Subset Sum Problem - garsia at york
    Merkle-Hellman Knapsack Cryptosystem. 1. Choose a superincreasing sequence s1,s2,...,sn. 2. Choose p to be a large prime such that p>s1 + s2 + ··· + sn. 3 ...
  16. [16]
    Niederreiter, H. (1986) Knapsack-Type Cryptosystems ... - Scirp.org.
    Niederreiter, H. (1986) Knapsack-Type Cryptosystems and Algebraic Coding Theory. Problems of Control and Information Theory, 15, 159-166.
  17. [17]
    [PDF] Lattice Reduction Attack on the Knapsack
    The Merkle–Hellman knapsack cryptosystem [5] was one of the first pro- posed public key cryptosystems. This cipher utilizes a few elementary, but.<|control11|><|separator|>
  18. [18]
    A hybrid algorithm for the DNA sequencing problem - ScienceDirect
    Jan 30, 2014 · ... sequence with highest pairwise similarity score with the unknown DNA fragment. ... knapsack problem; see, e.g., [3]. We now define the ...
  19. [19]
    Constraint Logic Programming approach to protein structure prediction
    Nov 30, 2004 · ... protein folding process and the basic intramolecular interactions is ... We have modeled a sort of knapsack problem using Inline ...
  20. [20]
    [PDF] Combinatorial Optimization in VLSI Design
    We scan the arcs carrying flow in topological order and solve a multi-knapsack problem by dynamic programming for selecting the best set of cells to be ...
  21. [21]
    [PDF] An Optimization Based Power Usage Scheduling Strategy Using ...
    Apr 15, 2021 · The proposed algorithm uniformly distributes the load in off-peak hours and checks the grid capacity with the help of knapsack problem formulation to balance ...
  22. [22]
    [PDF] Lecture 25 1 NP-Complete Problems and General Strategy 2 ...
    Nov 20, 2014 · Let us recall the decision version of the Knapsack problem: given n items with size s1,s2, ..., sn, value v1,v2, ..., vn, capacity B and value V ...
  23. [23]
    [PDF] Chapter Computational Complexity
    most b B y using a binary search procedure that iteratively halves the range of possible optimal values we see that an optimization problem can be solved ...
  24. [24]
    [PDF] NP-Completeness Running Time Revisited
    Does a text T contain a pattern P? ◇ Does an instance of 0/1 Knapsack ... Self-reducibility: Decision and Optimization Problems are all ... ▫ (Binary) search for ...
  25. [25]
    [PDF] Theory of NP-Completeness
    Example: If we can solve the 0-1 knapsack decision problem, we can solve the ... • For the 0-1 knapsack problem, a certificate is a list of integers.
  26. [26]
    (PDF) Reducibility Among Combinatorial Problems - ResearchGate
    We show that a large number of classic unsolved problems of covering, matching, packing, routing, assignment and sequencing are equivalent.
  27. [27]
    [PDF] Knapsack Problems: A Parameterized Point of View - arXiv
    Nov 23, 2016 · We give a dynamic programming solution and a pseudo-polynomial reduction from 3-Partition in order to show that the problem is not pseudo ...
  28. [28]
    Algorithms and complexity results for the 0–1 knapsack problem with ...
    Oct 6, 2025 · This paper describes the first exact solution approaches for the Knapsack Problem with Group Fairness, based on integer linear programming ...
  29. [29]
    [PDF] László Babai Dynamic programming: The Knapsack Problem
    Jan 22, 2020 · ... unit cost. Theorem. Under the assumption that the weights are integers (but the values are real), one can find the optimum in O(nW) ...
  30. [30]
    [PDF] An FPTAS for #Knapsack and Related Counting Problems
    ... that the following two operations are of unit cost: 1. Ordering: given two states u, v we can efficiently check if u ≺ v and if so find a w that is ...
  31. [31]
    [PDF] Complexity and algorithms for nonlinear optimization problems
    May 3, 2007 · The computation model here assumes the unit cost model, i.e. any arithmetic operation ... knapsack problem, but unlike the term n, the term ...
  32. [32]
    [PDF] arXiv:1912.02278v3 [cs.CG] 18 Nov 2021
    Nov 18, 2021 · The complexity class ∃R is often called a real RAM analogue of NP, since the problem ETR can be viewed as the real-valued variant of SAT. The ...
  33. [33]
    New Generic Algorithms for Hard Knapsacks - SpringerLink
    In this paper, we study the complexity of solving hard knapsack problems, i.e., knapsacks with a density close to 1 where lattice-based low density attacks ...
  34. [34]
    A Branch and Bound Algorithm for the Knapsack Problem
    A branch and bound algorithm for solution of the “knapsack problem,” max ∑ v_i x_i where ∑ w_i x_i ≤ W and x_i ∈ {0, 1}, is presented which can obtain either optimal ...
  35. [35]
    An upper bound for the zero-one knapsack problem and a branch ...
    A branch and bound algorithm is proposed, based on the above mentioned upper bound and on original backtracking and forward schemes.
  36. [36]
    [PDF] Approximation Algorithms
    Contents excerpt: Knapsack, p. 68; 8.1 A pseudo ...; Self-reducibility ...
  37. [37]
    Average-case analysis of a greedy algorithm for the 0/1 knapsack ...
    We consider the average-case performance of a well-known approximation algorithm for the 0/1 knapsack problem, the decreasing density greedy (DDG) algorithm ...
  38. [38]
    Fast Approximation Algorithms for the Knapsack and Sum of Subset ...
    Fast Approximation Algorithms for the Knapsack and Sum of Subset Problems. Authors: Oscar H. Ibarra, Chul E. Kim. Department of Computer, Information ...
  39. [39]
    An FPTAS for just-in-time scheduling of a flow shop manufacturing ...
    ... the running time of APP and EXACT with ε = 0.01; the tabulated entries are all less than 1 s ...
  40. [40]
    [PDF] QAOA for the Knapsack Problem
    Nov 4, 2022 · This is a quantum algorithm for finding approximate solutions to combinatorial optimization problems, such as the knapsack problem. So far, ...
  41. [41]
    A quantum algorithm for solving 0-1 Knapsack problems - Nature
    Aug 26, 2025 · For the 0-1-KP, the binary variable x_m encodes the choice of either including item m into the knapsack (x_m = 1) or omitting it (x_m = 0).
  42. [42]
    (PDF) Reinforcement Learning for Solving the Knapsack Problem
    Aug 9, 2025 · This paper introduces a novel reinforcement learning (RL) approach to solve the knapsack problem by enhancing the state representation within ...
  43. [43]
    Knapsack problems and constrained optimization - Emory CS
    The knapsack problem is one of the most studied problems in combinatorial optimization, with many real-life applications. Example: pack food in a knapsack ...
  44. [44]
    Knapsack: Unbounded, 0-1, and Bounded
    bounded knapsack: This third version is in between unbounded and 0-1, where a smallish store has a limited number of copies of each product (such as three apples ...
  45. [45]
    [PDF] COMPUTERS AND INTRACTABILITY A Guide to the Theory of NP ...
    Likewise, branch-and-bound algorithms for the knapsack problem have been so successful that many consider it to be a "well-solved" problem, even though ...
  46. [46]
    [1504.04650] A Faster FPTAS for the Unbounded Knapsack Problem
    Apr 17, 2015 · For over thirty years, the best FPTAS was due to Lawler with a running time in O(n + 1/ε³) and a space complexity in O(n + ...
  47. [47]
    [PDF] A Polynomial Time Approximation Scheme for the Multiple ...
    The decision version of MKP is a generalization of the decision versions of both the knapsack and bin packing problems and is strongly NP-complete. Moreover ...
  48. [48]
    Knapsack problems: A parameterized point of view - ScienceDirect
    Jul 5, 2019 · The knapsack problem (KP) is a very famous NP-hard problem in combinatorial optimization. Also its generalization to multiple dimensions ...
  49. [49]
    [PDF] Bin Completion Algorithms for Multicontainer Packing, Knapsack ...
    2.3.1 Bin-Oriented Branch-and-Bound + Dominance = Bin Completion. At this point, we have defined the salient features of the bin completion approach to ...
  50. [50]
    [PDF] SOLUTION OF MULTIDIMENSIONAL KNAPSACK PROBLEM VIA ...
    The NP-hard multidimensional knapsack problem (MKP) arises in several practical contexts such as capital budgeting, cargo loading, and cutting stock problems ...
  51. [51]
    The quadratic knapsack problem - ScienceDirect.com
    The quadratic knapsack problem is a relevant NP-hard combinatorial optimization problem, inspired, since the Seventies, by a number of real-world applications.
  52. [52]
    A fast and effective breakpoints heuristic algorithm for the quadratic ...
    Jun 1, 2025 · The QKP is NP-hard since it generalizes the strongly NP-hard maximum clique problem. The Quadratic Knapsack Problem was introduced ...
  53. [53]
    A cut-and-branch algorithm for the Quadratic Knapsack Problem
    The main contribution of this paper is a 'cut-and-branch' framework for the QKP. This framework incorporates four families of cutting planes, two primal ...
  54. [54]
  55. [55]
    Approximation schemes for knapsack problems with shelf divisions
    Mar 7, 2006 · The SHELF-KNAPSACK Problem (SK) is to find a subset S′ ⊆ S partitioned into shelves with total shelf size at most K and maximum value. The C ...
  56. [56]
    [PDF] Online Knapsack Problems - Dartmouth Computer Science
    The multiple knapsack problem (MKP) is slightly harder as it has only a PTAS [4] and does not have an FPTAS unless P=NP. The generalized assignment problem (GAP) ...
  57. [57]
  58. [58]
    (PDF) The Knapsack Problem with Conflict Graphs - ResearchGate
    Aug 6, 2025 · An instance of KCG is defined by a knapsack capacity B and an undirected graph G = (V, E) referred to as a conflict graph, where each node i ∈ V ...
  59. [59]
    A Branch-and-Bound Algorithm for the Knapsack Problem with ...
    Jun 2, 2017 · We study the knapsack problem with conflict graph (KPCG), an extension of the 0-1 knapsack problem, in which a conflict graph describing incompatibilities ...
  60. [60]
    [PDF] On Solving the Knapsack Problem with Conflicts - arXiv
    Jun 3, 2025 · The dominance does not appear to be affected by the details ... San Segundo, “A new combinatorial branch-and-bound algorithm for the knapsack ...
  61. [61]
    On Solving the Knapsack Problem with Conflicts - MDPI
    The Knapsack Problem with Conflicts (KPC) is a variant where conflicting item pairs cannot be selected together, unlike the classic knapsack problem.
  62. [62]
    [2006.07832] Group Fairness for Knapsack Problems - arXiv
    Jun 14, 2020 · We study the knapsack problem with group fairness constraints. The input of the problem consists of a knapsack of bounded capacity and a set of items.
  63. [63]
    [PDF] Group Fairness for Knapsack Problems - IFAAMAS
    ABSTRACT. We study the knapsack problem with group fairness constraints. The input of the problem consists of a knapsack of bounded capacity.
  64. [64]
    On the Robust Knapsack Problem | SIAM Journal on Optimization
    This paper considers a distributionally robust version of a quadratic knapsack problem. In this model, a subset of items is selected to maximize the total ...
  65. [65]
    Exact solution of the robust knapsack problem - ScienceDirect.com
    We consider an uncertain variant of the knapsack problem in which the weight of the items is not exactly known in advance, but belongs to a given interval, ...
  66. [66]
    Monte Carlo Simulation-Based Scenario Generation in Stochastic ...
    Oct 6, 2024 · We will explore foundational concepts, explain the methodology step-by-step, and provide detailed examples using the Knapsack Problem.
  67. [67]
    The precedence constrained knapsack problem - ScienceDirect.com
    Oct 30, 2015 · We consider the problem of separating maximally violated inequalities for the precedence constrained knapsack problem.
  68. [68]
    [PDF] Clique-based facets for the precedence constrained knapsack ...
    The precedence constraints can be represented by the directed graph G = (N, S), where the node set is the set of all items N, and each precedence constraint ...
  69. [69]
    [PDF] Approximation of knapsack problems with conflict and forcing graphs
    – A conflict constraint (negative disjunctive constraint) on a pair of items expresses an incompatibility between these items. For each conflicting pair, at ...
  70. [70]
    Predicting combinations of drugs by exploiting graph embedding of ...
    Jan 11, 2022 · For example, the combination of BRAF and V600E significantly improves the therapy of melanoma, and many drug combinations are approved by FDA ( ...