
Bin packing problem

The bin packing problem is a fundamental challenge in combinatorial optimization and computer science, consisting of packing a given set of items—each with a specified positive size—into the minimum number of fixed-capacity bins such that the sum of the sizes of items assigned to any single bin does not exceed its capacity, typically normalized to unit size 1 with item sizes in (0,1]. Formally, for a list L = (a_1, a_2, \dots, a_n) of items with sizes s(a_i), the objective is to partition L into the fewest subsets B_1, B_2, \dots, B_m where \sum_{a_i \in B_j} s(a_i) \leq 1 for each j = 1, \dots, m, thereby minimizing m. This problem is NP-hard in the strong sense, as proven by reduction from the 3-PARTITION problem, implying no exact polynomial-time algorithm exists unless P = NP, and the decision variant—determining whether all items fit into at most k bins—is NP-complete. Due to its computational intractability, extensive research focuses on approximation algorithms, which provide near-optimal solutions efficiently; notable examples include the offline First Fit Decreasing (FFD) heuristic, which sorts items in decreasing size order before packing and uses at most 11/9 \cdot OPT + 6/9 bins, where OPT is the optimal number (with the asymptotic ratio R^\infty_{FFD} = 11/9 \approx 1.222), and online variants like First Fit (FF) with worst-case ratio approaching 1.7. Asymptotic fully polynomial-time approximation schemes (AFPTAS) also exist, guaranteeing solutions within (1 + \epsilon) OPT plus an additive constant for any \epsilon > 0 in time polynomial in n and 1/\epsilon. The bin packing problem has broad practical applications, including efficient loading of shipping containers or trucks to minimize costs, allocating fixed-size memory blocks to files or processes in operating systems, and scheduling tasks on identical machines to reduce resource usage. Originating in the early 1970s as a model for storage allocation and cutting stock, it has served as a key testbed for developing and analyzing approximation techniques in optimization, influencing fields such as operations research, computer science, and logistics. Extensions, such as multidimensional or variable-sized bin variants, address more complex real-world scenarios like 3D packing in warehouses.

Formal Statement

Basic Definition

The bin packing problem originated as a fundamental optimization challenge in operations research and computer science, with its formal analysis emerging in the 1970s through early studies of heuristic methods and performance guarantees. The landmark paper by Johnson et al. (1974) provided the first comprehensive examination of simple packing algorithms, establishing worst-case bounds and highlighting the problem's relevance to practical scenarios. At its core, the bin packing problem entails assigning a collection of items, each characterized by a specific size, to a minimal number of fixed-capacity bins without exceeding any bin's limit. This mirrors practical tasks such as fitting diverse rectangular goods into uniform shipping containers to reduce the volume of transport needed, thereby cutting costs and improving efficiency in logistics. The problem has broad real-world implications across industries, including cargo loading in manufacturing and transportation to optimize space utilization in containers and vehicles. In computing, it underpins load balancing strategies that distribute workloads across processors or servers to prevent bottlenecks and enhance system performance. Similarly, in operating systems, bin packing principles guide memory allocation by partitioning available storage into blocks for processes while minimizing waste and fragmentation. In the standard formulation, the bin packing problem assumes one-dimensional packing, treating item sizes as lengths along a single axis to simplify analysis and approximation, with extensions to higher dimensions addressed separately when required. Despite its intuitive appeal and practical utility, the problem is strongly NP-hard, rendering exact solutions infeasible for large instances without prohibitive computational effort.

Mathematical Formulation

The bin packing problem is formally defined as follows. Given a set of n items, each with a positive size s_i satisfying 0 < s_i \leq 1 for i = 1, 2, \dots, n, and bins each of unit capacity 1, the goal is to assign the items to the minimum number of bins such that the total size of items in any bin does not exceed 1. Let m denote the number of bins used in a feasible packing, where each bin j (for j = 1, 2, \dots, m) contains a subset of items with \sum_{i \in B_j} s_i \leq 1 and the subsets B_j partition the set of all items. The objective is to minimize m, and the minimum value over all feasible packings is denoted by OPT, the optimal number of bins. A fundamental lower bound on OPT arises from the total size of the items: since each bin holds at most 1 unit of size and m must be an integer, OPT \geq \left\lceil \sum_{i=1}^n s_i \right\rceil. Another lower bound comes from large items: no two items of size greater than 1/2 can share a bin, so OPT is at least the number of such items. These bounds together give OPT \geq \max\left\{ \left\lceil \sum_{i=1}^n s_i \right\rceil, \left| \{ i : s_i > 1/2 \} \right| \right\}.
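To make the bound concrete, here is a minimal Python sketch (function name and example values are illustrative) that computes the combined lower bound for a list of item sizes:

```python
import math

def bin_packing_lower_bound(sizes):
    """Simple lower bounds for unit-capacity bin packing.

    Combines the fractional (total-size) bound with a counting bound
    on items larger than 1/2, which can never share a bin.
    """
    size_bound = math.ceil(sum(sizes))              # OPT >= ceil(sum of sizes)
    large_bound = sum(1 for s in sizes if s > 0.5)  # items > 1/2 need distinct bins
    return max(size_bound, large_bound)

print(bin_packing_lower_bound([0.6, 0.7, 0.3, 0.4]))  # -> 2 (and OPT is indeed 2)
```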

Hardness Results

NP-Hardness

The decision version of the bin packing problem determines whether a given instance—consisting of item sizes s_1, s_2, \dots, s_n where each 0 < s_i \leq 1 and an integer k \geq 1—admits a packing into at most k bins of unit capacity. This problem belongs to NP, since a proposed packing serves as a polynomial-time verifiable certificate. To establish NP-hardness, a polynomial-time reduction from the NP-complete Partition problem is employed. In Partition, given a multiset S = \{a_1, a_2, \dots, a_n\} of positive integers summing to 2B, the question is whether S can be divided into two subsets each summing to B. The corresponding bin packing instance sets item sizes to s_i = a_i / B (ensuring they sum to 2), bin capacity to 1, and target bins k = 2. A valid partition exists if and only if the items fit into two bins, as the total sum is 2 and any overflow in one bin (summing to more than 1) would exceed its capacity. This reduction, detailed by Garey and Johnson, confirms NP-completeness. The bin packing problem exhibits strong NP-hardness, remaining NP-hard even when item sizes are polynomially bounded in the input length (i.e., no exponential dependence on n). This follows from a straightforward reduction from the 3-Partition problem, proven strongly NP-complete by Garey and Johnson. In 3-Partition, given 3m positive integers a_1, \dots, a_{3m} with B/4 < a_i < B/2 for all i and total sum mB, the task is to partition them into m triples each summing to B. Scale the instance to bin packing by setting item sizes s_i = a_i / B (now bounded between 1/4 and 1/2), unit bin capacity, and target k = m; the size constraints ensure exactly three items per bin in any feasible packing, yielding equivalence. These results imply that no polynomial-time algorithm exists for solving the bin packing problem exactly, unless P = NP.
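As an illustration of the Partition reduction described above, the following Python sketch (hypothetical helper name) converts a Partition instance into an equivalent bin packing instance with target k = 2:

```python
def partition_to_bin_packing(a):
    """Reduce a Partition instance (positive integers a) to bin packing.

    Assumes the total sum is even; returns item sizes in (0, 1] and the
    target bin count k = 2.  The items fit into two unit bins iff the
    integers can be split into two halves of equal sum.
    """
    total = sum(a)
    assert total % 2 == 0, "Partition requires an even total sum"
    B = total // 2
    sizes = [x / B for x in a]   # sizes sum to exactly 2
    return sizes, 2

sizes, k = partition_to_bin_packing([3, 1, 1, 2, 2, 1])
print(sizes, k)  # sum(sizes) == 2.0; ask whether they fit in k = 2 bins
```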

Inapproximability

The bin packing problem cannot be approximated to within a factor strictly better than \frac{3}{2} by any polynomial-time algorithm unless P = NP. This fundamental inapproximability result follows from a straightforward reduction from the NP-complete Partition problem: given positive integers a_1, \dots, a_n summing to 2S, create bin packing items of sizes s_i = a_i / S; the optimal number of unit bins is 2 if the a_i admit a partition into two sets of equal sum, and 3 otherwise, so distinguishing these cases requires an approximation ratio better than \frac{3}{2}. The bound is tight in the sense that no polynomial-time \left( \frac{3}{2} - \epsilon \right)-approximation exists for any fixed \epsilon > 0 unless P = NP, as the same reduction implies that improving beyond \frac{3}{2} by any positive margin solves Partition in polynomial time. Harder variants of bin packing, such as multidimensional or vector bin packing, exhibit stronger inapproximability ties to problems like set cover; for instance, reductions from set cover hardness yield inapproximability factors of \frac{5}{4} + \epsilon for any \epsilon > 0 in the two-dimensional case unless P = NP.

Online Algorithms

Heuristic Strategies

In the online bin packing problem, items arrive one by one, and the algorithm must assign each item to a bin immediately upon its arrival, without any information about future items. This setting requires heuristics that make immediate, irrevocable decisions based solely on the current state of open bins and the incoming item size. These strategies prioritize simplicity and low computational overhead, making them suitable for real-time applications despite their suboptimal performance guarantees.

The First-Fit (FF) heuristic scans the sequence of existing bins from the first opened to the last and places the item in the earliest bin with sufficient remaining capacity. If no such bin exists, a new bin is opened to hold the item. FF keeps all bins open as potentially available for future assignments. Analysis shows that FF uses at most (17/10) OPT + O(1) bins asymptotically, where OPT is the minimum number of bins needed by an optimal offline solution, yielding an asymptotic approximation ratio of 17/10. This bound is tight, as there exist input sequences on which FF performs arbitrarily close to this factor.

The Best-Fit (BF) heuristic examines all open bins and selects the one with the smallest remaining capacity that can still accommodate the item, with the goal of leaving as little unused space as possible in the chosen bin. A new bin is opened only if no existing bin fits the item. Like FF, BF achieves an asymptotic approximation ratio of 17/10, though it often performs better in practice by reducing fragmentation within individual bins. The bound is similarly tight for worst-case inputs.

Worst-Fit (WF) places the item in the bin with the largest remaining capacity among those that can fit it, potentially spreading items more evenly but often leading to inefficient overall packing due to increased fragmentation across multiple bins. If no bin fits, a new one is opened. WF has a weaker asymptotic approximation ratio of 2, meaning it can use up to twice as many bins as OPT in the worst case, making it less effective than FF or BF on adversarial inputs.

The Next-Fit (NF) heuristic is a streamlined variant of FF that uses less memory by considering only the most recently opened bin (the "current" bin) for placement. The item is placed there if it fits; otherwise, the current bin is closed (never reconsidered), and a new bin becomes the current one to hold the item. This "sliding window" approach simplifies implementation but results in an asymptotic approximation ratio of 2, comparable to WF and worse than FF or BF in the worst case.
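The following minimal Python sketch (illustrative names, with a small tolerance for floating-point sizes) implements First-Fit and Next-Fit as described; a production version would track bin contents rather than only counts:

```python
def first_fit(items, capacity=1.0):
    """Online First-Fit: place each item in the earliest bin it fits."""
    bins = []  # remaining capacity of each open bin
    for s in items:
        for i, free in enumerate(bins):
            if s <= free + 1e-12:
                bins[i] = free - s
                break
        else:
            bins.append(capacity - s)  # no bin fits: open a new one
    return len(bins)

def next_fit(items, capacity=1.0):
    """Online Next-Fit: keep only the current bin open."""
    bins, free = 0, 0.0
    for s in items:
        if s <= free + 1e-12:
            free -= s
        else:
            bins += 1              # close the current bin, open a new one
            free = capacity - s
    return bins

items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]
print(first_fit(items), next_fit(items))  # -> 4 5
```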

Advanced Online Algorithms

Advanced online algorithms for the bin packing problem build upon basic heuristics by employing classification schemes or partitioning techniques to achieve better competitive ratios, particularly through size-based classification of items and tailored packing rules for each category. These methods address the limitations of simple strategies like First Fit by dynamically grouping items and applying specialized procedures, leading to more efficient space utilization in an online setting where items arrive sequentially without knowledge of future inputs. One prominent example is the Harmonic-k (H_k) algorithm, which classifies items into k size classes based on their normalized sizes s, where the i-th class contains items satisfying 1/(i+1) < s ≤ 1/i for i = 1 to k-1, with a separate class for items of size s ≤ 1/k. For each class, the algorithm uses distinct packing rules: large items (class 1, s > 1/2) are placed one per bin, while smaller classes pack items of the same class together, using Next Fit within each class to avoid fragmentation. This classification ensures that bins are filled more evenly across similar-sized items, yielding an asymptotic competitive ratio that decreases with k and approaches h_\infty \approx 1.691 as k tends to infinity. The approach was introduced as a refinement over earlier heuristics, demonstrating improved performance on instances with diverse item sizes. Refined harmonic-type techniques further enhance these classification-based methods by reserving space in bins to combine items from different classes, effectively reducing waste from suboptimal early decisions; successive refinements of this kind lowered the asymptotic competitive ratio to approximately 1.588. More recent advancements, such as the Advanced Harmonic algorithm, achieve an improved asymptotic competitive ratio of approximately 1.578 as of 2018. While the focus remains on one-dimensional variants, related approaches like shelf packing extend these ideas to two-dimensional settings by partitioning the bin height into shelves based on item heights and packing widths online within each shelf, achieving competitive ratios around 2 for strip packing scenarios.
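A simplified Python sketch of the Harmonic-k classification scheme follows, under the assumption that each class-i bin (i < k) is reserved for exactly i items of that class and the smallest class is packed Next-Fit; names and the example instance are illustrative:

```python
def harmonic_k(items, k=5):
    """Sketch of the Harmonic-k online algorithm for unit bins.

    Items with size in (1/(i+1), 1/i] go to class i (i = 1..k-1);
    items of size <= 1/k form class k.  A class-i bin holds exactly
    i items of its class; class-k items are packed Next-Fit.
    """
    open_bins = {}   # class index -> slots (classes 1..k-1) or space (class k) left
    bins = 0
    for s in items:
        # smallest i with s > 1/(i+1); items of size <= 1/k fall through to class k
        i = next((c for c in range(1, k) if s > 1.0 / (c + 1)), k)
        if i < k:                         # class-i bins hold i items each
            left = open_bins.get(i, 0)
            if left == 0:
                bins += 1
                left = i                  # open a fresh bin with i slots
            open_bins[i] = left - 1
        else:                             # small items: Next-Fit within the class
            space = open_bins.get(k, 0.0)
            if s > space:
                bins += 1
                space = 1.0               # open a fresh bin for small items
            open_bins[k] = space - s
    return bins

print(harmonic_k([0.6, 0.4, 0.35, 0.1, 0.05, 0.3], k=4))  # -> 4
```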

Lower Bounds

Lower bounds for online bin packing algorithms establish fundamental limits on their asymptotic competitive ratios, demonstrating that no algorithm can guarantee performance better than certain thresholds relative to the optimal offline solution. These bounds are derived primarily through adversarial constructions and Yao's principle, which provides a tool to prove limits for randomized algorithms by considering the worst-case performance of deterministic algorithms over a fixed distribution of inputs. More refined adversarial constructions push this limit higher. For instance, sequences that first present small items (e.g., sizes slightly above 1/3) and later large items (e.g., sizes slightly above 2/3) can force any online algorithm either to open new bins for each large item or to leave space unused, resulting in at least 1.5 times the optimal number of bins asymptotically. The strongest general lower bound for any online bin packing algorithm, deterministic or randomized, is approximately 1.54037, established via Yao's principle applied to an optimized distribution of item sizes solved using linear programming. This bound shows that, over the worst-case input distribution, the expected number of bins used by the best online algorithm exceeds 1.54037 times the optimal. For algorithms restricted to constant space (a bounded number of open bins), a tighter lower bound of approximately 1.691 applies, derived from adversarial sequences whose item sizes follow the Sylvester sequence t_1 = 2, t_{i+1} = t_i(t_i - 1) + 1 (i.e., sizes slightly above 1/43, 1/7, 1/3, 1/2, presented in that order). This value, known as h_\infty = \sum_{i=1}^\infty \frac{1}{t_i - 1} \approx 1.691, matches the asymptotic upper bound achieved by the Harmonic algorithm, indicating it is tight for this class.

Offline Algorithms

Multiplicative Approximations

In the offline variant of the bin packing problem, the complete list of item sizes is available in advance, enabling algorithms to preprocess the input—such as by sorting items—and optimize the packing configuration globally to minimize the number of bins used. This setting contrasts with online algorithms, which process items sequentially without future knowledge, and it often yields superior approximation guarantees. Seminal heuristics in this category include sorting-based methods that achieve bounded multiplicative factors relative to the optimal solution OPT. The First-Fit Decreasing (FFD) algorithm sorts the items in non-increasing order of size and then applies the First-Fit heuristic, placing each item into the lowest-indexed bin that has sufficient remaining capacity or opening a new bin if none exists. Johnson's classical analysis established the asymptotic ratio 11/9, and Dósa later proved the tight bound of \frac{11}{9} \mathrm{OPT} + \frac{6}{9}, which is approximately 1.22\,\mathrm{OPT} + 0.67. The Best-Fit Decreasing (BFD) algorithm follows the same preprocessing step but places each item into the bin with the smallest remaining capacity that can accommodate it; BFD achieves the same asymptotic approximation ratio of 11/9. A significant advancement is the asymptotic polynomial-time approximation scheme (APTAS) introduced by de la Vega and Lueker, which, for any fixed \epsilon > 0, produces a packing using at most (1 + \epsilon) \mathrm{OPT} + 1 bins in polynomial time (specifically, linear in the input size for fixed \epsilon). This scheme groups small items and solves an integer program for the rounded large items, ensuring the multiplicative factor approaches 1 as \epsilon decreases. Subsequent refinements have yielded improved APTAS variants with tighter constants for practical \epsilon values; for instance, setting \epsilon = 0.05 yields a 1.05\,\mathrm{OPT} + O(1) guarantee while maintaining polynomial runtime, as explored in robust extensions of the original scheme.
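A direct Python sketch of FFD (illustrative names; the inner loop is the naive O(n²) scan, though balanced trees reduce packing to O(n log n)):

```python
def first_fit_decreasing(items, capacity=1.0):
    """First-Fit Decreasing: sort by size, then pack with First Fit.

    Uses at most 11/9 * OPT + 6/9 bins (the tight bound cited above).
    """
    bins = []  # remaining capacity per open bin
    for s in sorted(items, reverse=True):
        for i, free in enumerate(bins):
            if s <= free + 1e-12:
                bins[i] = free - s
                break
        else:
            bins.append(capacity - s)  # open a new bin
    return len(bins)

items = [0.42, 0.55, 0.7, 0.25, 0.3, 0.61, 0.12]
print(first_fit_decreasing(items))  # -> 3, matching the lower bound ceil(2.95)
```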

Additive Approximations

Additive approximations for the bin packing problem focus on offline algorithms that produce a packing using at most OPT + c bins, where OPT is the minimum number of bins needed and c is a small constant. This type of guarantee is particularly useful when OPT is bounded, as it provides near-optimal performance without the scaling issues of multiplicative approximations. A key hardness result is that there is no polynomial-time algorithm achieving OPT + c for any constant c < 1 unless P = NP, since such an algorithm would solve the NP-complete decision version of determining whether the items can be packed into k bins. One seminal approach is the in-out algorithm, which classifies items into large and small based on a threshold (typically > 1/2 for large items). Large items are packed optimally using exhaustive search or dynamic programming, as their limited number of types allows efficient exact packing into at most OPT bins. Small items are then packed greedily using a rule like First Fit: each small item is placed into an existing bin with sufficient remaining capacity ("in") if possible, or a new bin ("out") otherwise. This method achieves a guarantee of OPT + O(1) bins in the worst case. Bin completion algorithms extend this idea by first fixing a packing of large items and then solving a subproblem to "complete" each bin to near-full capacity with small items. After optimally packing large items, the remaining space in those bins is treated as multiple knapsack instances, solved approximately for small items using dynamic programming or greedy methods. This yields an additive guarantee of OPT + O(1), with the constant depending on the threshold; for example, using a threshold of 1/3 results in OPT + 2. The approach leverages the fact that small items can fill gaps efficiently, bounding the waste per bin. Recent advancements include FPT-time algorithms with additive guarantees for general instances, running in time 2^{O(\mathrm{OPT} \log^2 \mathrm{OPT})} \cdot n by classifying items into large, medium, and small categories, rounding medium items geometrically, and greedily packing small ones after enumerating feasible configurations for large items. For certain size distributions, such as when item sizes are multiples of 1/\mathrm{poly}(n) or drawn from a small number of types, near-optimal additive guarantees can be achieved in linear time O(n) using tailored dynamic programming that exploits the restricted variety. These results, from the 2010s onward, highlight progress toward practical near-optimality for restricted inputs.

Exact Algorithms

Exact algorithms for the bin packing problem seek to determine the minimum number of bins required to pack all items without exceeding bin capacity, providing optimal solutions despite the problem's NP-hardness, which restricts their use to small instances, typically with up to 100 items. These methods rely on exhaustive enumeration strategies enhanced by pruning techniques to manage the exponential search space. Seminal approaches include branch-and-bound, dynamic programming, integer programming formulations, and improvements to naive exponential-time algorithms.

The branch-and-bound framework, particularly the Martello-Toth Procedure (MTP), represents a cornerstone for exact solutions. Developed by Martello and Toth, the MTP systematically enumerates partial packings by assigning items to bins in a depth-first manner, branching on possible assignments for each item while maintaining feasibility. To accelerate the search, it employs tight lower bounds derived from linear relaxations and continuous approximations of the remaining items, as well as reduction procedures that eliminate dominated partial solutions—configurations where one partial packing cannot outperform another given the profile of unpacked items. Dominance is assessed via criteria like the Martello-Toth dominance rule, which compares the filled space and remaining potential across branches. The procedure also integrates upper bounds from heuristic packings to fathom branches early. Implemented in their published software, MTP solves instances with up to 100 items in seconds on standard hardware, and larger ones up to 500 items in reasonable time for many cases.

Dynamic programming methods for exact bin packing often model the problem using states that capture subsets of packed items or profiles of bin configurations. A typical approach considers the decision version—whether all items fit into k bins—via a state representing the subset of items assigned so far and the residual capacities of the open bins, computing the minimum k recursively. For the general case, this leads to an exponential number of states, on the order of O(2^n \mathrm{poly}(n)), but optimizations like bounding the number of open bins or fixing an item ordering reduce the effective state space. Such techniques are practical for instances with n ≤ 100, especially when combined with memoization and pruning. Martello and Toth describe DP-based lower bounds integrated into branch-and-bound for efficiency, while extensions of knapsack-style dynamic programming handle the multiple-subset-sum structure of bin packing.

The bin packing problem admits a straightforward integer linear programming (ILP) formulation that enables exact solution via off-the-shelf solvers. Define binary variables x_{ij} for i = 1, \dots, n and j = 1, \dots, n, where x_{ij} = 1 if item j (of size s_j) is placed in bin i, and binary variables y_i = 1 if bin i is used (at most n bins suffice). The model minimizes the number of used bins subject to assignment and capacity constraints:

\begin{align*} \min &\quad \sum_{i=1}^n y_i \\ \text{s.t.} &\quad \sum_{i=1}^n x_{ij} = 1 \quad \forall j = 1, \dots, n, \\ &\quad \sum_{j=1}^n s_j x_{ij} \leq y_i \quad \forall i = 1, \dots, n, \\ &\quad x_{ij}, y_i \in \{0,1\} \quad \forall i,j. \end{align*}

This compact assignment formulation is highly symmetric, but modern branch-and-cut solvers like CPLEX or Gurobi handle it effectively by generating violated inequalities (e.g., cover inequalities) dynamically and exploiting the problem's structure for strong root-node relaxations. For n ≤ 100, many instances solve near-instantaneously, with larger cases benefiting from preprocessing reductions. The formulation originates in early optimization literature and is refined in works addressing symmetry via aggregated variables or set-partitioning reformulations with exponentially many pattern variables, solved by branch-and-price.

In terms of worst-case time complexity, a naive exact algorithm enumerates all possible assignments, while dynamic programming over item subsets solves the problem in O(2^n n) time.
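The O(2^n n) subset dynamic program mentioned above can be sketched as follows in Python; the state is the pair (bins used, load of the currently open bin), minimized lexicographically (names and tolerances are illustrative):

```python
from functools import lru_cache

def exact_bin_packing(sizes, capacity=1.0):
    """Exact minimum bin count via O(2^n * n) bitmask dynamic programming.

    dp(mask) = lexicographically smallest (bins used, load of current bin)
    over all orders of packing the items in `mask` bin by bin; correctness
    follows because any packing can be sequenced one bin at a time.
    """
    n = len(sizes)

    @lru_cache(maxsize=None)
    def dp(mask):
        if mask == 0:
            return (0, capacity + 1.0)   # sentinel: first item must open a bin
        best = (n + 1, 0.0)
        for i in range(n):
            if mask & (1 << i):
                bins, load = dp(mask ^ (1 << i))
                if load + sizes[i] <= capacity + 1e-12:
                    cand = (bins, load + sizes[i])    # fits in the current bin
                else:
                    cand = (bins + 1, sizes[i])       # open a new bin
                best = min(best, cand)
        return best

    return dp((1 << n) - 1)[0]

print(exact_bin_packing([0.6, 0.7, 0.3, 0.4]))  # -> 2
```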

Special Cases for Few Item Types

When the number of distinct item sizes is bounded by a small constant k, the bin packing problem admits exact polynomial-time algorithms. A standard dynamic programming approach tracks the vector of remaining multiplicities for each item type across partially packed bins, as in the sketch below. The state space consists of all possible multiplicity vectors up to the total counts, yielding O(n^k) states, where n is the total number of items; from each state, transitions correspond to filling a new bin with a feasible combination of items, and the number of such combinations is constant for fixed k since item sizes are fixed. This computes the minimum number of bins required. The case of exactly two item sizes is a special instance with k = 2, solvable in polynomial time via the above dynamic programming in O(n^2) time, or alternatively using network flow formulations that model item assignments to bin configurations as a flow problem. For items whose sizes are unit fractions 1/m with fixed m \geq 2, the distinct sizes number at most m-1 (namely 1/2, 1/3, \dots, 1/m), so the problem reduces to the bounded-k case above and is solvable in polynomial time. Specific structural results further enable optimal packing in H(a) bins (where H(a) = \lceil \sum 1/m_i \rceil) when the total size satisfies certain thresholds, such as \sum 1/m_i \leq H(a) - 3/7, via polynomial-time algorithms that reduce to minimal counterexamples and apply First Fit Decreasing. These techniques find applications in scenarios where item sizes conform to a limited set of standard dimensions, such as in manufacturing processes for cutting stock from rolls or sheets of material.
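A Python sketch of the multiplicity-vector dynamic program for a fixed number of item types (illustrative names; single-bin configurations are enumerated naively, which suffices for fixed k):

```python
from functools import lru_cache
from itertools import product

def pack_few_types(sizes, counts, capacity=1.0):
    """Exact bin packing when only k distinct item sizes occur.

    State = vector of remaining multiplicities (O(n^k) states for fixed k);
    each transition fills one bin with a feasible combination of items.
    """
    max_per_type = [min(c, int(capacity / s + 1e-9)) for s, c in zip(sizes, counts)]
    # all non-empty single-bin configurations that respect the capacity
    configs = [c for c in product(*(range(m + 1) for m in max_per_type))
               if 0 < sum(ci * si for ci, si in zip(c, sizes)) <= capacity + 1e-12]

    @lru_cache(maxsize=None)
    def dp(remaining):
        if not any(remaining):
            return 0
        best = float('inf')
        for c in configs:
            if all(ci <= ri for ci, ri in zip(c, remaining)):
                nxt = tuple(ri - ci for ri, ci in zip(remaining, c))
                best = min(best, 1 + dp(nxt))
        return best

    return dp(tuple(counts))

# two sizes: three items of 0.5 and four items of 0.25
print(pack_few_types([0.5, 0.25], [3, 4]))  # -> 3
```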

Performance Comparison

Offline algorithms for the bin packing problem vary significantly in their approximation guarantees, computational complexity, and practical applicability, allowing practitioners to select a method based on instance size and required precision. First Fit Decreasing (FFD) provides a strong balance of efficiency and quality, using at most \frac{11}{9} \mathrm{OPT} + \frac{6}{9} bins, where OPT is the optimal number, with a runtime of O(n \log n). Asymptotic polynomial-time approximation schemes (APTAS) offer near-optimal solutions of (1 + \epsilon) \mathrm{OPT} + c for any \epsilon > 0, but at higher cost, with runtime O(n^2 / \varepsilon^3). Exact algorithms guarantee OPT bins, though their exponential worst-case runtime limits them to smaller instances.
Algorithm | Approximation Ratio | Runtime Complexity
FFD | \frac{11}{9} \mathrm{OPT} + \frac{6}{9} (≈ 1.22 OPT) | O(n \log n)
APTAS | (1 + \epsilon) \mathrm{OPT} + c | O(n^2 / \varepsilon^3)
Exact | OPT | exponential in the worst case
Empirical evaluations demonstrate that heuristics like FFD perform close to optimal on instances with item sizes drawn uniformly at random, often achieving waste bounded by \Theta(\sqrt{n}) relative to OPT, making them suitable for large-scale problems where strict optimality is unnecessary. Exact methods, such as branch-and-bound variants, remain feasible for instances up to n ≈ 500, solving them in seconds on modern hardware. Key trade-offs among these approaches include the speed but looseness of multiplicative approximations like FFD, which scale well but may overuse bins by up to roughly 22%; the tunable tightness of approximation schemes such as APTAS, which excel when OPT is large but demand more computation for small \epsilon; and the precision of exact methods, ideal for small n but impractical beyond moderate sizes due to exponential growth of the search space.

Variants

Fragmentation

In the fragmentation variant of the bin packing problem, items are permitted to be divided into a limited number of fragments, with the total number of fragments per item constrained by a parameter f (for example, f = 2). Each fragment is packed independently into bins, while ensuring that the sizes of all fragments from an original item sum exactly to its original size, and the bin capacity remains fixed (typically normalized to 1). Unlike the classical problem, this allows for more flexible packing by distributing portions of large items across multiple bins, but the bound on f prevents arbitrary subdivision, preserving the computational challenge and modeling real-world limits on splitting. The objective is to minimize the number of bins required to accommodate all fragments, where the cost of splitting is implicitly captured through the fragmentation limit rather than explicit overhead in bin usage. This variant remains NP-hard even for f = 2, since the case where no splitting occurs reduces to the classical bin packing problem. Seminal work establishes that allowing a single split per item (corresponding to at most two fragments) admits an asymptotic fully polynomial-time approximation scheme (AFPTAS), achieving performance arbitrarily close to optimal for large instances. For unrestricted fragmentation (where f is unbounded), the problem reduces to the divisible case, where simple algorithms attain the optimal ratio of 1 without size-increasing overhead, though practical implementations often impose bounds to avoid trivial solutions. The fragmentation model finds applications in distributed systems, particularly in scenarios involving data partitioning or task distribution across servers, such as scheduling jobs with data locality constraints where computational workloads can be split into fragments to meet deadlines while optimizing resource utilization. For instance, in cloud data processing frameworks, large datasets may be fragmented to fit the available capacity across multiple nodes without violating limits.

Divisible Item Sizes

In the divisible item sizes variant of the bin packing problem, items may be split into any number of portions whose sizes sum to the original item size, similar to packing divisible resources such as fluids. This model places no restrictions on the number of fragments per item or their sizes, allowing portions to be distributed freely across bins to achieve exact fills. The optimal number of bins required equals the ceiling of the total item size divided by the bin capacity; for unit-capacity bins, this is \lceil \sum_{i=1}^n s_i \rceil, where s_i denotes the size of item i. A simple greedy algorithm, such as Next Fit with splitting, achieves this optimum by sequentially assigning portions of each item to fill the current bin completely before opening a new one, with a runtime of O(n), where n is the number of items. Thus, the problem is solvable in linear time, in contrast to the NP-hard standard bin packing where items cannot be split. This variant eliminates wasted space entirely, as bins can always be packed to full capacity, differing fundamentally from the indivisible case where fragmentation due to fixed item sizes often leads to underutilized bins.
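The splitting variant of Next Fit can be sketched in a few lines of Python (names and tolerances are illustrative):

```python
import math

def pack_divisible(sizes, capacity=1.0):
    """Next-Fit with splitting: optimal when items may be divided freely.

    Fills each bin to the brim, splitting an item across bins whenever it
    does not fit; uses exactly ceil(total size / capacity) bins in O(n).
    """
    bins, free = 0, 0.0
    for s in sizes:
        while s > 1e-12:
            if free <= 1e-12:          # current bin is full: open a new one
                bins += 1
                free = capacity
            placed = min(s, free)      # split off as much as fits
            s -= placed
            free -= placed
    return bins

sizes = [0.9, 0.8, 0.7, 0.6]
print(pack_divisible(sizes), math.ceil(sum(sizes)))  # -> 3 3
```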

Cardinality Constraints

In the bin packing problem with cardinality constraints, items of given sizes must be packed into bins of unit capacity such that the total size in each bin does not exceed 1 and, additionally, no bin contains more than c items, where c \geq 2 is a fixed integer. The objective remains to minimize the number of bins used, with items being indivisible. This constraint introduces a combinatorial restriction alongside the classical size restriction, effectively modeling scenarios where both volume and item count matter. The decision version of the problem is NP-complete, and the optimization version is NP-hard in the strong sense, even when c is fixed at any value \geq 2; this follows from a reduction from the 3-partition problem, which remains hard under bounded item counts per subset. A fundamental lower bound on the minimum number of bins required, denoted OPT, is \mathrm{OPT} \geq \max\left( \left\lceil \sum_{i=1}^n s_i \right\rceil, \left\lceil \frac{n}{c} \right\rceil \right), where n is the number of items and s_i > 0 are their sizes; the first term accounts for total size, while the second enforces the cardinality limit. For fixed c, the problem admits a polynomial-time approximation scheme (PTAS) achieving an approximation of (1 + \epsilon) \mathrm{OPT} for any \epsilon > 0, by enumerating feasible bin configurations (polynomial in number for bounded c) and applying dynamic programming on grouped item types after scaling small items. This variant arises in practical settings such as vehicle loading, where bins represent cargo holds or trucks limited not only by weight or volume but also by the maximum number of passengers or items (e.g., packages) they can accommodate, and in computing, such as assigning tasks to CPU cores with caps on the number of processes per core to manage overhead or fairness. These applications highlight the need for balanced packing under dual constraints, often requiring tailored heuristics beyond classical size-only methods.
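The dual lower bound above can be computed directly; a minimal Python sketch (illustrative names):

```python
import math

def lower_bound_cardinality(sizes, c):
    """Lower bound for bin packing with at most c items per unit bin:
    the max of the fractional size bound and the counting bound ceil(n/c)."""
    return max(math.ceil(sum(sizes)), math.ceil(len(sizes) / c))

# six items of size 0.2 with c = 2: size bound gives 2, counting bound gives 3
print(lower_bound_cardinality([0.2] * 6, c=2))  # -> 3
```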

Non-Additive Functions

In the non-additive variant of the bin packing problem, the objective shifts from minimizing the number of bins to minimizing the total cost, where the cost of each bin is a non-linear function f of its load, defined as the sum of item sizes assigned to it. Formally, given items with sizes s_i \in (0,1] and bins of unit capacity, the goal is to partition the items into subsets B_j such that \sum_{i \in B_j} s_i \leq 1 for each j, minimizing \sum_j f\left( \sum_{i \in B_j} s_i \right), where f: [0,1] \to \mathbb{R}_{\geq 0} is typically non-decreasing and satisfies f(0) = 0. This generalizes the standard additive objective, which corresponds to f(x) = 1 for x > 0, reducing to minimizing the number of non-empty bins. The problem is NP-hard, as it encompasses the classical bin packing problem as a special case. Approximation algorithms exploit properties of f; for monotone non-decreasing f, heuristics such as constructive and local search methods achieve a worst-case performance ratio of 2, meaning the algorithm's cost is at most twice the optimal cost. When f is additionally concave—reflecting economies of scale where marginal costs decrease with higher utilization—an asymptotic polynomial-time approximation scheme (APTAS) exists, yielding a solution cost of at most (1 + \epsilon) times the optimal plus a constant term depending on \epsilon > 0, in polynomial time. Practical examples include scenarios with concave f, such as shipping or warehousing, where bin costs grow sublinearly with load because fixed overheads dominate at low utilization; for instance, a piecewise-linear concave f might model transport costs that transition from higher to lower marginal rates as load increases. Quadratic (convex) costs arise in energy-related applications, where power consumption grows superlinearly with load (e.g., f(x) \propto x^2), incentivizing balanced loads across bins to minimize total energy in data centers. Variable effective bin costs can also emerge if f encodes setup fees scaled by utilization, though bins remain fixed-capacity.

Knapsack and Multiple Knapsack

The bin packing problem exhibits a strong duality with the knapsack problem, where the single knapsack variant serves as a foundational counterpart. In the single knapsack problem, given a set of items with sizes and values, the objective is to select a subset that maximizes the total value without exceeding a fixed bin capacity, typically normalized to 1. When values are set equal to sizes, this reduces to maximizing the packed weight in one bin, providing insight into the maximum utilization possible per bin in packing scenarios. This maximization contrasts with bin packing's goal of minimizing the number of bins needed to accommodate all items, effectively framing bin packing as the problem of covering the item set with the fewest such maximal knapsack solutions. The multiple knapsack problem extends this duality by considering a fixed number k of bins, each of capacity 1, and seeking to assign items to these bins to maximize the total packed value (or weight, when values equal sizes). In this setting, bin packing for a given k corresponds to determining whether the multiple knapsack solution can pack the entire instance, i.e., whether the maximum packed weight equals the total item size. Approximation algorithms for the multiple knapsack problem leverage techniques such as linear programming (LP) rounding to achieve performance guarantees; for instance, LP-based methods yield constant-factor approximations, with schemes achieving a (1 + \epsilon)-approximation for any \epsilon > 0 in polynomial time via a PTAS. These approaches often relax the assignment to fractional solutions and round them while preserving feasibility and value bounds. A key connection between bin packing variants and knapsack problems arises in the Gilmore-Gomory approach for the cutting stock problem, which treats cutting patterns as columns in an integer programming formulation. Here, column generation iteratively solves a pricing subproblem formulated as a 0-1 knapsack problem to identify beneficial patterns that improve the current solution, linking the minimization of stock usage (akin to bin packing) directly to knapsack optimizations for pattern efficiency. This method, introduced in the early 1960s, remains influential for large-scale instances where enumerating all possible patterns is infeasible. To bound the optimal bin count in bin packing, a standard lower bound utilizes the knapsack maximum fill: let S denote the total size of all items, and let f be the maximum weight packable into a single bin of capacity 1, computed via the 0-1 knapsack problem with item values equal to their sizes. Then, the minimum number of bins satisfies \text{OPT} \geq \left\lceil \frac{S}{f} \right\rceil, since no bin can exceed fill f, providing a tight relaxation-based estimate especially when item sizes limit dense packings. This bound is routinely evaluated alongside others like LP relaxations for assessing performance.
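A Python sketch of the knapsack-based bound, assuming sizes are scaled to integers for the classical 0-1 knapsack DP table (names and the scaling factor are illustrative; exact rational sizes would avoid rounding):

```python
import math

def max_single_bin_fill(sizes, capacity=1.0, scale=1000):
    """0-1 knapsack with value = size: the maximum weight one bin can hold."""
    cap = int(round(capacity * scale))
    weights = [int(round(s * scale)) for s in sizes]
    dp = [0] * (cap + 1)
    for w in weights:
        for c in range(cap, w - 1, -1):   # iterate downward: each item used once
            dp[c] = max(dp[c], dp[c - w] + w)
    return dp[cap] / scale

def knapsack_lower_bound(sizes):
    """OPT >= ceil(total size / max single-bin fill)."""
    f = max_single_bin_fill(sizes)
    return math.ceil(sum(sizes) / f - 1e-9)

sizes = [0.6, 0.6, 0.6, 0.6]           # no bin can be filled beyond 0.6
print(knapsack_lower_bound(sizes))      # -> 4, stronger than ceil(2.4) = 3
```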

Cutting Stock Problem

The cutting stock problem represents a practical application of bin packing principles in manufacturing, particularly in industries such as paper production, steel, and textiles, where raw material of fixed dimensions must be cut into smaller pieces to meet specific demands while minimizing waste. In the one-dimensional case, the problem involves cutting rolls or sheets of standard width W into items of various widths w_i to satisfy demands b_i for each item type i = 1, \dots, m, with the goal of using the fewest stock rolls possible or, equivalently, minimizing total trim loss (unused material). Unlike the classical bin packing problem, which treats items as unique and focuses solely on packing feasibility, the cutting stock problem accounts for multiplicities in demands, allowing multiple identical items per pattern, and optimizes trim loss as the primary objective. The problem was first formally modeled as an optimization problem by Kantorovich in 1939, with the English translation published in 1960. In this formulation, let a_{ji} denote the number of pieces of item i cut from pattern j, where each pattern j satisfies \sum_i a_{ji} w_i \leq W, and x_j is the number of times pattern j is used. The model minimizes the total number of stock rolls \sum_j x_j subject to \sum_j a_{ji} x_j = b_i for each i, with x_j \geq 0 integer. This set partitioning structure captures the exact demands but leads to an exponential number of possible patterns, making direct solution impractical for large instances. To address the large number of variables, the problem is efficiently solved using column generation, as introduced by Gilmore and Gomory in their seminal 1961 work. This technique starts with a restricted master problem—a linear program over a subset of patterns—and iteratively generates new columns (cutting patterns) by solving a knapsack subproblem that maximizes the total dual value of a pattern: \max \sum_i \pi_i a_i subject to \sum_i w_i a_i \leq W and a_i \geq 0 integer, where \pi_i are the dual prices from the master problem. The process converges to the optimal LP solution, after which rounding or branch-and-price extensions yield near-optimal integer solutions. For instances with large demands b_i, the Gilmore-Gomory procedure provides a (1 + \epsilon)-approximation to the optimal integer solution for any fixed \epsilon > 0, leveraging the fact that the LP relaxation becomes nearly integral and rounding introduces negligible relative waste. This guarantee arises because high demand multiplicities ensure that the additional stock rolls needed for integrality are a small fraction of the total, distinguishing the problem from bin packing, where demands are typically unit-sized.

Scheduling and Resource Allocation

The bin packing problem arises naturally in parallel machine scheduling, where the objective is to assign jobs to identical processors to minimize the makespan, defined as the completion time of the last job. In the standard notation P||C_max, n jobs each with processing time p_i must be non-preemptively scheduled on m identical machines, and the problem is equivalent to determining the minimum capacity T such that the jobs can be packed into at most m bins of size T. This connection allows bin packing techniques to provide lower bounds and approximation algorithms for scheduling; for instance, the NP-hardness of deciding whether all jobs fit within a given T directly implies the NP-hardness of P||C_max when m is part of the input. A prominent approximation algorithm for P||C_max is the Longest Processing Time (LPT) rule, which sorts jobs in non-increasing order of processing times and assigns each job to the machine with the current smallest total load—mirroring the First Fit Decreasing (FFD) heuristic in bin packing. Graham showed that LPT guarantees a makespan at most \frac{4}{3} - \frac{1}{3m} times the optimal, establishing it as a (4/3 - 1/(3m))-approximation algorithm. This bound is tight for certain instances, such as when job sizes lead to suboptimal packing similar to the classic bin packing example with items of size 1/2 + \epsilon and 1/3. In resource allocation contexts, such as distributing computational tasks across processors, LPT and related bin packing heuristics minimize the makespan by efficiently balancing loads, ensuring no single processor is overburdened beyond necessity. The optimal C_{\max}^* satisfies the lower bound C_{\max}^* \geq \max\left( \frac{\sum_{i=1}^n p_i}{m}, \max_{i} p_i \right), which parallels the fundamental bounds in bin packing: the total size divided by the bin capacity, and the largest single item. This bound is computable in linear time and serves as a quick estimate for practical scheduling. In cloud computing, bin packing formulations address virtual machine (VM) placement, where VMs with resource demands (e.g., CPU, memory) are allocated to physical hosts to minimize the number of active servers or balance loads across a data center. Online variants of bin packing, such as First Fit or Best Fit, are commonly applied for dynamic VM provisioning, reducing energy consumption by consolidating workloads onto fewer hosts while respecting capacity constraints. For example, in large-scale environments like Amazon EC2, these heuristics approximate solutions to the multidimensional bin packing problem inherent in multi-resource allocation.
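A compact Python sketch of the LPT rule using a min-heap of machine loads (illustrative names); on the classic instance below with m = 3, it attains exactly the tight ratio 11/9 = 4/3 - 1/(3m):

```python
import heapq

def lpt_makespan(jobs, m):
    """Longest Processing Time rule for P||Cmax.

    Sort jobs in non-increasing order and repeatedly assign the next job
    to the least-loaded machine; guarantees <= (4/3 - 1/(3m)) * optimum.
    """
    loads = [0.0] * m
    heapq.heapify(loads)                 # min-heap of machine loads
    for p in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)     # machine with the smallest load
        heapq.heappush(loads, least + p)
    return max(loads)

# tight example: optimum is 9 ({5,4}, {5,4}, {3,3,3}), LPT yields 11
print(lpt_makespan([5, 5, 4, 4, 3, 3, 3], m=3))  # -> 11
```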