Quadratic unconstrained binary optimization

Quadratic unconstrained binary optimization (QUBO) is a combinatorial optimization problem that seeks to minimize a quadratic function over binary variables, formulated as finding the vector x \in \{0,1\}^n that minimizes x^T Q x + c^T x, where Q is an n \times n real matrix and c is a vector in \mathbb{R}^n. This model unifies a wide range of NP-hard problems, including maximum cut, stable set, and graph partitioning, by encoding constraints implicitly through penalty terms in the objective function. The origins of QUBO trace back to the late 1950s and early 1960s, with foundational work on pseudo-Boolean functions by researchers such as Fortet (1959–1960) and Hammer and Rudeanu (1968), who formalized quadratic forms in binary variables for operations research applications. Subsequent developments in the 1970s and 1980s, including contributions from Glover, Hammer, and others, advanced algorithms for 0-1 quadratic optimization and established its NP-hardness. QUBO gained renewed prominence in the 2010s due to its equivalence to the Ising model in statistical physics, enabling applications on quantum annealing hardware like D-Wave systems and related digital annealing platforms. Key features of QUBO include its flexibility in modeling diverse problems through binary decision variables and quadratic interactions, as well as its amenability to both classical solvers (e.g., semidefinite relaxations, metaheuristics) and quantum-inspired methods. While general QUBO is intractable, special cases, such as those on planar graphs or with low-rank matrices, admit polynomial-time solutions, and approximation algorithms achieve ratios on the order of 1/\log n via SDP-based rounding. Notable applications span machine learning (e.g., feature selection and clustering), bioinformatics (e.g., RNA folding, protein folding), logistics (e.g., vehicle routing), and computer vision (e.g., segmentation via energy minimization). This versatility positions QUBO as a unifying framework for advancing optimization in both theoretical and practical domains, particularly with emerging hardware accelerations.

Definition and Formulation

Problem Statement

Quadratic unconstrained binary optimization (QUBO) provides a unified framework for modeling and solving a broad class of NP-hard problems through the use of binary decision variables x_i \in \{0,1\}. This approach represents decision choices as binary states, with interactions between variables captured in a quadratic function, enabling the encoding of diverse discrete problems such as graph partitioning and facility location without additional constraints beyond the binary requirement. By reducing complex constrained formulations to an unconstrained form via penalty terms, QUBO simplifies the structure while preserving the problem's essential features. The origins of QUBO trace back to the late 1950s and early 1960s, when studies on pseudo-Boolean functions and binary quadratic optimization emerged in operations research. Seminal work by Hammer and Rudeanu in 1968 formalized these concepts in the context of Boolean methods for optimization, laying the groundwork for unconstrained quadratic binary programming. Subsequent contributions by Fred Glover in the 1970s advanced binary optimization techniques, further developing QUBO's applications in areas like project selection and clustering. QUBO's value stems from its versatility in transforming varied combinatorial problems into a standard quadratic format, which supports efficient solving with algorithms like tabu search and specialized hardware such as quantum annealers from D-Wave Systems. This encoding facilitates exploration by classical and quantum methods alike, often outperforming problem-specific solvers for large instances. For example, a basic two-variable QUBO instance seeks to minimize x_1 x_2 + x_1 + x_2, for which the optimal assignment sets both variables to zero and achieves the minimum value of zero.
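
The two-variable instance above can be checked by brute force. The following Python sketch (the `qubo_value` helper and the upper-triangular (i, j) dictionary convention are illustrative assumptions, not part of the original text) enumerates all four assignments and recovers the zero optimum.

```python
import itertools

# Illustrative two-variable QUBO: f(x1, x2) = x1 + x2 + x1*x2,
# stored as upper-triangular coefficients {(i, j): value, i <= j}.
Q = {(0, 0): 1, (1, 1): 1, (0, 1): 1}

def qubo_value(x, Q):
    """Evaluate sum_{i<=j} Q[i,j] * x[i] * x[j] for a binary tuple x."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(itertools.product([0, 1], repeat=2), key=lambda x: qubo_value(x, Q))
print(best, qubo_value(best, Q))   # (0, 0) with objective value 0
```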

Mathematical Expression

The quadratic unconstrained binary optimization (QUBO) problem seeks to find a binary vector x \in \{0,1\}^n that minimizes the objective x^T Q x + c^T x, where Q is an n \times n matrix and c is a vector in \mathbb{R}^n, both containing real-valued entries. Linear terms from an original problem can be absorbed into this quadratic objective by adding the components of c to the diagonal entries of Q, yielding the compact form x^T Q x (leveraging the property x_i^2 = x_i). This naturally incorporates both linear and quadratic contributions: the diagonal elements q_{ii} (including absorbed linear biases) capture the linear terms q_{ii} x_i, while the off-diagonal elements q_{ij} (for i \neq j) encode the pairwise interactions. In the common upper-triangular convention, the objective expands to x^T Q x + c^T x = \sum_{i=1}^n (q_{ii} + c_i) x_i + \sum_{1 \leq i < j \leq n} q_{ij} x_i x_j; when Q is instead stored symmetrically, each off-diagonal entry q_{ij} holds half the pairwise interaction strength so that summing over both (i,j) and (j,i) does not double-count it. In standard notation, Q serves as the interaction matrix, with diagonal elements q_{ii} termed biases (representing unary influences on each variable) and off-diagonal elements q_{ij} termed couplings (representing pairwise interactions between variables). This convention facilitates formulation across diverse optimization contexts, including those amenable to quantum annealing hardware.
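
A small NumPy illustration of absorbing the linear term into the diagonal; the helper names and the matrix and vector values below are arbitrary examples chosen for this sketch, not taken from the text.

```python
import numpy as np

def absorb_linear(Q, c):
    """Fold the linear term c^T x into the diagonal of Q, using x_i^2 = x_i."""
    Q2 = np.array(Q, dtype=float).copy()
    Q2[np.diag_indices_from(Q2)] += c
    return Q2

def objective(Q, c, x):
    """Evaluate x^T Q x + c^T x for a binary vector x."""
    x = np.asarray(x, dtype=float)
    return x @ Q @ x + c @ x

Q = np.array([[0.0, 2.0], [2.0, 0.0]])   # symmetric couplings (illustrative)
c = np.array([-1.0, 3.0])                # linear biases
Qc = absorb_linear(Q, c)

x = np.array([1, 0])
assert np.isclose(objective(Q, c, x), x @ Qc @ x)  # compact form agrees
```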

Properties

Computational Complexity

The quadratic unconstrained binary optimization (QUBO) problem is NP-hard. This hardness follows from a polynomial-time reduction from the maximum cut problem, a well-known NP-hard combinatorial optimization task that can be directly formulated as a QUBO instance by setting the off-diagonal elements of the matrix Q to the negated edge weights and the diagonal elements to the weighted vertex degrees. The decision version of QUBO, determining whether there exists a binary vector \mathbf{x} \in \{0,1\}^n such that \mathbf{x}^T Q \mathbf{x} \leq k for a given threshold k, is NP-complete. Membership in NP is straightforward, as a proposed solution \mathbf{x} can be verified in polynomial time by computing the quadratic form. Completeness arises from the equivalence between QUBO and the Ising spin glass model, whose decision problem (e.g., whether the ground-state energy is at most zero) is NP-complete via reductions from problems like 3-SAT. Certain special cases of QUBO are solvable in polynomial time. If Q is diagonal, the objective decouples into n independent linear terms over single binary variables, allowing the minimum to be found by evaluating each term separately in O(n) time. If all off-diagonal entries of Q are non-positive, the objective is a submodular quadratic pseudo-Boolean function, and its exact minimum can be computed in polynomial time via a minimum-cut/maximum-flow computation. By contrast, positive semidefiniteness of Q only makes the continuous relaxation convex; the binary minimization remains NP-hard in general, since even rank-one convex objectives of the form (a^T \mathbf{x} - b)^2 encode subset sum. QUBO remains NP-hard in parameterized settings with sparsity constraints on Q. Specifically, the problem is NP-hard even when Q has only O(n) nonzero entries, as the reduction from maximum cut preserves sparsity: maximum cut is NP-hard on cubic graphs (maximum degree 3), which yield QUBO instances with O(n) nonzeros corresponding to the O(n) edges.
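
The diagonal special case can be made concrete in a few lines of Python; the function name and sample coefficients below are illustrative only.

```python
import numpy as np

def solve_diagonal_qubo(q_diag):
    """Exact minimizer when Q is diagonal: the objective decouples into
    sum_i q_ii * x_i, so set x_i = 1 exactly when q_ii < 0.  O(n) time."""
    q_diag = np.asarray(q_diag, dtype=float)
    x = (q_diag < 0).astype(int)
    return x, float(q_diag[x == 1].sum())

x, val = solve_diagonal_qubo([-2.0, 0.5, -0.1, 3.0])
print(x, val)   # [1 0 1 0] with objective -2.1
```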

Mathematical Characteristics

Quadratic unconstrained binary optimization (QUBO) constitutes a specialized instance of quadratic programming, wherein the decision variables are confined to the vertices of the binary hypercube \{0,1\}^n. The objective is to minimize f(x) = x^T Q x + c^T x, with Q a symmetric n \times n matrix and c \in \mathbb{R}^n. This formulation inherits the combinatorial structure of the hypercube, rendering the feasible set discrete and finite, while the quadratic nature introduces potential non-convexity depending on the spectral properties of Q. The matrix Q fundamentally shapes the optimization landscape. As a symmetric matrix, its definiteness is determined by its eigenvalues: if Q is positive semidefinite (all eigenvalues non-negative), f(x) is convex over the continuous relaxation, facilitating easier analysis. However, in typical QUBO instances, Q is indefinite, featuring both positive and negative eigenvalues, which induces non-convexity and results in a multimodal objective function replete with local minima over the binary domain. This indefiniteness arises frequently in applications, as off-diagonal elements of Q encode interactions that can be attractive or repulsive, mirroring spin systems without explicit constraints. The eigenvalue spectrum of Q further elucidates the problem's intrinsic challenges. The presence of negative eigenvalues signals non-convexity, with the number of such eigenvalues correlating with the ruggedness of the energy landscape; more negative eigenvalues generally increase the number and depth of local minima, raising the likelihood that local search methods return suboptimal solutions. For instance, even low-rank indefinite Q (e.g., rank 2) can harbor exponentially many local optima, underscoring the deceptive nature of the discrete quadratic form. To derive bounds, QUBO problems are often relaxed to continuous quadratic optimization over [0,1]^n, which preserves the objective but expands the domain. When Q is indefinite, this relaxation remains non-convex, but semidefinite programming (SDP) provides a convex outer approximation yielding lower bounds on the minimum value. A standard SDP relaxation minimizes \langle Q, X \rangle subject to the moment-matrix constraint \begin{pmatrix} 1 & \operatorname{diag}(X)^T \\ \operatorname{diag}(X) & X \end{pmatrix} \succeq 0; after the linear transformation to the equivalent problem over \{-1,1\} variables, this reduces to the constraints \operatorname{diag}(X) = \mathbf{1} and X \succeq 0, solvable in polynomial time via interior-point methods and offering tight bounds for certain structured instances.
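
The \{-1,1\}-variable SDP relaxation described above can be sketched with the cvxpy modeling library (assumed to be installed; the coupling matrix J is an arbitrary toy example, not from the text). The relaxation's optimum is a lower bound on the exact binary optimum, which the brute-force loop confirms on this tiny instance.

```python
import numpy as np
import cvxpy as cp

# Illustrative symmetric coupling matrix for the +/-1 (Ising-style) form.
J = np.array([[0.0,  1.0, -2.0],
              [1.0,  0.0,  0.5],
              [-2.0, 0.5,  0.0]])
n = J.shape[0]

# SDP relaxation: replace the rank-one matrix s s^T (s in {-1,1}^n) by any
# PSD matrix X with unit diagonal; the optimum lower-bounds min s^T J s.
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(J @ X)),
                  [X >> 0, cp.diag(X) == 1])
prob.solve()
print("SDP lower bound:", prob.value)

# Brute force over {-1,1}^3 for comparison on this toy instance.
best = min(s @ J @ s for s in (np.array([a, b, c])
           for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)))
print("exact optimum:", best)
```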

Equivalences and Mappings

Relation to Ising Model

The quadratic unconstrained binary optimization (QUBO) problem is mathematically equivalent to the classical Ising model from statistical mechanics, allowing for a direct transformation between the two formulations. In the QUBO framework, the objective is to minimize H_{\text{QUBO}} = \sum_{i} Q_{i,i} x_i + \sum_{i < j} Q_{i,j} x_i x_j, where x_i \in \{0, 1\} are binary variables and Q is an upper-triangular matrix of coefficients. The Ising model, conversely, minimizes the Hamiltonian H_{\text{Ising}} = \sum_{i} h_i s_i + \sum_{i < j} J_{i,j} s_i s_j, where s_i \in \{-1, +1\} represent spin variables, h_i are local magnetic fields, and J_{i,j} are coupling strengths between spins. This equivalence arises because both models are binary quadratic optimization problems, differing only in the representation of the binary domain. The transformation between QUBO and the Ising model is achieved via a simple affine mapping of variables. To convert from QUBO to Ising, substitute s_i = 2x_i - 1, which maps the binary x_i to spins s_i. Substituting this into the QUBO objective yields the Ising Hamiltonian H_{\text{Ising}} = \sum_i h_i s_i + \sum_{i<j} J_{i,j} s_i s_j + C, where the constant C = \sum_i \frac{Q_{i,i}}{2} + \sum_{i<j} \frac{Q_{i,j}}{4} does not affect minimization, the linear terms become magnetic fields h_i = \frac{Q_{i,i}}{2} + \frac{1}{4} \sum_{j \neq i} Q_{\min(i,j),\max(i,j)}, and the quadratic terms map to couplings J_{i,j} = \frac{Q_{i,j}}{4}. The reverse mapping from Ising to QUBO uses x_i = \frac{s_i + 1}{2}, transforming the fields and couplings back to QUBO coefficients accordingly. This bijection preserves the structure and optimal solutions of the problems, with linear terms in QUBO corresponding to external fields that bias spins, and quadratic terms to interactions that favor alignments or anti-alignments. This equivalence gained prominence in the 2010s through the development of quantum annealing hardware by D-Wave Systems, which natively solves Ising problems on superconducting quantum processors. The Ising model itself dates to 1925 in statistical physics as a model of magnetism, but its use in optimization was revitalized by D-Wave's 2007 prototype and subsequent systems, prompting widespread reformulation of combinatorial problems as Ising or QUBO instances for hardware compatibility. Seminal works, such as Lucas's 2014 survey, demonstrated how numerous NP-hard problems could be mapped to Ising Hamiltonians, bridging computer science and physics. The mapping is significant because it enables the application of physics-inspired algorithms and hardware designed for the Ising model to solve QUBO problems efficiently. For instance, large effective fields in QUBO-derived Ising instances can simplify spin configurations by coercing many variables to fixed states, potentially reducing computational complexity in certain solvers. This interchangeability has facilitated interdisciplinary advances, allowing optimization researchers to leverage tools from statistical physics while adapting spin models for binary decision problems in applied fields such as finance and logistics.
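
A direct transcription of the conversion formulas above as a Python sketch; the dictionary convention (upper-triangular (i, j) keys with i <= j) and the function name are assumptions made for illustration.

```python
from collections import defaultdict

def qubo_to_ising(Q):
    """Convert an upper-triangular QUBO dict {(i, j): Q_ij, i <= j} to
    Ising fields h, couplings J, and offset C via s_i = 2*x_i - 1,
    following h_i = Q_ii/2 + (1/4)*sum_j Q_ij and J_ij = Q_ij/4."""
    h, J, C = defaultdict(float), {}, 0.0
    for (i, j), q in Q.items():
        if i == j:
            h[i] += q / 2.0
            C += q / 2.0
        else:
            J[(i, j)] = q / 4.0
            h[i] += q / 4.0
            h[j] += q / 4.0
            C += q / 4.0
    return dict(h), J, C

# Toy instance: f(x) = -x0 + 2*x0*x1
h, J, C = qubo_to_ising({(0, 0): -1.0, (0, 1): 2.0})
print(h, J, C)   # {0: 0.0, 1: 0.5} {(0, 1): 0.5} 0.0
```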

Reductions to Other Problems

Quadratic unconstrained binary optimization (QUBO) provides a versatile framework for solving NP-hard combinatorial problems, as numerous such problems can be polynomially reduced to QUBO by encoding binary decision variables and their interactions directly into the Q matrix. The diagonal elements of Q represent linear costs or rewards associated with individual binary variables, while off-diagonal elements capture pairwise interactions, allowing constraints and objectives to be integrated without explicit enforcement mechanisms. This reduction strategy leverages penalty functions to handle constraints implicitly, transforming constrained problems into unconstrained quadratic forms over binary variables. A prominent example is the reduction of the minimum vertex cover problem, where binary variables x_j \in \{0,1\} indicate whether vertex j is included in the cover (x_j = 1) or not. The objective minimizes the cover size via linear terms \sum_{j \in V} x_j, while ensuring every edge is covered adds penalty terms P \sum_{(i,j) \in E} (1 - x_i - x_j + x_i x_j), with P > 1 a sufficiently large scalar to enforce feasibility. The resulting formulation is \min \mathbf{x}^T Q \mathbf{x} (plus the constant P|E|), where the expanded penalty contributes -P to the diagonal entry of each endpoint of an edge and +P to the corresponding off-diagonal entries, so that leaving an edge uncovered always costs more than the unit inclusion cost it saves, prioritizing minimal covers. The maximum independent set problem similarly reduces to QUBO, using binary variables x_i \in \{0,1\} to denote inclusion in the set. To maximize the set size while prohibiting adjacent vertices, the formulation minimizes H = -\sum_{i \in V} x_i + 2 \sum_{(i,j) \in E} x_i x_j, with negative diagonal entries in Q encouraging vertex selection and positive off-diagonal entries for edges penalizing violations. This setup ensures the optimal solution corresponds to the largest non-adjacent vertex subset. QUBO is synonymous with unconstrained binary quadratic programming, serving as a canonical representation that facilitates these reductions across diverse domains. In modeling constrained problems, QUBO incorporates restrictions, such as equalities A\mathbf{x} = b or inequalities, via quadratic penalty terms like P \|A\mathbf{x} - b\|^2, embedding them into the objective to maintain the pure unconstrained quadratic structure without separate constraint sets. Conversely, QUBO itself, being NP-hard, can be reduced to other NP-hard problems to leverage specialized solvers; for instance, any QUBO instance can be transformed into an equivalent MaxCut problem by constructing a graph whose edge weights derive from the off-diagonal Q elements and whose additional auxiliary vertex (or gadgets) accounts for the diagonal terms, allowing MaxCut algorithms to solve the original QUBO. Similar graph-based transformations extend this to problems like the traveling salesman problem via embeddings that map QUBO interactions onto TSP tour constraints.
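
The vertex cover penalty expansion can be written out explicitly. The NumPy sketch below (illustrative function name and penalty value) builds the symmetric Q matrix described above; the constant P|E| is dropped since it does not change the minimizer.

```python
import numpy as np

def vertex_cover_qubo(n, edges, P=5.0):
    """Symmetric Q for minimum vertex cover:
    minimize sum_j x_j + P * sum_{(i,j) in E} (1 - x_i - x_j + x_i*x_j).
    P must exceed 1 so uncovering an edge is never profitable; the constant
    P*|E| from the penalty is dropped (it does not affect the minimizer)."""
    Q = np.zeros((n, n))
    for j in range(n):
        Q[j, j] += 1.0          # unit cost of including vertex j
    for i, j in edges:
        Q[i, i] -= P            # -P*x_i from the expanded penalty
        Q[j, j] -= P            # -P*x_j
        Q[i, j] += P / 2.0      # +P*x_i*x_j, split across the two
        Q[j, i] += P / 2.0      # symmetric off-diagonal entries
    return Q

Q = vertex_cover_qubo(3, [(0, 1), (1, 2)])   # path graph 0-1-2
x = np.array([0, 1, 0])                      # vertex 1 covers both edges
print(x @ Q @ x + 5.0 * 2)                   # 1.0: cover size, no violations
```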

Applications

Maximum Cut Problem

The maximum cut problem seeks to partition the vertices of an undirected graph into two disjoint subsets such that the total weight of edges with endpoints in different subsets is maximized. For an unweighted graph, this corresponds to maximizing the number of crossing edges; for a weighted graph with edge weights w_{ij} > 0, it maximizes \sum_{(i,j) \in E} w_{ij} (x_i (1 - x_j) + x_j (1 - x_i)), where x_i \in \{0,1\} assigns vertex i to one subset if x_i = 1 and the other if x_i = 0. This objective simplifies to \sum_i x_i \left( \sum_{j \sim i} w_{ij} \right) - 2 \sum_{(i,j) \in E} w_{ij} x_i x_j, which is a quadratic form \mathbf{x}^T Q \mathbf{x} with Q_{ii} = \sum_{j \sim i} w_{ij} (the weighted degree of vertex i) and Q_{ij} = Q_{ji} = -w_{ij} if (i,j) \in E, or 0 otherwise. Maximizing this form yields the maximum cut value, making Max-Cut a direct QUBO instance; equivalently, minimizing -\mathbf{x}^T Q \mathbf{x} aligns with standard QUBO minimization frameworks. For example, consider an unweighted graph with 5 vertices and edges \{ (1,2), (1,3), (2,4), (3,4), (3,5), (4,5) \}. The corresponding Q matrix is Q = \begin{pmatrix} 2 & -1 & -1 & 0 & 0 \\ -1 & 2 & 0 & -1 & 0 \\ -1 & 0 & 3 & -1 & -1 \\ 0 & -1 & -1 & 3 & -1 \\ 0 & 0 & -1 & -1 & 2 \end{pmatrix}. The optimal assignment \mathbf{x} = (0,1,1,0,0) places vertices 2 and 3 in one subset and 1, 4, 5 in the other, yielding \mathbf{x}^T Q \mathbf{x} = 5 and a cut of 5 edges (specifically, the crossing edges (1,2), (1,3), (2,4), (3,4), and (3,5)). Max-Cut has served as a canonical QUBO application in combinatorial optimization since the 1970s, highlighted by seminal approximation algorithms such as the Goemans-Williamson method, which achieves a 0.878-approximation ratio via semidefinite relaxation.
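
The worked 5-vertex example can be reproduced mechanically; the following sketch (vertices relabelled 0 through 4, illustrative helper name) builds Q from the edge list and confirms that the stated assignment attains a cut value of 5.

```python
import numpy as np

def maxcut_qubo(n, edges):
    """Symmetric Q for unweighted Max-Cut: vertex degrees on the diagonal,
    -1 on the off-diagonal entries of each edge, so x^T Q x equals the cut size."""
    Q = np.zeros((n, n))
    for i, j in edges:
        Q[i, i] += 1.0
        Q[j, j] += 1.0
        Q[i, j] -= 1.0
        Q[j, i] -= 1.0
    return Q

# The 5-vertex example from the text, with vertices relabelled 0..4.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
Q = maxcut_qubo(5, edges)
x = np.array([0, 1, 1, 0, 0])
print(x @ Q @ x)   # 5.0, the maximum cut value for this graph
```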

Graph Partitioning and Clustering

Graph partitioning problems involve dividing the vertices of an undirected graph into two subsets of approximately equal size while minimizing the number of edges that connect vertices across the subsets, known as the cut size. This can be formulated as a QUBO problem by assigning binary variables x_i \in \{0,1\} to each vertex i, where x_i = 1 indicates assignment to one subset and 0 to the other. The objective function minimizes the cut size through terms in the Q matrix that penalize differing assignments for connected vertices, such as Q_{ij} = -\gamma for edges (i,j) \in E, where \gamma > 0 is a scaling factor derived from the edge weights. Balance constraints are encoded via quadratic penalties in Q, for example, by adding terms like \lambda \left( \sum_i x_i - n/2 \right)^2, where \lambda > 0 enforces equal sizes by penalizing deviations from half the total vertices n; this expands into diagonal and off-diagonal entries in Q that discourage imbalance. Such formulations for graph partitioning have been applied in VLSI design since the 1990s to optimize circuit placement and routing by minimizing wire lengths modeled as cut edges, with early quadratic binary models paving the way for modern unconstrained approaches. In network analysis, these methods facilitate balanced divisions for studying network structures, emerging prominently in the 2010s with quantum-enhanced solvers. Unlike the maximum cut problem, which places no constraint on subset sizes while maximizing crossing edges, graph partitioning prioritizes size equality for practical load balancing. For clustering tasks, a k-means-like binary clustering (with k=2) can be expressed in QUBO form by setting the Q matrix to reward intra-cluster similarity, such as minimizing squared distances between points assigned to the same cluster via terms Q_{ij} = d(p_i, p_j) for data points p_i, p_j, where positive entries penalize (discourage) co-assignment of dissimilar points. Imbalance is penalized quadratically, similar to partitioning, ensuring each cluster has roughly half the points; for instance, the full objective becomes \min_x x^T Q x + \lambda \left( \sum_i x_i - N/2 \right)^2, where N is the number of points. This approach targets global optima unlike iterative classical k-means, with empirical tests showing competitive results on small datasets. A specific QUBO formulation arises in modularity maximization for community detection, where the goal is to partition graphs into communities with dense internal connections. The modularity matrix B_{ij} = A_{ij} - \gamma \frac{d_i d_j}{2m} defines the Q entries, with A_{ij} the adjacency matrix, d_i the vertex degrees, m the total number of edges, and \gamma a resolution parameter; positive B_{ij} rewards co-assignment to the same community, while negative values penalize it based on expected edge densities under a null model. The QUBO objective is then \min_x x^T (-B) x + \text{balance terms}, maximizing the modularity Q \approx \frac{1}{4m} \sum_{ij} B_{ij} s_i s_j (with Ising spins s_i = 2x_i - 1, and with the modularity Q distinct from the QUBO matrix) to identify communities in complex networks like social graphs. This approach has been implemented on quantum annealers for scalable community detection.
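
The balance penalty \lambda (\sum_i x_i - n/2)^2 expands into Q entries as follows; this is a minimal NumPy sketch with illustrative parameter values, returning the constant term separately since it does not affect the minimizer.

```python
import numpy as np

def balance_penalty(n, lam, target):
    """Expand lam * (sum_i x_i - target)^2 into a symmetric QUBO matrix,
    using (sum_i x_i)^2 = sum_{i,j} x_i x_j for binary x.  The constant
    lam*target^2 is returned separately."""
    Q = np.full((n, n), lam)                   # lam * (sum_i x_i)^2
    Q[np.diag_indices(n)] -= 2 * lam * target  # -2*lam*target * x_i
    return Q, lam * target ** 2

n, lam = 4, 2.0
Q, const = balance_penalty(n, lam, target=n / 2)
for x in ([1, 1, 0, 0], [1, 1, 1, 0]):
    x = np.array(x)
    print(x, x @ Q @ x + const)   # 0.0 when balanced, 2.0 when off by one
```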

Machine Learning and Other Uses

Quadratic unconstrained binary optimization (QUBO) formulations have found significant applications in machine learning, particularly for feature selection tasks, where the objective is to identify a subset of relevant features from high-dimensional data while minimizing redundancy. In this context, the QUBO problem is constructed by encoding feature importance scores in the linear terms of the objective function and penalizing pairwise redundancies through the quadratic matrix Q, allowing binary variables to represent inclusion or exclusion of features. This approach balances model interpretability and performance, as demonstrated in quantum annealing implementations that outperform classical greedy methods on datasets with thousands of features. For sparse principal component analysis (PCA), QUBO models incorporate quadratic regularization terms to enforce sparsity in the loading vectors, enabling the identification of key principal components with minimal non-zero entries while preserving variance explanation. In finance, QUBO is widely used for portfolio optimization, where binary variables indicate asset inclusion, and the quadratic terms in Q capture risk-return tradeoffs via covariance matrices and expected returns. This formulation addresses the Markowitz mean-variance problem in a discrete setting, often solved with quantum-inspired algorithms to handle the combinatorial search over large asset universes. Empirical studies show that such QUBO-based portfolios achieve Sharpe ratios comparable to continuous optimizations but with enhanced diversification through binary constraints. In bioinformatics, QUBO models extend to RNA secondary structure prediction, where binary variables represent base-pairing decisions, and quadratic terms encode thermodynamic stability and pairing constraints to minimize free energy. RNA design uses QUBO to optimize sequences for target folds, penalizing unfavorable interactions via the Q matrix to achieve desired structures. Protein folding, as in the hydrophobic-polar (HP) lattice model, employs binary variables for residue placements to minimize the folding energy through quadratic interactions; pioneering QUBO formulations in the 2010s enabled quantum solver applications on small peptides. In logistics, QUBO formulations solve vehicle routing problems (VRP), such as the capacitated VRP, by using binary variables for route assignments and quadratic penalties for capacity and distance constraints, facilitating efficient delivery scheduling. For computer vision, QUBO minimizes energy functions in tasks like object detection and segmentation, where binary labels indicate object presence, and quadratic terms model spatial and appearance consistencies. In scheduling problems, QUBO encodes constraints quadratically, as seen in NASA's Deep Space Network antenna assignments, where it optimizes track durations while avoiding overlaps. Applications in recommender systems, emerging around 2020–2024, use QUBO for feature selection over binary user-item interactions, optimizing over sparse matrices; advancements include counterfactual analysis for improved personalization on benchmark datasets.
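
As a concrete illustration of the feature selection encoding, the sketch below places (assumed precomputed) importance scores on the diagonal and pairwise redundancies off the diagonal; all names and numbers are hypothetical, not drawn from any cited study.

```python
import itertools
import numpy as np

def feature_selection_qubo(importance, redundancy, alpha=1.0):
    """Toy feature-selection QUBO: minimize
    -sum_i importance_i * x_i + alpha * sum_{i<j} redundancy_ij * x_i * x_j.
    `importance` (length n) and the symmetric `redundancy` matrix (n x n),
    e.g. absolute feature correlations, are assumed to be precomputed."""
    Q = alpha * np.array(redundancy, dtype=float) / 2.0  # split across (i,j), (j,i)
    np.fill_diagonal(Q, -np.asarray(importance, dtype=float))
    return Q

importance = [0.9, 0.8, 0.1]
redundancy = [[0.00, 0.95, 0.05],
              [0.95, 0.00, 0.20],
              [0.05, 0.20, 0.00]]
Q = feature_selection_qubo(importance, redundancy)

best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best)  # (1, 0, 1): keeps feature 0, drops the redundant feature 1
```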

Solution Methods

Exact Algorithms

Exact algorithms for solving quadratic unconstrained binary optimization (QUBO) problems guarantee finding the global optimum by systematically exploring the 2^n possible binary solutions, where n is the number of variables, while employing bounding techniques to prune the search tree and avoid exhaustive enumeration. These methods are essential due to the NP-hardness of QUBO, which follows from reductions from problems like maximum cut. Branch-and-bound approaches dominate, often incorporating Lagrangian relaxation to derive tight bounds on the objective function, enabling efficient pruning of suboptimal branches. In branch-and-bound for QUBO, the search tree is constructed by sequentially fixing binary variables to 0 or 1, partitioning the problem into subproblems at each node. Lagrangian relaxation is applied to the quadratic form \mathbf{x}^T Q \mathbf{x}, where Q is the coefficient matrix, by dualizing the unfixed variables to obtain a relaxed bound; specifically, the dual function provides a bound that is compared against the current best solution to fathom branches where no improvement is possible. Seminal implementations, such as those by Pardalos and Rodgers, use dynamic preprocessing and parallelization to solve small instances up to around 30 variables, with extensions allowing larger specific cases, though general performance is limited by density. Enhancements like roof duality can further tighten bounds by identifying persistent variables, those provably fixed to their optimal values by the roof dual, reducing the effective search space. For sparse QUBO instances, where the matrix Q has limited nonzero entries corresponding to a graph with low density, dynamic programming exploits structural properties to enumerate states efficiently. Roof duality plays a key role here by computing the convex roof of the quadratic pseudo-Boolean function, allowing variable fixing and decomposition into smaller subproblems solvable via dynamic programming over the variable dependencies, such as in chain-like or tree-structured graphs. This reduces the running time from exponential in n to polynomial in n (though still exponential in a sparsity parameter such as the treewidth), as demonstrated in methods for pseudo-Boolean optimization that extend to sparse QUBO. For example, in instances with bounded treewidth, dynamic programming tables track partial solutions over bags of variables, achieving exact solutions for graphs with hundreds of nodes if sparsity is high. QUBO can also be reformulated as a mixed-integer quadratic program (MIQP) by declaring the variables binary, with linearizations or big-M formulations for the quadratic terms if needed, though modern solvers handle general quadratic objectives directly. Commercial solvers like Gurobi and CPLEX employ branch-and-bound internally with advanced presolving, cutting planes, and heuristics to solve the MIQP exactly, making this a practical exact method despite deviating from pure QUBO algorithms. These tools integrate with QUBO via simple variable declarations and objective specification. Performance of exact solvers is typically limited to roughly n \leq 50 for dense random instances, with solution times ranging from seconds to days depending on structure; sparse cases extend to n > 1000 using graph-based reductions. Benchmarks from QUBO workshops since 2010, such as those comparing SCIP-based branch-and-cut against BiqCrunch, highlight that Lagrangian-enhanced branch-and-bound solves 80-90% of n=30 dense instances within hours, underscoring challenges for larger n.
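
For context, the exponential baseline that branch-and-bound improves on is plain enumeration; the sketch below is only practical for very small n and is not a reimplementation of any of the published exact solvers discussed above.

```python
import itertools
import numpy as np

def exact_qubo_bruteforce(Q):
    """Exhaustively enumerate all 2^n assignments of x^T Q x.  Only viable for
    tiny n; branch-and-bound methods improve on this by pruning via bounds."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

Q = np.array([[-1.0,  2.0],
              [ 2.0, -3.0]])
print(exact_qubo_bruteforce(Q))   # (array([0, 1]), -3.0)
```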

Heuristic and Approximation Techniques

Heuristic and approximation techniques for quadratic unconstrained binary optimization (QUBO) focus on efficiently finding high-quality solutions for large-scale instances where exact methods are computationally prohibitive. These approaches trade optimality for speed, leveraging local improvements, population-based searches, and relaxation-based rounding to navigate the combinatorial landscape of binary variables. Seminal developments include local search methods that iteratively flip variables to reduce the objective function while incorporating mechanisms to avoid stagnation, as well as metaheuristics inspired by natural processes. One prominent local search heuristic is tabu search, introduced by Fred Glover and adapted to binary quadratic programs, including QUBO formulations. In this method, starting from an initial binary solution, the algorithm evaluates neighborhood moves by flipping a single variable and selects the one yielding the largest improvement in the objective function. To escape local minima, it maintains a tabu list of recently flipped variables, temporarily forbidding their reversal to promote diversification, while allowing aspiration criteria to override tabus if a superior solution is found. Glover's adaptive memory variant enhances this by storing elite solutions and guiding future searches toward promising regions, demonstrating superior performance over prior heuristics on challenging instances up to n=100 variables by finding optimal or near-optimal solutions faster than exact branch-and-bound methods. Metaheuristics extend local search through broader exploration strategies. Genetic algorithms treat binary strings as chromosomes representing QUBO solutions and evolve populations via selection, crossover, and mutation operators to minimize the quadratic objective. For instance, uniform crossover exchanges bits between parent solutions to generate offspring, while mutation randomly flips bits to introduce diversity; hybrid variants integrate local search for intensification, outperforming standalone genetic algorithms and tabu search on large QUBO instances with n=200–2500 by discovering new best-known solutions for dense problems. Simulated annealing, another metaheuristic, draws inspiration from thermal annealing by probabilistically accepting worsening flips at high "temperatures" to escape local minima, gradually cooling to favor improvements; applied to unconstrained quadratic pseudo-Boolean functions equivalent to QUBO, it achieves good solution quality on hundreds of test problems with O(n²) work per temperature step. For specific QUBO reductions like the maximum cut problem, approximation algorithms provide worst-case guarantees. The Goemans-Williamson algorithm relaxes the problem to a semidefinite program, solves it to obtain vector embeddings, and then rounds via a random hyperplane to yield a cut; this delivers an expected approximation ratio of at least 0.878 of the optimum, marking a significant advance for graph partitioning applications of QUBO. To scale these techniques, hybrid CPU-GPU implementations parallelize components across hardware. The Diverse Adaptive Bulk Search (DABS) framework, for example, uses CPU threads to manage diverse solution pools and GPU kernels to execute bulk neighborhood evaluations and genetic operations simultaneously on multiple devices, enabling effective solving of instances with n exceeding 1000 variables, such as dense MaxCut problems, while outperforming commercial solvers like Gurobi in solution quality and speed.
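
A minimal single-flip simulated annealing sketch for x^T Q x, assuming a symmetric Q; the cooling schedule, parameters, and instance are illustrative only, and the incremental flip delta exploits x_i^2 = x_i.

```python
import numpy as np

def simulated_annealing_qubo(Q, n_sweeps=200, t0=2.0, t1=0.01, seed=0):
    """Single-flip simulated annealing on x^T Q x (symmetric Q).
    Illustrative sketch: geometric cooling, no restarts or tabu memory."""
    rng = np.random.default_rng(seed)
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    x = rng.integers(0, 2, size=n)
    for T in np.geomspace(t0, t1, n_sweeps):
        for i in rng.permutation(n):
            # Objective change from flipping bit i (uses x_i^2 = x_i).
            delta = (1 - 2 * x[i]) * (Q[i, i] + 2 * Q[i] @ x - 2 * Q[i, i] * x[i])
            if delta <= 0 or rng.random() < np.exp(-delta / T):
                x[i] ^= 1          # accept improving or, probabilistically, worsening flips
    return x, float(x @ Q @ x)

Q = np.array([[-1.0,  2.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  1.0, -2.0]])
print(simulated_annealing_qubo(Q))   # a low-energy assignment and its objective value
```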

Quantum and Hybrid Approaches

Quantum annealing leverages the equivalence between QUBO problems and the Ising model to map optimization objectives onto the hardware Hamiltonian of quantum annealers, enabling hardware-based solving through adiabatic evolution. D-Wave Systems introduced commercial quantum annealers in 2011, with processors like the D-Wave 2000Q (2017), Advantage (2020), and Advantage2 (with a design target of over 7,000 qubits and improved connectivity) built to natively handle Ising formulations derived from QUBO instances, and successor annealing systems have been announced since. These systems embed QUBO problems by representing binary variables as qubit states and quadratic terms as coupler interactions, allowing the annealer to find low-energy configurations that correspond to near-optimal solutions. Early demonstrations showed applicability to problems like maximum cut, where quantum annealing provided solutions competitive with classical methods for instances up to hundreds of variables. The gate-model quantum approach employs the Quantum Approximate Optimization Algorithm (QAOA), a variational hybrid quantum-classical algorithm introduced by Farhi et al. in 2014, which applies alternating layers of cost and mixer unitaries to approximate solutions for combinatorial problems including QUBO. In QAOA, the objective is encoded as a cost Hamiltonian, and tunable parameters are optimized classically to improve the measured expectation value on noisy intermediate-scale quantum (NISQ) devices such as superconducting processors from IBM and others. As of 2025, this enables solving instances with up to hundreds of variables on devices exceeding 100 qubits, with performance improving through deeper circuits and error mitigation, though limited by hardware coherence times. Hybrid solvers combine quantum-inspired classical techniques with quantum hardware or simulations for larger-scale QUBO problems. Fujitsu's Digital Annealer, first detailed in 2019, uses a CMOS-based architecture mimicking annealing dynamics to solve fully connected Ising models without embedding constraints, achieving near-real-time performance for combinatorial tasks. Post-2020 advancements, including the third-generation unit introduced in 2021 with capacity up to 100,000 bits, integrate software layers for partitioning large problems, demonstrating scalability for dense instances beyond pure quantum limits as of 2025. These hybrids often outperform standalone quantum annealers in solution quality for industrial applications like scheduling. Recent 2025 benchmarks confirm the Digital Annealer's competitive performance on large-scale Max-Cut problems, a key QUBO application. Current quantum and hybrid methods face significant limitations, including embedding overhead in annealers like D-Wave's, where dense graphs require chaining multiple qubits per variable, increasing problem size by factors of 5-10 and amplifying noise effects. NISQ devices introduce gate and readout errors, degrading QAOA accuracy for depths beyond p=3, with error rates necessitating error mitigation techniques. Benchmarks indicate speedups for sparse problems with n≈10^4 variables, where D-Wave hybrids solve instances in seconds compared to hours on classical solvers, though dense cases remain challenging due to overhead.
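
A hedged sketch of expressing a QUBO with the open-source dimod package (assumed installed); the classical ExactSolver stands in for a hardware sampler, which could be swapped in via D-Wave's dwave-system tooling without changing the model object. The coefficient values are illustrative.

```python
# Minimal sketch using dimod; not a depiction of any specific hardware run.
import dimod

Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}        # illustrative QUBO dict
bqm = dimod.BinaryQuadraticModel.from_qubo(Q)        # x^T Q x as a BQM object

sampleset = dimod.ExactSolver().sample(bqm)          # classical stand-in solver
print(sampleset.first.sample, sampleset.first.energy)
# On annealing hardware one would instead pass the same bqm to an embedded or
# hybrid sampler from the dwave-system package.
```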
