
SAT solver

A SAT solver is a computer program designed to determine whether a given Boolean formula, typically expressed in conjunctive normal form (CNF), has a satisfying assignment of truth values to its variables, or to find such an assignment if it exists. It addresses the Boolean satisfiability problem (SAT), the problem of checking the satisfiability of propositional logic formulas, which was the first problem proven to be NP-complete. The origins of SAT solvers trace back to the early 1960s with the development of the Davis–Putnam procedure in 1960 and its refinement into the Davis–Putnam–Logemann–Loveland (DPLL) algorithm in 1962, which introduced systematic search combined with unit propagation to efficiently prune the search space. These early complete solvers laid the foundation for modern approaches, though they were limited in handling large-scale instances due to exponential worst-case complexity. The field advanced significantly in the late 1990s and early 2000s with the introduction of conflict-driven clause learning (CDCL), pioneered in solvers like GRASP (1997) and Chaff (2001), which incorporate non-chronological backtracking, clauses learned from conflicts, and heuristics such as VSIDS (Variable State Independent Decaying Sum) for variable selection. CDCL-based solvers now routinely handle industrial problems with millions of variables and clauses, far surpassing the capabilities of their predecessors. SAT solvers have become indispensable tools across computer science and engineering, powering applications in hardware and software verification, where they check for bugs in complex systems; automated planning; combinatorial tasks like configuration and optimization; and even mathematical theorem proving, such as generating massive proofs for problems like the Boolean Pythagorean triples problem. Annual international SAT competitions, starting in 2002, have driven rapid innovation by benchmarking dozens of solvers on diverse real-world instances, fostering improvements in preprocessing, parallelization, and proof certification techniques like DRAT proofs. Extensions of SAT solving, including maximum satisfiability (MaxSAT), quantified Boolean formulas (QBF), and satisfiability modulo theories (SMT), further expand their utility in fields like formal verification and operations research.

Fundamentals

The Boolean Satisfiability Problem

The Boolean satisfiability problem (SAT) asks whether there exists an assignment of truth values—true or false—to a set of variables such that a given propositional formula evaluates to true. Formally, given a formula \phi over variables x_1, \dots, x_n, SAT determines if there is an assignment \sigma: \{x_1, \dots, x_n\} \to \{\top, \bot\} satisfying \sigma \models \phi. In practice, SAT instances are typically expressed in conjunctive normal form (CNF), a standardized representation consisting of a conjunction of clauses, where each clause is a disjunction of literals and a literal is a variable x_i or its negation \neg x_i. Thus, a CNF formula takes the form \phi = \bigwedge_{j=1}^m C_j, where each C_j = \bigvee_{k=1}^{l_j} l_{jk} and l_{jk} \in \{x_i, \neg x_i \mid 1 \leq i \leq n\}. The problem then reduces to finding an assignment that makes every clause true, which occurs if at least one literal in each clause is true under that assignment. SAT is NP-complete, as proven by Stephen Cook in 1971 through a polynomial-time reduction from the acceptance problem of nondeterministic Turing machines running in polynomial time to SAT. The proof constructs a CNF formula of polynomial size that encodes all possible computation paths of the machine on a given input w; the formula is satisfiable if and only if at least one path accepts w, thereby simulating nondeterministic verification within NP. This establishes both membership in NP (via nondeterministic guessing and verification) and NP-hardness. A simple satisfiable CNF example is \phi = (x_1 \vee \neg x_2) \wedge (x_2 \vee x_3) \wedge (\neg x_1 \vee \neg x_3), which holds true under the assignment x_1 = \top, x_2 = \top, x_3 = \bot. In contrast, the formula \psi = (x_1) \wedge (\neg x_1) is unsatisfiable, as no assignment can make both clauses true simultaneously. Variants of SAT include k-SAT, which restricts clauses to exactly (or at most) k literals. For k \geq 3, k-SAT is NP-complete, following from a reduction of general SAT to k-SAT that splits larger clauses using auxiliary variables—for instance, transforming (l_1 \vee l_2 \vee l_3 \vee l_4) into (l_1 \vee l_2 \vee y) \wedge (\neg y \vee l_3 \vee l_4), where y is a new variable. For k = 2, 2-SAT is solvable in linear time via implication graphs.
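These definitions translate directly into code. The following minimal Python sketch (illustrative only, using the DIMACS convention of encoding a literal x_i as the integer i and \neg x_i as -i) evaluates the two example formulas under candidate assignments:

phi = [[1, -2], [2, 3], [-1, -3]]   # (x1 v ~x2) ^ (x2 v x3) ^ (~x1 v ~x3)
psi = [[1], [-1]]                   # (x1) ^ (~x1): unsatisfiable

def satisfies(formula, assignment):
    # True iff every clause contains at least one literal that the
    # assignment (a dict from variable index to bool) makes true.
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

print(satisfies(phi, {1: True, 2: True, 3: False}))        # True
print(any(satisfies(psi, {1: v}) for v in (True, False)))  # False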

Role and Importance of SAT Solvers

SAT solvers play a pivotal role in computer science as tools for addressing the Boolean satisfiability problem (SAT), recognized as the canonical NP-complete problem through the Cook–Levin theorem. This status makes SAT a benchmark for the P versus NP question: a polynomial-time algorithm for SAT would resolve the millennium prize problem by showing P = NP, while the apparent intractability of SAT in the worst case underscores the fundamental limits of efficient computation. In practice, SAT solvers enable the automated resolution of diverse real-world problems by encoding them as propositional formulas in conjunctive normal form (CNF), transforming complex decision tasks into verifiable instances. For instance, scheduling conflicts or resource constraints can be modeled as SAT problems, allowing solvers to determine feasible configurations without exhaustive enumeration. This encoding approach bridges theoretical computer science with tangible challenges, making SAT solvers indispensable for optimization in domains where manual analysis is infeasible. Modern SAT solvers achieve remarkable efficiency gains, routinely handling industrial-scale instances with millions of variables and clauses—far surpassing brute-force methods that would require exponential time even for modest problem sizes. Their success stems from empirical optimizations that exploit problem structure, enabling solutions to instances intractable a decade ago. The interdisciplinary reach of SAT solvers extends beyond computer science, serving as a foundational engine in artificial intelligence for planning and probabilistic reasoning, in hardware and software verification to ensure system correctness, and in broader optimization tasks that inform decision-making across engineering fields. This versatility positions them as a bridge between theoretical foundations and applied sciences. In chip design and electronic design automation, their deployment has yielded substantial economic impact by streamlining verification processes, reducing development timelines, and minimizing error-related costs in high-stakes industries.

Historical Development

Early Algorithms and Milestones

The Davis–Putnam algorithm, introduced by Martin Davis and Hilary Putnam in 1960, represented the first systematic computational procedure for deciding the satisfiability of propositional formulas, particularly those in conjunctive normal form (CNF). The method relies on resolution to eliminate atomic formulas, combined with unit propagation to simplify clauses containing a single literal and pure literal elimination to remove variables that appear only positively or only negatively. Originally designed for quantification theory in first-order logic, it reduces such problems to propositional satisfiability by eliminating quantifiers through Herbrand interpretations. This approach provided a complete decision procedure, though its reliance on exhaustive resolution steps made it computationally intensive even for modest formula sizes. Building on this foundation, the DPLL algorithm emerged in 1962, developed by Martin Davis, George Logemann, and Donald Loveland as a practical implementation for theorem proving. The key innovation was the incorporation of chronological backtracking, which replaced the full resolution phase with recursive variable assignment and propagation, allowing the algorithm to explore partial assignments and retract choices upon conflicts. This mechanism, paired with unit propagation, enabled more efficient handling of propositional instances derived from theorem-proving problems, marking the birth of search-based SAT solving. Early implementations demonstrated its viability for small-scale automated deduction, though hardware constraints limited its scope. In the 1970s, SAT techniques gained traction in automated theorem proving, with systems integrating DPLL-like procedures to verify mathematical statements and logical inferences, often reducing first-order problems to propositional cases. By the 1980s, resolution-based provers such as Otter, initiated at Argonne National Laboratory, advanced these methods by incorporating paramodulation and hyperresolution for equality handling, applying them successfully to complex deduction tasks in mathematics. These developments highlighted the potential of SAT solving beyond pure logic, paving the way for applications in hardware verification in the following decade, where propositional encodings of circuit behaviors were analyzed. Despite these advances, early algorithms faced significant limitations, including exponential worst-case complexity due to branching in the search tree and the potential for exhaustive search. Performance on large instances was poor, as memory demands for storing clauses and assignments quickly exceeded available resources, and average-case analyses revealed polynomial behavior only for specific random distributions with low clause-to-variable ratios. This restricted practical applications to formulas with fewer than 100 variables until hardware and algorithmic improvements in the following decades.

Evolution to Modern Solvers

The transition from early systematic search algorithms, such as the Davis–Putnam procedure of the 1960s, to more advanced paradigms in the 1990s marked a pivotal shift in SAT solving, driven by the need to handle increasingly large industrial instances. In the mid-1990s, conflict-driven clause learning emerged as a key innovation, with the GRASP solver pioneering conflict analysis to learn from search failures and avoid redundant exploration. This approach laid the groundwork for more efficient complete solvers by dynamically adding clauses that constrain the search space based on detected inconsistencies. A major breakthrough came in 2001 with the Chaff solver, which refined conflict-driven learning through optimized Boolean constraint propagation and introduced the Variable State Independent Decaying Sum (VSIDS) heuristic for variable selection. VSIDS assigns scores to variables based on their occurrence in recent learned clauses, decaying older scores over time to prioritize variables involved in current conflicts, thereby guiding the search toward promising branches with minimal overhead. Chaff demonstrated one to two orders of magnitude speedup over predecessors such as GRASP on hard benchmarks, establishing conflict-driven techniques as the dominant paradigm for complete SAT solving. Subsequent solvers integrated further improvements, such as dynamic restart and phase-selection strategies, enhancing adaptability to diverse problem structures. MiniSat (2003), known for its simplicity and efficiency, further popularized efficient CDCL implementations and dominated early SAT competitions. The establishment of annual SAT competitions beginning in 2002 further accelerated solver evolution by providing standardized benchmarks and fostering competitive optimization. Organized initially by Edward A. Hirsch, Daniel Le Berre, and Laurent Simon in conjunction with the SAT 2002 symposium, these events evaluated solvers on crafted, random, and industrial instances, revealing performance gaps and incentivizing innovations in heuristics and implementation. Competitions drove widespread adoption of techniques like VSIDS and clause learning, with winners often setting new standards that influenced subsequent developments. For instances prone to timeouts in complete solvers, the 1990s saw the rise of incomplete methods based on stochastic local search, which proved effective on hard, satisfiable problems by iteratively flipping variable assignments to reduce the number of unsatisfied clauses. Pioneering algorithms like GSAT (1992), which greedily minimizes clause violations through flips, and WalkSAT (1994), which incorporates noise for escaping local optima, outperformed systematic approaches on random 3-SAT instances by factors of up to 10 in runtime. These methods complemented complete solvers, particularly for tasks where approximate solutions sufficed. By the 2010s, the proliferation of multi-core processors prompted adaptations in SAT solving toward parallelism, enabling solvers to exploit concurrent processing for broader search coverage. Portfolio approaches, which run multiple solver configurations in parallel, and clause-sharing mechanisms, which exchange learned clauses across threads, became prominent, yielding speedups of 4-8 times on multi-core systems for industrial benchmarks. This hardware-driven evolution extended the scalability of SAT solvers to instances with millions of clauses, integrating seamlessly with established techniques like conflict-driven learning.

Core Algorithms

Davis–Putnam–Logemann–Loveland (DPLL) Algorithm

The Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a foundational complete method for deciding the satisfiability of propositional formulas in conjunctive normal form (CNF), serving as the basis for most modern SAT solvers. Introduced as an improvement over earlier resolution-based techniques, it combines systematic search with inference rules to prune the search space efficiently. The algorithm explores partial variable assignments recursively, simplifying the formula at each step until it finds a satisfying assignment or proves unsatisfiability.

Core Components

The DPLL algorithm relies on three key mechanisms: unit propagation, pure literal elimination, and backtracking; the first two reduce the formula without branching. Unit propagation identifies and satisfies unit clauses—clauses containing a single unassigned literal—by assigning the value that makes the literal true, then propagating this assignment: satisfied clauses are removed, and the literal's negation is deleted from the remaining clauses. This process repeats until no unit clauses remain or an empty clause (indicating inconsistency) is detected. Unit propagation ensures that all logical consequences of an assignment are immediately applied, avoiding redundant exploration. Pure literal elimination detects literals that appear in only one polarity (all positive or all negative) across the entire formula and assigns them the value that satisfies their clauses, removing those clauses. This heuristic safely reduces the number of variables without risking inconsistency, since the opposite assignment could only falsify clauses unnecessarily. Backtracking (specifically, chronological backtracking) undoes assignments when a branch leads to inconsistency, returning to the most recent decision point and trying the alternative value. This systematic exploration builds a tree of variable assignments, ensuring completeness by eventually trying all possibilities if needed.
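The first two mechanisms can be sketched compactly in Python, reusing the integer-literal clause lists from the earlier example (the function names are illustrative, not drawn from any particular solver):

def unit_propagate(clauses, assignment):
    # Repeatedly assign the literal of any unit clause, remove clauses it
    # satisfies, and delete its negation elsewhere. Returns the simplified
    # clause list and assignment, or (None, assignment) on a conflict.
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            return clauses, assignment
        assignment[abs(unit)] = unit > 0
        clauses = [[l for l in c if l != -unit] for c in clauses if unit not in c]
        if any(len(c) == 0 for c in clauses):
            return None, assignment            # empty clause derived: conflict

def eliminate_pure_literals(clauses, assignment):
    # Assign every literal that occurs in only one polarity and drop the
    # clauses it satisfies; this can never introduce a conflict.
    literals = {l for c in clauses for l in c}
    pure = {l for l in literals if -l not in literals}
    for lit in pure:
        assignment[abs(lit)] = lit > 0
    return [c for c in clauses if not any(l in pure for l in c)], assignment

For example, unit_propagate([[1], [-1, 2], [-2, 3]], {}) cascades through three unit clauses and returns ([], {1: True, 2: True, 3: True}).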

Recursive Search Procedure

The DPLL procedure operates recursively on a CNF formula φ and a partial assignment M (initially empty). It first applies unit propagation and pure literal elimination to simplify φ under M. If simplification yields an empty formula (no clauses remain), the current assignment is satisfying; if it produces an empty clause, the branch is unsatisfiable, triggering a backtrack. Otherwise, it selects an unassigned variable for branching: it recurses by setting the variable to true (simplifying φ accordingly) and, if that fails, tries false. This branching on variables, combined with simplification, drives the search toward a model or a proof of unsatisfiability. Heuristics, such as choosing a variable from the shortest clause (shortest-clause-first branching), guide variable selection to keep the search tree small in practice.

Pseudocode Outline

The following pseudocode outlines the core DPLL satisfiability check, using standard notation where φ is the CNF formula, M is the partial assignment, BCP denotes Boolean constraint propagation (combining unit propagation and pure literal elimination), and SelectVar chooses an unassigned variable:
function SAT(φ, M):
    if φ is empty:  // No clauses left
        return true, M  // Satisfying assignment found
    if φ contains an empty clause:
        return false, nil  // Unsatisfiable under M
    φ', M' ← BCP(φ, M)  // Apply propagation and simplification
    if φ' contains an empty clause:
        return false, nil
    if φ' is empty:
        return true, M'
    v ← SelectVar(φ')  // Choose an unassigned variable
    // Try v = true
    φ_true, M_true ← Simplify(φ', v=true, M')
    result_true, model_true ← SAT(φ_true, M_true)
    if result_true:
        return true, model_true
    // Backtrack and try v = false
    φ_false, M_false ← Simplify(φ', v=false, M')
    result_false, model_false ← SAT(φ_false, M_false)
    if result_false:
        return true, model_false
    return false, nil  // Both branches fail
This recursive structure ensures termination, as each call reduces the problem size or explores a finite branch.
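The pseudocode maps almost line for line onto executable code. A self-contained Python sketch follows (naive branching, with pure literal elimination folded into the simplification step; this is an illustration rather than an optimized solver):

def dpll(clauses, assignment=None):
    # Return a satisfying assignment (a dict mapping variable -> bool),
    # or None if the clause set is unsatisfiable. Clauses are lists of
    # nonzero integers: i means x_i, -i means its negation.
    if assignment is None:
        assignment = {}
    if any(len(c) == 0 for c in clauses):
        return None                            # an empty clause: conflict

    # BCP: unit propagation to a fixed point.
    while True:
        unit = next((c[0] for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0
        clauses = [[l for l in c if l != -unit] for c in clauses if unit not in c]
        if any(len(c) == 0 for c in clauses):
            return None                        # propagation derived a conflict

    # Pure literal elimination.
    literals = {l for c in clauses for l in c}
    for lit in [l for l in literals if -l not in literals]:
        assignment[abs(lit)] = lit > 0
        clauses = [c for c in clauses if lit not in c]

    if not clauses:
        return assignment                      # all clauses satisfied

    v = abs(clauses[0][0])                     # naive branching choice
    for value in (True, False):
        lit = v if value else -v
        branch = [[l for l in c if l != -lit] for c in clauses if lit not in c]
        result = dpll(branch, {**assignment, v: value})
        if result is not None:
            return result
    return None                                # both branches failed: backtrack

On the earlier examples, dpll([[1, -2], [2, 3], [-1, -3]]) returns the satisfying assignment {1: True, 3: False, 2: True}, while dpll([[1], [-1]]) returns None.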

Time Complexity

In the worst case, DPLL explores all 2^n possible assignments of n variables, yielding O(2^n) time complexity: each propagation step takes only polynomial time, but the branching dominates. In practice, unit propagation and pure literal elimination, along with branching heuristics, dramatically reduce the effective runtime on many instances, making the algorithm viable for formulas with hundreds of variables.

Example Walkthrough

Consider the 3-SAT instance φ = (p ∨ q) ∧ (¬p ∨ q) ∧ (¬r ∨ ¬q) ∧ (r ∨ ¬q), with variables p, q, r. Start with empty assignment M = {}.
  1. BCP: No unit clauses or pure literals initially. Select p (e.g., via heuristic). Branch p = true: Simplify to φ' = (q) ∧ (¬r ∨ ¬q) ∧ (r ∨ ¬q), M = {p=true}.
  2. BCP on φ': Unit clause (q) propagates q = true, yielding φ'' = (¬r) ∧ (r), M = {p=true, q=true}.
  3. Further BCP: Unit (¬r) propagates r = false, satisfying (¬r) but reducing (r) to the empty clause; this conflict makes the branch unsatisfiable, so the solver backtracks.
  4. Try p = false: Simplify to φ' = (q) ∧ (¬r ∨ ¬q) ∧ (r ∨ ¬q), M = {p=false}.
  5. BCP: q = true from unit (q), yielding φ'' = (¬r) ∧ (r), M = {p=false, q=true}.
  6. Further BCP: r = false from (¬r) falsifies (r), conflict → backtrack to root.
  7. Both branches on p fail, so the procedure returns unsatisfiable at the root. (This instance is indeed unsatisfiable: the first two clauses force q = true, and the last two then force r to be both true and false. The code sketch after this list reproduces the result.)
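The same verdict falls out of the DPLL sketch given above, mapping p, q, r to the integers 1, 2, 3:

walkthrough = [[1, 2], [-1, 2], [-3, -2], [3, -2]]   # p=1, q=2, r=3
print(dpll(walkthrough))                             # None: unsatisfiable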

Conflict-Driven Clause Learning (CDCL)

Conflict-driven clause learning (CDCL) represents a pivotal advancement in SAT solving, extending the Davis–Putnam–Logemann–Loveland (DPLL) algorithm by incorporating mechanisms to analyze conflicts and derive new clauses that prune the search space more effectively. The core innovation lies in conflict analysis: when unit propagation produces an unsatisfied clause, the solver constructs an implication graph to trace the reasons for the conflict, ultimately generating a learned clause that explains the failure and prevents similar conflicts in future searches. This process enables non-chronological backtracking, jumping to earlier decision levels rather than strictly chronological ones, thereby accelerating convergence to satisfying assignments or proofs of unsatisfiability. The implication graph is a directed acyclic graph where vertices represent literals (both positive and negative assignments of variables), and edges denote implications derived from unit propagation or decisions. When a conflict arises—typically both a literal and its negation being implied—a special conflict vertex connects the opposing literals. Conflict analysis then resolves antecedents backward through the graph using clause resolution to derive a new clause that asserts the negation of the conflict's cause at the highest possible decision level. A key component is the First Unique Implication Point (1UIP) learning scheme, which identifies the vertex closest to the conflict that lies on all paths from the current decision to the conflict, ensuring the learned clause is as general as possible while remaining asserting (unit under the current partial assignment). This scheme balances clause size and explanatory power, often yielding clauses with a single literal from the current decision level for immediate propagation upon backtracking. To illustrate conflict resolution with 1UIP learning, consider a simplified outline of the process during a conflict:
function AnalyzeConflict(conflict_literal):
    reason = GetReason(conflict_literal)  // Antecedent clause causing the conflict
    learned_clause = {¬conflict_literal}
    current_level = DecisionLevel()
    while True:
        resolve learned_clause with reason  // Resolution step: eliminate the pivot literal
        if learned_clause has only one literal from current_level:
            break  // Reached 1UIP
        Update reason to antecedent of next pivot
    Add learned_clause to the clause database
    Backtrack to decision level of the asserting literal in learned_clause
This procedure traces back from the conflict, iteratively resolving until the 1UIP criterion is met, producing a clause that prunes irrelevant branches. CDCL solvers integrate advanced heuristics to enhance efficiency, notably the Variable State Independent Decaying Sum (VSIDS) heuristic for variable branching. In VSIDS, each variable maintains scores for its polarities based on their occurrence in learned clauses, incremented upon learning and periodically decayed by a factor to emphasize recent conflicts. The unassigned variable with the highest score is selected for branching, with polarity chosen to satisfy the most recent unsatisfied clause containing it. This dynamic heuristic adapts to the problem's structure, focusing the search on contentious variables. Restart strategies further bolster CDCL by periodically resetting the search trail to escape unproductive regions, particularly on problems exhibiting heavy-tailed runtime distributions. Solvers employing randomized restarts typically increase the conflict cutoff geometrically (e.g., starting from 100 conflicts and doubling), preserving learned clauses across restarts to retain knowledge while exploring fresh decision sequences. This approach mitigates the risk of prolonged searches in unproductive subspaces, with empirical evidence showing speedups by orders of magnitude on structured benchmarks. The performance impact of CDCL is exemplified by the MiniSat solver, which implements these components—including 1UIP learning, VSIDS, and adaptive restarts—achieving superior results on the SAT 2003 competition benchmarks by solving 158 of 177 industrial instances within 10,000 seconds, outperforming contemporaries zChaff (147 solved) and BerkMin (157 solved). Such advancements underscore CDCL's role in transforming SAT solving from a theoretical exercise into a practical tool for large-scale verification and optimization.
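The VSIDS bookkeeping described above is simple to sketch. The following Python fragment is purely illustrative; the bump and decay constants are assumptions, and production solvers decay by growing the bump increment rather than rescaling every score on each conflict:

from collections import defaultdict

class VSIDS:
    def __init__(self, bump=1.0, decay=0.95):
        self.activity = defaultdict(float)   # variable -> activity score
        self.bump = bump                     # illustrative constant
        self.decay = decay                   # illustrative constant

    def on_learned_clause(self, clause):
        # Bump every variable in the newly learned clause, then decay all
        # scores so that recent conflicts dominate older ones.
        for lit in clause:
            self.activity[abs(lit)] += self.bump
        for var in self.activity:
            self.activity[var] *= self.decay

    def pick_branch_var(self, unassigned):
        # Branch on the unassigned variable with the highest activity.
        return max(unassigned, key=lambda v: self.activity[v])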

Advanced Techniques

Parallel and Distributed Solving

Parallel and distributed SAT solving techniques extend sequential conflict-driven clause learning (CDCL) methods to leverage multi-core processors and clusters, enabling significant speedups on large-scale instances by distributing the computational workload. These approaches address the inherent limitations of single-threaded solvers, such as prolonged search times on hard combinatorial problems, by exploiting parallelism in the search space without altering the underlying soundness and completeness guarantees. One prominent strategy is the portfolio approach, where multiple instances of SAT solvers—often configured with diverse heuristics, parameters, or random seeds—run concurrently on the same formula, with the first to find a solution (or prove unsatisfiability) terminating the process. Introduced in seminal works like ManySAT, this method capitalizes on the variability in solver performance across instances, achieving speedups through complementary search behaviors rather than deep integration. Portfolio solvers typically exchange nothing or only short learned clauses to minimize overhead, making them suitable for shared-memory multi-core environments. In contrast, divide-and-conquer techniques partition the search space into independent subproblems, solving them in parallel to conquer the overall instance. The cube-and-conquer paradigm exemplifies this by using a lookahead solver in an initial "cube" phase to decompose the formula into thousands or millions of smaller cubes—partial assignments that simplify the problem—followed by a "conquer" phase where CDCL solvers tackle each cube independently under assumptions. This method, which integrates traditional lookahead with modern CDCL, has demonstrated up to 20-fold speedups on challenging benchmarks like van der Waerden problems by enabling trivial parallelization across subproblems. Distributed implementations, such as Paracooba, extend this to cluster settings by assigning cubes to remote nodes, supporting incremental solving to reuse information learned on earlier subproblems. Distributed solving often incorporates clause sharing to enhance cooperation among processes, where learned clauses from one solver are broadcast to others via message passing, accelerating global progress. In systems like Mallob, this periodic exchange—occurring every few seconds—yields 10- to 100-fold speedups over sequential solvers on broad benchmarks by propagating conflict-driven insights without problem partitioning. To ensure verifiability, especially for unsatisfiability results, distributed clause-sharing solvers generate proofs using extensions like LRAT (a superset of DRAT), which track clause dependencies and allow reconstruction of a single, compact refutation from the partial proofs produced across nodes. Key challenges in these paradigms include load balancing, where uneven subproblem difficulty can leave processors idle or starved in divide-and-conquer setups, and communication overhead, which escalates in distributed environments due to network latency during clause exchanges. Multi-core systems mitigate some overhead through shared memory but face synchronization costs, while clusters demand optimized strategies like dynamic clause filtering to balance sharing benefits against bandwidth limits. Frameworks such as PaInleSS address these by modularly combining portfolio and divide-and-conquer with configurable sharing, supporting solvers like MiniSat variants on many-core architectures.
Representative examples include PMSat, an early MPI-based parallelization of MiniSat that distributes the search across nodes, and more advanced systems built on PaInleSS, which integrate diverse engines such as Glucose and Lingeling for competitive performance on hard instances. These developments underscore the shift toward scalable, verifiable SAT solving in the 2010s and 2020s. Recent advancements as of 2025 include GPU-CPU hybrid systems such as TurboSAT, reported to achieve up to 200x speedups on satisfiable benchmarks from the 2024 SAT Competition through gradient-guided exploration, and MallobSat, which won the parallel track of the 2025 SAT Competition. A minimal portfolio skeleton is sketched below.
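The sketch, built only on Python's standard library, races differently seeded workers on one formula and takes the first answer; solve_with_seed is a placeholder for a real diversified CDCL solver:

import multiprocessing as mp

def solve_with_seed(formula, seed, results):
    # Placeholder: a real worker would run a CDCL solver whose heuristics
    # (branching order, restart policy, ...) are randomized by `seed`.
    results.put((seed, "UNKNOWN"))

def portfolio_solve(formula, seeds=(1, 2, 3, 4)):
    results = mp.Queue()
    workers = [mp.Process(target=solve_with_seed, args=(formula, s, results))
               for s in seeds]
    for w in workers:
        w.start()
    seed, answer = results.get()    # the first worker to report wins the race
    for w in workers:
        w.terminate()               # cancel the remaining configurations
    return seed, answer

if __name__ == "__main__":
    print(portfolio_solve([[1, -2], [2, 3]]))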

Randomized and Local Search Methods

Randomized and local search methods represent an incomplete class of approaches to solving the Boolean satisfiability problem (SAT), prioritizing speed over guaranteed completeness by employing stochastic techniques to explore the search space. These methods begin with a random assignment of truth values to variables and iteratively modify the assignment through variable flips, aiming to minimize the number of unsatisfied clauses. Unlike systematic algorithms such as DPLL, local search avoids exhaustive backtracking and instead relies on heuristics to guide flips toward promising directions, often incorporating randomness to escape local optima where no further improving moves are available. The WalkSAT algorithm, introduced as an enhancement to earlier greedy local search procedures, exemplifies these principles through a combination of greedy hill-climbing and probabilistic random walks. In WalkSAT, at each step an unsatisfied clause is selected at random, and a variable from that clause is chosen for flipping: with probability p (the noise parameter), a random variable is flipped to introduce exploration; otherwise, the variable whose flip breaks the fewest additional clauses is selected to maintain greediness. This randomized mechanism allows WalkSAT to navigate rugged search landscapes effectively, performing well on large random 3-SAT instances where pure greedy methods stall. The algorithm was first detailed in work by Selman, Kautz, and Cohen, building on the GSAT framework to incorporate noise for better performance on hard combinatorial problems. To approximate completeness in practice, randomized local search methods often employ restarts, where the algorithm is run multiple times from fresh random assignments for short durations, pooling results across iterations to increase the likelihood of finding a satisfying assignment if one exists. This strategy leverages the independence of restarts to mitigate the risk of prolonged trapping in local minima during a single run, with cutoff lengths tuned empirically—such as via geometric distributions—to balance exploration and efficiency. Seminal analysis shows that such restarts can sharply improve expected runtime for certain random SAT distributions under appropriate policies. Hybrid methods integrate local search with complete algorithms like DPLL to leverage their complementary strengths, particularly for time-bound applications where quick heuristics can inform systematic search. For instance, local search may preprocess instances to identify likely satisfying partial assignments or clauses to prioritize, passing refined subproblems to a DPLL solver upon timeout, thereby accelerating solution discovery on satisfiable benchmarks while retaining proof capabilities for unsatisfiability. Early demonstrations of this integration in planning domains highlighted dramatic speedups, solving previously intractable problems by encoding them as SAT and alternating stochastic and systematic phases. These methods excel on random and certain structured satisfiable instances, often outperforming complete solvers by orders of magnitude due to their ability to quickly locate solutions in vast search spaces without bookkeeping overhead. However, they struggle with proving unsatisfiability, as failure to find a solution after extensive restarts provides no formal certificate, limiting their utility in verification tasks requiring rigorous disproofs. Empirical evaluations confirm their superiority on uniform random 3-SAT near the satisfiability threshold but reveal vulnerabilities on tightly coupled structured formulas where local moves yield minimal progress.
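A compact WalkSAT-style sketch follows. For brevity it scores a candidate flip by the total number of clauses left unsatisfied afterward, a stand-in for the break count used in the original algorithm, and all names and defaults are illustrative:

import random

def walksat(clauses, n_vars, max_flips=100_000, p=0.5):
    # Start from a uniformly random assignment.
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)

    def unsat_count():
        return sum(1 for c in clauses if not any(sat(l) for l in c))

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                      # satisfying assignment found
        clause = random.choice(unsat)          # a random unsatisfied clause
        if random.random() < p:
            var = abs(random.choice(clause))   # noise step: random variable
        else:
            def cost(v):                       # greedy step: try the flip,
                assign[v] = not assign[v]      # measure the damage, undo it
                c = unsat_count()
                assign[v] = not assign[v]
                return c
            var = min({abs(l) for l in clause}, key=cost)
        assign[var] = not assign[var]          # commit the chosen flip
    return None                                # no model found within budget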
Recent work as of 2025, such as DiverSAT, introduces novel heuristics to handle diverse SAT problems, enhancing performance on structured instances through diversification strategies.

Applications

Hardware and Software Verification

SAT solvers play a crucial role in hardware and software verification by encoding system behaviors and properties as propositional satisfiability problems, enabling the detection of design errors through exhaustive search within bounded scopes. In hardware verification, SAT solvers facilitate bounded model checking (BMC), where temporal properties are translated into SAT instances by unrolling the system's transition relation over a finite number of steps. This approach, introduced by Biere, Cimatti, Clarke, and Zhu in 1999, constructs a formula [[M]]_k = I(s_0) \land \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) representing the unrolled model M up to bound k, conjoined with the negation of the property to check for reachable violations. If the resulting CNF formula is satisfiable, a counterexample path is extracted; otherwise, the property holds within the bound. This method has proven effective for verifying properties of complex circuits, such as microprocessors, by iteratively increasing k until a bug is found or a completeness threshold is reached. For equivalence verification, SAT solvers reduce the problem of comparing two gate-level designs to checking the unsatisfiability of a miter circuit, which XORs corresponding outputs and feeds shared inputs into both implementations. If the miter's output cannot be asserted to 1 under any input assignment, the circuits are equivalent. This technique, leveraging CNF encodings of the gate-level netlists, has been enhanced with incremental SAT solving to handle don't-care conditions and structural optimizations, improving scalability for large designs. In practice, SAT-based equivalence checking outperforms traditional BDD-based methods on large or irregularly structured circuits, as demonstrated in flows where it verifies retimed or optimized implementations against golden references. In software verification, SAT solvers enable path exploration by unwinding program loops and control flow into bounded SAT instances, checking for assertion violations or undefined behaviors. The CBMC tool, developed by Kroening and colleagues starting in the early 2000s, applies this to ANSI-C programs by converting the unwound code into bit-precise formulas solved via integrated SAT solvers like MiniSat. CBMC has been used to verify safety and concurrency properties in systems code, generating counterexamples as feasible execution traces when bugs are found. A notable case involves Intel's application of SAT solvers in processor verification since the late 1990s, where they detected subtle errors in areas such as instruction decoding that could lead to incorrect execution. These efforts, building on lessons from the FDIV bug, integrated SAT into equivalence and property checking flows, uncovering bugs in control logic that simulation missed. To support iterative verification of evolving designs, incremental SAT solving reuses learned clauses and assignments across successive SAT instances, reducing redundancy in BMC unrollings or equivalence checks. Techniques like those of Eén and Sörensson maintain solver state across bound increments, enabling efficient proofs of properties over increasing horizons in hardware models. This extension has been pivotal in industrial settings, where design revisions require repeated verification without restarting from scratch, as seen in processor pipelines where incremental proofs confirm correctness after minor gate-level changes.
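The miter construction is easy to illustrate on toy circuits. The sketch below brute-forces the miter of two tiny combinational circuits expressed as Python functions; an industrial flow would instead Tseitin-encode the miter into CNF and hand it to a SAT solver:

from itertools import product

def miter_equivalent(circuit_a, circuit_b, n_inputs):
    # Equivalent iff the miter output (XOR of the two outputs) is never 1.
    for bits in product([False, True], repeat=n_inputs):
        if circuit_a(*bits) != circuit_b(*bits):
            return False, bits        # a counterexample input was found
    return True, None

spec = lambda a, b: a and b                     # reference implementation
impl = lambda a, b: not ((not a) or (not b))    # optimized variant (De Morgan)
print(miter_equivalent(spec, impl, 2))          # (True, None): equivalent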

Artificial Intelligence and Planning

SAT solvers play a pivotal role in artificial intelligence, particularly in automated planning and knowledge representation, where complex decision-making problems are encoded as propositional satisfiability instances. In classical planning, problems defined in languages like the Planning Domain Definition Language (PDDL) are translated into conjunctive normal form (CNF) formulas, allowing SAT solvers to search for sequences of actions that achieve desired goals from initial states. This "planning as satisfiability" approach, pioneered in the early 1990s, transforms the planning task into checking the satisfiability of a formula that encodes state transitions, action preconditions, and goal conditions over a fixed horizon of time steps. The GraphPlan algorithm, introduced in the mid-1990s, further advanced this paradigm by using planning graphs to generate compact CNF encodings that prune irrelevant actions and states, significantly improving efficiency over naive translations. By iteratively building layered graphs representing possible propositions and actions, GraphPlan extracts mutex relations (mutually exclusive elements) that translate directly into conflict clauses in the SAT formula, enabling faster solving for STRIPS-style domains common in PDDL. This integration of graph-based analysis with SAT solving demonstrated superior performance on benchmarks like logistics and blocks-world problems, establishing a foundation for modern planners. In conformant planning, which addresses uncertainty in initial states or nondeterministic action effects without intermediate observations, SAT encodings model belief states as sets of possible worlds or disjunctive formulas. Belief states are represented using literals that capture known-true, known-false, or unknown propositions, with actions encoded to propagate these over time steps while ensuring goal satisfaction across all possible trajectories. This approach compiles the problem into a SAT instance by enumerating bounded sets of initial possibilities or using approximations for scalability, as shown in early mappings that handle uncertainty through exhaustive but optimized clause generation. For instance, techniques that compile conformant problems to SAT via bounded initial state sets have efficiently solved instances with moderate levels of uncertainty. SAT solvers also facilitate scheduling tasks in AI, such as crew rostering, by encoding constraints like availability, compatibility, and sequencing as clauses. In crew rostering, for example, variables represent assignments of duties to personnel, with clauses enforcing rules on rest periods, skill requirements, and coverage; satisfiability yields a valid roster minimizing violations or costs. This has been applied to railway crew scheduling, where SAT encodings handle complex preferences and regulations, outperforming traditional methods on certain real-world instances by leveraging conflict-driven learning for rapid constraint propagation. Tools like the Blackbox planner from the late 1990s exemplify the practical integration of SAT solvers in AI planning, combining GraphPlan's graph construction with SAT engines like WalkSAT for local search and complete solvers for optimality. Blackbox automates the translation of planning inputs to CNF, incorporating mutex propagation and parallel plan extraction, and achieved competitive results in early International Planning Competitions by balancing heuristic guidance with systematic solving. In theorem proving, extensions of SAT to satisfiability modulo theories (SMT) enable encoding fragments of first-order logic by grounding quantifiers and integrating theory-specific solvers for domains like arithmetic or equality.
This allows automated provers to reduce higher-level reasoning to propositional checks augmented with domain axioms, supporting tasks like inductive proofs or consistency checking. Such encodings translate first-order clauses into quantifier-free formulas via instantiation, with solvers like Z3 using them to verify theorems in knowledge representation systems, bridging propositional SAT with more expressive logics.

Combinatorial Optimization

SAT solvers extend to combinatorial optimization by addressing variants like Maximum Satisfiability (MaxSAT) and model counting (#SAT), enabling the solution of optimization and enumeration problems in mathematics and operations research. In MaxSAT, the objective is to find a truth assignment that maximizes the number of satisfied clauses in a CNF formula, generalizing the decision-based SAT problem to an optimization setting. Weighted MaxSAT further assigns non-negative weights to clauses, aiming to maximize the total weight of satisfied clauses, which allows modeling problems with constraints of varying importance. Partial MaxSAT distinguishes between hard clauses, which must all be satisfied, and soft clauses, which contribute to the objective if satisfied; this formulation is particularly useful for incorporating soft constraints in optimization scenarios, such as scheduling, where some requirements can be relaxed. The #SAT problem, or model counting, computes the exact number of satisfying assignments for a given formula, providing a count rather than a single solution. This is valuable for probabilistic inference, where the number of models informs probability distributions over possible worlds, such as in Bayesian networks or reliability analysis. Approximate #SAT solvers trade precision for scalability on large instances, estimating counts via sampling or bounding techniques. Combinatorial problems are often solved by reducing them to SAT or its variants, leveraging efficient encodings that exploit solver strengths. For graph coloring, vertices and edges are encoded as variables and clauses ensuring adjacent vertices receive different colors, with k-colorability checked via SAT; extensions to MaxSAT optimize for the minimum number of colors by iteratively tightening constraints. Set cover problems reduce to SAT by representing sets as variables and clauses that enforce coverage of all elements, while weighted MaxSAT minimizes the total cost by assigning weights to sets. The Hamiltonian cycle problem encodes paths through variables for vertex orderings and clauses prohibiting invalid transitions, solvable as SAT for existence or #SAT for counting paths in probabilistic graph models. Notable solvers include MaxHS, developed in the 2010s, which uses an implicit hitting set approach combining SAT solving with integer linear programming to efficiently handle weighted partial MaxSAT instances, achieving strong performance on industrial benchmarks. For #SAT, Cachet employs component caching integrated with DPLL-style search and clause learning, providing both exact counts and approximations through heuristic bounding for large-scale probabilistic applications. Theoretically, SAT and its variants connect to statistical physics through phase-transition phenomena in random k-SAT instances, where sharp satisfiability thresholds and clustered solution landscapes emerge, inspiring algorithms like survey propagation that model clause-variable interactions as spin glasses. These links also extend to the study of random structures, analyzing average-case complexity via probabilistic methods to explain solver behavior on structured versus random formulas.
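The graph-coloring reduction mentioned above can be written out directly. This sketch emits DIMACS-style clauses stating that every vertex receives at least one of k colors and that adjacent vertices never share one (the variable numbering is one common convention; at-most-one-color clauses are omitted because any satisfying assignment still yields a valid coloring by picking one asserted color per vertex):

def coloring_to_cnf(n_vertices, edges, k):
    # Variable x_{v,c} = k*v + c + 1 means "vertex v gets color c".
    var = lambda v, c: k * v + c + 1
    clauses = []
    for v in range(n_vertices):
        clauses.append([var(v, c) for c in range(k)])     # at least one color
    for (u, w) in edges:
        for c in range(k):
            clauses.append([-var(u, c), -var(w, c)])      # endpoints differ
    return clauses

# A triangle is not 2-colorable, so this CNF is unsatisfiable.
triangle = [(0, 1), (1, 2), (0, 2)]
print(coloring_to_cnf(3, triangle, 2))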

Performance and Modern Developments

Benchmarks and Competitions

The evaluation of SAT solvers relies on standardized benchmarks and competitions that provide rigorous testing grounds for performance improvements. Benchmark suites such as SATLIB, established in the late 1990s, offer a foundational collection of propositional instances drawn from diverse domains, including logic synthesis and combinatorial problems, enabling reproducible comparisons across solvers. Industrial benchmark suites, sourced from verification, scheduling, and planning applications, introduce real-world complexity with large-scale instances featuring millions of clauses, reflecting practical deployment challenges. Annual SAT competitions, initiated in 2002, serve as pivotal events to benchmark solver advancements and discover challenging instances, fostering innovation in the field. These competitions feature distinct categories such as application (real-world encodings), crafted (algorithmically generated hard instances), and random (randomly generated instances), allowing targeted evaluation of solver robustness. Primary metrics include the number of solved instances within time limits, average runtime, and the PAR-2 score, which penalizes unsolved instances by counting each as twice the cutoff time, emphasizing both speed and coverage. For unsatisfiable instances, solvers must produce verifiable proofs, typically in DRAT format, to certify correctness, with evaluation extending to proof size and validation efficiency using independent checkers. Reference implementations like CaDiCaL and Kissat, known for their clean interfaces and high performance in prior competitions, are often used as baselines for new solvers due to their documentation and extensive feature sets. Recent iterations, including the 2024 SAT Competition, have emphasized parallel, cloud, and distributed solving tracks to address scalability on multi-core systems, alongside the closely related MaxSAT evaluations for optimization variants with weighted partial solutions. These trends highlight a growing focus on distributed architectures and extensions beyond pure SAT, with results showing parallel solvers resolving approximately 6.6% more instances (370 vs. 347) than sequential baselines in the main track.
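The PAR-2 score itself is a one-line computation; in the sketch below, None marks an unsolved instance and the runtimes are invented for illustration:

def par2(runtimes, timeout):
    # PAR-2: mean runtime, counting each unsolved instance as 2 * timeout.
    return sum(t if t is not None else 2 * timeout for t in runtimes) / len(runtimes)

print(par2([12.0, 450.3, None, 3.7], timeout=5000))   # 2616.5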

Notable Solvers and Recent Advancements

MiniSat, first released in 2003 by Niklas Eén and Niklas Sörensson, became a foundational CDCL-based SAT solver due to its simplicity, efficiency, and open-source implementation, influencing numerous subsequent solvers through its clean codebase and efficient propagation mechanisms. Glucose, introduced in 2009 by Audemard and Simon, advanced CDCL techniques by incorporating adaptive clause-database management, lazy data structures, and dynamic restarts, achieving superior performance on industrial benchmarks compared to earlier solvers like MiniSat. Kissat, created by Armin Biere and released around 2020, won the SAT Competition 2020 main track by introducing highly optimized data structures and inprocessing techniques such as transitive reduction and non-minimal learned clauses, solving substantially more instances than its competitors on diverse problem sets. Portfolio-based parallel solving, pioneered by ManySAT, runs multiple diversified sequential configurations simultaneously, achieving superlinear speedups on multicore systems by sharing learned clauses across threads. Recent AI-driven advancements have integrated large language models (LLMs) to automate solver evolution and optimization. SATLUTION, a framework introduced in 2025, employs LLM agents to iteratively evolve SAT solver codebases starting from 2024 competition winners, producing variants that solved 344–347 instances in the SAT Competition 2025—outperforming the human-designed winners' 334 instances—while also surpassing prior years' benchmarks without access to future data. Similarly, AutoModSAT (2025) uses LLMs to modularize complex CDCL solvers and discover novel heuristics, yielding a 50% performance improvement over baseline Kissat on diverse datasets and a 20% speedup over parameter-tuned state-of-the-art alternatives. Further AI integrations include DynamicSAT (2025), which tunes solver parameters such as branching heuristics and clause-activity decay during runtime based on problem characteristics, demonstrating significant gains over static configurations on the 2024 SAT Competition benchmarks. Active-learning approaches for solver selection, which use machine learning to predict optimal solver portfolios per instance, have also emerged, enhancing efficiency in heterogeneous environments. Looking ahead, future directions emphasize quantum-inspired methods, such as encoding SAT and MaxSAT for quantum annealers to explore solution spaces, and neural heuristics using graph neural networks to approximate decision-making in CDCL, potentially extending applicability to broader NP-hard problems.

    Aug 7, 2025 · Quantum annealers (QA) are specialized quantum computers that minimize objective functions over discrete variables by physically exploiting ...