
Conflict-driven clause learning

Conflict-driven clause learning (CDCL) is an algorithmic paradigm for solving the Boolean satisfiability problem (SAT), which determines whether there exists an assignment of truth values to variables in a propositional formula in conjunctive normal form (CNF) that makes the formula true. CDCL enhances the classic Davis-Putnam-Logemann-Loveland (DPLL) search by incorporating mechanisms to analyze conflicts—situations where a clause becomes unsatisfied—and learn new clauses that capture reasons for the conflict, thereby pruning the search space and preventing similar failures in future explorations. This learning process, combined with non-chronological backtracking (also known as backjumping), allows the solver to retreat to earlier decision points rather than strictly chronological ones, significantly improving efficiency on large-scale instances. The origins of CDCL trace back to the mid-1990s, with the GRASP solver introduced by Marques-Silva and Sakallah in 1996, which first integrated clause learning and non-chronological backtracking to address practical SAT instances in areas like hardware verification. Subsequent advancements, notably in the Chaff solver by Moskewicz et al. in 2001, refined the approach by introducing efficient data structures and heuristics, catapulting CDCL-based solvers to dominance in SAT competitions and applications. Key components of CDCL include decision-making, where an unassigned variable is selected and assigned a value using heuristics like the Variable State Independent Decaying Sum (VSIDS), which prioritizes variables involved in recent conflicts; unit propagation, which efficiently infers implied literals using schemes such as two-watched literals to avoid scanning all clauses; and conflict analysis, which constructs an implication graph from the propagation trail to derive a learned clause, typically via the first Unique Implication Point (UIP) scheme for concise nogoods.
Beyond core operations, CDCL solvers incorporate techniques like periodic search restarts to mitigate the effects of heavy-tailed runtime distributions, as proposed by Gomes, Selman, and Kautz in 1998, enabling better exploration of the search space over long runs. Clause minimization and deletion strategies further optimize performance by retaining only useful learned clauses, balancing memory usage and pruning power. These elements have made CDCL the backbone of state-of-the-art SAT solvers, powering applications in hardware and software verification, planning, and other domains, where they routinely handle formulas with millions of variables and clauses.

Fundamentals of Boolean Satisfiability

Propositional Logic and CNF Formulas

Propositional logic, also known as Boolean logic, forms the foundational framework for the Boolean satisfiability (SAT) problem addressed by conflict-driven clause learning (CDCL) solvers. It consists of propositional variables, or atoms, typically denoted as x_1, x_2, \dots, x_n, which can take truth values true (1) or false (0). A literal l is either a variable x_i (positive literal) or its negation \neg x_i (negative literal). A clause C is a disjunction of one or more literals, such as l_1 \vee l_2 \vee \dots \vee l_k, which is satisfied if at least one literal is true. A propositional formula is then a conjunction of clauses, representing the overall logical expression. Conjunctive normal form (CNF) is a standardized representation where a formula \phi is expressed as a conjunction of distinct clauses, i.e., \phi = C_1 \wedge C_2 \wedge \dots \wedge C_m. SAT problems are typically solved in CNF because this form enables efficient propagation and conflict detection mechanisms in solvers, such as unit propagation, which simplifies clauses by assigning values that force literals to true or false. For instance, the formula (x_1 \vee \neg x_2) \wedge (x_2 \vee x_3) \wedge (\neg x_1 \vee \neg x_3) is already in CNF and is satisfied under the assignment \sigma = \{x_1 = 1, x_2 = 1, x_3 = 0\}. General propositional formulas, which may include negations, conjunctions, or disjunctions in arbitrary nesting, can be converted to an equisatisfiable CNF using transformations like the Tseitin encoding, which introduces auxiliary variables to preserve satisfiability while linearizing the structure into clauses. CDCL operates exclusively on CNF-SAT instances, where the objective is to determine if there exists a truth assignment \sigma to the variables such that every clause is satisfied, or to prove unsatisfiability if no such \sigma exists. This restriction to CNF allows CDCL to leverage clause-based learning and propagation directly on the disjunctive structure.
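The definitions above can be made concrete in a short sketch. The representation below is an assumption for illustration (clauses as lists of DIMACS-style signed integers, where 3 denotes x_3 and -3 denotes \neg x_3), not a convention the text prescribes:

```python
# Sketch: a CNF formula as a list of clauses, each clause a list of
# signed integers (DIMACS style: 3 means x3, -3 means ¬x3).

def satisfies(cnf, assignment):
    """Return True iff every clause has at least one true literal.

    assignment maps variable index -> bool.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

# The example formula from the text: (x1 ∨ ¬x2) ∧ (x2 ∨ x3) ∧ (¬x1 ∨ ¬x3)
phi = [[1, -2], [2, 3], [-1, -3]]
sigma = {1: True, 2: True, 3: False}
print(satisfies(phi, sigma))  # → True
```

Under the stated assignment σ every clause contains a true literal, matching the worked example in the paragraph above.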

The SAT Problem

The Boolean satisfiability problem (SAT) is the canonical NP-complete problem in computational complexity theory, central to the study of propositional logic satisfiability. It involves determining whether a given formula, typically encoded in conjunctive normal form (CNF), can be satisfied by some assignment of truth values to its variables. A CNF formula consists of a conjunction of clauses, where each clause is a disjunction of literals (variables or their negations). The problem thus asks: for a CNF formula φ over a set of variables, does there exist a truth assignment σ such that φ(σ) = true? SAT holds historical significance as the first problem demonstrated to be NP-complete. In 1971, Stephen Cook proved that SAT is NP-complete via his eponymous theorem, establishing that any problem in NP can be reduced to SAT in polynomial time, and SAT itself is verifiable in polynomial time by checking a proposed assignment. This result laid the foundation for the theory of NP-completeness, highlighting SAT's role in understanding the boundaries of efficient computation. The NP-completeness of SAT implies that no polynomial-time algorithm is known for solving it in the worst case, leading to exponential worst-case running times for general instances in the absence of a breakthrough resolving the P vs. NP question. Despite this theoretical hardness, SAT exhibits practical solvability for many real-world instances, where modern solvers routinely handle formulas with millions of variables and clauses efficiently due to problem structure and algorithmic advances. While this entry focuses on the decision version of SAT, it is distinct from the counting variant #SAT, which seeks the total number of satisfying assignments for a CNF formula and is known to be #P-complete, a harder class capturing enumeration challenges.

Unit Propagation and Implication Graphs

Unit propagation serves as the core inference mechanism in conflict-driven clause learning (CDCL) solvers, enabling efficient deduction of variable assignments from partial interpretations of conjunctive normal form (CNF) formulas. The unit clause rule states that if a clause reduces to a single unassigned literal l under a partial assignment—meaning all other literals in the clause are falsified—then l must be assigned true to satisfy the clause and preserve the satisfiability of the formula. Formally, for a clause C = (l_1 \lor \cdots \lor l_k), if the partial assignment falsifies all literals except l_i, then it implies l_i = \top. This rule is applied recursively: newly implied literals are propagated through the formula, potentially creating additional unit clauses, until a fixed point is reached where no further unit clauses exist. The propagation process is typically implemented using a queue-based approach, such as a first-in-first-out (FIFO) queue or a watched-literal scheme, to track and process unit clauses efficiently. Starting with an initial partial assignment, unit literals are enqueued and propagated by scanning relevant clauses; each propagation updates the assignment and checks for new unit clauses, continuing until the queue empties or a conflict arises. This recursive application ensures all logical consequences of the current assignment are deduced without exhaustive enumeration, and the procedure is sound, meaning it only infers assignments that are necessary for any satisfying assignment of the original formula that extends the current partial assignment. Moreover, unit propagation runs in polynomial time—linear in the total size of the formula under standard implementations—making it a computationally efficient building block for avoiding full search trees in SAT solving. In CDCL, unit propagation is visualized and analyzed through the implication graph, a directed acyclic graph that captures the deductive chains produced by the process.
The nodes of the graph represent assigned literals, and a directed edge records a unit-clause implication: specifically, for a clause (\neg l \lor m), assigning l = \top falsifies \neg l and forces m = \top, creating the edge l \to m; by contraposition, assigning m = \bot forces l = \bot, adding the edge \neg m \to \neg l. Paths in this graph trace sequences of implications from decisions or prior assignments, revealing how conflicts propagate and enabling clause learning by identifying cuts that explain unsatisfiability. This graphical representation not only formalizes the propagation's transitive effects but also underpins the solver's ability to prune redundant search paths effectively.
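A naive (non-watched) unit propagation loop with antecedent tracking can be sketched as follows. This is an illustrative sketch, assuming the DIMACS-style signed-integer clause encoding; real solvers use watched literals instead of rescanning every clause:

```python
# Minimal queue-free unit propagation sketch: repeatedly scan clauses for
# units until a fixed point or a fully falsified clause (conflict).
# The antecedent map records which clause implied each propagated variable,
# which is exactly the information the implication graph encodes.

def unit_propagate(cnf, assignment):
    assignment = dict(assignment)          # var -> bool
    antecedent = {}                        # var -> implying clause
    changed = True
    while changed:
        changed = False
        for clause in cnf:
            unassigned = [l for l in clause if abs(l) not in assignment]
            satisfied = any(
                assignment.get(abs(l)) == (l > 0)
                for l in clause if abs(l) in assignment
            )
            if satisfied:
                continue
            if not unassigned:             # all literals falsified
                return "CONFLICT", assignment, antecedent
            if len(unassigned) == 1:       # unit clause: force its literal
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                antecedent[abs(lit)] = clause
                changed = True
    return "OK", assignment, antecedent

# From x1=False, (x1 ∨ x2) forces x2, then (¬x2 ∨ x3) forces x3.
status, a, why = unit_propagate([[1, 2], [-2, 3]], {1: False})
print(status, a)  # → OK {1: False, 2: True, 3: True}
```

The `why` map returned here corresponds to the antecedent edges of the implication graph described above.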

Key Mechanisms in CDCL

Decision Heuristics and Branching

In conflict-driven clause learning (CDCL) solvers, the search process builds a tree by extending partial assignments of the Boolean formula, where each node represents a partial assignment and edges correspond to decisions on unset variables. The solver starts with an empty assignment and iteratively selects an unassigned variable to branch on, creating two child nodes: one assigning the variable to true and the other to false, followed by unit propagation to infer implied literals in each branch. Branching involves selecting a free variable v and exploring both polarities (v = \top and v = \bot), with the choice of v guided by heuristics to prioritize promising paths. Static heuristics base variable selection solely on the initial formula structure, such as choosing variables with the most literal occurrences (e.g., MOM's heuristic) to enable early simplification. In contrast, dynamic heuristics adapt to the search state and conflicts encountered, with the variable state independent decaying sum (VSIDS) being a seminal example that assigns activity scores to variables based on their appearance in recent original and learned clauses. VSIDS initializes scores by literal occurrence frequencies and increments the scores of literals in clauses involved in conflicts, followed by periodic decay to emphasize recent activity; the solver then branches on the highest-score unassigned variable. This preference for high-activity variables accelerates conflict detection by focusing the search on regions likely to reveal inconsistencies quickly, as empirically demonstrated in benchmarks where VSIDS-enabled solvers like Chaff solved instances orders of magnitude faster than prior approaches. Variants like literal block distance (LBD) refine VSIDS by incorporating the number of distinct decision levels in learned clauses, bumping variables that contribute to low-LBD (high-quality) clauses to further guide the search toward effective learning.
Heuristics such as VSIDS are empirical yet pivotal for CDCL performance, with scores updated during clause learning to integrate conflict-driven insights into future branching. Additionally, polarity selection determines the order of exploring true/false branches for the chosen variable, often using a preferred polarity heuristic that favors the last assigned value (phase saving) to reduce unnecessary flips and reuse successful partial assignments.
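The bump-and-decay bookkeeping behind VSIDS can be sketched in a few lines. The constants and class name below are illustrative assumptions, not taken from any particular solver; production solvers grow the bump amount and rescale lazily rather than decaying every score on each conflict:

```python
# Hedged sketch of VSIDS-style activity bookkeeping.

BUMP = 1.0     # illustrative constants, not from a real solver
DECAY = 0.95

class Vsids:
    def __init__(self, variables):
        self.activity = {v: 0.0 for v in variables}

    def on_conflict(self, conflict_vars):
        # Bump every variable appearing in the conflict/learned clause...
        for v in conflict_vars:
            self.activity[v] += BUMP
        # ...then decay all scores so recent conflicts dominate.
        for v in self.activity:
            self.activity[v] *= DECAY

    def pick(self, assigned):
        """Branch on the highest-activity unassigned variable."""
        free = [v for v in self.activity if v not in assigned]
        return max(free, key=lambda v: self.activity[v])

h = Vsids([1, 2, 3, 4])
h.on_conflict([2, 3])
h.on_conflict([3])
print(h.pick(assigned=set()))  # → 3
```

Variable 3 wins because it appeared in both conflicts and its second bump is more recent, illustrating how decay biases selection toward recent activity.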

Conflict Detection and Analysis

In conflict-driven clause learning (CDCL) solvers, a conflict arises when unit propagation, applied to the current partial assignment of variables, results in the falsification of a clause, meaning all literals in that clause evaluate to false under the assignment. This detection occurs during Boolean constraint propagation (BCP), where unit propagation iteratively enforces implied assignments from unit clauses until no further implications exist or a contradiction is reached, signaling an unsatisfied clause. The implication graph, a directed graph representing these propagation steps with vertices for literals and edges labeled by antecedent clauses, captures the dependencies leading to the conflict, marked by a special conflict vertex. Conflict analysis begins with a backward traversal of the implication graph starting from the conflict vertex to trace the reasons for the conflicting assignments, partitioning implications by decision levels to identify the causes rooted in prior decisions. This process recursively resolves antecedents, selecting literals assigned at lower decision levels, until a cut in the graph is found that explains the conflict through a set of implying clauses. The analysis enables non-chronological backtracking by revealing how decisions propagate to contradictions, allowing the solver to jump back to an earlier level rather than the immediate parent. A key concept in this analysis is the First Unique Implication Point (FUIP), the vertex at the current decision level closest to the conflict through which every path from the current decision literal to the conflict passes, i.e., it dominates the conflicting literals. The traversal identifies the FUIP by continuing until the clause under construction contains exactly one literal from the current decision level, ensuring the derived explanation is as assertive as possible.
For conflicting literals l and \neg l, if there are implication paths from a decision literal d to both l and \neg l, the learned clause is obtained by successive resolution steps along these paths, cutting the implication graph at the first UIP to yield a learned clause that blocks the conflicting subtree. This FUIP-based cutting prunes the search space effectively, as demonstrated in early implementations where it reduced the size of learned explanations and improved backjump levels compared to schemes that resolve all the way back to the decision variables. The clause resulting from the analysis becomes unit upon backjumping and re-propagation, implying the complement of the first UIP literal and facilitating deeper insights into the formula's structure.
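The backward-resolution traversal can be sketched generically as follows. This is an illustrative sketch under assumed data structures (DIMACS-style literals, an `antecedent` map from variable to implying clause, a `level` map, and `trail` listing variables in assignment order); the demo data mirrors the worked example in the Practical Illustration section:

```python
# First-UIP conflict analysis as iterative resolution (sketch).

def resolve(c1, c2, var):
    """Binary resolution on `var`: merge c1 and c2, dropping var and ¬var."""
    return sorted({l for l in c1 + c2 if abs(l) != var})

def analyze_1uip(conflict_clause, antecedent, level, trail, current_level):
    clause = list(conflict_clause)
    for var in reversed(trail):            # walk the trail backwards
        at_current = [l for l in clause if level[abs(l)] == current_level]
        if len(at_current) <= 1:           # exactly one current-level literal:
            break                          # the first UIP has been reached
        if any(abs(l) == var for l in clause) and var in antecedent:
            clause = resolve(clause, antecedent[var], var)
    return clause

# Demo on toy data (hypothetical trail/antecedents for illustration):
trail = [1, 2, 5, 3, 4, 6]                 # variables in assignment order
level = {1: 0, 2: 1, 5: 1, 3: 2, 4: 2, 6: 2}
antecedent = {5: [-2, -5], 4: [1, 3, 4], 6: [1, -2, -4, 6]}
print(analyze_1uip([3, -4, 5, -6], antecedent, level, trail, 2))
# → [-2, 1, 3, 5]   i.e. (x1 ∨ ¬x2 ∨ x3 ∨ x5)
```

The loop stops as soon as only one current-level literal remains, which is precisely the asserting (first-UIP) condition described above.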

Clause Learning and Forgetting

In conflict-driven clause learning (CDCL), clause learning derives new clauses from conflicts identified during unit propagation, adding them to the clause database to prune the search space and avoid redundant computations. This process builds on conflict analysis by traversing the implication graph backward from the conflict to identify reasons for assignments, ultimately producing a learned clause that explains the conflict. The core of clause learning involves resolving clauses along a cut in the implication graph defined by a unique implication point (UIP), which is a literal at the current decision level through which all paths from the decision literal to the conflict pass. Typically, the first UIP (1-UIP), the one closest to the conflict, is selected, ensuring the resulting clause is asserting—meaning it contains exactly one literal from the current decision level and forces the complement of that literal upon backjumping. To derive this clause, binary resolution is applied iteratively between the conflicting clause and antecedent clauses (the reasons for propagated assignments), eliminating literals until the UIP cut is reached; this yields a clause that subsumes the reasons leading to the conflict. For example, resolving the conflicting clause with the antecedent of one of its falsified literals eliminates that literal, shrinking the intermediate clause and strengthening the learned result. Learned clauses are sound because they are logically implied by the original formula, as the resolution steps preserve entailment; they shorten resolution proofs by capturing multi-step implications in a single clause, thereby guiding future unit propagation to detect conflicts earlier. This mechanism was first introduced in the GRASP solver, where conflict-induced clauses enable non-chronological backtracking. Modern solvers like MiniSat extensively employ 1-UIP learning with additional minimization techniques, such as removing redundant literals via self-subsumption checks against antecedent clauses, to further enhance efficiency. To manage the growing clause database and prevent performance degradation from excessive clauses, CDCL solvers implement forgetting strategies.
One approach is restarting, which resets the search state and often involves partial removal of learned clauses, mitigating the heavy-tailed runtime distributions that arise when early decisions lead the solver astray. Alternatively, selective clause deletion targets less useful clauses based on activity heuristics, such as tracking how recently a clause was involved in conflicts (with decay over time) or metrics like literal block distance, ensuring the database remains compact while retaining high-impact clauses. In MiniSat, for instance, learned clauses are aggressively deleted when their activity falls below thresholds, dynamically adjusting limits after restarts to balance exploration and exploitation.
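Activity-based database reduction can be sketched as follows. The threshold and keep-fraction are illustrative assumptions, not values from MiniSat or any specific solver:

```python
# Sketch of activity-based learned-clause deletion. Each learned clause
# carries an activity score bumped when it participates in conflicts and
# decayed over time; periodically the database is shrunk, keeping the
# most active fraction.

def reduce_db(learned, keep_fraction=0.5):
    """learned: list of (clause, activity). Keep the most active ones."""
    learned.sort(key=lambda ca: ca[1], reverse=True)
    keep = max(1, int(len(learned) * keep_fraction))
    return learned[:keep]

db = [([1, 2], 0.1), ([-3, 4], 2.5), ([2, -4], 0.7), ([1, -2, 3], 1.2)]
print([c for c, _ in reduce_db(db)])  # → [[-3, 4], [1, -2, 3]]
```

Real solvers refine this with LBD tiers and protection for recently used or binary clauses, but the core idea is the same ranking-and-truncation step.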

The CDCL Algorithm

High-Level Procedure

Conflict-driven clause learning (CDCL) extends the Davis–Putnam–Logemann–Loveland (DPLL) algorithm by incorporating clause learning from conflicts to guide the search more efficiently. The process begins with an empty partial assignment to the variables in the conjunctive normal form (CNF) formula. Unit propagation is then applied to deduce implied literal assignments until no further unit clauses exist or a conflict arises. If variables remain unassigned without conflict, a decision step selects an undecided variable for branching, assigning it a value and increasing the decision level. The core loop of CDCL operates as follows: while the assignment is partial and no conflict is detected, perform unit propagation followed by a decision if necessary; upon detecting a conflict during propagation or evaluation, conduct conflict analysis to derive a learned clause, add it to the formula, and backtrack. This backtracking is non-chronological, jumping to the highest decision level at which the learned clause becomes asserting (i.e., unit under the current partial assignment), typically determined by the first unique implication point (UIP), which prunes the search space more effectively than DPLL's chronological backtracking. Historically, CDCL evolved from earlier solvers like RELSAT (Bayardo and Schrag, 1997), which introduced look-back techniques for learning and non-chronological backtracking in SAT solving, and GRASP, which integrated clause learning with implication graphs to exploit propagation structures. The approach was standardized and optimized in the Chaff solver, which popularized key engineering choices like watched literals and the VSIDS heuristic. A high-level pseudocode outline of CDCL is:
function CDCL(φ):
    σ ← empty assignment
    dl ← 0
    while true:
        propagate_units(φ, σ)
        if conflict_detected(σ):
            (learned_clause, backjump_level) ← analyze_conflict(σ)
            if backjump_level < 0:
                return UNSAT
            add learned_clause to φ
            backtrack_to(σ, backjump_level)
            dl ← backjump_level
        elif all_variables_assigned(σ):
            return SAT
        else:
            var ← select_branching_variable(σ)
            dl ← dl + 1
            assign(σ, var, true)  // or false for the other branch
This structure emphasizes the search-and-learn loop, where propagation and decisions advance the search, and conflicts trigger learning to inform backtracking.

Detailed Steps and Pseudocode

The conflict-driven clause learning (CDCL) algorithm formalizes the search process for solving Boolean satisfiability problems in conjunctive normal form (CNF) through iterative unit propagation, decision making, conflict analysis, clause learning, and backjumping. The procedure maintains a partial assignment σ to variables, a current decision level dl, and an implication graph G derived from propagation, where edges represent implications from clauses. Unit propagation is performed using the implication graph to enforce implied literals until a fixpoint is reached or a conflict arises, indicating an empty clause. If a conflict occurs during propagation, the algorithm invokes conflict analysis on the implication graph G to derive a learned clause C that explains the conflict, typically via resolution steps until reaching the first unique implication point (UIP). The learned clause C is then added to the formula φ, and the algorithm backjumps to the asserting level β, which is the maximum decision level among the literals in C excluding the asserting literal k (the literal in C that becomes unit after backjumping and forces a unique assignment). This backjump undoes assignments beyond level β, potentially asserting k at that level to avoid the conflict. If no conflict arises and the formula is not fully assigned, the algorithm selects an unassigned variable for branching, increments the decision level dl, and assigns a value (typically true first, per dynamic heuristics), then resumes propagation. The process repeats until either all variables are assigned (SAT, verified by satisfying all clauses) or a conflict at decision level 0 is detected (UNSAT). Implication graphs are used in propagation to track reasons for assignments efficiently. The backjump level β is computed as \beta = \max \{ \text{level}(l) \mid l \in C \setminus \{k\} \} where level(l) denotes the decision level of literal l, and k is the asserting literal in the learned clause C.
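The β computation above is small enough to sketch directly. The DIMACS-style literal encoding and the `level` map are assumptions for illustration:

```python
# β = max level among the learned clause's literals other than the
# asserting one (the unique literal at the current decision level).

def backjump_level(learned_clause, level, current_level):
    """Return the asserting level β; 0 for a unit learned clause."""
    others = [level[abs(l)] for l in learned_clause
              if level[abs(l)] != current_level]
    return max(others, default=0)

# Learned clause (x1 ∨ ¬x2 ∨ x3 ∨ x5) with levels x1:0, x2:1, x5:1, x3:2:
print(backjump_level([1, -2, 3, 5], {1: 0, 2: 1, 3: 2, 5: 1}, 2))  # → 1
```

After backjumping to level 1 the clause is unit on its current-level literal, which is then propagated as the asserting assignment.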
The full CDCL algorithm can be expressed in procedural pseudocode as follows, incorporating key subroutines for propagation, conflict analysis, learning, and backtracking:
function CDCL(φ, σ):  // φ: CNF formula, σ: partial assignment
    dl ← 0
    while true:
        res ← propagate(φ, σ)  // Unit propagation until fixpoint or conflict
        if res = CONFLICT:
            C, β ← analyze_conflict(G)  // Analyze implication graph G for clause C and level β
            if β < 0:  // Conflict at root level
                return UNSAT
            learn_clause(φ, C)  // Add learned clause C to φ
            backtrack(σ, β)  // Backjump to level β, assert k if unit
        elif all variables assigned in σ:
            return SAT
        else:
            x ← select_variable(σ)  // Branching variable
            dl ← dl + 1
            σ ← σ ∪ {(x, true)}  // Tentative assignment at new level
            // Propagate will handle implications in next iteration

function propagate(φ, σ):
    while there exists a unit clause in φ under σ:
        assign the implied literal to σ, update G with implications
    if there exists an empty clause:
        return CONFLICT
    return NO_CONFLICT

function analyze_conflict(G):
    // Traverse G backward from conflict literals, resolve until first UIP
    C ← derive_learned_clause_via_resolution(G)
    k ← find_asserting_literal(C, σ)  // Literal that will be unit after backjump
    β ← max{level(l) for l in C \ {k}}
    return C, β

function learn_clause(φ, C):
    φ ← φ ∪ {C}  // Add to clause database (with possible forgetting later)

function backtrack(σ, β):
    undo assignments in σ with level > β
    dl ← β
    if exists asserting literal k at level β:
        assign k to σ
This pseudocode captures the core loop of propagation, decision, and learning-driven backjumping, with subroutines modularized for clarity; in practice, implementations like MiniSat optimize these using watched literals for efficient unit propagation.
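When implementing the procedure above, a brute-force oracle is useful for validating results on small inputs. The sketch below is such a test oracle (not a CDCL implementation), again assuming DIMACS-style clause lists; the demo formula is the toy instance from the Practical Illustration section:

```python
# A brute-force satisfiability checker, useful as a reference oracle when
# testing a CDCL implementation on small formulas.

from itertools import product

def brute_force_sat(cnf, variables):
    """Return a satisfying assignment (var -> bool) or None if UNSAT."""
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(any(a[abs(l)] == (l > 0) for l in c) for c in cnf):
            return a
    return None

# The toy instance used later in the article is satisfiable:
phi = [[1, 3, 4], [-2, -5], [3, -4, 5, -6], [-1], [1, -2, -4, 6]]
print(brute_force_sat(phi, [1, 2, 3, 4, 5, 6]) is not None)  # → True
```

Any CDCL sketch should agree with this oracle on both SAT and UNSAT verdicts for small formulas.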

Integration of Learning in Backtracking

In traditional backtracking approaches, such as the original Davis-Putnam-Logemann-Loveland (DPLL) algorithm, search proceeds chronologically by undoing assignments level by level upon encountering a conflict, returning to the immediate parent decision node. This can lead to inefficient exploration of redundant subtrees, as it does not leverage insights from the conflict to skip irrelevant branches. In contrast, conflict-driven clause learning (CDCL) introduces non-chronological backtracking, where the search jumps to an earlier decision level determined by the learned clause, enabling more targeted pruning of the search space. The integration of learning into backtracking occurs as follows: upon detecting a conflict at decision level l, conflict analysis derives a new clause that asserts a literal assigned at some earlier level \beta < l. This learned clause is added to the clause database, and the solver backjumps by undoing all assignments from level l down to level \beta, where the implied literal is then propagated as a unit clause. This process, often guided by identifying a unique implication point (UIP) in the implication graph, allows the solver to resume searching from a level that avoids the conditions leading to the conflict. This mechanism provides significant benefits by avoiding the re-exploration of redundant subtrees that would inevitably lead to similar conflicts. Learned clauses act as global constraints, pruning future search paths and preventing the solver from revisiting unsatisfiable combinations across different branches. In practice, this non-chronological approach can reduce the overall search tree size exponentially, as evidenced by benchmarks where CDCL solvers explore orders of magnitude fewer nodes compared to chronological methods—for instance, jumping back multiple levels can save millions of nodes in large instances.
A textual illustration of this is a search tree in which nodes represent decision assignments at successive levels and backjump arrows indicate non-chronological returns to an ancestor node (e.g., from a leaf at level 6 directly to level 3), bypassing intermediate levels and their subtrees.

Practical Illustration

Toy Problem Setup

To illustrate the fundamentals of conflict-driven clause learning (CDCL), a small conjunctive normal form (CNF) formula is used as a running example. Consider the instance φ with six Boolean variables x₁, x₂, x₃, x₄, x₅, x₆ and five clauses: (x₁ ∨ x₃ ∨ x₄) ∧ (¬x₂ ∨ ¬x₅) ∧ (x₃ ∨ ¬x₄ ∨ x₅ ∨ ¬x₆) ∧ (¬x₁) ∧ (x₁ ∨ ¬x₂ ∨ ¬x₄ ∨ x₆). This formula involves a unit clause (¬x₁) and implication-like clauses that chain dependencies between the variables, making it an ideal toy example for CDCL due to its minimal size, which permits manual tracing of unit propagation, conflict detection, and subsequent clause learning without overwhelming complexity. The example highlights how initial propagations and decisions lead to a conflict, enabling the learning of new clauses that generalize the reason for the conflict and guide backjumping to earlier levels. In CDCL, the evolving partial assignment to variables is maintained as a trail, an ordered list of assigned literals reflecting the chronological sequence of decisions and implications. Decision literals are explicitly marked as chosen by the branching heuristic at a given decision level, while propagated literals are derived via unit propagation and linked to their antecedent clause for later conflict analysis. This setup demonstrates how CDCL encounters a conflict after initial propagations and subsequent decisions (such as on x₃), enabling the learning of new clauses that capture the conflict's causes.

Trace of Propagation and Conflict

In the running example, the solver begins with an empty partial assignment at decision level 0. Unit propagation is immediately triggered by the unit clause {¬x₁}, forcing the assignment x₁ = false (or equivalently, ¬x₁ = true) at level 0, with the clause itself as the antecedent. No further propagations occur at this stage. The solver then proceeds to the first decision at level 1, branching on the unassigned variable x₂ by assigning x₂ = true (antecedent is the decision marker Λ). Propagation follows from the clause {¬x₂, ¬x₅}: since ¬x₂ = false, it forces ¬x₅ = true (i.e., x₅ = false) at level 1, with the clause as antecedent. The current trail of assignments is now ¬x₁ (level 0), x₂ (level 1), and ¬x₅ (level 1). At decision level 2, the solver branches on x₃ by assigning x₃ = false (or ¬x₃ = true, antecedent Λ). Propagation then applies the clause {x₁, x₃, x₄}: with x₁ = false and x₃ = false, it forces x₄ = true at level 2 (antecedent the clause). Next, from {x₁, ¬x₂, ¬x₄, x₆}, the falsity of x₁, ¬x₂, and ¬x₄ forces x₆ = true at level 2 (antecedent the clause). Finally, the clause {x₃, ¬x₄, x₅, ¬x₆} sees x₃ = false, ¬x₄ = false, and x₅ = false, forcing ¬x₆ = true (i.e., x₆ = false) at level 2 (antecedent the clause). This contradicts the prior assignment x₆ = true, detecting a conflict at level 2. The full trail at the conflict is ¬x₁ (0), x₂ (1), ¬x₅ (1), ¬x₃ (2), x₄ (2), x₆ (2), ¬x₆ (2). The implication graph captures these dependencies as a directed acyclic graph with vertices for assigned literals and a conflict node κ. Edges represent implications from antecedent clauses: for instance, ¬x₁ and ¬x₃ imply x₄ (via the clause {x₁ ∨ x₃ ∨ x₄}), x₄ and prior assignments imply x₆ (from {x₁ ∨ ¬x₂ ∨ ¬x₄ ∨ x₆}), and later assignments imply ¬x₆ (from {x₃ ∨ ¬x₄ ∨ x₅ ∨ ¬x₆}), culminating in edges from x₆ → κ and ¬x₆ → κ. This graph highlights the propagation chain leading to the conflict.

Analysis of Learned Clauses

Upon detecting the conflict, the solver constructs the implication graph and performs conflict analysis using resolution to derive a learned clause. The conflict arises from the clause {x₃ ∨ ¬x₄ ∨ x₅ ∨ ¬x₆} attempting to propagate ¬x₆ while x₆ is already true. The antecedent of x₆ is {x₁ ∨ ¬x₂ ∨ ¬x₄ ∨ x₆}. Resolving these on x₆ yields {x₁ ∨ ¬x₂ ∨ x₃ ∨ ¬x₄ ∨ x₅}. Next, resolve this with the antecedent of x₄, which is {x₁ ∨ x₃ ∨ x₄}, on x₄ to obtain the learned clause {x₁ ∨ ¬x₂ ∨ x₃ ∨ x₅}. This learned clause has literals from levels 0 (x₁) and 1 (¬x₂, x₅), with x₃ as the asserting literal at the current decision level. The solver performs non-chronological backtracking (backjumping) to level 1 and asserts x₃ = true using the learned clause as antecedent. This prunes the search space by avoiding similar conflicts in future explorations at higher levels. The learned clause encapsulates the reason for the conflict in a concise form, demonstrating how CDCL generalizes failures to improve efficiency on larger instances.
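The two resolution steps above can be carried out mechanically. This sketch assumes the DIMACS-style signed-integer encoding (6 is x₆, -6 is ¬x₆):

```python
# The resolution derivation from the text, executed explicitly.

def resolve(c1, c2, var):
    """Resolvent of c1 and c2 on `var`, as a sorted literal list."""
    return sorted({l for l in c1 + c2 if abs(l) != var})

conflict = [3, -4, 5, -6]          # (x3 ∨ ¬x4 ∨ x5 ∨ ¬x6)
ante_x6 = [1, -2, -4, 6]           # (x1 ∨ ¬x2 ∨ ¬x4 ∨ x6)
ante_x4 = [1, 3, 4]                # (x1 ∨ x3 ∨ x4)

step1 = resolve(conflict, ante_x6, 6)
step2 = resolve(step1, ante_x4, 4)
print(step1)  # → [-4, -2, 1, 3, 5]   (x1 ∨ ¬x2 ∨ x3 ∨ ¬x4 ∨ x5)
print(step2)  # → [-2, 1, 3, 5]       (x1 ∨ ¬x2 ∨ x3 ∨ x5)
```

The final resolvent matches the learned clause derived in the text, with x₃ as the sole current-level (asserting) literal.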

Theoretical Foundations

Soundness and Completeness Proofs

Conflict-driven clause learning (CDCL) inherits its soundness from the underlying Davis–Putnam–Logemann–Loveland (DPLL) procedure, which ensures that unit propagation and decision-making steps preserve the satisfiability of the input formula φ. Unit propagation enforces implied literals from clauses under the current partial assignment, adding only consequences that do not introduce contradictions unless one exists in φ. Specifically, if a clause becomes unit under the assignment, propagating the literal maintains satisfiability, as the implied assignment is a direct inference from φ and the current decisions. The clause learning mechanism in CDCL further upholds soundness by deriving new clauses through repeated applications of the resolution rule, ensuring they are logical consequences of φ. The resolution rule states that from two clauses (a ∨ C) and (¬a ∨ D), where a is a literal and C, D are disjunctions of literals, one infers the resolvent (C ∨ D), which is entailed by the premises. Conflict analysis constructs the learned clause by resolving the conflicting clause with its antecedent clauses along the implication graph, typically up to the first unique implication point (UIP), yielding a clause ω such that φ ⊨ ω. Adding ω to the clause set thus preserves satisfiability: φ is satisfiable if and only if φ ∪ {ω} is satisfiable. This is proven by induction on the resolution steps, where each resolvent is a logical consequence, and the process terminates due to the acyclic nature of the implication graph. Non-chronological backtracking, guided by the learned clause, also preserves correctness by undoing assignments only to levels justified by the analysis. For completeness, CDCL exhaustively explores the assignment space through recursive decision-making and backtracking, ensuring that if φ is satisfiable, some branch will reach a complete assignment satisfying all clauses without conflict. The algorithm maintains invariants that the current partial trail M is consistent with φ, and propagated literals are entailed by decisions and φ. If no conflict arises along a branch to a full assignment, M models φ.
Learning prunes redundant branches but does not eliminate satisfying assignments, as learned clauses are implied by φ and only forbid inconsistent partial assignments. If φ is unsatisfiable, repeated conflict analysis and backjumping will cover all possible branches, eventually producing a conflict at decision level 0, yielding an empty clause. This is formalized in abstract transition systems where derivations terminate in either a model or a failure state, confirming CDCL as a complete decision procedure for propositional SAT, unlike incomplete local search heuristics. The proofs rely on structural invariants, such as the trail containing no duplicate literals and learned clauses being non-tautological consequences, ensuring partial correctness and termination. For instance, if the procedure halts with a model M, then M ⊨ φ; if it halts in failure, then the empty clause is derived, implying unsatisfiability. These properties hold across CDCL variants, including those with watched literals for efficient propagation.
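The entailment claim behind soundness (φ ⊨ ω for every resolvent ω) can be checked mechanically on tiny inputs. This sketch uses exhaustive enumeration and the DIMACS-style encoding as assumptions, and is of course practical only for very small formulas:

```python
# Checking that a resolvent is entailed by its premises by exhaustive
# enumeration over all assignments (illustrative; exponential in n).

from itertools import product

def entails(premises, clause, variables):
    """True iff every model of all premises also satisfies `clause`."""
    def sat_clause(c, a):
        return any(a[abs(l)] == (l > 0) for l in c)
    for bits in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, bits))
        if all(sat_clause(c, a) for c in premises) and not sat_clause(clause, a):
            return False
    return True

# (a ∨ c) and (¬a ∨ d) entail the resolvent (c ∨ d):
print(entails([[1, 2], [-1, 3]], [2, 3], [1, 2, 3]))  # → True
```

Since every learned clause is a chain of such resolvents, adding it never removes a model of φ, which is exactly the soundness argument above.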

Termination and Complexity

The CDCL algorithm terminates because the search space of variable assignments is finite, consisting of at most 2^n possible partial and complete assignments for a formula with n variables. Backtracking systematically explores this space, either finding a satisfying assignment when all variables are assigned without conflict or detecting unsatisfiability upon a conflict at decision level 0 after unit propagation. Clause learning contributes to termination by deriving new clauses that prune redundant subtrees in the search space, shortening the resolution proof required to certify unsatisfiability. Restarts further ensure progress by periodically resetting the search state to avoid prolonged exploration of unproductive branches, with strategies like geometric or Luby sequences guaranteeing termination without infinite loops, as the solver retains learned clauses across restarts. In the worst case, CDCL exhibits exponential O(2^n) time and space complexity, mirroring exhaustive search over all possible assignments, as the algorithm may need to explore nearly the entire search tree before resolving an instance. Unrestricted clause learning can generate an exponential number of clauses, exacerbating memory usage, though practical implementations bound learning (e.g., by length or activity) to maintain polynomial cost per conflict analysis. There is no known polynomial-time bound for CDCL on general SAT instances, as the problem is NP-complete, and CDCL polynomially simulates full resolution, which requires exponential-size proofs for certain hard formulas like Tseitin encodings over expander graphs. CDCL's performance ties to resolution proof complexity, where learned clauses form a resolution refutation whose size can be superpolynomial; for instance, resolution proofs for pigeonhole formulas grow exponentially. While worst-case bounds remain exponential, empirical scaling on benchmarks is often sub-exponential due to heuristics and learning, allowing solvers to handle industrial instances with millions of clauses efficiently.
Recent analyses highlight average-case behavior under random 3-SAT distributions, where CDCL solvers transition sharply near the satisfiability threshold (around 4.26 clauses per variable) and exhibit polynomial-time behavior for instances below it, though theoretical explanations for this efficacy remain partial. Post-2020 studies, such as those formalizing CDCL's reasoning power, underscore that restarts can yield speedups over non-restarting variants even in theoretical models.
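The Luby restart policy mentioned above can be sketched directly from its recursive definition; the sequence runs 1, 1, 2, 1, 1, 2, 4, 1, 1, 2, …, and solvers typically multiply each term by a base conflict interval to obtain the next restart limit.

```python
def luby(i):
    """i-th term (1-indexed) of the Luby sequence: 1, 1, 2, 1, 1, 2, 4, ..."""
    k = 1
    while (1 << k) - 1 < i:                # find smallest k with 2^k - 1 >= i
        k += 1
    if (1 << k) - 1 == i:                  # i ends a block: the term is 2^(k-1)
        return 1 << (k - 1)
    return luby(i - (1 << (k - 1)) + 1)    # otherwise recurse inside the block

terms = [luby(i) for i in range(1, 16)]
print(terms)  # [1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1, 1, 2, 4, 8]
```

Because every value recurs infinitely often, a solver using this schedule is guaranteed arbitrarily long runs between restarts, which preserves completeness when combined with clause retention.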

Implementations and Applications

Role in Modern SAT Solvers

Conflict-driven clause learning (CDCL) forms the foundational backbone of virtually all competitive SAT solvers in use as of 2025, including prominent implementations such as Glucose, Lingeling, Kissat, CaDiCaL, and recent winners like AE-Kissat-MAB. These solvers leverage CDCL's conflict analysis and non-chronological backtracking to efficiently explore the search space of satisfiability problems. Clause learning, as the core distinguishing feature of CDCL, allows these systems to derive new clauses from conflicts, pruning future search paths and accelerating convergence to satisfying assignments or proofs of unsatisfiability. Modern CDCL implementations incorporate several key optimizations to enhance performance. The two-watched literals scheme provides a lazy data structure for unit propagation, avoiding the evaluation of irrelevant clauses and keeping propagation cost near-linear in practice. Search restarts, employing sequences like geometric or Luby policies, mitigate the risk of getting trapped in unproductive search regions by periodically resetting the solver while retaining learned clauses. These techniques, combined with conflict-driven branching heuristics such as VSIDS, enable solvers to handle massive formulas with millions of clauses. The adoption of CDCL has dramatically transformed the field, powering the success of SAT competitions where industrial instances—previously intractable before 2000—now resolve in seconds on commodity hardware. For instance, solvers like Kissat and CaDiCaL have dominated recent competitions, with CaDiCaL securing wins in the main track of SAT Competition 2023, Kissat variants in 2024, and AE-Kissat-MAB in 2025. Post-Chaff evolutions, such as clause vivification through unit propagation, further refine clause database management by eliminating redundant literals, improving propagation efficiency without additional learning overhead.
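As a rough illustration of the VSIDS idea (bump the activity of variables involved in conflicts, decay everything periodically, and branch on the most active unassigned variable), the following sketch uses hypothetical names and a simplified multiplicative decay; real solvers instead scale the bump increment and rescale on overflow.

```python
class VSIDS:
    """Minimal VSIDS-style activity tracking (a sketch, not a real solver's)."""

    def __init__(self, n_vars, decay=0.95):
        self.activity = {v: 0.0 for v in range(1, n_vars + 1)}
        self.decay = decay

    def on_conflict(self, conflict_vars):
        for v in conflict_vars:
            self.activity[v] += 1.0          # bump variables seen in the conflict
        for v in self.activity:
            self.activity[v] *= self.decay   # decay all activities

    def pick_branch_var(self, unassigned):
        # branch on the unassigned variable with the highest activity
        return max(unassigned, key=self.activity.get)
```

Because recent bumps outweigh decayed older ones, the heuristic focuses branching on variables from recent conflicts, which is the behavior the text attributes to VSIDS.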

Extensions to Other Domains

Conflict-driven clause learning (CDCL), originally developed for propositional satisfiability (SAT) solving, has been extended to various domains beyond classical decision problems, enabling efficient handling of optimization, verification, and planning tasks through adaptations of its conflict analysis and clause learning mechanisms. In optimization problems such as Maximum Satisfiability (MaxSAT), CDCL is integrated into core-guided solvers that iteratively extract unsatisfiable cores from weighted formulas using a CDCL-based oracle to minimize the number of unsatisfied clauses. For instance, the MaxHS solver employs a hybrid approach alternating between CDCL-driven unsat-core extraction and minimum hitting set computation for optimal solutions in weighted partial MaxSAT instances, achieving significant performance gains on benchmarks by leveraging learned clauses to guide relaxation variable assignments. Similarly, the WMaxCDCL algorithm combines branch-and-bound search with CDCL clause learning for weighted partial MaxSAT, demonstrating improved performance on optimization problems compared to traditional core-based methods. In hardware and software verification, CDCL plays a central role in bounded model checking (BMC), where temporal properties are unrolled into propositional formulas solved via CDCL SAT solvers to detect counterexamples within finite bounds. Tools like ABC from UC Berkeley integrate CDCL solvers, such as those based on MiniSat, with binary decision diagrams (BDDs) for equivalence checking and BMC, allowing hybrid symbolic-explicit exploration that prunes search spaces through learned conflict clauses. This integration has proven effective in verifying complex circuits, where CDCL's propagation and learning reduce the state explosion typical in pure BDD-based approaches. For AI planning, particularly problems expressed in the Planning Domain Definition Language (PDDL), CDCL is applied by encoding planning problems as SAT instances, with conflict-driven learning accelerating the search for valid plans.
In conformant planning, which handles uncertainty in initial states and actions, PDDL problems are compiled into SAT formulas solved by CDCL solvers enhanced with planning-specific heuristics like VSIDS variants tailored to action variables, enabling efficient discovery of plans that work regardless of unknown states. Seminal work has shown that CDCL-based SAT planning outperforms earlier black-box planners on conformant benchmarks by learning clauses that capture plan invariances. CDCL's conflict analysis has also been adapted to satisfiability modulo theories (SMT) solvers, notably Z3, where it supports theories like bit-vectors through lazy learning and bit-blasting. In Z3's DPLL(T) architecture, CDCL handles the propositional skeleton while theory solvers propagate constraints lazily, generating lemmas as learned clauses during conflicts to resolve bit-vector inconsistencies, which enhances performance on tasks involving fixed-size integers. Recent applications (post-2023) extend this to neural network verification, where CDCL-based tools verify properties like robustness by encoding network behaviors as SAT problems. The DeepCDCL tool, for example, adapts CDCL with neuron splitting and specialized branching for non-linear activations, proving safety in benchmarks like ACAS Xu while handling implicit XOR-like constraints in binary decision layers. Emerging 2024-2025 work further incorporates proof-driven clause learning in CDCL for scalable neural network verification, addressing XOR clauses arising from exclusive activations in deeper architectures.

Classical DPLL and Resolution

The Davis–Putnam–Logemann–Loveland (DPLL) algorithm, introduced in 1962, is a foundational backtracking-based procedure for deciding the satisfiability of propositional formulas in conjunctive normal form (CNF). It operates by recursively selecting an unassigned variable, branching on both possible truth values, and applying simplification techniques such as unit propagation—where a clause with only one unset literal forces that literal's value—and pure literal elimination, which assigns values to variables appearing with only one polarity across all clauses to simplify the formula. Upon encountering a conflict, DPLL employs chronological backtracking, undoing the most recent decision and exploring the alternative branch, continuing until a satisfying assignment is found or all possibilities are exhausted. This approach ensures soundness and completeness for SAT, as it systematically enumerates partial assignments while pruning inconsistent subtrees through propagation. A key limitation of classical DPLL arises from its chronological backtracking mechanism, which re-explores large portions of the search space upon detecting deep conflicts in the search tree. Specifically, when a conflict occurs, DPLL backtracks only to the immediately preceding decision point, potentially repeating propagations and branches that led to the same failure in similar contexts, resulting in inefficient redundancy on complex instances. This lack of memory about past conflicts means that the algorithm does not prune future searches proactively, limiting its scalability on large-scale SAT problems despite its completeness. Unit propagation, while effective for local inference, is shared with later extensions but insufficient alone to mitigate these exploratory inefficiencies in DPLL. In parallel to search-based methods like DPLL, the resolution proof system provides a deductive calculus for establishing unsatisfiability of CNF formulas, originating from J. A. Robinson's 1965 work on machine-oriented logic.
Resolution operates as a single-rule calculus: given two clauses containing complementary literals l and ¬l, it derives a new clause (the resolvent) by removing those literals and combining the remaining disjuncts, and repeated applications can derive the empty clause as a refutation proof of unsatisfiability. The calculus is refutation-complete, meaning every unsatisfiable CNF formula has a resolution refutation, and it underpins theoretical analyses of SAT solving. However, constructing a full resolution proof can require exponential size in the worst case, rendering it impractical for direct use without heuristics, as the proof may explode due to the need for global consistency across all clauses. Conflict-driven clause learning (CDCL) addresses these limitations by extending DPLL with non-chronological backtracking—jumping directly to the deepest relevant decision level causing a conflict—and clause learning, which analyzes conflict traces to add implied clauses that prevent redundant exploration. In essence, CDCL equates to DPLL augmented by these mechanisms, enabling more efficient search guidance while preserving completeness through resolution-based learning that shortcuts the exponential full-proof requirement of classical resolution. Unlike DPLL's repetitive failures or resolution's exhaustive derivations, CDCL prunes the search space by recording conflict-driven insights, dramatically improving performance on practical SAT instances.
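The single resolution rule is easy to state in code. A minimal sketch with integer literals (positive for x, negative for ¬x), keeping only the combined remaining disjuncts:

```python
def resolve(c1, c2, var):
    """Resolvent of clauses c1 and c2 on var, which must occur positively
    in c1 and negatively in c2; clauses are lists of nonzero integers."""
    assert var in c1 and -var in c2
    # drop the complementary pair, merge everything else, deduplicate
    return sorted((set(c1) | set(c2)) - {var, -var})

# Resolving (x1 ∨ x2) with (¬x1 ∨ x3) on x1 gives (x2 ∨ x3);
# resolving the unit clauses (x1) and (¬x1) gives the empty clause,
# refuting any formula that contains both:
empty = resolve([1], [-1], 1)
```

A resolution refutation is just a sequence of such steps ending in the empty clause, which is exactly the object a CDCL run on an unsatisfiable formula implicitly constructs.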

Advanced Variants and Hybrids

Parallel CDCL solvers extend the traditional single-threaded approach by distributing the search across multiple threads or processes, often incorporating clause sharing to propagate learned clauses between them for improved efficiency. In frameworks like PaInleSS, clause sharing is implemented through periodic exchanges of short, high-quality learned clauses among parallel CDCL instances, enabling solvers to benefit from conflicts discovered in other threads without full synchronization overhead. This mechanism has been shown to scale well on multi-core systems, with empirical gains in solving time for industrial benchmarks. Similarly, distributed variants like D-Painless build on portfolio strategies by combining clause sharing with varied solver configurations across nodes, achieving significant speedups on large-scale problems. Portfolio solvers represent another evolution, running multiple CDCL instances in parallel with diverse parameter settings—such as different learning rates, branching heuristics, or restart policies—to exploit complementary strengths without explicit partitioning of the search space. HordeSat, a massively parallel portfolio solver, exemplifies this by executing numerous CDCL configurations concurrently and sharing clauses lazily to minimize communication costs, demonstrating superior performance on hard combinatorial instances compared to sequential solvers. ManySAT further refines this by using size-based metrics for clause selection in sharing, exchanging only clauses of length eight or shorter, leading to effective cooperation among threads. Hybrid approaches integrate CDCL with incomplete methods like local search to leverage the strengths of both systematic and stochastic paradigms.
For instance, deep cooperation techniques embed local search probes within CDCL restarts to guide variable selection and clause learning, resulting in solvers that solve a broader range of instances than pure CDCL or local search alone, as evidenced by improvements in Glucose, MapleLCMDistChronoBT, and Kissat variants on SAT competition benchmarks. The IPASIR interface facilitates such hybrids by standardizing incremental solving, allowing dynamic addition of clauses and propagators during search, which supports fine-grained integration of external components like local search or domain-specific constraints in CDCL frameworks. In the 2020s, advances in inprocessing techniques, such as subsumption checking during the search phase, have enhanced CDCL efficiency by continuously simplifying the formula without resetting the solver state. Solvers in the Maple family incorporate incremental inprocessing, including lazy clause subsumption and vivification, to reduce redundancy in learned clauses mid-search, yielding measurable improvements in memory usage and solving speed on application benchmarks from recent SAT competitions. These methods build on hash-based detection for fast subsumption checks, avoiding the pitfalls of full preprocessing. Quantum-inspired approaches explore reformulating SAT instances as quadratic unconstrained binary optimization (QUBO) problems for hardware solvers like Ising machines. As of 2024, such methods have shown accuracy comparable to CDCL-based Max-SAT solvers on small 3-SAT benchmarks with tens of variables. More recently, systems such as NeuroCore have integrated machine-learned guidance into CDCL heuristics in solvers like Glucose and MiniSat, improving performance on competition benchmarks.

Alternative Learning Strategies

Lookahead learning represents a proactive alternative to the reactive conflict-driven approach of CDCL, where implications are precomputed before branching decisions to guide variable selection and simplify the formula. In lookahead solvers, such as OKsolver, the process involves evaluating the impact of assigning a literal by performing unit propagation and analyzing resulting unit clauses or conflicts, using heuristics that prioritize variables whose assignment triggers the most propagation. This contrasts with CDCL's post-conflict analysis, as lookahead embeds learning directly into the branching phase, deriving clauses from failed literals or double-lookahead resolvents to avoid redundant subtrees. For instance, OKsolver employs local learning to add clauses that block repeated failures, enhancing efficiency on structured problems. Performance-wise, lookahead methods excel on random k-SAT instances with low clause-variable density, where OKsolver and similar solvers like kcnfs solved more unsatisfiable 3-SAT problems in early competitions than initial CDCL implementations. However, CDCL solvers dominate on industrial benchmarks with high density and large implication graph diameters, as lookahead's eager inference becomes computationally expensive, leading to slower overall solving times. Hybrid approaches, such as cube-and-conquer, leverage lookahead for problem decomposition before handing subproblems to CDCL, outperforming pure lookahead on competition benchmarks by combining proactive splitting with reactive learning. Probabilistic learning strategies introduce uncertainty handling through clause weighting in local search solvers, particularly for MAX-SAT and QBF instances where optimization or quantification adds complexity beyond pure SAT. In clause-weighting local search solvers, clauses are assigned dynamic weights reflecting their violation frequency, and variable flips are selected probabilistically based on these weights to escape local optima, contrasting with CDCL's deterministic implication graph analysis.
For MAX-SAT, this enables soft clause satisfaction by prioritizing high-weight falsified clauses, as seen in weighted local search algorithms that adjust penalties during search to approximate optimal solutions. In QBF contexts, probabilistic extensions adapt clause selection under universal/existential alternation, using weighted learning to prune inconsistent paths without full quantifier expansion. Other alternatives include forgetting mechanisms like blocked literal elimination, which simplify formulas by removing redundant literals during preprocessing or search, serving as a lightweight counterpart to CDCL's clause addition. A literal is blocked if its removal from all containing clauses preserves satisfiability, allowing efficient reduction without learning new clauses, often integrated into solvers for quick simplification. In comparison, CDCL's unique implication point (UIP) learning remains deterministic, systematically resolving the conflict graph to derive a single asserting clause that shortens proofs by minimizing literal count and enabling deeper backjumps. UIP's proof-shortening effect arises from targeting the conflict's "center of action," yielding clauses derivable in fewer steps than multi-UIP alternatives. XOR-enhanced local search has shown effectiveness on instances hard for CDCL, such as those with many XOR constraints arising in cryptographic verification, by natively handling parity reasoning without long resolution chains. Overall, while CDCL with UIP learning proves superior for general SAT instances due to its balance of completeness and empirical speed, alternatives shine in niche cases. Recent advancements since 2023 incorporate machine learning for guided learning, such as graph neural networks (GNNs) in NeuroBack, which predict backbone phases to initialize CDCL searches and refine learned clauses by reducing unnecessary conflicts, solving 5-7% more SAT Competition instances. Similarly, AsymSAT uses GNN-RNN hybrids for sequential assignment prediction, implicitly learning dependencies to boost solution rates on symmetric problems by over 40% compared to prior neural methods.
These ML approaches extend beyond traditional determinism, offering adaptive guidance for clause generation in neural SAT solvers.
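The clause-weighting selection step discussed above can be sketched as follows; the data layout (a list of falsified clause indices plus a weight per clause) and the additive penalty are illustrative assumptions, not any particular solver's scheme.

```python
import random

def pick_and_penalize(falsified, weights, rng):
    """Choose a falsified clause with probability proportional to its weight,
    then bump that clause's weight so it is prioritized in later steps.
    The caller then flips a variable from the chosen clause."""
    total = sum(weights[i] for i in falsified)
    r = rng.uniform(0, total)
    acc = 0.0
    chosen = falsified[-1]           # fallback guards against rounding at the end
    for i in falsified:
        acc += weights[i]
        if r <= acc:
            chosen = i
            break
    weights[chosen] += 1.0           # penalize the clause that stayed falsified
    return chosen
```

Repeatedly bumping the weights of clauses that resist satisfaction reshapes the objective landscape, which is how clause-weighting local search escapes local optima without the implication-graph machinery of CDCL.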
