
Boolean satisfiability problem

The Boolean satisfiability problem (SAT) is a fundamental decision problem in computational logic that asks whether there exists an assignment of truth values (true or false) to the variables in a given Boolean formula such that the formula evaluates to true. Typically formulated in conjunctive normal form (CNF), where the formula is a conjunction of clauses, each being a disjunction of literals (variables or their negations), SAT determines the satisfiability of such expressions, with unsatisfiability implying a contradiction in the propositional logic represented by the formula. Proven to be NP-complete by Stephen A. Cook in his 1971 paper "The Complexity of Theorem-Proving Procedures," SAT serves as the canonical example for the NP complexity class, meaning any problem in NP can be reduced to SAT in polynomial time, and it underpins the unsolved P vs. NP question central to theoretical computer science. This NP-completeness result, the first of its kind, sparked the development of the entire field of NP-completeness, influencing reductions for thousands of other problems across mathematics, optimization, and engineering. Despite its theoretical intractability—requiring exponential time in the worst case for exact solutions—modern SAT solvers employ advanced heuristics, clause learning, and backtracking techniques derived from the DPLL procedure (introduced in 1962), enabling them to resolve industrial-scale instances with millions of variables and clauses in seconds. These solvers power diverse applications, including hardware and software verification (e.g., checking circuit equivalence or bug detection), planning (e.g., automated scheduling), optimization (e.g., constraint satisfaction in logistics), and bioinformatics (e.g., protein folding models), demonstrating a remarkable gap between worst-case theory and practical performance. Annual international SAT competitions since 2002 have accelerated progress, with empirical studies revealing that real-world instances often exhibit structure exploitable by heuristics and preprocessing techniques. Variants like 2-SAT (linear-time solvable) and 3-SAT (NP-complete even for clauses of three literals) highlight the problem's sensitivity to clause width, while extensions to quantified Boolean formulas (QBF) address higher levels of the polynomial hierarchy.

Definitions and Formalism

Boolean Formulas

In propositional logic, the fundamental building blocks are propositional variables, also known as Boolean variables, which represent atomic propositions that can take one of two truth values: true (often denoted as T or 1) or false (F or 0). These variables, typically denoted by lowercase letters such as p, q, r, do not possess internal structure and serve as placeholders for declarative statements whose truth values are to be evaluated. Propositional formulas are constructed using a set of logical connectives that combine these variables to form more complex expressions. The primary connectives include negation (\neg), conjunction (\land), disjunction (\lor), implication (\rightarrow), and biconditional (\leftrightarrow). Each connective defines a truth-functional operation, meaning the truth value of the resulting formula depends solely on the truth values of its components, as specified by their truth tables. The truth table for negation (\neg A) is:
A | \neg A
T | F
F | T
For conjunction (A \land B):
A | B | A \land B
T | T | T
T | F | F
F | T | F
F | F | F
For disjunction (A \lor B):
A | B | A \lor B
T | T | T
T | F | T
F | T | T
F | F | F
Implication (A \rightarrow B) is true unless A is true and B is false:
A | B | A \rightarrow B
T | T | T
T | F | F
F | T | T
F | F | T
The biconditional (A \leftrightarrow B) is true when A and B have the same truth value:
A | B | A \leftrightarrow B
T | T | T
T | F | F
F | T | F
F | F | T
Propositional formulas are defined recursively, starting from atomic elements and building through combinations with connectives. Specifically, the set of formulas is the smallest set such that: (1) every propositional variable is a formula (an atomic formula); (2) if \phi is a formula, then \neg \phi is a formula; and (3) if \phi and \psi are formulas, then (\phi \land \psi), (\phi \lor \psi), (\phi \rightarrow \psi), and (\phi \leftrightarrow \psi) are formulas. This recursive structure ensures that every formula can be parsed unambiguously, often represented as a parse tree in which leaves are atoms or their negations and internal nodes are connectives with their operands as subtrees. For example, consider the formula (p \lor \neg q) \land (\neg p \lor r). Its parse tree has a root node labeled \land, with left child \lor (subchildren p and \neg q) and right child \lor (subchildren \neg p and r). This tree representation highlights the hierarchical structure and precedence enforced by parentheses. Every propositional formula corresponds to a Boolean function, or truth function, that maps assignments of truth values to its variables to a single truth value, and such functions can equivalently be realized by Boolean circuits composed of gates mirroring the connectives.
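The recursive definition translates directly into a recursive evaluation procedure. The following Python sketch (the tuple-based parse-tree encoding and function name are illustrative, not from any particular library) evaluates a formula under a given truth assignment:

```python
def evaluate(formula, assignment):
    """Recursively evaluate a propositional formula given as a parse tree.

    Atoms are variable names (strings); compound formulas are tuples whose
    first element names the connective."""
    if isinstance(formula, str):                      # atomic formula
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], assignment)
    a, b = (evaluate(arg, assignment) for arg in args)
    return {"and": a and b,
            "or": a or b,
            "implies": (not a) or b,
            "iff": a == b}[op]

# Parse tree of (p | ~q) & (~p | r):
phi = ("and", ("or", "p", ("not", "q")), ("or", ("not", "p"), "r"))
print(evaluate(phi, {"p": False, "q": False, "r": True}))   # True
```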

Satisfiability and Models

In propositional logic, a satisfying assignment for a formula \phi is a truth assignment that assigns truth values (true or false) to each variable in \phi such that the entire formula evaluates to true. Any such satisfying assignment is also called a model of \phi. If no satisfying assignment exists for \phi, the formula is unsatisfiable, meaning it evaluates to false under every possible assignment of truth values to its variables. The satisfiability problem (SAT) is the fundamental decision problem in this context: given a formula \phi over a set of propositional variables, determine whether there exists at least one satisfying assignment (model) for \phi. Formally, SAT asks if \phi is satisfiable, with a "yes" instance admitting at least one model and a "no" instance being unsatisfiable. For a formula with n distinct variables, the total number of possible assignments is 2^n, which bounds the exhaustive search space for deciding satisfiability. To illustrate, consider the Boolean formula \phi = (p \lor \neg q) \land (\neg p \lor r). One satisfying assignment is p = false, q = false, r = true: here, p \lor \neg q evaluates to false \lor true = true, and \neg p \lor r evaluates to true \lor true = true, so \phi is true \land true = true. Special cases arise with tautologies and contradictions. A tautology is a formula that evaluates to true under every possible assignment, meaning every assignment is a model (e.g., p \lor \neg p). Conversely, a contradiction is a formula that evaluates to false under every assignment, having no models (e.g., p \land \neg p). These represent the extreme ends of satisfiability: universal satisfaction for tautologies and complete unsatisfiability for contradictions.
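Because a formula over n variables has 2^n assignments, satisfiability can in principle be decided by exhaustive enumeration, which is only feasible for small n. A minimal brute-force sketch (the clause-list encoding and names are illustrative):

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Clauses are lists of nonzero integers: i for variable i, -i for its negation.
    Returns a satisfying assignment as a dict, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n):
        assign = {i + 1: bits[i] for i in range(n)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses):
            return assign
    return None

# phi = (p | ~q) & (~p | r), with p=1, q=2, r=3:
print(brute_force_sat([[1, -2], [-1, 3]], 3))   # e.g. {1: False, 2: False, 3: False}
```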

Conjunctive Normal Form

In propositional logic, a literal is either a variable x_i or its negation \neg x_i. A clause is a disjunction (logical OR) of one or more literals, such as x_1 \lor \neg x_2 \lor x_3. A formula is in conjunctive normal form (CNF) if it is a conjunction (logical AND) of one or more clauses, for example, (x_1 \lor \neg x_2) \land (\neg x_1 \lor x_3 \lor \neg x_4). This representation is the standard input format for the Boolean satisfiability problem (SAT), as it structures the formula into a product of sums that facilitates analysis of satisfying assignments. The clause width of a CNF formula refers to the maximum number of literals in any clause; for instance, a formula in 3-CNF has clauses with at most three literals each. More generally, a k-CNF restricts all clauses to have at most (or, in some contexts, exactly) k literals, which controls the structural complexity and is central to variants like 2-SAT or 3-SAT. Any general Boolean formula can be converted to an equivalent CNF formula using a systematic process: first, eliminate non-standard connectives like implication (p \to q \equiv \neg p \lor q) and biconditional by rewriting them in terms of negation, conjunction, and disjunction; second, apply double negation elimination and De Morgan's laws to push negations inward to the literals; finally, use the distributive law to distribute disjunctions over conjunctions, yielding a conjunction of clauses. For example, the implication p \to q directly converts to the single clause \neg p \lor q, which is already in CNF. However, this conversion can result in an exponential increase in formula size, as distributing over multiple terms may produce up to 2^n clauses for certain inputs with n variables.
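The distribution step is where the exponential blowup described above can occur. A minimal sketch (the list-of-clauses encoding is illustrative; it assumes negations have already been pushed to the literals):

```python
def cnf_and(f, g):
    """Conjunction of two CNF formulas: concatenate their clause lists."""
    return f + g

def cnf_or(f, g):
    """Disjunction of two CNF formulas: distribute, pairing every clause of f
    with every clause of g. This pairing is the source of the potential blowup."""
    return [cf + cg for cf in f for cg in g]

# p -> q is already a single clause (~p | q):
implication = [["~p", "q"]]

# (a & b) | (c & d) distributes into four clauses:
print(cnf_or([["a"], ["b"]], [["c"], ["d"]]))
# [['a', 'c'], ['a', 'd'], ['b', 'c'], ['b', 'd']]
```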

Computational Complexity

NP-Completeness

The Boolean satisfiability problem (SAT) belongs to the class NP because, given a formula in CNF and a candidate assignment of truth values to its variables, one can verify whether the assignment satisfies the formula in polynomial time by evaluating each clause individually. The NP-hardness of SAT is established by the Cook-Levin theorem, which provides a polynomial-time reduction from any problem in NP to SAT. To show this, consider an arbitrary language L \in \mathrm{NP}; by definition, there exists a nondeterministic Turing machine M that decides L in polynomial time, say p(n) steps on inputs of length n. For an input string w of length n, the reduction constructs a formula \phi_{M,w} in CNF such that \phi_{M,w} is satisfiable if and only if M accepts w. The construction encodes the possible computation paths of M on w using variables that represent the contents of the tape, the head position, and the machine state at each of the p(n) time steps. Specifically, variables are defined for each possible symbol in each tape cell at each time step, ensuring the formula size remains polynomial in n. Clauses in \phi_{M,w} enforce three key properties: (1) the initial configuration matches w on the input tape with the head at the starting position and the machine in its initial state; (2) consecutive configurations adhere to M's transition rules, allowing changes in at most a constant number of adjacent tape cells per step; and (3) at least one computation path ends in an accepting configuration after at most p(n) steps. This encoding simulates the nondeterministic choices and verifications of M, reducing the acceptance problem to satisfiability. The Cook-Levin theorem implies that every problem in NP can be reduced to SAT in polynomial time, making SAT the first problem proven to be NP-complete and providing a canonical hard problem for the class. This result, originally proven by Stephen Cook in his 1971 paper "The Complexity of Theorem-Proving Procedures," formalized the notion of NP-completeness and profoundly influenced complexity theory by enabling reductions from SAT to establish hardness for numerous other problems. Independently, Leonid Levin developed a similar proof around the same time, though it was published later in the West.

k-SAT and Clause Width

The k-SAT problem is a restriction of the Boolean satisfiability problem to formulas in which every clause consists of exactly k literals, where each literal is a variable or its negation. The cases k=1 and k=2 are solvable in polynomial time. For k=1, each clause is a unit clause of the form (x) or (¬x), and the formula is satisfiable unless a variable and its negation both appear as clauses; this can be checked and resolved in linear time by propagating the forced assignments. For k=2, 2-SAT instances can be solved in linear time via construction of an implication graph, where vertices represent literals and edges encode implications between clauses, followed by detection of strongly connected components to identify contradictions. In contrast, k-SAT is NP-complete for every fixed k ≥ 3. Since 3-SAT is NP-complete, the result for larger k follows from a polynomial-time reduction from (k−1)-SAT to k-SAT, applied recursively starting from 3-SAT. Specifically, given a (k−1)-clause C = (\ell_1 \vee \cdots \vee \ell_{k-1}), introduce a fresh variable y and replace C with the two k-clauses (\ell_1 \vee \cdots \vee \ell_{k-1} \vee y) \wedge (\ell_1 \vee \cdots \vee \ell_{k-1} \vee \neg y). This preserves satisfiability: if C is falsified, the two new clauses require contradictory values for y; if C is satisfied, both hold regardless of y. Each original clause thus generates a constant number of new clauses and variables, yielding a polynomial-size k-CNF equisatisfiable with the input. The case k=3, known as 3-SAT, is the paradigmatic hard instance of k-SAT and underpins much of the theory of NP-completeness. Beyond NP-completeness, finer-grained hardness results stem from the Exponential Time Hypothesis (ETH), which asserts that no algorithm solves n-variable 3-SAT in subexponential time 2^{o(n)}; using sparsification techniques, this lower bound extends from the number of variables to the total formula size. Current best algorithms achieve O(1.307^n) time (randomized) or O(1.328^n) time (deterministic) as of 2021, highlighting the gap between theoretical lower bounds and practical algorithms. These bounds indicate that exact solving remains computationally intensive even for moderate instance sizes, influencing algorithm design and fine-grained complexity analysis. Random instances of 3-SAT exhibit a phase transition in satisfiability as a function of the clause-to-variable ratio α = m/n. Empirical investigations show that for α ≲ 4.26, uniformly random 3-CNF formulas with n variables and m = αn clauses are satisfiable with probability approaching 1 as n → ∞; for α ≳ 4.26, they are unsatisfiable with high probability. This threshold, refined through large-scale experiments, demarcates easy-to-satisfy underconstrained regimes from overconstrained unsatisfiable ones, with peak hardness near the transition due to exponentially many near-solutions.
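The phase transition can be observed empirically even at small scale. The following sketch (the uniform random 3-CNF model and the brute-force check are illustrative; real experiments use SAT solvers and much larger n) generates random instances at several clause-to-variable ratios and measures how often they are satisfiable:

```python
import random
from itertools import product

def random_3cnf(n, m, rng=random):
    """m clauses over variables 1..n; each clause picks 3 distinct variables,
    each negated independently with probability 1/2."""
    return [[v if rng.random() < 0.5 else -v for v in rng.sample(range(1, n + 1), 3)]
            for _ in range(m)]

def satisfiable(clauses, n):
    """Brute-force check over all 2^n assignments (small n only)."""
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=n)
    )

n, trials = 12, 30
for alpha in (2.0, 4.3, 6.0):
    m = int(alpha * n)
    sat = sum(satisfiable(random_3cnf(n, m), n) for _ in range(trials))
    print(f"alpha = {alpha}: {sat}/{trials} instances satisfiable")
```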

Schaefer's Dichotomy Theorem

Schaefer's Dichotomy Theorem, established in 1978, classifies the computational complexity of generalized Boolean satisfiability problems parameterized by a finite set S of Boolean relations, where each relation in S is a subset of \{0,1\}^n for some arity n. The problem SAT(S) asks whether a given conjunction of relational constraints from S (with variables ranging over Boolean values) admits a satisfying assignment. The theorem asserts that SAT(S) is either solvable in polynomial time or is NP-complete, providing a sharp dichotomy based on structural properties of S. This result extends the classical CNF-SAT framework by allowing arbitrary relational constraints rather than solely disjunctive clauses, yet maintains the same complexity boundary. The polynomial-time cases correspond to six specific classes of S, each admitting an efficient recognition and solving algorithm:
  • 0-valid: Every relation in S is satisfied by the all-false assignment (all variables set to 0).
  • 1-valid: Every relation in S is satisfied by the all-true assignment (all variables set to 1).
  • Horn: Every relation in S is definable by a Horn formula, meaning it corresponds to the models of a conjunction of clauses with at most one positive literal each.
  • Anti-Horn: The dual of Horn, where every relation corresponds to clauses with at most one negative literal.
  • 2-CNF: Every relation in S is definable by a 2-CNF formula (clauses with at most two literals), so the constraints reduce to 2-SAT.
  • Affine: Every relation in S consists of the solutions to a system of linear equations over the two-element field GF(2), solvable via Gaussian elimination.
For instance, in the Horn class, satisfiability can be decided in linear time using unit propagation: iteratively assign the values forced by unit clauses (those with a single literal) and simplify the remaining clauses until no new units arise or a contradiction appears. These classes are closed under the operations considered in the theorem's framework, ensuring the dichotomy applies uniformly. If S does not belong to any of these six classes, SAT(S) is NP-complete. The proof constructs polynomial-time reductions from 3-SAT, using gadgets tailored to the polymorphisms or syntactic properties of the relations in S to simulate arbitrary 3-clauses; for example, if S lacks the above structures, one can combine positive and negative literals in gadgets to force NP-hard behavior. This hardness holds even under natural restrictions on the constraint language, such as closure under permutation or identification of variables, highlighting the theorem's robustness. The theorem's framework has significant implications for constraint satisfaction problems (CSPs), as SAT(S) is precisely the Boolean CSP over the constraint language S. It establishes that all Boolean CSPs fall into exactly two complexity categories, inspiring subsequent dichotomies for CSPs over finite domains and influencing studies in logic and AI.
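For constraints given directly as CNF clauses, membership in several of these classes reduces to simple syntactic checks, as in the following sketch (the clause-list encoding is illustrative; affine constraints are XOR equations and are not expressible in this plain clause format, so they are omitted):

```python
def schaefer_classes(clauses):
    """Clauses are lists of nonzero integers (positive = variable, negative = negation)."""
    pos = lambda c: sum(1 for lit in c if lit > 0)
    neg = lambda c: sum(1 for lit in c if lit < 0)
    return {
        "0-valid":   all(neg(c) >= 1 for c in clauses),  # all-false satisfies every clause
        "1-valid":   all(pos(c) >= 1 for c in clauses),  # all-true satisfies every clause
        "Horn":      all(pos(c) <= 1 for c in clauses),
        "anti-Horn": all(neg(c) <= 1 for c in clauses),
        "2-CNF":     all(len(c) <= 2 for c in clauses),
    }

print(schaefer_classes([[-1, 2], [-2, 3], [-1, -3]]))
# {'0-valid': True, '1-valid': False, 'Horn': True, 'anti-Horn': False, '2-CNF': True}
```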

Polynomial-Time Solvable Cases

2-SAT

The 2-satisfiability problem, or 2-SAT, restricts the Boolean satisfiability problem to formulas where each clause contains exactly two literals. This case is solvable in polynomial time, distinguishing it from the NP-complete general SAT problem. A central technique for solving 2-SAT involves constructing an implication graph. Given a 2-CNF formula \phi with n variables, the graph G has 2n vertices, one for each literal x_i and \neg x_i for i = 1, \dots, n. For every clause (a \lor b) in \phi, where a and b are literals, add directed edges \neg a \to b and \neg b \to a to G. These edges encode the logical implications required for the clause to hold: if \neg a is true, then b must be true, and similarly for the other direction. The resulting graph has at most 2m edges, where m is the number of clauses. The formula \phi is unsatisfiable if and only if there exists a variable x such that x and \neg x lie in the same strongly connected component (SCC) of G. In this case, there are directed paths from x to \neg x and from \neg x to x, implying a contradiction in which both x and \neg x would be forced true. Conversely, if no such pair exists, \phi is satisfiable. The full algorithm proceeds as follows: first, build the implication graph G; second, compute the SCCs of G using a linear-time method such as Tarjan's algorithm or Kosaraju's algorithm; third, check each variable to see if x and \neg x share an SCC—if any do, declare \phi unsatisfiable. If satisfiable, construct an assignment by obtaining a topological ordering of the SCCs (treating the condensation as a DAG). For each variable x, set x to true if the SCC of \neg x precedes the SCC of x in this ordering; otherwise, set x to false. This assignment satisfies all implications, as paths in G respect the topological order, ensuring no clause is violated. The algorithm runs in O(n + m) time, matching the size of the input formula, since constructing G is linear and SCC computation is linear in the number of vertices and edges. As established by Schaefer's dichotomy theorem, 2-CNF formulas belong to a tractable class within the classification of generalized satisfiability problems, confirming their polynomial-time solvability. For example, consider the 2-CNF formula with clauses (p \lor q), (\neg q \lor r), and (\neg p \lor \neg r). The implication graph includes edges \neg p \to q, \neg q \to p, q \to r, \neg r \to \neg q, p \to \neg r, and r \to \neg p. These form cycles such as p \to \neg r \to \neg q \to p and \neg p \to q \to r \to \neg p, but p and \neg p lie in different SCCs, so the formula is satisfiable. For instance, the assignment p = true, q = false, r = false satisfies all clauses.
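A compact implementation of this procedure (a sketch using Kosaraju's SCC algorithm; the literal encoding is illustrative) is shown below:

```python
def solve_2sat(n, clauses):
    """Variables are 1..n; a clause is a pair of literals (positive or negative ints).
    Returns a list of n Booleans, or None if unsatisfiable."""
    def idx(lit):                       # map a literal to a vertex index
        return 2 * (abs(lit) - 1) + (0 if lit > 0 else 1)

    N = 2 * n
    adj = [[] for _ in range(N)]        # implication graph
    radj = [[] for _ in range(N)]       # its reverse
    for a, b in clauses:                # clause (a or b): ~a -> b and ~b -> a
        for u, v in ((idx(a) ^ 1, idx(b)), (idx(b) ^ 1, idx(a))):
            adj[u].append(v)
            radj[v].append(u)

    order, seen = [], [False] * N       # pass 1: vertices by finish time
    for s in range(N):
        if seen[s]:
            continue
        seen[s], stack = True, [(s, 0)]
        while stack:
            u, i = stack.pop()
            if i < len(adj[u]):
                stack.append((u, i + 1))
                v = adj[u][i]
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, 0))
            else:
                order.append(u)

    comp, c = [-1] * N, 0               # pass 2: SCCs in topological order
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s], stack = c, [s]
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1

    result = []
    for v in range(n):
        if comp[2 * v] == comp[2 * v + 1]:
            return None                 # x and ~x in one SCC: unsatisfiable
        result.append(comp[2 * v] > comp[2 * v + 1])   # true iff ~x's SCC precedes x's
    return result

# The example from the text: (p | q), (~q | r), (~p | ~r) with p=1, q=2, r=3.
print(solve_2sat(3, [(1, 2), (-2, 3), (-1, -3)]))
```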

Horn-SAT

A Horn formula consists of a conjunction of Horn clauses, where each clause is a disjunction of literals containing at most one positive literal. For instance, the clause \neg a \vee \neg b \vee c is a Horn clause, equivalent to the implication (a \wedge b) \to c. Horn-SAT, the problem of determining satisfiability for such formulas, is one of the six tractable classes identified in Schaefer's dichotomy theorem. The satisfiability of a Horn formula can be tested efficiently using a forward-chaining algorithm based on unit propagation. This begins by constructing an implication structure where, for each clause with negative literals \neg x_1, \dots, \neg x_k and positive literal y (or no positive literal, for a purely negative clause), directed edges are added from each x_i to y. Unit clauses, which force a variable to true, are identified and propagated: setting a variable v to true removes all clauses satisfied by v and simplifies the others by deleting occurrences of \neg v, with a queue processing the forced assignments in breadth-first order. Propagation continues until no new unit clauses arise; if an empty clause appears, or a variable is forced both true and false, the formula is unsatisfiable. Otherwise, the formula is satisfiable, with all unforced variables set to false yielding a model. If satisfiable, this algorithm uniquely identifies the minimal model, defined as the satisfying assignment with the fewest true variables (or, equivalently, the least model under the componentwise partial order on assignments). The minimal model consists precisely of the variables forced true by propagation, as any satisfying assignment must include these to avoid contradiction, and setting the others false preserves satisfiability due to the Horn structure. The algorithm runs in linear time, O(n + m), where n is the number of variables and m the number of clauses, achieved by processing each edge and node of the implication structure at most once via the queue. Example. Consider the Horn formula with clauses (\neg p \vee q) and (\neg q \vee r), encoding the implications p \to q and q \to r. Unit propagation yields no initial units, so the formula is satisfiable with the all-false minimal model. Adding the negative unit clause (\neg r) leaves the formula satisfiable, since nothing forces r to be true. If the positive unit clause (p) is then added as well, forward chaining forces q and then r to be true, contradicting (\neg r) and yielding unsatisfiability.
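A minimal sketch of this forward-chaining propagation (the (body, head) clause encoding is illustrative, and the simple fixpoint loop below is quadratic; the queue-based version described above achieves linear time):

```python
def horn_sat(clauses):
    """Each Horn clause is (body, head): body is the set of variables that must all be
    true, head is the implied variable or None for a purely negative clause.
    Returns the minimal model (set of true variables) or None if unsatisfiable."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= true_vars:            # all premises already forced true
                if head is None:
                    return None              # purely negative clause violated
                if head not in true_vars:
                    true_vars.add(head)      # forward chaining forces head true
                    changed = True
    return true_vars

# Implications p -> q and q -> r from the example in the text:
clauses = [({"p"}, "q"), ({"q"}, "r")]
print(horn_sat(clauses))                     # set(): the minimal model is all-false
# Adding the fact (p) and the negative clause (~r) makes the formula unsatisfiable:
print(horn_sat(clauses + [(set(), "p"), ({"r"}, None)]))   # None
```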

Renamable Horn-SAT and Other Tractable Classes

The Boolean satisfiability problem restricted to renamable Horn formulas is solvable in polynomial time. A CNF formula is renamable Horn if there exists a subset of variables whose polarities can be flipped such that the resulting formula is Horn, meaning every clause has at most one positive literal. This class generalizes Horn-SAT, extending tractability to formulas that become Horn after such a renaming. The recognition and satisfiability of renamable Horn formulas can be decided in linear time. The algorithm models the renaming choice as a 2-SAT instance. For each variable x_i, introduce a new variable f_i indicating whether x_i is flipped (f_i = \top means flip). For every original clause, construct clauses ensuring that, after the implied renaming, the clause has at most one positive literal. Specifically, for each pair of literals in a clause that could both become positive under some renaming (i.e., both original positives not flipped, or both original negatives flipped), add a 2-clause forbidding that pair from both being positive, such as (f_i \lor f_j) or (\neg f_i \lor \neg f_j) depending on the literals. Solving this 2-SAT instance determines whether a valid renaming exists; if so, apply the renaming to obtain a Horn formula, then solve it using linear-time unit propagation as in Horn-SAT. This leverages the polynomial-time solvability of 2-SAT via implication graphs and strongly connected components. Affine SAT, another tractable class, consists of formulas where all clauses are linear equations over the two-element field GF(2), equivalent to XOR clauses (an exclusive-or of literals required to equal true or false). Such a formula can be represented as a linear system A \mathbf{x} = \mathbf{b}, where A is an m \times n matrix over GF(2) with rows corresponding to clauses and columns to variables, \mathbf{x} is the variable vector, and \mathbf{b} encodes the required parity (1 for an odd number of true literals, 0 for even). Satisfiability is determined by Gaussian elimination on the augmented matrix [A \mid \mathbf{b}], which runs in O(n^3) time in the worst case for dense systems, or more efficiently for sparse ones. For example, the clause p \oplus q \oplus r (requiring an odd number of true variables) corresponds to the equation x_p + x_q + x_r = 1 over GF(2), and solving the full system yields a solution if consistent, or detects inconsistency via rank comparison. This approach directly computes a satisfying assignment when one exists. Schaefer's dichotomy theorem identifies additional tractable classes within its framework: 0-valid formulas, where every clause contains at least one negative literal (satisfied by the all-false assignment); 1-valid formulas, where every clause contains at least one positive literal (satisfied by the all-true assignment); and anti-Horn formulas, the dual of Horn-SAT, consisting of clauses with at most one negative literal (solvable in linear time by duality, mapping to Horn-SAT via complementation of all variables). These classes, alongside Horn, affine, and 2-CNF, exhaust the polynomial-time solvable cases under Schaefer's closure properties for generalized SAT. Recognition for these classes is straightforward: verify the clause types directly, with satisfiability following from the respective algorithms (trivial constant assignments for 0-valid and 1-valid formulas, provided no empty clauses are present).
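The affine case can be illustrated with a small dense Gaussian elimination over GF(2) (a sketch; the row and right-hand-side encoding is illustrative, and no attempt is made at the sparse optimizations used in practice):

```python
def xor_sat(rows, n):
    """rows: list of (coefficients, b) with coefficients a 0/1 list of length n.
    Returns one satisfying 0/1 assignment, or None if the system is inconsistent."""
    rows = [(list(coeffs), b) for coeffs, b in rows]
    pivot_row_of = {}
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, len(rows)) if rows[i][0][c]), None)
        if pivot is None:
            continue                                   # no pivot: free variable
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):                     # eliminate column c elsewhere
            if i != r and rows[i][0][c]:
                rows[i] = ([x ^ y for x, y in zip(rows[i][0], rows[r][0])],
                           rows[i][1] ^ rows[r][1])
        pivot_row_of[c] = r
        r += 1
    if any(not any(coeffs) and b for coeffs, b in rows):
        return None                                    # row "0 = 1": inconsistent
    x = [0] * n                                        # free variables default to 0
    for c, i in pivot_row_of.items():
        x[c] = rows[i][1]
    return x

# p + q + r = 1 and p + q = 0 over GF(2):
print(xor_sat([([1, 1, 1], 1), ([1, 1, 0], 0)], 3))    # e.g. [0, 0, 1]
```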

NP-Hard Variants

Exactly-1 3-SAT

The Exactly-1 3-SAT problem, also known as 1-in-3-SAT, is a restricted variant of the Boolean satisfiability problem where the input is a 3-CNF formula, but a satisfying assignment must evaluate exactly one literal to true in every clause, imposing a constraint of precisely one true literal per clause. This differs from standard 3-SAT, which requires at least one true literal per clause without an upper bound. The problem is in NP, as a candidate assignment can be verified in polynomial time by checking each clause for exactly one true literal. For example, consider the clause (p \lor q \lor r). A satisfying assignment for this clause in Exactly-1 3-SAT must set exactly one of p, q, or r to true and the other two to false; assignments where zero, two, or all three are true would violate the constraint. Exactly-1 3-SAT is NP-hard, established via a polynomial-time reduction from 3-SAT that introduces auxiliary variables to transform the "at least one" condition into "exactly one" by encoding mutual exclusions among the literals in each original clause. Specifically, for a 3-SAT clause (x \lor y \lor z), the reduction adds fresh variables to create a set of Exactly-1 3-SAT clauses that are simultaneously satisfiable if and only if at least one of x, y, z is true, with the auxiliary variables ensuring that no more than one literal in each new clause holds simultaneously. This construction preserves satisfiability while enforcing the exact-one constraint, and the overall transformation is linear in the input size. The positive variant of Exactly-1 3-SAT, where all literals are unnegated, models the exact cover by 3-sets (X3C) problem, an NP-complete set partitioning task where a universe of 3q elements must be exactly covered by disjoint 3-element subsets from a given collection. In this correspondence, variables represent subset selections, and clauses ensure each element appears in exactly one selected subset, directly capturing the partitioning constraints. Counting the number of satisfying assignments of an Exactly-1 3-SAT formula is #P-complete, even for the positive case, highlighting the added computational difficulty of counting solutions beyond mere existence.

Not-All-Equal 3-SAT

The Not-All-Equal 3-SAT (NAE-3-SAT) problem is a constraint satisfaction problem where the input consists of a conjunction of clauses, each containing exactly three literals over a set of Boolean variables, and a clause is satisfied by a truth assignment if and only if the three literals do not all evaluate to the same truth value—that is, at least one literal is true and at least one is false. This formulation forbids both the all-true (TTT) and all-false (FFF) cases for each clause, distinguishing it from standard 3-SAT, which only requires at least one true literal per clause. A simple example illustrates the satisfaction condition: consider the clause NAE(x_1, \neg x_2, x_3). This clause is satisfied by any assignment under which the literals do not all evaluate to true or all to false—for instance, if x_1 = \top, x_2 = \top (making \neg x_2 = \bot), and x_3 = \bot, the values are \top, \bot, \bot (mixed); it is unsatisfied only if all are \top or all \bot. NAE-3-SAT is NP-complete, as established in Schaefer's seminal classification of Boolean constraint satisfaction problems. Even the restricted variant where all literals are unnegated (positive) remains NP-complete; this version is equivalent to 2-coloring 3-uniform hypergraphs so that no hyperedge is monochromatic (variables as vertices, clauses as hyperedges, true/false as colors), a correspondence that also underlies the classical reduction used to show graph 3-coloring NP-hard. In applications, NAE-3-SAT models constraint satisfaction scenarios requiring diversity or non-uniformity among variables, such as in relational databases or logic programming where clauses enforce that not all elements of a tuple share the same value. Regarding parameterized complexity, NAE-3-SAT is fixed-parameter tractable when parameterized by the treewidth of the primal or incidence graph of the formula, admitting algorithms running in time 2^{O(k)} \cdot n where k is the treewidth and n the input size, similar to general SAT; however, it is hard for the W-hierarchy under parameters such as the number of true variables in a solution in the unrestricted case.

Algorithms for Solving SAT

DPLL and Backtracking

The Davis–Putnam–Logemann–Loveland (DPLL) algorithm, introduced in 1962, forms the basis for exact solvers of the Boolean satisfiability problem by combining recursive backtracking search with propositional inference rules to prune the search space efficiently. Developed as a machine-implemented procedure for theorem proving, it takes a formula in conjunctive normal form (CNF) as input and systematically explores partial truth assignments until satisfiability is determined or all possibilities are exhausted. At each step, DPLL first applies simplification techniques to the current formula under a partial assignment: unit propagation and pure literal elimination. Unit propagation identifies clauses reduced to a single unassigned literal (a unit clause) and assigns that literal the value needed to satisfy the clause, then propagates this assignment by substituting it into all clauses and repeating until no unit clauses remain or an empty clause (contradiction) arises. Pure literal elimination detects variables that appear only positively (or only negatively) across all remaining clauses and assigns them true (or false, respectively) to satisfy those clauses, removing the affected clauses from consideration. These rules provide sound inference without branching, reducing the formula's size before further decisions. If simplification leaves no clauses, the formula is satisfiable under the current partial assignment, and DPLL returns a satisfying assignment by extending it arbitrarily to any unassigned variables. If an empty clause appears, the current branch is unsatisfiable, prompting backtracking to the last decision point. Otherwise, DPLL selects an unassigned variable (typically using a heuristic such as most frequent occurrence) and branches by recursing on two subproblems: one assigning the variable true and the other false. This continues until a satisfying assignment is found or all branches fail, proving unsatisfiability. Pruning occurs via early termination on contradictions during propagation, avoiding full exploration of doomed subtrees. In the worst case, DPLL exhibits exponential time complexity of O(2^n), where n is the number of variables, as it may need to evaluate all possible truth assignments in a binary decision tree. Despite this, the combination of inference rules and branching heuristics makes it surprisingly effective on structured instances, laying the groundwork for subsequent advancements. To illustrate, consider a small CNF instance with variables p, q, r and clauses: (¬p ∨ q), (¬p ∨ r), (q ∨ r), (¬q ∨ ¬r). There are no unit clauses; although ¬p is in fact a pure literal (pure literal elimination would immediately set p = false), suppose for illustration that the solver branches on p.
  • Branch p = true: Substitute to get (q), (r), (q ∨ r), (¬q ∨ ¬r). Unit propagation sets q = true (from first clause), then r = true (from second). The third clause is satisfied, but the fourth becomes (false ∨ false), yielding an empty clause—unsatisfiable. Backtrack.
  • Branch p = false: The first two clauses are satisfied and removed, leaving (q ∨ r), (¬q ∨ ¬r). No units or pure literals; select q = true: (q ∨ r) is satisfied, (¬q ∨ ¬r) becomes (¬r). Unit propagation sets r = false with no conflict, and no clauses remain—satisfiable with p = false, q = true, r = false.
This trace shows how propagation detects conflicts early on one branch while finding a solution on the other after limited exploration.
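A compact recursive version of this procedure (a sketch; pure literal elimination is omitted for brevity and the branching heuristic simply takes the first variable it sees) is shown below:

```python
def dpll(clauses, assignment=None):
    """Clauses are lists of nonzero integers (positive = variable, negative = negation).
    Returns a satisfying assignment as a dict, or None if unsatisfiable."""
    assignment = dict(assignment or {})

    def assign(cls, lit):
        """Make lit true: drop satisfied clauses, delete the falsified literal."""
        return [[l for l in c if l != -lit] for c in cls if lit not in c]

    while True:                                   # unit propagation
        if any(len(c) == 0 for c in clauses):
            return None                           # empty clause: conflict
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = assign(clauses, lit)

    if not clauses:
        return assignment                         # every clause satisfied

    var = abs(clauses[0][0])                      # naive branching choice
    for lit in (var, -var):
        result = dpll(assign(clauses, lit), {**assignment, abs(lit): lit > 0})
        if result is not None:
            return result
    return None

# The instance traced above: (~p|q), (~p|r), (q|r), (~q|~r) with p=1, q=2, r=3.
print(dpll([[-1, 2], [-1, 3], [2, 3], [-2, -3]]))  # e.g. {1: False, 2: True, 3: False}
```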

CDCL and Modern Solvers

The conflict-driven clause learning (CDCL) algorithm represents a significant advancement in SAT solving, extending the Davis–Putnam–Logemann–Loveland (DPLL) backtracking procedure by incorporating mechanisms to analyze search failures and learn from them. Upon encountering a conflict—where unit propagation leads to an empty clause—CDCL performs conflict analysis to identify the causes of the failure, rather than simply backtracking chronologically. This analysis leverages the implication graph, a directed graph constructed during unit propagation whose nodes represent assigned literals and whose edges capture the implications derived from clauses, to trace the sequence of decisions and propagations leading to the conflict. By examining the first unique implication point (1UIP) in this graph, the algorithm derives a new "learned" clause that generalizes the conflict, preventing similar failures in future searches. These learned clauses are added to the original formula, effectively pruning the search space. The CDCL framework was first introduced in the GRASP solver in the late 1990s, marking a shift toward more efficient, learning-based exact solving. A core innovation of CDCL is non-chronological backtracking (backjumping), where the solver jumps back to an earlier decision level in the search tree based on the learned clause, rather than to the most recent decision. This is guided by the asserting literal in the learned clause, which identifies the decision level to resume from, allowing the solver to avoid re-exploring irrelevant branches. Clause learning itself proceeds via repeated resolution steps over the implication graph: starting from the conflicting clause, the algorithm resolves backward along the implications until reaching the 1UIP, producing a clause that explains the conflict succinctly. For instance, consider a simple conflict in which propagations from decisions d_1 and d_2 imply contradictory literals l and \neg l; conflict analysis might yield a learned clause like (\neg d_1 \lor \neg d_2 \lor x), where x is the asserting literal, blocking the combination of d_1 and d_2 without x. This ensures learned clauses are valid logical consequences of the original formula and are typically short, enhancing efficiency. Modern CDCL solvers incorporate several key components to boost performance. Variable ordering heuristics, such as the Variable State Independent Decaying Sum (VSIDS), prioritize branching on variables involved in recent conflicts by maintaining a score that decays over time but is boosted on conflict participation, dynamically adapting to the formula's structure. Restart strategies periodically reset the search to a shallow depth, preventing stagnation in deep but unpromising branches; for example, Glucose employs a dynamic policy based on the LBD (literal block distance) metric to measure the quality of learned clauses and trigger restarts when progress slows. These elements, combined with efficient data structures like two-watched literals for unit propagation, enable solvers to scale dramatically. Seminal implementations include MiniSat, a lightweight, extensible CDCL solver emphasizing simplicity and speed, and Glucose, which refines clause-database management by predicting and prioritizing high-quality learned clauses using metrics like LBD to minimize redundant learning. Empirically, CDCL solvers have demonstrated remarkable success on industrial benchmarks, routinely solving instances with millions of variables and clauses that were intractable a decade earlier, despite the NP-hardness of SAT. For example, in the International SAT Competitions, top CDCL solvers have resolved application instances from hardware verification and software analysis containing up to 10 million variables on commodity hardware.
This performance stems from the synergistic effect of learning, heuristics, and restarts, which exploit the modular structure prevalent in real-world formulas.
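In practice, these solvers are used as black boxes through standard interfaces such as the DIMACS CNF format or library bindings. The snippet below is a sketch assuming the third-party PySAT (python-sat) package is installed; the class and method names follow its documented interface but may differ across versions:

```python
# pip install python-sat   (assumed third-party dependency)
from pysat.solvers import Glucose3

solver = Glucose3()
solver.add_clause([1, -2])    # (x1 | ~x2)
solver.add_clause([-1, 3])    # (~x1 | x3)
solver.add_clause([2, 3])     # (x2 | x3)

print(solver.solve())         # True if the CDCL search finds a model
print(solver.get_model())     # e.g. [-1, -2, 3]; the sign encodes the truth value
solver.delete()
```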

Local Search and Heuristics

Local search algorithms for the Boolean satisfiability problem (SAT) represent a class of incomplete, randomized methods that explore the space of possible variable assignments by iteratively making local changes to reduce the number of unsatisfied clauses. Unlike systematic approaches, these heuristics prioritize speed over completeness, making them particularly effective for large-scale instances where exact methods may time out. They typically start from a random assignment and perform greedy or randomized flips of variable values to improve satisfaction, often incorporating restarts to escape local optima. The GSAT algorithm, introduced in 1992, exemplifies basic hill-climbing local search for SAT. It begins with a random truth assignment and, for a fixed number of iterations per restart, flips the variable whose flip most reduces the number of unsatisfied clauses, breaking ties randomly and accepting sideways or uphill moves when no improving flip exists. This greedy strategy aims to minimize the number of unsatisfied clauses, terminating when none are left (indicating satisfiability) or after a predefined limit. Empirical evaluations show GSAT performs well on structured problems but can get trapped in local minima on certain random instances. WalkSAT, a refinement of GSAT developed in 1993, enhances performance by balancing greedy and random moves to better navigate rugged search landscapes. In WalkSAT, an unsatisfied clause is selected at random and one of its variables is flipped: if some variable can be flipped without falsifying any currently satisfied clause, it is chosen; otherwise, with a fixed noise probability a random variable from the clause is flipped regardless of global impact, and with the remaining probability the variable breaking the fewest clauses is chosen. This noise injection helps escape local optima, leading to superior empirical results on random 3-SAT instances compared to pure hill-climbing. Variants like Novelty and Adaptive Novelty further tune the variable selection and noise based on recent search history to avoid repeated failures. Local search extends naturally to approximation in the maximum satisfiability (MAX-SAT) problem, where the goal is to maximize the number of satisfied clauses rather than achieve full satisfaction. A simple randomized greedy algorithm achieves a 3/4-approximation ratio in expectation for general MAX-SAT by iteratively assigning variables to satisfy the maximum expected weighted fraction of clauses, and the same guarantee applies to MAX-3-SAT. Such methods are computationally efficient and often outperform exact solvers on overconstrained instances. Hybrid approaches combine local search with complete methods like CDCL solvers to leverage their complementary strengths, using local search for quick exploration and switching to exact methods on timeouts or promising partial assignments. For instance, the SatHyS framework integrates clause learning into local search by periodically adding learned clauses to guide flips, improving solution quality on industrial benchmarks while maintaining speed. These hybrids solve more instances than either method alone, particularly on mixed structured-random problems. Despite their efficiency, local search heuristics lack completeness guarantees, potentially missing solutions on unsatisfiable or highly structured instances where systematic search succeeds. However, they excel on random k-SAT instances near the satisfiability threshold, often handling millions of variables in seconds due to their anytime nature and low overhead. Ongoing research focuses on parameter tuning and parallelization to mitigate these limitations.
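A simplified WalkSAT-style procedure can be sketched as follows (illustrative only: the noise parameter, the break-count computation, and the absence of restarts are all simplifications of the published variants):

```python
import random

def walksat(clauses, n, max_flips=100_000, noise=0.5, rng=random):
    """Clauses are lists of nonzero integers over variables 1..n.
    Returns a satisfying assignment (list of Booleans) or None if none was found."""
    assign = [None] + [rng.random() < 0.5 for _ in range(n)]   # 1-indexed

    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    def unsat_after_flip(var):
        assign[var] = not assign[var]
        count = sum(1 for c in clauses if not satisfied(c))
        assign[var] = not assign[var]
        return count

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign[1:]                                  # solution found
        clause = rng.choice(unsat)
        if rng.random() < noise:
            var = abs(rng.choice(clause))                      # random walk move
        else:
            var = min((abs(l) for l in clause), key=unsat_after_flip)  # greedy move
        assign[var] = not assign[var]
    return None                                                # incomplete: gave up

print(walksat([[-1, 2], [-1, 3], [2, 3], [-2, -3]], 3))
```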

Extensions and Applications

MAX-SAT and Optimization

The maximum satisfiability problem (MAX-SAT) extends the decision version of SAT by seeking a truth assignment to the variables of a CNF formula that maximizes the number of satisfied clauses, rather than requiring all clauses to be satisfied. This formulation arises naturally when modeling problems with "soft" constraints, where some violations are tolerable if they allow greater overall feasibility. In the weighted variant of MAX-SAT, each clause is assigned a positive weight, and the objective is to maximize the total weight of satisfied clauses, enabling finer-grained prioritization of constraints. MAX-SAT is NP-hard, as the decision problem of whether all clauses can be satisfied reduces directly to it in polynomial time, inheriting the NP-completeness of SAT. Even the restriction to clauses of size at most two (MAX-2-SAT) remains NP-hard. Despite this hardness, MAX-SAT admits approximation algorithms; for instance, Goemans and Williamson (1994) developed a 3/4-approximation algorithm for MAX-SAT using linear programming relaxations and randomized rounding, guaranteeing a solution whose value is at least three-quarters of the optimum. This ratio has been improved to approximately 0.797 through refinements combining linear programming and semidefinite programming techniques. Exact algorithms for MAX-SAT typically employ branch-and-bound frameworks, which systematically explore the search space while pruning branches using bounds on the best achievable number of satisfied clauses. For heuristic approaches, iterative local search methods, building on techniques like those used for decision SAT, repeatedly flip variable assignments to incrementally increase the number of satisfied clauses until a local optimum is reached. Weighted MAX-SAT can be solved approximately by relaxing the integer program to a linear program and rounding the fractional solution, achieving ratios close to 0.8 in practice for many instances. MAX-SAT finds applications in VLSI design, where it optimizes layouts by maximizing the number of satisfied timing or routing constraints under resource limits. It is also used in scheduling tasks, such as generating flight or production schedules that maximize adherence to preferences while minimizing conflicts. For example, consider a weighted MAX-SAT instance with soft clauses representing optional preferences in a scheduling problem: clauses like (x_1 \lor \neg x_2) with weight 5 (preferring task 1 before task 2) and (x_3) with weight 3 (preferring that task 3 be scheduled); an optimal assignment might satisfy both for a total weight of 8, or sacrifice one for feasibility in a larger formula.
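The scheduling example can be made concrete with a tiny brute-force evaluation of the weighted objective (a sketch; the clause encoding and weights mirror the example above, and enumeration is only feasible for very small formulas):

```python
from itertools import product

soft_clauses = [([1, -2], 5),   # (x1 | ~x2): prefer task 1 before task 2, weight 5
                ([3], 3)]       # (x3):       prefer that task 3 be scheduled, weight 3

def satisfied_weight(assign):
    return sum(w for clause, w in soft_clauses
               if any(assign[abs(l)] == (l > 0) for l in clause))

best = max((dict(enumerate(bits, start=1)) for bits in product([False, True], repeat=3)),
           key=satisfied_weight)
print(best, satisfied_weight(best))   # an optimum satisfies both clauses: weight 8
```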

Quantified Boolean Formulas

Quantified Boolean formulas (QBFs) extend propositional formulas by incorporating universal (\forall) and existential (\exists) quantifiers over variables, allowing the expression of more complex logical dependencies. Formally, a QBF is a formula in prenex form, consisting of a sequence of quantifiers (the prefix) followed by a quantifier-free matrix, typically a conjunctive normal form (CNF) clause set. For instance, the formula \forall p \exists q (p \lor \neg q) asserts that for every assignment to p, there exists an assignment to q that satisfies the disjunction. This structure generalizes the satisfiability problem (SAT), where SAT corresponds to the special case of a QBF with only existential quantifiers (\exists-QBF). The evaluation of a QBF proceeds sequentially through the quantifiers: universal quantifiers require the subformula to hold for all possible truth values of their variables, while existential quantifiers require at least one satisfying value; the process alternates based on the quantifier prefix, modeling a two-player game between a universal and an existential player. The decision problem for QBFs, denoted TQBF (true quantified Boolean formulas), asks whether a given QBF evaluates to true, and it is PSPACE-complete. This complexity result, established by Stockmeyer and Meyer, holds for QBFs in prenex CNF with arbitrary quantifier alternations; membership in PSPACE follows from a recursive evaluation that uses polynomial space, while hardness is shown via reductions from other PSPACE problems. The number of quantifier alternations influences the precise complexity level: zero alternations (purely existential) yield NP-completeness (SAT), one alternation corresponds to \Sigma_2^p- or \Pi_2^p-completeness, and increasing alternations climb the polynomial hierarchy up to \Sigma_k^p- or \Pi_k^p-completeness; unbounded alternations capture full PSPACE. QBFs can be solved through techniques like recursive Skolemization, which eliminates existential quantifiers by replacing them with Skolem functions dependent on the preceding universal variables, thereby reducing the formula to an equisatisfiable SAT instance that can be solved recursively. Alternatively, expansion-based methods progressively simplify the formula by removing quantifiers while preserving its truth value. Modern QBF solvers, such as those implementing the quantified Davis–Putnam–Logemann–Loveland (QDPLL) algorithm, adapt conflict-driven clause learning (CDCL) from SAT solvers but incorporate dependency schemes to respect quantifier scopes and variable dependencies, enabling efficient propagation and learning across quantifier levels. The QDPLL framework, introduced by Zhang et al., extends backtracking search with these mechanisms to handle the universal-existential game structure effectively.
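The game-style semantics translates directly into a recursive evaluation procedure that uses only polynomial space (but exponential time). A minimal sketch with an illustrative prefix-plus-clause-list encoding:

```python
def eval_qbf(prefix, clauses, assign=None):
    """prefix: list of (quantifier, variable) pairs, quantifier in {'forall', 'exists'};
    clauses: CNF matrix as lists of nonzero integers. Returns True or False."""
    assign = assign or {}
    if not prefix:
        return all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assign, var: val}) for val in (False, True))
    return all(branches) if q == "forall" else any(branches)

# The example from the text: forall p exists q . (p | ~q), with p=1, q=2.
print(eval_qbf([("forall", 1), ("exists", 2)], [[1, -2]]))   # True
```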

Circuit SAT and Real-World Uses

Circuit-SAT, or the circuit satisfiability problem, determines whether there exists an input assignment to a given Boolean circuit—composed of AND, OR, and NOT gates—that makes the circuit's output evaluate to true. This formulation is NP-complete, serving as a foundational problem in complexity theory via the Cook-Levin theorem. Unlike standard CNF-SAT, Circuit-SAT inputs are naturally represented as circuits, which are often more compact for describing computational structures like hardware designs. To solve Circuit-SAT using CNF-based solvers, the circuit is typically encoded into conjunctive normal form (CNF) via the Tseitin transformation. This method introduces auxiliary variables for each gate's output, adding clauses that enforce the gate's semantics—for instance, for an AND gate with inputs a and b and output x, it generates the clauses (\neg a \lor \neg b \lor x), (a \lor \neg x), and (b \lor \neg x), ensuring x \leftrightarrow (a \land b). The resulting CNF formula is equisatisfiable with the original circuit, with a size linear in the number of gates, though it introduces additional variables. In hardware verification, SAT solvers applied to Circuit-SAT encodings are pivotal for model checking and equivalence checking. Bounded model checking uses SAT to verify whether a hardware design satisfies temporal properties by unrolling the transition relation into a propositional formula and checking for counterexamples. For equivalence checking, a miter circuit combines a reference design and an implementation by XORing corresponding outputs and feeding identical inputs; the designs are equivalent if the miter output is unsatisfiable under the SAT encoding. A representative example is verifying a ripple-carry adder: the miter connects two adder implementations (e.g., a 32-bit design against a golden reference), and the Tseitin transformation yields a CNF instance solvable by CDCL solvers to detect discrepancies. Bounded model checking extends this to software testing, encoding program executions up to a fixed depth as circuits unrolled over time steps, then reducing to SAT via Tseitin encoding to detect bugs like assertion violations. In AI planning, classical planning problems are modeled as circuits representing state transitions across time steps, with SAT solving finding a satisfying plan trajectory. Cryptography employs Circuit-SAT for attack modeling, such as encoding stream ciphers like Trivium or hash functions like SHA-256 as circuits to search for weak keys or collisions via SAT solvers. The evolution of SAT solvers for these applications has been propelled by international SAT competitions, first organized in 1992, which benchmark solvers on diverse instances and foster algorithmic innovations like improved branching heuristics. These events, held annually since 2002, evaluate performance on real-world benchmarks, driving solvers to handle industrial-scale Circuit-SAT instances with millions of variables and clauses, such as those from hardware verification suites. Despite advances, challenges persist in scaling to ultra-large instances, where variable clustering and community structure influence solver efficiency.
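A minimal Tseitin-style encoder for AND/OR/NOT gates (a sketch; the gate-list format and variable numbering are illustrative, and the final unit clause asserts that the last gate is the circuit output):

```python
def tseitin(gates, n_inputs):
    """gates: list of (op, inputs) with op in {'and', 'or', 'not'}; inputs are literals
    referring to circuit inputs 1..n_inputs or to earlier gate-output variables.
    Returns (clauses, output_variable)."""
    clauses, out = [], None
    next_var = n_inputs
    for op, ins in gates:
        next_var += 1
        x = next_var                                # fresh variable for the gate output
        if op == "and":                             # x <-> (a & b)
            a, b = ins
            clauses += [[-a, -b, x], [a, -x], [b, -x]]
        elif op == "or":                            # x <-> (a | b)
            a, b = ins
            clauses += [[a, b, -x], [-a, x], [-b, x]]
        else:                                       # 'not': x <-> ~a
            (a,) = ins
            clauses += [[-a, -x], [a, x]]
        out = x
    clauses.append([out])                           # assert the circuit output is true
    return clauses, out

# Circuit computing (in1 AND in2) OR (NOT in3); gate outputs get variables 4, 5, 6.
clauses, out = tseitin([("and", (1, 2)), ("not", (3,)), ("or", (4, 5))], 3)
print(out, clauses)
```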
