Computational logic

Computational logic is the application of formal logical systems and computational techniques to automate reasoning, prove theorems, and verify properties in mathematical, computational, and knowledge-based domains. Originating in 19th-century philosophical and mathematical efforts to reduce arithmetic to logic—pioneered by figures like Gottlob Frege with his Begriffsschrift (1879)—the field addressed foundational paradoxes such as Russell's paradox and evolved through 20th-century developments including Church's lambda calculus and Gödel's incompleteness theorems. At its core, computational logic encompasses subfields such as propositional logic for basic truth-value reasoning, first-order logic with quantifiers and predicates enabling automated deduction via methods such as resolution and tableaux, higher-order logics and type theories for expressing functions and types as in the lambda calculi, and description logics for structured knowledge representation in ontologies. These foundations support practical tools, including automated theorem provers (e.g., resolution-based systems) and interactive provers like HOL Light, Isabelle, and Coq, which facilitate mechanical verification of complex proofs such as the four-color theorem or the Kepler conjecture. Notable applications include formal verification of hardware (e.g., preventing flaws like the 1994 Intel Pentium FDIV bug) and software (e.g., securing systems against vulnerabilities such as those exploited by WannaCry), logic programming paradigms such as Prolog for declarative computation, and enabling technologies for the Semantic Web through standards like OWL built on description logics. In contemporary computer science, computational logic underpins artificial intelligence tasks like knowledge representation and planning, while advancing interdisciplinary areas such as homotopy type theory for univalent foundations in mathematics and recent integrations with machine learning for enhanced automated reasoning (as of 2024).

History

Early foundations

The foundations of computational logic trace back to the mid-19th century, when mathematicians began formalizing deductive reasoning through algebraic and symbolic methods, paving the way for its eventual mechanization. George Boole's 1847 pamphlet, The Mathematical Analysis of Logic, marked a pivotal advancement by introducing an algebra of logic, which treated logical operations as algebraic manipulations of symbols representing classes of objects. This approach reduced deduction to a calculus akin to ordinary algebra, emphasizing operations like conjunction (multiplication) and disjunction (addition) to manipulate propositions symbolically. Boole's innovation shifted logic from qualitative philosophical discourse to a quantitative framework, demonstrating that inferences could be derived mechanically through equations, though no computing devices existed at the time.

Building on similar ideas, Augustus De Morgan independently contributed to this algebraic tradition in his 1847 book Formal Logic; or, The Calculus of Inference, Necessary and Probable. De Morgan articulated key laws governing logical operations, particularly De Morgan's theorems, which state that the negation of a conjunction is equivalent to the disjunction of the negations (¬(A ∧ B) ≡ ¬A ∨ ¬B) and vice versa (¬(A ∨ B) ≡ ¬A ∧ ¬B). These relations formalized the interplay between negation, conjunction, and disjunction, enabling more systematic treatment of syllogisms and probabilities within a symbolic system. De Morgan's work complemented Boole's by extending the scope to include relational inferences and hypothetical reasoning, further distancing logic from ambiguous natural language toward precise, rule-based manipulation.

A decisive step toward modern computational logic came with Gottlob Frege's 1879 Begriffsschrift: eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, which introduced the predicate calculus and quantifiers to express generality and inference formally. Frege's two-dimensional notation allowed for the representation of complex mathematical statements, such as those involving nested generality, using variables bound by universal (∀) and existential (∃) quantifiers in modern notation. This system transcended propositional logic by incorporating predicates and arguments, enabling the formalization of arbitrary judgments and proofs in a way that anticipated automated verification. Collectively, these 19th-century developments transformed logic into a symbolic algebra suitable for computational representation, influencing later formal systems in the 20th century.

20th-century developments

In the early 20th century, David Hilbert advanced his program for the formalization of mathematics, culminating in his 1928 address where he sought to establish a complete and consistent axiomatic foundation for all mathematical truths using finitary methods. Central to this initiative was the Entscheidungsproblem, or decision problem, which posed the challenge of devising an algorithm to determine the validity of any mathematical statement within a formal system. This program aimed to secure mathematics against paradoxes by proving the consistency of infinitary methods through finite, constructive proofs, influencing subsequent developments in logic and computability.

Kurt Gödel's incompleteness theorems of 1931 revealed profound limitations to Hilbert's vision, demonstrating that no sufficiently powerful formal system for arithmetic can be both complete and consistent. The first theorem states that in any consistent system containing basic arithmetic, there exist true statements that cannot be proved within the system; the second asserts that such a system cannot prove its own consistency. Gödel achieved this through arithmetization of syntax, employing Gödel numbering to encode logical statements as numbers using primitive recursive functions, such as encoding sequences of symbols via prime factorization, to represent syntactic structures computably. These results underscored the inherent undecidability in formal systems, shifting focus from total formalization to partial, computable subsets.

Alonzo Church's development of the lambda calculus in the 1930s provided an alternative foundation for computation, formalizing functions as central objects through abstraction and application. Introduced as a system for expressing computable functions without explicit names, it used notation like \lambda x.M to denote anonymous functions, enabling the definition of higher-order functions and beta-reduction for evaluation. This framework not only laid the groundwork for functional programming paradigms but also proved equivalent to other models of computation, resolving questions about the scope of effective calculation in logic.

Alan Turing's 1936 paper on computable numbers introduced the Turing machine as a precise model of computation, addressing the Entscheidungsproblem by showing first-order logic to be undecidable. A Turing machine consists of an infinite tape, a read-write head, and a finite set of states, with its behavior governed by a transition function \delta(q, s) = (q', s', D), where q is the current state, s the scanned symbol, q' the next state, s' the symbol to write, and D the direction of head movement (left or right). Computable numbers were defined as those whose digits can be generated by such a machine in finite steps, establishing a countable class of effectively calculable reals and proving the existence of uncomputable functions.

Jacques Herbrand's theorem, published in his 1930 dissertation, advanced the analysis of clausal fragments of first-order logic by reducing questions of satisfiability to propositional logic over the Herbrand universe. The theorem states that a set of clauses is unsatisfiable if and only if some finite set of ground instances of its clauses is propositionally unsatisfiable, providing a semi-decision procedure for validity. This work highlighted the constructive nature of proof in logic, influencing later automated-deduction techniques while respecting the undecidability of full first-order logic.

Modern advancements

Following World War II, computational logic advanced significantly through the integration of theoretical foundations with practical computing tools, enabling automated reasoning on digital hardware. Building on earlier theoretical work, such as Alan Turing's concepts of computability, researchers began developing systems that could mechanically derive logical conclusions from premises. A landmark early system was the Logic Theorist, developed in 1956 by Allen Newell and Herbert Simon, which proved theorems from Whitehead and Russell's Principia Mathematica using heuristic search, marking the beginning of research in automated deduction.

A pivotal contribution came in 1958 when John McCarthy developed Lisp (LISt Processor), a programming language designed for symbolic computation that incorporated primitives for symbolic manipulation and recursion, such as recursive functions and list structures that facilitated representation of logical expressions. Lisp's emphasis on treating code as data allowed for meta-level reasoning, influencing subsequent logical programming paradigms. In 1965, J. A. Robinson introduced resolution theorem proving, a complete and sound inference method for first-order logic that became a cornerstone of automated deduction. The resolution rule enables the derivation of new clauses by unifying complementary literals: from clauses C_1 = (A \lor L_1 \lor \dots \lor L_n) and C_2 = (\neg A' \lor L_{n+1} \lor \dots \lor L_m), where A and A' are unifiable with most general unifier \sigma, infer the resolvent clause (L_1 \lor \dots \lor L_n \lor L_{n+1} \lor \dots \lor L_m)\sigma. This principle, implemented in early computer programs, dramatically improved the efficiency of mechanical theorem proving by reducing search spaces through unification.

The 1970s saw the emergence of sophisticated systems, including the development of Prolog in 1972 by Alain Colmerauer and Philippe Roussel, building on Robert Kowalski's logical analysis of Horn clauses, which implemented a subset of first-order logic (Horn clauses) for natural-language processing and automated deduction. Interactive, induction-based theorem proving was exemplified by the Boyer-Moore prover, initially developed as a tool for proving properties of Lisp programs using a combination of simplification and induction strategies. This system, later formalized as Nqthm, automated the verification of mathematical theorems and software correctness, marking a shift toward interactive, user-guided proof assistants. Key institutional milestones included the founding of the Association for Logic Programming in 1986, which fostered collaboration on logic-based programming languages like Prolog and promoted research in declarative computing.

The 1990s witnessed rapid growth in SAT solvers, building on the Davis–Putnam–Logemann–Loveland (DPLL) procedure from 1962, with advances such as conflict-driven clause learning evolving into efficient tools for solving propositional satisfiability problems that scaled to industrial applications. A notable application of logical search techniques occurred in 1997 when IBM's Deep Blue defeated world chess champion Garry Kasparov, employing game-tree search with alpha-beta pruning to evaluate game trees—though reliant on brute-force computation rather than pure logical inference.

Fundamental Concepts

Propositional logic

Propositional logic, also known as sentential logic, forms the foundational layer of computational logic by dealing exclusively with declarative statements that are either true or false, without internal structure or relations between objects. It abstracts reasoning to the level of propositions—atomic units representing basic assertions—and the ways they can be combined using logical connectives, enabling the formal analysis of truth preservation in arguments. This system underpins many computational tasks, such as satisfiability solving and digital circuit verification, due to its finite, decidable nature.

The syntax of propositional logic consists of a set of propositional atoms, typically denoted by variables such as p, q, or r, which serve as the basic building blocks. These atoms are combined using a finite set of logical connectives: conjunction (\land), disjunction (\lor), negation (\lnot), implication (\to), and biconditional (\leftrightarrow). Well-formed formulas (wffs) are recursively defined, starting from atoms and applying connectives, such as (p \land q) \to r, ensuring every subformula is properly parenthesized to avoid ambiguity. Semantically, propositional logic assigns truth values—true (T) or false (F)—to atoms under an interpretation, which extends to compound formulas via truth-functional rules for each connective: for instance, p \land q is true only if both p and q are true, while p \to q is false only if p is true and q is false. A formula's truth table exhaustively lists all 2^n possible assignments for n distinct atoms, evaluating the formula in each case; a formula is a tautology if it evaluates to true in every assignment, a contradiction if false in every one, and a contingency otherwise.

Computationally, propositional logic is decidable, but determining satisfiability—whether there exists an assignment making a formula true—is NP-complete, as established by the Cook–Levin theorem, which reduces the verification of nondeterministic polynomial-time computations to SAT. The Davis–Putnam–Logemann–Loveland (DPLL) procedure provides a foundational algorithm for solving SAT by recursively branching on atom assignments, applying unit propagation to simplify clauses and pure literal elimination to prune the search space; it was originally implemented for theorem proving in 1962. A key representational tool in propositional logic is normal form conversion, particularly conjunctive normal form (CNF), where any formula is equivalent to a conjunction of clauses, each a disjunction of literals (an atom or its negation), expressible as \bigwedge_i \bigl( \bigvee_j l_{ij} \bigr) with literals l_{ij}. CNF facilitates efficient SAT solving, as modern conflict-driven clause learning (CDCL) solvers build directly on DPLL extensions for this format.
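
The following minimal Python sketch illustrates the DPLL procedure described above on a CNF formula encoded as a list of clauses, each a set of signed integers (a negative integer denotes a negated atom); the encoding and the naive branching heuristic are illustrative assumptions rather than the design of any particular solver, and pure-literal elimination is omitted for brevity:

def dpll(clauses, assignment=None):
    # Unit propagation: simplify clauses under the current partial assignment,
    # then repeatedly assign the literal of any unit clause.
    assignment = dict(assignment or {})
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            reduced = {l for l in clause if abs(l) not in assignment}
            if not reduced:
                return None                   # empty clause: conflict
            simplified.append(reduced)
        clauses = simplified
        for clause in clauses:
            if len(clause) == 1:
                lit = next(iter(clause))
                assignment[abs(lit)] = lit > 0
                changed = True
                break
    if not clauses:
        return assignment                     # every clause satisfied
    # Branch on an arbitrary unassigned atom and backtrack on failure.
    var = abs(next(iter(clauses[0])))
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (p or q) and (not p or r) and (not q or not r)
print(dpll([{1, 2}, {-1, 3}, {-2, -3}]))      # e.g. {1: True, 3: True, 2: False}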

First-order logic

First-order logic, also known as first-order predicate logic, extends the expressiveness of propositional logic by incorporating predicates, functions, and quantifiers to reason about objects and their relations within a domain. This formalism enables the representation of infinite structures and general statements, such as properties holding for all or some elements, which is essential for computational applications like automated theorem proving and knowledge representation. Unlike propositional logic, which deals solely with truth values of atomic propositions, first-order logic allows variables to range over a domain, facilitating more sophisticated modeling of mathematical and real-world scenarios.

The syntax of first-order logic is defined over a signature consisting of constant symbols, function symbols of various arities, and predicate symbols of various arities (excluding equality, which is often included as a primitive logical symbol). Terms are built inductively: variables (e.g., x, y), constants (e.g., a), or function applications (e.g., f(t_1, \dots, t_n) where each t_i is a term). Atomic formulas are predicate applications to terms (e.g., P(t_1, \dots, t_n)) or equalities (t_1 = t_2). Well-formed formulas are then constructed using propositional connectives—negation (\neg), conjunction (\wedge), disjunction (\vee), implication (\to), and biconditional (\leftrightarrow)—and quantifiers: universal (\forall x \phi) and existential (\exists x \phi), where \phi is a formula. Sentences are closed formulas without free variables, such as \forall x (P(x) \to Q(f(x))), which asserts that for every object, if it satisfies P, then the result of applying f to it satisfies Q. The quantifier-free fragment corresponds to propositional logic, where predicates act as atomic propositions under fixed interpretations.

Semantically, first-order logic is interpreted relative to a structure \mathcal{M} = (D, I), where D is a non-empty domain and I assigns interpretations to symbols: constants to elements of D, function symbols to functions on D, and predicate symbols to relations on D. The satisfaction relation \mathcal{M} \models \phi[\sigma] (where \sigma is a variable assignment) is defined recursively: for atomic formulas, by checking whether the interpreted terms satisfy the relation or equality; for connectives, by the classical truth tables; and for quantifiers, \mathcal{M} \models \forall x \, \phi[\sigma] \iff \mathcal{M} \models \phi[\sigma[x \mapsto d]] for every d \in D, and \mathcal{M} \models \exists x \, \phi[\sigma] \iff \mathcal{M} \models \phi[\sigma[x \mapsto d]] for some d \in D. A key construct is the Herbrand universe, the set of all ground terms (variable-free terms) generated from the signature's constants and functions, which forms the domain for Herbrand interpretations; these provide a canonical way to test satisfiability by reducing to propositional instances via Herbrand's theorem, which states that a set of clauses is satisfiable if and only if it has a Herbrand model.

From a computational standpoint, first-order logic's validity problem—determining whether a sentence is true in all interpretations—is undecidable, meaning no algorithm can solve it for all inputs, as independently proved by Alonzo Church and Alan Turing in 1936 by reduction from undecidable problems such as the halting problem. Nonetheless, the problem is semi-decidable: algorithms exist that confirm validity when it holds (by finding a proof) but may not terminate otherwise, with analytic tableaux methods providing a systematic procedure that expands formulas into branches representing possible interpretations, closing contradictory branches to establish unsatisfiability.
A notable decidable fragment is monadic first-order logic, restricted to unary predicates (no relation symbols of arity greater than one and no function symbols), for which validity is algorithmically solvable, as shown by Leopold Löwenheim in 1915 via semantic arguments establishing a finite model property.
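
To make the Tarskian semantics above concrete, the following Python sketch evaluates formulas over a small finite structure by direct recursion on their shape; the tuple-based encoding (atomic formulas over variables only, for brevity) is an illustrative assumption:

def holds(formula, structure, env):
    # structure = (domain, predicate interpretations); env maps variables to elements.
    domain, preds = structure
    op = formula[0]
    if op == "pred":                       # ("pred", name, var1, ..., varn)
        _, name, *args = formula
        return tuple(env[v] for v in args) in preds[name]
    if op == "not":
        return not holds(formula[1], structure, env)
    if op == "and":
        return holds(formula[1], structure, env) and holds(formula[2], structure, env)
    if op == "implies":
        return (not holds(formula[1], structure, env)) or holds(formula[2], structure, env)
    if op == "forall":                     # ("forall", var, body)
        _, var, body = formula
        return all(holds(body, structure, {**env, var: d}) for d in domain)
    if op == "exists":
        _, var, body = formula
        return any(holds(body, structure, {**env, var: d}) for d in domain)
    raise ValueError(f"unknown connective {op}")

# Domain {1, 2, 3}; P = "is even", Q = "is at most 3" (both as sets of 1-tuples).
M = ({1, 2, 3}, {"P": {(2,)}, "Q": {(1,), (2,), (3,)}})
phi = ("forall", "x", ("implies", ("pred", "P", "x"), ("pred", "Q", "x")))
print(holds(phi, M, {}))                   # True: every even element is at most 3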

Higher-order logics

Higher-order logics extend the expressive power of first-order logic by permitting quantification over predicates, functions, and higher-level relations, enabling more sophisticated reasoning about mathematical structures and computations. Unlike first-order logic, which quantifies solely over individuals, higher-order logics treat predicates and functions as first-class citizens, allowing statements like ∀P ∃x P(x), where P is a predicate variable and x is an individual variable, asserting that every predicate applies to at least one element in the domain. This added expressivity allows second-order logic, a foundational higher-order system, to fully capture the semantics of Peano arithmetic through second-order axioms, including an induction axiom that quantifies over sets of natural numbers to ensure the categoricity of the theory. However, this power comes at a cost: the validity problem for second-order logic is undecidable, meaning no algorithm exists to determine whether arbitrary sentences are true in all models, as demonstrated by reductions from undecidable problems like Hilbert's tenth problem.

A key framework for higher-order logics is type theory, which builds on the lambda calculus to structure expressions and avoid paradoxes like Russell's. Alonzo Church's simple type theory, introduced in 1940, assigns types to terms such that basic types like individuals (ι) and truth values (o) combine via function types τ → σ, enabling safe higher-order quantification while preserving decidable fragments for certain subclasses. Extending this, Per Martin-Löf's intuitionistic type theory from the 1980s incorporates dependent types, where types can depend on values, and leverages the Curry–Howard correspondence to equate propositions with types and proofs with programs—thus, a proof of a proposition A corresponds to a term of type A, facilitating constructive mathematics and program verification. This correspondence, first articulated by Haskell Curry in his work on combinatory logic and later formalized by William Howard, underscores the computational interpretation of logical proofs.

In computational applications, higher-order logics underpin proof assistants for formal verification, where systems like higher-order logic (HOL) provide a conservative extension of Church's simple type theory with axioms for extensionality, choice, and infinity (the latter ensuring an infinite domain for the natural numbers). The HOL Light theorem prover, developed by John Harrison, implements classical HOL in a minimalist OCaml-based kernel, supporting reasoning over complex theorems in mathematics and hardware verification while maintaining soundness through a small set of trusted axioms. First-order logic serves as a definable fragment within these systems, but higher-order features enable meta-level reasoning essential for advanced verification tasks.
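
As a brief illustration of the Curry–Howard correspondence underlying these systems, the following Lean 4 fragment (the definition names are illustrative) exhibits terms that serve simultaneously as programs and as proofs:

-- Modus ponens as function application: from a proof of A and a proof of
-- A → B, the term `h a` is a proof of B.
def modusPonens (A B : Prop) (a : A) (h : A → B) : B := h a

-- Commutativity of conjunction, written as a small program that destructures
-- a pair of proofs and rebuilds it in the opposite order.
def andSwap (A B : Prop) : A ∧ B → B ∧ A :=
  fun ⟨ha, hb⟩ => ⟨hb, ha⟩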

Methods and Techniques

Automated theorem proving

Automated theorem proving encompasses algorithms and systems designed to mechanically search for proofs of logical validity within formal systems, particularly first-order logic, by systematically exploring inference paths from axioms and hypotheses to conclusions. These methods often rely on refutation procedures, where the negation of a conjecture is assumed and a contradiction is derived if the conjecture holds, enabling complete and sound proof discovery through exhaustive search enhanced by heuristics. In computational logic, automated theorem provers prioritize efficiency in handling large search spaces, often incorporating ordering strategies to prune irrelevant branches.

Resolution refutation stands as a cornerstone technique, introduced by J. A. Robinson in 1965, which transforms formulas into clausal normal form and applies binary resolution rules to combine clauses until deriving the empty clause, proving unsatisfiability. The method's power derives from its completeness for first-order logic, allowing provers to handle quantifiers via Skolemization and Herbrand's theorem. Essential to resolution is the unification algorithm, also pioneered by Robinson in the same work, which computes most general substitutions to match terms; for instance, unifying the predicates P(x) and P(a) produces the substitution \{x / a\}, enabling generalized inferences across variable bindings.

Tableaux methods and sequent calculi offer alternative analytic frameworks for proof search, emphasizing tree-like expansions that reveal contradictions. The Beth tableau, developed by E. W. Beth in 1955, constructs semantic tableaux by branching on connectives—such as splitting \neg (A \land B) into \neg A and \neg B on separate branches—while applying closure rules to terminate branches containing both a formula and its negation, like A and \neg A, signaling inconsistency. Sequent calculi, formalized by G. Gentzen in 1934, structure proofs as sequences of implications between multisets of formulas, supporting systematic cut-free proof search, though in practice they are often automated via resolution-like implementations for efficiency.

Prominent modern systems include Vampire, initiated in the early 1990s by Andrei Voronkov and later developed at the University of Manchester, and E, first released publicly in 1998 by Stephan Schulz. Both leverage the superposition calculus, advanced by L. Bachmair and H. Ganzinger in the early 1990s, which extends resolution with ordered paramodulation rules to efficiently reason about equalities, such as inferring substitutions in equational clauses while preventing redundant computations through term ordering. These provers incorporate saturation algorithms, indexing for literal matching, and machine learning-guided heuristics to accelerate proof discovery on complex problems. The efficacy of these systems is evaluated annually through the CADE ATP System Competition (CASC), established in the mid-1990s alongside the Conference on Automated Deduction, which tests provers on diverse problems from the TPTP library, fostering innovations in speed and coverage. For instance, in 2025, Vampire dominated by winning all eight categories in the competition. Building on propositional techniques like SAT solvers, many modern provers also integrate SAT- and SMT-based procedures for scalable reasoning tasks.
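
The following Python sketch illustrates Robinson-style unification with an occurs check, the operation at the heart of resolution and superposition provers; the term encoding (variables as capitalized strings, compound terms as tuples headed by a function or predicate symbol) is an illustrative assumption, not the interface of any actual prover:

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def substitute(t, subst):
    # Apply a substitution, following variable bindings transitively.
    if is_var(t):
        return substitute(subst[t], subst) if t in subst else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(a, subst) for a in t[1:])
    return t

def occurs(var, t, subst):
    t = substitute(t, subst)
    if t == var:
        return True
    return isinstance(t, tuple) and any(occurs(var, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    # Return a most general unifier extending `subst`, or None if none exists.
    subst = dict(subst or {})
    s, t = substitute(s, subst), substitute(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    if isinstance(s, tuple) and isinstance(t, tuple) and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Unifying P(X, f(a)) with P(b, f(Y)) yields the substitution {X: b, Y: a}.
print(unify(("P", "X", ("f", "a")), ("P", "b", ("f", "Y"))))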

Logic programming

Logic programming is a paradigm in which programs are expressed as a set of logical statements, typically Horn clauses, and execution proceeds by logical inference to derive conclusions from these statements using an inference engine. This approach treats computation as controlled deduction, where queries are resolved against the program to find substitutions that satisfy the goals, enabling concise representations of relational knowledge and flexible query processing. Unlike imperative paradigms, logic programs focus on what relations hold rather than how to compute them step-by-step, with the underlying system handling the search for solutions.

A prominent implementation of logic programming is the language Prolog, developed in the early 1970s by Alain Colmerauer and his team at the University of Marseille for natural language processing. Prolog programs consist of facts, which assert atomic relations; rules, which define implications between relations; and queries, which seek to prove goals or find bindings for variables. For example, a fact might declare a parent relationship as parent(tom, bob)., indicating Tom is a parent of Bob. A rule could define a grandparent relation as:
grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
This states that X is a grandparent of Z if there exists a Y such that X is a parent of Y and Y is a parent of Z. A query like ?- grandparent(tom, Z). would then attempt to find all Z satisfying the rule, binding Z to children of Bob if additional facts are provided. These elements form the declarative core, where the syntax directly mirrors first-order logic restricted to definite clauses.

Execution in logic programming, particularly in Prolog, relies on SLD resolution (Selective Linear Definite clause resolution), a refutation-based procedure that operates via goal-directed search with backtracking. SLD resolution selects a literal from the current goal (using a selection rule, often left-to-right), unifies it with the head of a program clause, and replaces the goal with the clause body, proceeding linearly until success or failure. This mechanism, a specialized form of linear resolution from automated theorem proving restricted to definite programs, ensures that computed answers are correct substitutions under the least Herbrand model semantics.

Key developments include Robert Kowalski's 1974 procedural interpretation of Horn clauses, which formalized logic programs as executable procedures where implications drive goal reduction, bridging declarative semantics with operational behavior. In 1983, David H. D. Warren introduced the Warren Abstract Machine (WAM), an abstract architecture with a memory model and instruction set optimized for Prolog execution, enabling efficient compilation and widespread adoption by reducing interpretive overhead. Central concepts include negation as failure, which infers the negation of a goal if all attempts to prove it fail, relying on the closed-world assumption that all relevant facts are explicitly stated in the program. Additionally, unification—the process of finding substitutions to make terms identical—can incorporate an occur check to prevent binding a variable to a term containing itself, avoiding infinite structures and ensuring soundness, although many Prolog systems omit the check by default for efficiency.
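
The SLD procedure just described can be sketched in a few dozen lines of Python for the grandparent program above, using leftmost goal selection, renaming of clause variables, and backtracking via generators; like most Prolog systems, the small unifier below omits the occur check, and the data layout (atoms as tuples, variables as capitalized strings) is an illustrative assumption:

import itertools

PROGRAM = [
    (("parent", "tom", "bob"), []),
    (("parent", "bob", "ann"), []),
    (("grandparent", "X", "Z"), [("parent", "X", "Y"), ("parent", "Y", "Z")]),
]

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    # Resolve variable bindings inside a term under substitution s.
    if is_var(t):
        return walk(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(walk(a, s) for a in t[1:])
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

fresh = itertools.count()

def rename(clause):
    # Standardize clause variables apart, as SLD resolution requires.
    n = next(fresh)
    def r(t):
        if is_var(t):
            return f"{t}_{n}"
        if isinstance(t, tuple):
            return (t[0],) + tuple(r(a) for a in t[1:])
        return t
    head, body = clause
    return r(head), [r(g) for g in body]

def solve(goals, s):
    # Yield substitutions under which all goals are provable from PROGRAM.
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]          # leftmost selection rule
    for clause in PROGRAM:
        head, body = rename(clause)
        s1 = unify(first, head, s)
        if s1 is not None:                     # resolve: replace goal by clause body
            yield from solve(body + rest, s1)  # exhausting this branch backtracks

# ?- grandparent(tom, Z).
for answer in solve([("grandparent", "tom", "Z")], {}):
    print(walk("Z", answer))                   # prints: ann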

Model checking

Model checking is an algorithmic verification technique in computational logic that exhaustively checks whether a finite-state model of a system satisfies a given specification, typically expressed in temporal logic. This method systematically explores the model's state space to determine if all possible executions conform to the desired properties, providing a counterexample if a violation occurs. Originating in the early 1980s, model checking addresses the verification of concurrent and reactive systems by automating the analysis of temporal behaviors.

A key formalism for specifications is linear temporal logic (LTL), which captures linear-time properties over sequences of states, known as paths. Introduced by Amir Pnueli in 1977, LTL extends propositional logic with temporal operators such as \Diamond p (eventually p, written F p) and \Box p (always p, written G p), whose semantics are defined over infinite paths in the model: \Diamond p holds if some future state along the path satisfies p, and \Box p holds if p is true in every state along the path. LTL model checking algorithms typically convert the formula to an automaton and check for accepting paths, enabling verification of liveness and safety properties in linear-time models.

For branching-time properties, Computation Tree Logic (CTL) provides a more expressive framework suited to concurrent systems with nondeterministic choices. Developed by Edmund M. Clarke and E. Allen Emerson in 1981, with contributions from A. Prasad Sistla in subsequent works, CTL combines path quantifiers (A for all paths, E for some path) with temporal operators, as in AG(p), which asserts that on all computation paths (A), p holds globally (\Box, or G). CTL formulas are interpreted over labeled transition systems, where states are labeled with atomic propositions, and model checking proceeds via fixed-point computations on the structure's transition relation to compute the sets of states satisfying each subformula. This branching semantics allows CTL to distinguish between universal and existential behaviors in tree-like computation structures.

The primary challenge in model checking is the state explosion problem, where the number of states grows exponentially with the system's size. To mitigate this, symbolic model checking represents sets of states and transitions implicitly using Binary Decision Diagrams (BDDs), compact data structures for Boolean functions. Introduced by Randal E. Bryant in 1986, BDDs are directed acyclic graphs whose nodes represent decision points on variables, enabling efficient manipulation of large state spaces through operations like conjunction and existential quantification. Kenneth L. McMillan's 1993 work formalized symbolic model checking with BDDs, demonstrating its application to CTL by computing fixed points symbolically over the transition relation, which has verified systems with over 10^{20} states.

A prominent implementation is the SPIN model checker, developed by Gerard J. Holzmann starting in the 1980s at Bell Labs for verifying communication protocols. SPIN uses a process modeling language (Promela) to describe systems as concurrent processes and supports LTL specifications via automata-based checking, incorporating partial-order reduction to further combat state explosion. It has been widely applied to detect flaws in distributed algorithms, such as the FireWire protocol, by generating error traces for diagnosis.
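
The fixed-point computations underlying CTL model checking can be illustrated with a small explicit-state Python sketch; symbolic checkers compute the same fixed points over BDD-encoded state sets, so the explicit representation below (states as integers, a dictionary transition relation) is purely an illustrative assumption:

def ex(states, trans, target):
    # EX target: states with at least one successor in target.
    return {s for s in states if any(t in target for t in trans.get(s, ()))}

def ef(states, trans, target):
    # EF target: least fixed point of  Z = target ∪ EX Z.
    z = set(target)
    while True:
        nxt = z | ex(states, trans, z)
        if nxt == z:
            return z
        z = nxt

def ag(states, trans, target):
    # AG target: greatest fixed point of  Z = target ∩ AX Z.
    z = set(target)
    while True:
        nxt = {s for s in z if all(t in z for t in trans.get(s, ()))}
        if nxt == z:
            return z
        z = nxt

# A three-state system: 0 -> {1, 2}, 1 -> {1}, 2 -> {0}; proposition p holds in {0, 1}.
STATES, TRANS, P = {0, 1, 2}, {0: {1, 2}, 1: {1}, 2: {0}}, {0, 1}
print(ef(STATES, TRANS, P))   # {0, 1, 2}: from every state some path reaches a p-state
print(ag(STATES, TRANS, P))   # {1}: only state 1 satisfies "p holds on all paths, always"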

Applications

Artificial intelligence and knowledge representation

Computational logic plays a pivotal role in artificial intelligence (AI) and knowledge representation by enabling the formal encoding of knowledge in a manner that supports automated reasoning and inference. In AI systems, logical frameworks allow for the declarative specification of facts, relationships, and rules, facilitating tasks such as query answering, consistency maintenance, and derivation of implicit knowledge from explicit assertions. These methods underpin symbolic AI approaches, where computation involves theorem proving or model construction to simulate human-like reasoning over structured domains.

Description logics (DLs) form a cornerstone for ontologies in knowledge representation, providing decidable fragments of first-order logic tailored for defining concepts and hierarchies. The foundational DL ALC (Attributive Language with Complements) includes atomic concepts denoting unary predicates, role names for binary relations, operators like conjunction and complement, and quantifiers for existential (\exists r . C) and universal (\forall r . C) restrictions of roles to concepts. Here, concepts represent classes of individuals, while roles capture relations between them, allowing the construction of complex expressions such as "a person who has a child who is a doctor" via nested quantifiers. Subsumption checking in ALC—determining whether one concept is subsumed by another (i.e., all instances of the first are instances of the second)—relies on tableau reasoning, an algorithm that systematically expands a set of assertions into a model or detects inconsistency through rules for decomposing connectives and quantifiers, applying blocking to guarantee termination. This procedure enables practical reasoning for ontology-based applications, with ALC concept satisfiability being PSPACE-complete, balancing expressivity and computational tractability.

Non-monotonic reasoning addresses the limitations of monotonic logics in commonsense domains by permitting inferences that can be revised with new information, crucial for modeling defaults and exceptions in knowledge bases. Reiter's default logic (1980) achieves this by augmenting classical first-order logic with default rules of the form A : B_1, \dots, B_n / C, where A is a classical prerequisite, the B_i are justifications that must be consistent, and C is the derived conclusion; an extension is a fixed-point set closed under defaults whose justifications hold. For instance, the default "birds fly unless specified otherwise" can be encoded to infer flight for typical birds but exclude penguins via overriding facts, enabling robust commonsense reasoning in systems handling incomplete knowledge.

In the Semantic Web, DLs power knowledge representation through the Web Ontology Language (OWL), a 2004 W3C standard that builds on ALC and extensions like ALCQ for qualified number restrictions to support web-scale ontologies. OWL enables inference over distributed knowledge, such as entailment of class memberships or property assertions, by translating ontologies into DLs amenable to tableau-based reasoners, thus facilitating semantic interoperability and automated discovery across sources. The Cyc project, launched by Douglas Lenat in 1984, demonstrates computational logic's application in constructing massive knowledge bases using predicate calculus to formalize everyday commonsense knowledge. Employing predicate-calculus variants in its CycL language, Cyc encodes assertions like taxonomic hierarchies and rules to mitigate AI brittleness, amassing over a million axioms for inference in commonsense reasoning and knowledge-based applications.
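
As an illustration of the ALC semantics described above, the following Python sketch evaluates concept extensions directly over a small finite interpretation—a semantic check rather than a tableau algorithm—where the domain, concept names, and role assertions are illustrative assumptions:

DOMAIN = {"alice", "bob", "carol"}
CONCEPTS = {"Person": {"alice", "bob", "carol"}, "Doctor": {"carol"}}
ROLES = {"hasChild": {("alice", "bob"), ("bob", "carol")}}

def extension(concept):
    # Return the set of domain elements belonging to an ALC concept expression.
    op = concept[0]
    if op == "atom":                        # atomic concept name
        return CONCEPTS[concept[1]]
    if op == "not":                         # complement
        return DOMAIN - extension(concept[1])
    if op == "and":                         # conjunction
        return extension(concept[1]) & extension(concept[2])
    if op == "exists":                      # ("exists", role, filler): ∃r.C
        _, role, filler = concept
        return {d for d in DOMAIN
                if any((d, e) in ROLES[role] and e in extension(filler) for e in DOMAIN)}
    if op == "forall":                      # ("forall", role, filler): ∀r.C
        _, role, filler = concept
        return {d for d in DOMAIN
                if all(e in extension(filler) for e in DOMAIN if (d, e) in ROLES[role])}
    raise ValueError(op)

# "Person who has a child who is a Doctor":  Person ⊓ ∃hasChild.Doctor
parent_of_doctor = ("and", ("atom", "Person"), ("exists", "hasChild", ("atom", "Doctor")))
print(extension(parent_of_doctor))          # {'bob'}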

Software and hardware verification

Computational logic enables the formal verification of software and hardware designs by mathematically proving that systems satisfy their intended specifications, thereby preventing errors that could lead to failures in safety-critical applications. This process typically involves expressing system properties in logical terms and using automated or interactive theorem provers to establish their validity. A cornerstone of this field is Hoare logic, proposed by C. A. R. Hoare in 1969, which formalizes program correctness through axiomatic reasoning. Hoare logic employs triples of the form \{P\} \, C \, \{Q\}, where P is a precondition asserting the initial state, C is a program command, and Q is a postcondition describing the desired outcome after execution, provided C terminates. To derive these triples, the framework uses inference rules and the concept of the weakest precondition \mathrm{wp}(C, Q), which specifies the least restrictive initial condition guaranteeing Q upon completion of C. This approach allows modular verification of program components, scaling to complex systems by composing proofs hierarchically.

In software verification, theorem-proving techniques inspired by Hoare logic have been implemented in tools like ESC/Java, developed in the early 2000s by researchers at the Compaq Systems Research Center. ESC/Java extends static analysis by enabling developers to add formal annotations to Java programs, specifying preconditions, postconditions, and invariants in a subset of first-order logic. These annotations are then verified automatically via the Simplify theorem prover, which employs decision procedures to check for common errors such as null pointer dereferences or array bounds violations, often uncovering subtle bugs missed by traditional compilers.

Hardware verification leverages similar logical methods, with the ACL2 theorem prover—based on a computational logic for applicative Common Lisp—applied industrially since the mid-1990s to confirm the correctness of digital circuits. At AMD, ACL2 was used to formally verify register-transfer-level (RTL) models of floating-point units, including the multiplier and adder of the AMD Athlon processor, ensuring compliance with the IEEE 754 standard through proofs of arithmetic operations that avoid the coverage limitations of simulation-based testing. This work, initiated in 1996, demonstrated ACL2's capability to handle intricate hardware behaviors by modeling them in executable logic and discharging proof obligations via rewriting and induction.
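
The weakest-precondition rules can be made concrete with a small Python sketch in which predicates are functions over program states (dictionaries) and commands are predicate transformers mapping a postcondition to its weakest precondition; this is a semantic toy model for illustration, not the machinery of ESC/Java or ACL2:

def assign(var, expr):
    # wp(var := expr, Q) = Q with expr substituted for var.
    return lambda post: (lambda s: post({**s, var: expr(s)}))

def seq(c1, c2):
    # wp(C1; C2, Q) = wp(C1, wp(C2, Q)).
    return lambda post: c1(c2(post))

def ifelse(cond, c1, c2):
    # wp(if b then C1 else C2, Q) branches on the guard.
    return lambda post: (lambda s: c1(post)(s) if cond(s) else c2(post)(s))

# Program:  t := x ; if t < 0 then y := -t else y := t      (so y = |x|)
prog = seq(assign("t", lambda s: s["x"]),
           ifelse(lambda s: s["t"] < 0,
                  assign("y", lambda s: -s["t"]),
                  assign("y", lambda s: s["t"])))

post = lambda s: s["y"] == abs(s["x"])       # postcondition Q
pre = prog(post)                             # weakest precondition wp(prog, Q)

# Because the program computes |x|, the weakest precondition is simply "true";
# spot-checking it on a few initial states witnesses the triple {true} prog {Q}.
print(all(pre({"x": x}) for x in (-5, 0, 3)))   # True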

Other domains

Computational logic extends into databases through languages like Datalog, developed in the 1980s as a declarative query language for deductive databases that supports recursive queries based on logic programming principles. Datalog rules express relationships using Horn clauses, enabling efficient computation of transitive closures and other recursive derivations; for instance, to find all paths in a directed graph, one defines:
path(X, Y) :- edge(X, Y).
path(X, Z) :- path(X, Y), edge(Y, Z).
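
The second rule is recursive, and bottom-up (naive) evaluation repeatedly applies both rules to the facts derived so far until nothing new can be added, as in the following illustrative Python sketch; the edge facts are assumptions, and real engines use semi-naive evaluation and indexing rather than this direct loop:

EDGES = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edges):
    path = set(edges)                         # path(X, Y) :- edge(X, Y).
    while True:
        # path(X, Z) :- path(X, Y), edge(Y, Z).
        derived = {(x, z) for (x, y1) in path for (y2, z) in edges if y1 == y2}
        if derived <= path:                   # fixed point: nothing new derivable
            return path
        path |= derived

print(sorted(transitive_closure(EDGES)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
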
This example computes the transitive closure of the edge relation by iteratively applying the rules until a fixed point is reached, leveraging bottom-up evaluation for termination on stratified programs. The influence of Datalog on relational databases is evident in SQL standards, where recursive common table expressions (CTEs) in SQL:1999 were directly inspired by its recursive query mechanisms to handle hierarchical and graph data without procedural code.

In computational linguistics, computational logic underpins semantic formalisms such as Montague grammar, which integrates the lambda calculus and higher-order logic to model sentence meanings compositionally within a formal grammar. Introduced by Richard Montague in the early 1970s, this approach translates natural language syntax into logical expressions, where quantifiers and predicates are treated as functions with type-theoretic signatures, allowing precise specification of truth conditions; for example, noun phrases like "every man" are lambda-abstracted into higher-order functions that apply to verb-phrase denotations. This framework has influenced modern natural language processing by providing a foundation for compositional semantics in systems that bridge syntax and meaning without ambiguity.

Beyond these areas, computational logic applies to bioinformatics for inferring biological pathways, where rewriting-logic formalisms like Pathway Logic model cellular processes as executable specifications to simulate and query reaction networks. In this domain, logical rules represent biochemical transitions, enabling the discovery of pathway variants through formal analysis rather than simulation alone. Similarly, in legal reasoning, deontic logic formalizes obligations, permissions, and prohibitions to support automated normative inference in computational models of argumentation. Systems based on deontic logics, such as standard deontic logic (SDL), reason about legal norms by encoding rules like "if action A is obligatory, then not-A is forbidden," facilitating compliance checking and conflict resolution in rule-based decision support.

Challenges and Future Directions

Computational complexity

Computational complexity in computational logic examines the resources required to solve decision problems involving logical formulas, focusing on time and space bounds as well as hardness results. Propositional satisfiability (SAT), the problem of determining whether a formula has a satisfying assignment, belongs to the class NP, as a nondeterministic machine can verify a solution in polynomial time by guessing an assignment and checking it. The seminal Cook–Levin theorem establishes the NP-completeness of SAT by showing that every problem in NP can be reduced in polynomial time to the satisfiability of Boolean formulas in 3-conjunctive normal form (3-CNF), a restricted version of SAT. This reduction involves encoding the computation of a nondeterministic Turing machine into a Boolean formula whose satisfiability corresponds to the existence of an accepting path. As a result, 3-SAT is NP-complete, implying that SAT is among the hardest problems in NP. Algorithms for solving SAT, such as the Davis–Putnam–Logemann–Loveland (DPLL) procedure, have exponential worst-case running time: DPLL's search may explore up to 2^n possible assignments for a formula with n variables, as it may need to branch on each variable without pruning.

Extending to higher expressiveness, the problem of evaluating quantified Boolean formulas (QBF), which involve alternating existential and universal quantifiers over propositional variables, resides in the complexity class PSPACE. QBF is PSPACE-complete, meaning it captures the full power of polynomial-space computations, with reductions showing that any PSPACE problem can be encoded as a QBF instance. For first-order logic, the validity problem—determining whether a formula is true in all models—is undecidable, equivalent in computational power to the halting problem for Turing machines. This equivalence arises from reductions that encode machine computations into first-order sentences such that the machine halts if and only if the sentence is valid, leveraging the undecidability of the halting problem established by Turing.

Integration with machine learning

The integration of computational logic with machine learning has given rise to hybrid approaches known as neuro-symbolic AI, which combine the interpretability and reasoning capabilities of logical structures with the pattern-recognition strengths of neural networks. This synergy addresses limitations in purely neural systems, such as lack of explainability and difficulty in handling symbolic constraints, while enhancing logical systems' ability to learn from data. A prominent strategy is to embed logical rules within neural architectures, enabling joint optimization of learning and inference.

Logic Tensor Networks (LTNs), introduced in 2016, exemplify this integration by representing logical formulas as neural computations, allowing logical satisfaction to be incorporated into a differentiable loss function. In LTNs, truth values of logical atoms are computed via tensor operations, and the degree of satisfaction of rules (e.g., implications or conjunctions) is measured continuously between 0 and 1, enabling gradient-based training alongside data-driven objectives. This approach has been applied to tasks like semantic image interpretation, where logical constraints guide neural predictions for object classification and relationship detection, achieving improved accuracy on benchmarks such as the Visual Genome dataset by enforcing relational consistency.

Inductive logic programming (ILP) represents another foundational hybrid method, focusing on learning logical clauses from examples and background knowledge to generalize rules in a symbolic form. ILP systems like Progol, developed in the 1990s, use inverse entailment to search for hypotheses that cover positive examples while excluding negatives, producing concise logic programs interpretable by humans. Progol has been successfully applied to scientific discovery tasks, such as predicting mutagenicity from molecular structure data, where it induces rules with high predictive accuracy, outperforming statistical methods in coverage and compression.

A landmark development in scaling these hybrids is seen in AlphaGo (2016), which combined Monte Carlo Tree Search (MCTS)—a logical search algorithm—with deep neural networks for move selection and evaluation, effectively pruning the vast game tree using learned policies that incorporate strategic knowledge. Although primarily a search-based system, this integration demonstrated how machine learning can accelerate logical reasoning in complex domains like Go, achieving superhuman performance by reducing search depth through neural-guided pruning.

Recent advances in the 2020s have focused on differentiable logic, making symbolic reasoning fully compatible with gradient-based optimization in end-to-end neural pipelines. Techniques such as differentiable logic programming allow rules to be learned via gradient descent, treating logical operations as soft, continuous functions that enable efficient training on large datasets. For instance, systems like GLIDR (2025) extend this to graph-like rule structures, improving knowledge graph completion by jointly optimizing logical rules and embeddings, with reported gains in accuracy on datasets like WN18RR. These methods enhance scalability for real-world applications, bridging the gap between discrete logic and continuous learning.
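
A minimal Python sketch of the "soft" differentiable semantics used by such systems is given below: truth values lie in [0, 1], connectives become continuous functions (here the product t-norm and the Reichenbach implication), and a universally quantified rule is scored by aggregating over data points; the toy predicates and data are illustrative assumptions rather than the API of any neuro-symbolic library:

import numpy as np

def soft_not(a):          return 1.0 - a
def soft_and(a, b):       return a * b                    # product t-norm
def soft_implies(a, b):   return 1.0 - a + a * b          # Reichenbach implication
def soft_forall(values):  return float(np.mean(values))   # smooth aggregation of a quantifier

# Toy graded "predicates" over inputs x in [0, 1]; w plays the role of a learnable parameter.
def is_large(x, w):       return 1.0 / (1.0 + np.exp(-w * (x - 0.5)))
def is_positive(x):       return np.where(x > 0.5, 0.9, 0.1)

x = np.linspace(0.0, 1.0, 21)
w = 4.0

# Degree to which the rule  forall x (Large(x) -> Positive(x))  is satisfied by the data;
# 1 - satisfaction can serve as a differentiable loss term during training.
satisfaction = soft_forall(soft_implies(is_large(x, w), is_positive(x)))
print(satisfaction)
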
One prominent emerging trend in computational logic is the application of quantum logic frameworks, particularly the ZX-calculus, for verifying quantum circuits. Developed from the late 2000s onward, the ZX-calculus provides a graphical language for reasoning about linear maps between qubits, enabling efficient equivalence checking and optimization of quantum computations. This approach has proven effective for handling complex quantum operations, such as those involving Clifford+T gates, by rewriting diagrams to simplify circuits while preserving semantics. For instance, tools like PyZX leverage the ZX-calculus to achieve significant reductions in T-count and gate count on benchmark suites, demonstrating its practical impact on quantum software verification.

Interactive theorem provers have evolved significantly in the 2020s, with Lean 4 exemplifying advancements in metaprogramming capabilities that facilitate the construction of extensive formal mathematics libraries. Released in 2021 as a reimplementation of its predecessor in Lean itself, Lean 4 integrates functional programming with dependent type theory, allowing users to define custom tactics and automate proof generation through meta-level programming. This has accelerated the formalization of mathematical structures, as in the Liquid Tensor Experiment, where such tooling supported the verification of over 10,000 lines of advanced algebraic proofs. By 2025, Lean 4's ecosystem supports collaborative projects like mathlib, encompassing formalizations in topology, analysis, and category theory, thus bridging computational logic with pure mathematics.

A key development since 2023 involves augmenting large language models (LLMs) with logical verifiers to enhance reasoning fidelity, as seen in frameworks like Logic-LM. These systems translate problems into symbolic logical forms, then apply theorem provers or solvers for verification, reducing hallucinations in LLM outputs by integrating deductive checks. For example, Logic-LM demonstrates significant improvements on logical reasoning tasks by combining LLM generation with external symbolic reasoning engines. This hybrid paradigm extends to interactive proof assistants, where verifiers ensure consistency in multi-step deductions.

The application of formal verification to autonomous systems has gained momentum since 2020, driven by the need to certify safety in AI-driven robotics and vehicles. Techniques such as runtime monitoring and probabilistic model checking are increasingly applied to ensure properties like collision avoidance and task completion under uncertainty. A structured review of post-2020 work highlights over 150 publications focusing on scalable verification for unmanned aerial vehicles and self-driving cars, with probabilistic model checkers providing quantitative guarantees for hybrid systems. This trend underscores computational logic's role in regulatory compliance, such as emerging safety standards for autonomous driving.
