
Logic in computer science

Logic in computer science encompasses the application of formal logic to the design, analysis, and verification of computational systems, providing formal languages and methods for specifying behaviors, proving properties, and automating reasoning processes. It serves as a foundational tool across the field, comparable to the role of calculus in physics and engineering, influencing areas from hardware design via digital circuits to artificial intelligence and databases. At its core, the discipline draws on propositional logic and first-order predicate logic, which define syntax for constructing formulas, semantics for interpreting truth values, and decision procedures for determining satisfiability or validity. These logical systems enable precise modeling of system states, transitions, and properties, such as in transition systems or Kripke structures, facilitating rigorous analysis of correctness and reliability. Key applications include formal verification through theorem proving, which uses deductive methods such as natural deduction or resolution to establish that a system satisfies specified requirements, and model checking, an automated technique that exhaustively explores finite-state models against temporal logics like LTL or CTL to detect errors or confirm properties. In practice, these approaches support software engineering by helping ensure correct implementations, database query optimization via relational calculus, and AI tasks such as automated inference in knowledge representation. Tools such as NuSMV for model checking and Z for formal specification exemplify how logic integrates theory with practice to address real-world challenges in system design.

Foundations of Logic

Propositional Logic

Propositional logic, also known as sentential logic, is a branch of logic that studies the logical relationships among propositions, which are declarative statements that can be either true or false but not both. In computer science, it serves as a foundational tool for modeling conditions, verifying program correctness, and designing algorithms for automated reasoning. Propositions are represented by atomic symbols such as P, Q, or R, each assigned a truth value of true (T) or false (F) under a given interpretation. The core of propositional logic consists of logical connectives that combine propositions to form compound statements. The primary connectives are conjunction (\wedge), disjunction (\vee), negation (\neg), material implication (\rightarrow), and biconditional (\leftrightarrow). Their semantics are defined by truth tables, which enumerate all possible truth value assignments and the resulting truth value of the compound proposition. For negation (\neg P):
P | \neg P
T | F
F | T
For conjunction (P \wedge Q):
P | Q | P \wedge Q
T | T | T
T | F | F
F | T | F
F | F | F
For disjunction (P \vee Q):
P | Q | P \vee Q
T | T | T
T | F | T
F | T | T
F | F | F
For implication (P \rightarrow Q):
P | Q | P \rightarrow Q
T | T | T
T | F | F
F | T | T
F | F | T
For biconditional (P \leftrightarrow Q):
P | Q | P \leftrightarrow Q
T | T | T
T | F | F
F | T | F
F | F | T
A propositional formula is a tautology if it evaluates to true under every possible truth assignment, a contradiction if it evaluates to false under every assignment, and a contingency if its truth value varies depending on the assignment. For example, P \vee \neg P is a tautology (law of excluded middle), P \wedge \neg P is a contradiction, and P \rightarrow Q is a contingency since it is true under all assignments except when P is true and Q is false. These classifications are determined by constructing a complete truth table for the formula.

Inference in propositional logic relies on rules that preserve truth, allowing derivation of new propositions from given premises. Modus ponens is a fundamental rule: from P and P \rightarrow Q, one infers Q. Another key rule is resolution, introduced by Robinson in 1965, which applies to clauses in disjunctive form: given two clauses containing complementary literals p and \neg p, the resolvent is obtained by combining the remaining disjuncts. For instance, from P \vee Q and \neg Q \vee R, resolution yields P \vee R. Resolution is sound and, when combined with conversion to clausal form, complete for propositional entailment.

Propositional formulas can be systematically rewritten into normal forms for analysis and computation. Disjunctive normal form (DNF) is a disjunction of conjunctions of literals, corresponding to the union of minterms from a truth table where the formula is true. Conjunctive normal form (CNF) is a conjunction of disjunctions of literals, useful for satisfiability testing. Conversion to DNF involves identifying satisfying assignments and forming the OR of the corresponding minterms; for CNF, one negates the formula, converts to DNF, and negates again to obtain clauses. A general procedure eliminates implications (A \rightarrow B \equiv \neg A \vee B), removes double negations, and distributes connectives (e.g., \neg P \vee (Q \wedge R) \equiv (\neg P \vee Q) \wedge (\neg P \vee R)) to achieve CNF.

The foundations of propositional logic trace back to George Boole's 1847 pamphlet The Mathematical Analysis of Logic, which introduced an algebraic treatment of logical operations using symbols restricted to two values, foreshadowing the binary logic essential for digital computing. Boole's work enabled the representation of logical statements as equations, directly influencing the design of electronic switching circuits and digital computers.
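Because every connective is finitely specified by its truth table, these classifications can be checked mechanically by enumerating assignments. The following Python sketch is purely illustrative (the helper names truth_table and classify are assumptions made for the example): it evaluates a formula, given as a function over an assignment, on all rows and reports whether it is a tautology, contradiction, or contingency.

```python
from itertools import product

def truth_table(formula, variables):
    """Enumerate all assignments of the variables and evaluate the formula."""
    rows = []
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((env, formula(env)))
    return rows

def classify(formula, variables):
    """Classify a propositional formula as a tautology, contradiction, or contingency."""
    results = [value for _, value in truth_table(formula, variables)]
    if all(results):
        return "tautology"
    if not any(results):
        return "contradiction"
    return "contingency"

# P v ~P is a tautology, P & ~P is a contradiction, P -> Q is a contingency.
print(classify(lambda e: e["P"] or not e["P"], ["P"]))           # tautology
print(classify(lambda e: e["P"] and not e["P"], ["P"]))          # contradiction
print(classify(lambda e: (not e["P"]) or e["Q"], ["P", "Q"]))    # contingency (P -> Q)
```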

Predicate Logic

Predicate logic, also known as first-order logic, extends propositional logic by introducing predicates, which are relations or properties involving variables, allowing the expression of statements about objects and their relationships in a domain of discourse. A predicate is symbolized as P(x_1, \dots, x_n), where x_1, \dots, x_n are variables ranging over elements of a domain, and the predicate asserts a property or relation; for instance, P(x) might denote "x is even," where x is a variable representing an integer. This enables the formalization of computational concepts such as database queries or program specifications that involve quantification over data elements.

Central to predicate logic are the quantifiers: the universal quantifier \forall ("for all") and the existential quantifier \exists ("there exists"), which bind variables within their scope to specify the extent of applicability of predicates. The scope of a quantifier is the subformula it governs, and binding rules ensure that each variable occurrence is either free or bound by exactly one quantifier, preventing ambiguities in interpretation; for example, in \forall x (P(x) \to Q(x)), the quantifier binds both occurrences of x. Formulas in predicate logic are constructed hierarchically: atomic formulas consist of predicates applied to terms (variables, constants, or function applications), and compound formulas are formed using logical connectives such as \neg, \land, \lor, \to, and \leftrightarrow, with quantifiers prefixing subformulas. A key structural form is the prenex normal form, where all quantifiers are moved to the front of the formula, yielding Q_1 x_1 \dots Q_n x_n \phi, with \phi quantifier-free, facilitating transformations like skolemization.

Semantically, predicate logic formulas are evaluated relative to an interpretation, which fixes a non-empty domain over which the variables range, interprets function and constant symbols as operations and elements of the domain, and assigns predicates to relations on the domain; a formula is satisfiable if there exists an interpretation (a model) making it true, and valid if true in all interpretations. Herbrand's theorem provides a foundational result for automated reasoning by linking the satisfiability of a formula to that of a set of ground instances in the Herbrand universe—the set of all ground terms constructible from the symbols—and the Herbrand base of atomic ground formulas; specifically, after skolemization (replacing existentially quantified variables with Skolem functions to eliminate \exists), a set of clauses is satisfiable if and only if it has a Herbrand model, reducing reasoning to propositional-like checks over this universe.

An illustrative example in computer science is expressing graph connectivity using recursive path definitions: the formula \forall x \forall y \, (Path(x,y) \leftrightarrow (x = y \lor \exists z \, (Edge(x,z) \land Path(z,y)))) defines that there is a path from x to y if they are the same vertex or if there exists an adjacent z leading to a path from z to y, capturing the transitive closure essential for graph algorithms. Despite its expressiveness, first-order predicate logic faces fundamental limitations, as its validity problem is undecidable, meaning no algorithm can determine for arbitrary formulas whether they are true in all models; this undecidability follows from the results of Church and Turing, which show that the halting problem for Turing machines reduces to first-order validity, with details elaborated in computability theory.
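Over a finite structure, the recursive Path definition above can be computed by least-fixed-point iteration, which is how such specifications are typically evaluated in practice. Below is a minimal Python sketch under that reading; the function name path_closure and the small example graph are illustrative assumptions, not part of any standard library.

```python
def path_closure(vertices, edges):
    """Compute Path(x, y) as the least fixed point of
    Path(x, y) <-> x = y  or  exists z. Edge(x, z) and Path(z, y)."""
    path = {(v, v) for v in vertices}          # base case: x = y
    changed = True
    while changed:
        changed = False
        for (x, z) in edges:
            for (z2, y) in list(path):
                if z == z2 and (x, y) not in path:
                    path.add((x, y))
                    changed = True
    return path

vertices = {1, 2, 3, 4}
edges = {(1, 2), (2, 3)}
print(sorted(path_closure(vertices, edges)))
# [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3), (4, 4)]
```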

Advanced Logics in Computing

Advanced logics in computing extend the foundations of classical logic by incorporating modalities for reasoning about necessity, possibility, time, knowledge, and structured descriptions, enabling the analysis of dynamic and uncertain systems. These logics are essential for modeling behaviors in concurrent programs, knowledge representation, and multi-agent interactions, where static truth values alone are insufficient.

Modal logic introduces operators for necessity (\square \phi), meaning \phi holds in all possible worlds accessible from the current one, and possibility (\Diamond \phi), meaning \phi holds in at least one accessible world. Its semantics, developed through Kripke frames, model a set of possible worlds connected by an accessibility relation, allowing formalization of concepts like obligation or permission in system specifications.

Temporal logics address the evolution of systems over time. Linear Temporal Logic (LTL) operates over linear sequences of states, featuring operators such as next (X \phi), until (\phi U \psi), always (G \phi), and eventually (F \phi). For instance, the formula G(P \to F Q) specifies a response property where whenever proposition P holds, Q will eventually hold in the future, a common requirement in reactive system design. In contrast, Computation Tree Logic (CTL) handles branching-time structures, using path quantifiers for all paths (A) and for the existence of a path (E), combined with temporal operators to distinguish state formulas from path formulas. This enables expressing properties over computation trees, such as AG(P \to EF Q), meaning on all paths, if P holds at a state, there exists a path where Q eventually holds.

Description logics provide a decidable fragment of first-order logic tailored for knowledge representation, with ALC (Attributive Language with Complements) as a core variant featuring concept constructors like conjunction (\sqcap), disjunction (\sqcup), negation (\neg), existential restriction (\exists r.C), and universal restriction (\forall r.C). ALC concepts denote sets of individuals satisfying properties, such as Human \sqcap \exists hasChild.Doctor, representing humans with at least one doctor child, supporting terminological reasoning in ontologies.

Epistemic logic models knowledge in multi-agent systems using the knowledge operator K_i \phi, where agent i knows \phi if \phi holds in all worlds considered possible by i based on an accessibility relation for indistinguishability. This framework captures distributed knowledge (D_G \phi) across a group G, true if \phi holds in all worlds considered possible by every agent in G, and common knowledge (C_G \phi), iterating over mutual levels of knowledge. These advanced logics find applications in verifying communication protocols and concurrent systems, where temporal and epistemic specifications ensure correctness and coordination.
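To make the LTL response property G(P → F Q) concrete, the following sketch evaluates it over a finite trace, treating each state as the set of atomic propositions true in it. This finite-trace reading is an approximation of the infinite-trace semantics, and the function name holds_G_implies_F is a hypothetical illustration.

```python
def holds_G_implies_F(trace, p, q):
    """Check G(p -> F q) over a finite trace: whenever p holds at position i,
    q must hold at some position j >= i (finite-trace approximation of LTL)."""
    for i, state in enumerate(trace):
        if p in state and not any(q in later for later in trace[i:]):
            return False
    return True

# Each state is the set of atomic propositions true in it.
trace = [{"P"}, set(), {"Q"}, {"P", "Q"}]
print(holds_G_implies_F(trace, "P", "Q"))   # True: every P is eventually followed by Q
```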

Theoretical Aspects

Automata Theory and Formal Languages

Automata theory and formal languages form a cornerstone of theoretical computer science, bridging logic with computational models by exploring how abstract machines recognize patterns defined by logical structures. At its core, this area examines the classification of languages—sets of strings over an alphabet—and the machines that decide membership in those sets. Regular languages have a logical characterization in monadic second-order logic over words (Büchi's theorem), enabling precise definitions of recognizable patterns without unbounded memory.

The Chomsky hierarchy, introduced by Noam Chomsky, organizes formal languages into a nested classification based on the generative power of the associated grammars. Type-3 grammars generate regular languages, the most restrictive class, using productions of the form A → aB or A → a, where A and B are nonterminals and a is a terminal. Type-2 grammars produce context-free languages (CFLs) with rules like A → α, where α is any string of terminals and nonterminals. Type-1 grammars define context-sensitive languages via rules αAβ → αγβ, where γ is nonempty and |αγβ| ≥ |αAβ|. At the top, Type-0 grammars yield recursively enumerable languages, corresponding to unrestricted productions. This hierarchy reflects increasing logical complexity, from finite-state decisions to those requiring arbitrary context.

Finite automata provide the machine model for regular languages, directly linking to their logical characterizations. A deterministic finite automaton (DFA) is a 5-tuple (Q, Σ, δ, q₀, F), where Q is a finite set of states, Σ the alphabet, δ: Q × Σ → Q the transition function, q₀ the initial state, and F ⊆ Q the accepting states. Starting from q₀, the automaton processes an input string w ∈ Σ* via successive transitions; w is accepted if the final state is in F. A nondeterministic finite automaton (NFA) extends this with δ: Q × Σ → 2^Q (sets of successor states) or ε-transitions, allowing multiple computation paths, yet NFAs recognize exactly the same languages as DFAs. Regular expressions, algebraic notations using union (|), concatenation, and Kleene star (*), equivalently describe these languages. Kleene's theorem establishes the equivalence: every regular language is accepted by a finite automaton and generated by a regular expression. For instance, the expression (a|b)*abb matches strings ending in "abb" over {a, b}, convertible to an NFA with states tracking the suffix. Language properties are expressed via regular expressions or equivalent logical formalisms such as first-order logic with order (FO[<]) for star-free languages.

A key logical characterization of regular languages involves monoids. Schützenberger's theorem states that a regular language is star-free—expressible via regular expressions without Kleene star, using only union, concatenation, and complement—if and only if its syntactic monoid is aperiodic (for every element x, there exists n such that x^n = x^{n+1}). The syntactic monoid arises from the congruence on words where u ≡ v if, for all words x and y, xuy ∈ L iff xvy ∈ L; aperiodicity ensures no nontrivial cycles, corresponding to definability in first-order logic over words without modular quantifiers. This theorem connects algebraic semigroup theory to logical definability, showing the star-free languages to be exactly those recognized by aperiodic monoids.

Extending beyond finite memory, pushdown automata (PDAs) recognize context-free languages using a stack for bounded recursion. A PDA is a 7-tuple (Q, Σ, Γ, δ, q₀, Z₀, F), where Γ is the stack alphabet, Z₀ the initial stack symbol, and δ: Q × Σ_ε × Γ → 2^{Q × Γ*} the transition relation allowing push and pop operations (ε denotes the empty input). Acceptance is by final state or empty stack.
PDAs capture nested structures, like the balanced language {a^n b^n | n ≥ 0}, impossible for finite automata. Every CFL has an equivalent PDA, and vice versa, via Chomsky normal form grammars.

Pumping lemmas provide logical tools to prove non-regularity or non-context-freeness by contradiction. For regular languages, the pumping lemma (Rabin and Scott) asserts: if L is regular, there exists p (the pumping length) such that for any w ∈ L with |w| ≥ p, w = xyz where |xy| ≤ p, |y| > 0, and xy^k z ∈ L for all k ≥ 0. This follows from the pigeonhole principle on DFA states: a repeated state implies a pumpable loop. For example, {a^n b^n | n ≥ 0} violates it, as pumping y (consisting only of a's) disrupts the balance. For context-free languages, the pumping lemma of Bar-Hillel, Perles, and Shamir states: if L is context-free, there exists p such that for w ∈ L with |w| ≥ p, w = uvxyz where |vxy| ≤ p, |vy| > 0, and uv^k x y^k z ∈ L for all k ≥ 0. Proven via parse trees or PDAs, it identifies pumpable subtrees or stack cycles. Consider {a^n b^n c^n | n ≥ 0}: any division pumps unequal parts, yielding strings outside L. These lemmas highlight logical boundaries on repetition in recognizable structures. Formally:
For a CFL L, ∃ p > 0 s.t. ∀ w ∈ L (|w| ≥ p) ∃ u,v,x,y,z ∈ Σ* (w = uvxyz, |vxy| ≤ p, |vy| ≥ 1, ∀ k ≥ 0, uv^k x y^k z ∈ L)
Context-free grammars (CFGs) enable syntactic analysis, deriving strings via leftmost or rightmost derivations from a start symbol. Parsing constructs a derivation tree (parse tree), with internal nodes as nonterminals and leaves forming the yield string. Ambiguity arises if a CFG admits multiple distinct parse trees for some w ∈ L(G), implying multiple meanings or derivations—e.g., the grammar S → S + S | S * S | a yields two trees for "a + a * a" (add then multiply vs. multiply then add). Inherently ambiguous languages, like {a^i b^j c^k | i = j or j = k}, have no unambiguous CFG. Ambiguity complicates deterministic parsing, favoring algorithms like CYK for general CFGs or LR for unambiguous subsets, ensuring unique interpretations in logical specifications.

The Myhill-Nerode theorem offers an algebraic test for regularity via indistinguishability. Define x ~_L y iff ∀ z ∈ Σ*, xz ∈ L ⇔ yz ∈ L; the equivalence classes of ~_L form the Nerode right congruence. L is regular if and only if ~_L has finitely many classes, with the states of the minimal DFA corresponding to these classes. The theorem, combining results of Myhill and Nerode, also underlies language minimization: merging equivalent states yields the unique minimal DFA up to isomorphism. For a non-regular L like {a^n b^n}, infinitely many classes arise from the pairwise distinguishable prefixes a^n.
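As a concrete companion to the definitions above, the sketch below simulates a DFA for the earlier example (a|b)*abb, with states tracking the longest suffix of the input that matches a prefix of "abb". The transition table is a hand-constructed assumption for illustration, not drawn from any particular source.

```python
def make_dfa_abb():
    """DFA over {a, b} accepting strings that end in 'abb',
    i.e. the language of the regular expression (a|b)*abb.
    States track the longest suffix of the input matching a prefix of 'abb'."""
    states = {0, 1, 2, 3}
    delta = {
        (0, "a"): 1, (0, "b"): 0,
        (1, "a"): 1, (1, "b"): 2,
        (2, "a"): 1, (2, "b"): 3,
        (3, "a"): 1, (3, "b"): 0,
    }
    return states, delta, 0, {3}

def accepts(dfa, word):
    _, delta, start, accepting = dfa
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

dfa = make_dfa_abb()
print(accepts(dfa, "ababb"))   # True
print(accepts(dfa, "abba"))    # False
```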

Computability and Decidability

Computability theory investigates the boundaries of what can be effectively computed using logical and algorithmic methods, with Turing machines serving as the foundational model for universal computation. A Turing machine consists of an infinite tape divided into cells, a read/write head that moves left or right, a finite set of states including a start state and halting states, and a transition function that specifies the next state, the symbol to write, and the head movement based on the current state and scanned symbol. The configuration of a Turing machine at any step is defined by the current state, head position, and tape contents, allowing the simulation of any algorithmic process through a sequence of such configurations.

The halting problem exemplifies the limits of computability: there exists no Turing machine H that, given as input the description of another machine M and an input word w, halts and outputs 1 if and only if M halts on w. To prove this undecidability, assume for contradiction that such an H exists. Construct a diagonal machine D that, on input e (encoding a machine M_e), simulates H(M_e, e); if H outputs 1, D loops forever, and if 0, D halts. Running D on its own index d then leads to a contradiction: if H(M_d, d) outputs 1, D halts on d by the definition of H, yet by construction D loops forever; if H outputs 0, D should loop on d, yet by construction it halts. Hence no such H can exist.

The Church-Turing thesis posits that the intuitive notion of effective computability coincides with what is computable by a Turing machine, and equivalently by other models such as Alonzo Church's lambda calculus or the recursive functions. In lambda calculus, functions are expressed via abstraction and application, and effective computability is defined through \lambda-definability, shown equivalent to Turing computability. Similarly, the recursive functions, built from primitive recursion and minimization, capture the same class of computable functions. This equivalence underpins the thesis, though it remains unprovable as it bridges formal models with informal intuition.

Decidability concerns whether a problem—typically membership of strings in a language—can be resolved by an algorithm that always halts with a yes/no answer. Recursive languages are those decidable by Turing machines that always halt, while recursively enumerable (RE) languages are those for which a Turing machine halts on yes instances but may loop on no instances. The halting problem generates an RE but non-recursive language, as one can simulate execution until a halt occurs, but no machine decides the non-halting cases. Rice's theorem generalizes this: for any non-trivial semantic property P of partial recursive functions (i.e., P holds for some but not all such functions), the set of Turing machines computing a function with property P is undecidable. The proof reduces the halting problem to deciding P, showing that a decider for P would allow one to determine whether a suitably constructed machine halts.

Logical undecidability extends these ideas to formal systems via Kurt Gödel's incompleteness theorems, which have profound implications for computer science. The first theorem states that any consistent formal system capable of expressing basic arithmetic is incomplete: there exists a statement true in the system's intended interpretation but unprovable within it. In computer science, this applies to programming languages or verification systems that encode arithmetic, implying that no complete proof system can establish all true statements about programs' behaviors in such settings. The second theorem asserts that no such consistent system can prove its own consistency, limiting self-verifying computational logics.

To explore degrees of unsolvability beyond the halting problem, Turing introduced oracle machines, which extend standard Turing machines with access to an oracle—a black box answering membership queries about a fixed set A instantaneously. The Turing degree of a set B is the equivalence class of sets Turing-equivalent to B (i.e., mutually computable given the other as an oracle).
Degrees form a partial order under Turing reducibility, with the halting set at degree 0′ (the first jump) and higher jumps 0^{(n)} yielding increasingly unsolvable problems. Kleene and Post further developed this hierarchy, showing the existence of incomparable degrees and initiating the study of the structure of the Turing degrees.
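The diagonalization argument can be mirrored directly in code. The sketch below assumes a hypothetical total decider halts(program, argument)—which, by the theorem, cannot exist—and constructs the diagonal machine D from the proof; the module runs, but invoking the assumed oracle raises an error, reflecting that no such decider exists.

```python
# Hypothetical decider: no such total, always-correct function can exist,
# but suppose for contradiction that it did.
def halts(program, argument):
    """Assumed oracle: returns True iff program(argument) halts. Hypothetical."""
    raise NotImplementedError("no total, always-correct halting decider exists")

def diagonal(program):
    """The diagonal machine D from the proof: do the opposite of what
    'halts' predicts for program run on its own source."""
    if halts(program, program):
        while True:          # loop forever if the oracle says it halts
            pass
    return "halted"          # halt if the oracle says it loops

# Running diagonal(diagonal) yields a contradiction either way:
# if halts(diagonal, diagonal) is True, diagonal loops on itself;
# if it is False, diagonal halts on itself. Hence 'halts' cannot exist.
```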

Computational Complexity

Computational complexity theory studies the resources, such as time and space, required to solve computational problems, with logic providing foundational tools for defining and classifying these resources through formal languages and reductions. In this context, the class P consists of decision problems solvable by a deterministic Turing machine in polynomial time, while NP includes problems solvable in polynomial time by a nondeterministic Turing machine, or equivalently, decision problems for which a proposed solution (a certificate) can be verified in polynomial time by a deterministic Turing machine. The Boolean satisfiability problem (SAT) is NP-complete by the Cook-Levin theorem, which shows that any NP problem reduces to SAT. The central question of whether P = NP remains unresolved, posing profound implications for logic-based computation, since a proof that P = NP would imply efficient algorithms for all problems reducible to satisfiability testing.

The Cook-Levin theorem establishes that SAT—deciding whether a given propositional formula in CNF has a satisfying assignment—is the first NP-complete problem. The result shows that any problem in NP can be reduced to SAT in polynomial time, via a logical encoding of the nondeterministic Turing machine's computation as a set of clauses that are satisfiable if and only if the original instance is accepted. NP-completeness is defined using polynomial-time many-one reductions (also known as Karp reductions), where an instance of one problem transforms into an instance of another such that the answers match, preserving the class's hardness. A classic example is the reduction from 3-SAT (SAT restricted to clauses of three literals) to the vertex cover problem, where a graph is constructed from the formula's variables and clauses such that a minimum vertex cover corresponds to a satisfying assignment, with edges linking literals to ensure consistency. Turing reductions, which allow multiple adaptive queries to an oracle for the target problem, generalize many-one reductions and yield coarser notions of completeness under more flexible transformations.

Beyond NP, the polynomial hierarchy (PH) extends this structure analogously to the arithmetical hierarchy in logic, layering alternations of existential and universal quantifiers over polynomial-time predicates. The class PSPACE, containing problems solvable using polynomial space, corresponds to the validity of fully quantified Boolean formulas (QBF), where quantifiers alternate between existential (∃) and universal (∀) over propositional variables, making the truth evaluation PSPACE-complete. For instance, a QBF like ∀x ∃y ∀z φ(x,y,z), with φ a polynomial-time decidable formula, captures the alternating quantifier complexity inherent in games or planning problems. EXPTIME, the class of problems solvable in deterministic exponential time, includes problems such as the validity of certain expressive temporal and game logics with exponential blowups; the PH is conjectured to be proper, though it collapses to its second level (Σ₂ᵖ) under assumptions such as NP having polynomial-size circuits (the Karp-Lipton theorem).

Resolving P versus NP faces significant barriers, notably the oracle separations of the Baker-Gill-Solovay theorem, which constructs oracles A and B such that Pᴬ = NPᴬ yet Pᴮ ≠ NPᴮ, showing that relativizing proof techniques—those that remain valid in the presence of arbitrary oracles—cannot settle the question, as they would yield contradictory outcomes. As of 2025, no proof of P = NP or P ≠ NP has emerged, though progress in approximation algorithms for NP-hard problems, such as polynomial-time approximation schemes for Euclidean TSP, provides practical alternatives without resolving the core question.
These logical reductions and class characterizations underscore how propositional and predicate logics underpin the classification of computational hardness, linking satisfiability and provability to resource-bounded feasibility.
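The certificate-based definition of NP can be illustrated with SAT itself: checking a proposed assignment against a CNF formula takes time linear in the formula's size, even though finding one may be hard. The following Python sketch is illustrative; the clause encoding ('x1', '!x1') and the function name verify_sat_certificate are assumptions made for the example.

```python
def verify_sat_certificate(cnf, assignment):
    """Polynomial-time verifier for SAT: given a CNF formula (a list of clauses,
    each a list of literals such as 'x1' or '!x1') and a candidate assignment,
    check that every clause contains at least one true literal."""
    def literal_true(lit):
        if lit.startswith("!"):
            return not assignment[lit[1:]]
        return assignment[lit]
    return all(any(literal_true(lit) for lit in clause) for clause in cnf)

# (x1 or !x2) and (x2 or x3)
cnf = [["x1", "!x2"], ["x2", "x3"]]
print(verify_sat_certificate(cnf, {"x1": True, "x2": False, "x3": True}))   # True
print(verify_sat_certificate(cnf, {"x1": False, "x2": True, "x3": False}))  # False
```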

Logic in Hardware Design

Boolean Algebra

Boolean algebra forms the mathematical foundation for manipulating logical expressions in computing, particularly in the design of digital systems where propositions are represented as binary values. In this structure, variables assume values of 0 (false) or 1 (true), with operations of conjunction (AND, denoted ·), disjunction (OR, denoted +), and complementation (NOT, denoted by an overbar). These operations satisfy specific axioms that ensure consistent algebraic manipulation, mirroring the truth semantics of propositional logic but emphasizing structural properties for simplification and optimization.

The axioms of Boolean algebra include commutativity, where x + y = y + x and x \cdot y = y \cdot x; associativity, where (x + y) + z = x + (y + z) and (x \cdot y) \cdot z = x \cdot (y \cdot z); distributivity, where x \cdot (y + z) = (x \cdot y) + (x \cdot z) and x + (y \cdot z) = (x + y) \cdot (x + z); absorption, where x + (x \cdot y) = x and x \cdot (x + y) = x; and complement laws, where x + \bar{x} = 1 and x \cdot \bar{x} = 0, with 0 and 1 as the identity elements for + and ·, respectively. These axioms define a complemented distributive lattice, ensuring every element has a unique complement. Additionally, De Morgan's laws hold: \overline{x + y} = \bar{x} \cdot \bar{y} and \overline{x \cdot y} = \bar{x} + \bar{y}, which facilitate transformation between conjunctive and disjunctive forms.

Boolean algebras can also be viewed as Boolean rings, where addition is symmetric difference and multiplication is conjunction (x \cdot y), with every element idempotent under multiplication (x \cdot x = x) and self-inverse under addition (x + x = 0). They likewise form distributive lattices, with meet (\wedge, or ·) as the greatest lower bound and join (\vee, or +) as the least upper bound, bounded by 0 and 1. This dual perspective aids in proving properties and extending to infinite structures, though finite Boolean algebras are central to computing applications.

The application of Boolean algebra to computer science originated with Claude Shannon's 1937 master's thesis, "A Symbolic Analysis of Relay and Switching Circuits," which demonstrated that Boolean operations could model electrical switching circuits, bridging abstract logic to practical hardware design. This work established Boolean algebra as essential for analyzing and synthesizing binary logic systems.

Minimization techniques in Boolean algebra aim to reduce expressions to simpler equivalent forms, primarily sum-of-products (SOP, disjunctive normal form: a disjunction of conjunctions) or product-of-sums (POS, conjunctive normal form: a conjunction of disjunctions), which correspond to efficient circuit realizations. The Karnaugh map, introduced by Maurice Karnaugh in 1953, visualizes Boolean functions on a grid where adjacent cells differ by one variable, allowing grouping of 1s to identify prime implicants and simplify via adjacency rules. For functions with more variables, the Quine-McCluskey algorithm, developed by Willard V. Quine in 1952 and extended by Edward J. McCluskey in 1956, systematically generates prime implicants through tabular comparison of minterms, followed by selection of a minimal cover using essential prime implicants. These methods produce compact two-level representations without unstructured exhaustive enumeration.
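Because the two-element Boolean algebra has a finite carrier, each axiom can be confirmed by brute-force enumeration of all 0/1 assignments—the same principle behind truth-table verification. The short Python sketch below checks absorption, De Morgan's law, and distributivity this way; the helper check and the lambda encodings are illustrative assumptions.

```python
from itertools import product

def check(law, arity):
    """Brute-force check of a Boolean identity over all 0/1 assignments."""
    return all(law(*values) for values in product([0, 1], repeat=arity))

NOT = lambda x: 1 - x
AND = lambda x, y: x & y
OR  = lambda x, y: x | y

print(check(lambda x, y: OR(x, AND(x, y)) == x, 2))                            # absorption
print(check(lambda x, y: NOT(OR(x, y)) == AND(NOT(x), NOT(y)), 2))             # De Morgan
print(check(lambda x, y, z: AND(x, OR(y, z)) == OR(AND(x, y), AND(x, z)), 3))  # distributivity
```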

Logic Circuits and Gates

Logic circuits form the foundational building blocks of digital hardware, implementing Boolean functions through interconnected electronic components that process binary signals. These circuits translate abstract logical operations into physical or simulated electrical behaviors, enabling the arithmetic, storage, and control functions essential to computers and embedded systems. Gates, the primitive elements of logic circuits, perform basic operations on one or more input bits to produce an output bit, with their design rooted in semiconductor technology such as CMOS (complementary metal-oxide-semiconductor). The fundamental logic gates include AND, OR, NOT, NAND, NOR, and XOR, each with standardized symbols and truth tables defining their behavior for all possible inputs. The AND gate outputs 1 only if all inputs are 1, symbolized by a flat-backed shape with a rounded output; its truth table for two inputs (A, B) is:
A | B | Output
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
The OR gate outputs 1 if at least one input is 1, with a symbol featuring a curved input side and a pointed output; its truth table is:
A | B | Output
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1
The NOT gate, or inverter, inverts a single input (0 to 1 or 1 to 0), represented by a triangle with a small circle at the output; its truth table is A=0 → Output=1, A=1 → Output=0. The NAND gate, an AND followed by NOT, outputs 0 only if all inputs are 1, and is universal in that any Boolean function can be built from NAND gates alone; its symbol combines the AND shape with an output bubble. The NOR gate, an OR followed by NOT, outputs 1 only if all inputs are 0; its symbol is the OR shape with an output bubble, and it is likewise universal. The XOR gate outputs 1 if its inputs differ, symbolized by an OR shape with an extra curved line at the input, useful for parity checks; its truth table is:
A | B | Output
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
Combinational logic circuits produce outputs that depend solely on current inputs, without memory, and are constructed by interconnecting gates to realize complex Boolean expressions. A half-adder, for instance, adds two single-bit numbers to produce a sum (XOR of the inputs) and a carry (AND of the inputs), essential for basic arithmetic; its truth table is:
A | B | Sum | Carry
0 | 0 | 0 | 0
0 | 1 | 1 | 0
1 | 0 | 1 | 0
1 | 1 | 0 | 1
A full-adder extends this by incorporating a carry-in bit, using two half-adders and an OR gate for the final carry, enabling multi-bit addition in processors. Multiplexers (MUX) select one of multiple inputs to route to a single output based on control signals, functioning as data routers; a 2-to-1 MUX uses AND, OR, and NOT gates to choose between two inputs via a select line. Decoders convert binary inputs to one-hot outputs, activating a unique line from n inputs to up to 2^n outputs using AND gates, commonly for memory addressing.

Sequential logic circuits incorporate memory elements to store state, making outputs dependent on both current inputs and past states, typically synchronized by a clock signal. Flip-flops are the basic memory units: the SR (set-reset) flip-flop stores a bit using NAND or NOR gates with feedback, setting to 1 on S=1 (R=0), resetting to 0 on R=1 (S=0), and holding otherwise, though it has an invalid S=R=1 state. The D (data) flip-flop, edge-triggered on clock rising edges, captures the input D at the clock edge and is widely used for synchronization due to its simplicity. JK and T (toggle) flip-flops extend SR, with JK allowing toggle on J=K=1 and T flipping on T=1, enabling counters and state machines. Registers group flip-flops to store multi-bit words, such as a 4-bit register for temporary data holding in CPUs. Finite state machines (FSMs) model sequential behavior with states stored in registers and transitions computed by combinational logic, classified as Moore (output depends only on state) or Mealy (output depends on state and input), fundamental to protocol controllers and automata.

Timing considerations in logic circuits ensure reliable operation amid physical delays. Propagation delay is the time for a signal change at an input to affect the output, typically 10-100 ps per gate in modern CMOS processes (as of 2025). Hazards arise from timing mismatches, such as static hazards (a temporary incorrect output during a transition) or dynamic hazards (multiple output spikes), mitigated by redundant gates or careful design. Clocking synchronizes sequential elements, with edge-triggered flip-flops capturing data on clock edges; clock skew (variation in clock arrival times) must be kept within setup and hold margins (the minimum periods for which inputs must be stable, often 0.1-1 ns) to prevent metastability.

In VLSI (very-large-scale integration) design, logic synthesis automates the generation of gate-level netlists from high-level descriptions in hardware description languages (HDLs) such as Verilog, optimizing for area, power, and speed using algorithms that map behavioral code to gate networks. Verilog modules describe combinational and sequential logic, with synthesis tools inferring gates from always blocks and assign statements and applying technology mapping to target libraries (e.g., standard cells). This top-down approach enables complex chip designs, reducing manual gate-level work while ensuring synthesizability through constraints like clock domains.

Quantum extensions of logic circuits employ reversible gates to preserve information, crucial for quantum computing, where irreversible operations dissipate energy. The Toffoli gate, a universal reversible 3-bit gate, conditionally flips the target bit if both control bits are 1, enabling quantum arithmetic and the embedding of classical logic without measurement collapse.
As of 2025, simulations of superconducting quantum processors have demonstrated Toffoli fidelities exceeding 99% in noise-free conditions using echoed cross-resonance decompositions, with experimental hardware achieving around 98% or less; these approaches reduce T-gate overhead in fault-tolerant architectures, while optimizations for multi-controlled Toffolis minimize circuit depth for scalable quantum algorithms like Shor's.
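The combinational building blocks described above can be mirrored in software for simulation. The sketch below models a half-adder, a full-adder built from two half-adders and an OR gate, and a ripple-carry adder that chains full-adders over little-endian bit lists; the function names and bit-list convention are illustrative assumptions rather than a standard API.

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    """Sum is the XOR of the inputs, carry is their AND."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    """Two half-adders plus an OR gate combine the carries."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

def ripple_carry_add(x_bits, y_bits):
    """Add two little-endian bit lists by chaining full-adders."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]

print(ripple_carry_add([1, 1, 0], [1, 0, 1]))  # 3 + 5 = 8 -> [0, 0, 0, 1]
```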

Logic in Software and Programming

Logic Programming Paradigms

Logic programming is a programming paradigm in which programs are expressed as sets of logical formulas, typically Horn clauses, and computation proceeds through logical deduction rather than explicit control flow. This approach separates the logic of the problem—what is to be computed—from the control of how it is computed, allowing the underlying inference engine to derive solutions automatically. The semantic foundation draws from predicate logic, where programs represent knowledge and queries seek proofs of goals within that knowledge base.

Prolog, the most prominent logic programming language, was developed in 1972 at the University of Marseille by Alain Colmerauer, Philippe Roussel, and Robert Pasero, building on ideas from Robert Kowalski's procedural interpretation of Horn clauses. In Prolog, programs consist of facts, which are atomic propositions; rules, which are implications of the form head :- body; and queries, which pose goals to be resolved. For example, a simple family relation program might include the fact parent(tom, bob). and the rule grandparent(X, Z) :- parent(X, Y), parent(Y, Z)., with a query like ?- grandparent(tom, X). yielding bindings for X.

Computation in Prolog relies on the unification algorithm, originally introduced by J.A. Robinson in 1965, which matches terms by finding substitutions that make them identical, enabling pattern matching between goals and program clauses. This is combined with backtracking search, a depth-first strategy that explores alternative clauses upon failure, systematically attempting resolutions until a solution is found or all paths are exhausted. The inference mechanism in Prolog is based on SLD resolution (Selective Linear Definite clause resolution), a refutation-complete proof procedure for definite clause programs that linearly resolves selected goals against program clauses, reducing the goal set until success or failure. This process implements a top-down, goal-directed evaluation, ensuring soundness and completeness with respect to the least Herbrand model of the program. Prolog also supports Definite Clause Grammars (DCGs), a syntactic extension for defining context-free (and richer) grammars as logic clauses, facilitating parsing and generation tasks; for instance, a DCG rule like s --> np, vp. translates to Prolog clauses with difference lists for efficient sequence processing.

Variants of logic programming extend Prolog's core for specialized domains. Datalog, a restricted subset without function symbols, focuses on bottom-up evaluation for deductive databases, enabling efficient query answering over relational data through fixed-point semantics. Constraint Logic Programming (CLP) integrates constraint solving over domains such as finite sets, integers, or reals with logic programming, replacing pure unification with more general constraint propagation to handle combinatorial problems declaratively.
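Unification is small enough to sketch directly. The following Python fragment implements a Robinson-style unifier over a simple term encoding (uppercase strings as variables, tuples as compound terms); the representation and function names are illustrative assumptions, and the occurs check is omitted for brevity, as in many practical Prolog implementations.

```python
def unify(t1, t2, subst=None):
    """Robinson-style unification for terms represented as strings
    (variables start with an uppercase letter, constants lowercase)
    or tuples ('functor', arg1, arg2, ...). Returns a substitution dict or None.
    Occurs check omitted for brevity."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return {**subst, t1: t2}
    if is_var(t2):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2) and t1[0] == t2[0]:
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

# Unifying parent(X, bob) with parent(tom, Y) binds X = tom, Y = bob.
print(unify(("parent", "X", "bob"), ("parent", "tom", "Y")))
```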

Formal Specification and Verification

Formal specification in computer science employs mathematical logic to precisely describe the intended behavior of software and hardware systems, enabling unambiguous requirements capture without implementation details. This approach contrasts with informal methods by providing a foundation for rigorous analysis and proof of correctness. Verification then uses logical deduction to establish that the system implementation conforms to the specification, often through theorem proving or deductive methods. These techniques have been pivotal in ensuring reliability in critical systems, such as operating systems and embedded software.

Key formal specification languages include the Z notation and the Vienna Development Method (VDM). Z, based on Zermelo-Fraenkel set theory and first-order logic, uses schemas to model state and operations declaratively, facilitating refinement to executable code. Developed in the late 1970s at Oxford University, it emphasizes modular descriptions of abstract data types and invariants. VDM, originating from IBM's Vienna laboratory in the 1970s, employs a model-oriented style with explicit type definitions, operations, and pre/postconditions to specify functional behavior. Its specification language, VDM-SL, supports executable subsets for prototyping and has been standardized by ISO. Temporal logics extend these by incorporating time-dependent properties; for instance, Linear Temporal Logic (LTL) allows specification of properties over execution traces. Introduced by Amir Pnueli in 1977, LTL uses operators like "next" (X) and "eventually" (F) to express dynamic behaviors.

Verification techniques rely on deductive frameworks to prove program correctness. Hoare logic, proposed by C.A.R. Hoare in 1969, centers on triples of the form \{P\} C \{Q\}, where P is a precondition asserting initial state properties, C is the program segment, and Q is a postcondition specifying desired outcomes upon termination. This partial correctness semantics assumes termination and enables compositional reasoning via rules for constructs like loops and sequencing. Weakest precondition (wp) semantics, developed by Edsger Dijkstra in the 1970s, complements Hoare logic by computing the weakest initial condition wp(C, Q) that guarantees Q after executing C, supporting backward reasoning from postconditions. For example, wp(x := e, Q) = Q[e/x], substituting expression e for variable x in Q. These methods underpin refinement calculi, where specifications are iteratively proven correct against implementations.

Theorem proving systems mechanize these logics for scalable verification. The Boyer-Moore theorem prover (NQTHM), initiated in 1971, automates proofs in a quantifier-free first-order logic with equality, using term rewriting and induction for hardware and software validation. Its successor, ACL2 (A Computational Logic for Applicative Common Lisp), introduced in 1996, integrates a Lisp-based programming language with an interactive prover, enabling certified proofs of complex algorithms and systems. Widely used in industry, ACL2 has verified components such as floating-point units and cryptographic protocols through thousands of lemmas. Advanced logics, such as higher-order logic or separation logic variants, enhance expressiveness for concurrent or heap-manipulating systems.

Properties in formal specifications distinguish safety and liveness concerns, often expressed in LTL. Safety properties, like "no overflow ever occurs," assert that bad states are unreachable (e.g., always not overflow, G ¬overflow). Liveness properties ensure progress, such as "a request is eventually granted" (F granted).
These are specified over infinite execution paths, with safety often translatable to reachability checks and liveness requiring fairness assumptions.

A landmark case study is the seL4 microkernel, formally verified in 2009 from abstract specification to C implementation using the Isabelle/HOL theorem prover, proving functional correctness, absence of buffer overflows, and adherence to access controls. This end-to-end proof covered 8,700 lines of C code, establishing seL4 as the first general-purpose OS kernel with machine-checked security guarantees. Subsequent updates through 2025 have extended the proofs to the binary level, to noninterference properties, and to new architectures such as ARMv8, with continuous maintenance ensuring no violations of verified properties over 15+ years of deployment.

Challenges in formal verification include state explosion, where system complexity leads to exponentially large state spaces that overwhelm proof search. This arises in reasoning about concurrent or parameterized systems, limiting scalability. Abstraction refinement techniques, such as counterexample-guided abstraction refinement (CEGAR), mitigate this by starting with coarse models, identifying infeasible counterexamples via simulation, and iteratively refining abstractions until convergence or proof completion.
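The assignment rule wp(x := e, Q) = Q[e/x] mentioned above can be illustrated executably by representing program states as dictionaries and predicates as functions of the state. The sketch below is a toy model under those assumptions (the helper wp_assign is hypothetical), not a verification tool.

```python
def wp_assign(expr, postcondition):
    """Weakest precondition of the assignment  x := expr  with respect to a
    postcondition over the program state: wp(x := e, Q) = Q[e/x].
    States are dicts; expr and postcondition are functions of the state."""
    return lambda state: postcondition({**state, "x": expr(state)})

# Program:  x := x + 1   Postcondition Q:  x > 0
Q = lambda s: s["x"] > 0
P = wp_assign(lambda s: s["x"] + 1, Q)   # P is x + 1 > 0, i.e. x > -1

print(P({"x": 0}))    # True:  0 > -1
print(P({"x": -1}))   # False: -1 > -1 does not hold
```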

Automated Reasoning and Tools

Automated Theorem Proving

Automated theorem proving (ATP) encompasses algorithms and systems designed to mechanically generate formal proofs of theorems from logical axioms, primarily in first-order and higher-order logics. These systems automate the search for derivations that establish the validity of conjectures, often using refutation procedures in which the negation of a theorem is assumed and a contradiction is sought. ATP has applications in software verification, hardware verification, and mathematics, enabling rigorous validation without human intervention in proof construction. Key challenges include managing large search spaces and handling undecidable logics, addressed through complete inference rules and heuristics.

Resolution theorem proving, a foundational method in ATP, was introduced by J.A. Robinson in 1965. It operates on formulas converted to clausal normal form, where clauses are disjunctions of literals, and employs binary resolution to combine clauses into new ones until the empty clause is derived, proving unsatisfiability. The method achieves refutation completeness for clausal logic, meaning any unsatisfiable set of clauses yields a proof. Central to resolution is unification, an algorithm also developed by Robinson, which finds substitutions that make literals identical, enabling resolution between non-identical terms. For example, resolving P(x) \lor Q(x) and \neg P(a) uses unification to substitute x with a, yielding Q(a).

Semantic tableaux, or analytic tableaux, offer a tableau-based method for automated proof search in propositional and first-order logic. Originating from E.W. Beth's work in the 1950s, the method constructs a tree from the negation of a formula, applying expansion rules to branch on logical connectives and quantifiers. A proof exists if all branches close via contradictory literals, such as a literal and its negation. Tableau methods are goal-directed and support free-variable variants for efficiency in first-order settings, providing soundness and completeness for refutation. They contrast with resolution by emphasizing semantic model exploration over syntactic clause combination.

Handling equality in ATP requires specialized rules, as standard resolution treats equality as just another binary predicate. Paramodulation, developed by G.A. Robinson and L.T. Wos in 1969, extends resolution for equational reasoning by using equations in clauses to replace subterms. For clauses containing l = r and C[l'] where l' unifies with l, paramodulation produces C[r] under the unifying substitution, ensuring refutation completeness when combined with resolution. This rule is essential for theories with equality axioms, preventing incomplete proofs in domains like algebra. Modern implementations optimize paramodulation with ordering strategies to reduce redundancy.

Prominent ATP systems include Otter and Vampire for first-order logic. Otter, developed by William McCune at Argonne National Laboratory beginning in the 1980s, supports resolution and paramodulation with strategies like set-of-support and demodulation for efficient equational proving; its companion prover EQP famously solved the Robbins conjecture in 1996. Vampire, originating from work at the University of Manchester under Andrei Voronkov, employs advanced indexing and saturation techniques, achieving top performance on benchmarks through its AVATAR architecture for clause splitting. Interactive provers like Coq and Isabelle/HOL blend automation with user guidance: Coq, based on the Calculus of Inductive Constructions and developed at INRIA, allows tactic-based proofs in dependent type theory, while Isabelle/HOL uses higher-order logic with structured proofs via the Isar language, both verifying complex theorems interactively.

Higher-order ATP extends these methods to logics with lambda abstraction and higher-type quantification, integrating lambda calculus for term representation. Systems like Leo-III use extensional higher-order paramodulation and superposition, with completeness results relying on techniques such as rigid E-unification.
These provers handle impredicative definitions, crucial for formalizing mathematics, but face challenges from undecidable higher-order unification; approximations such as first-order encodings mitigate this. Benchmarks for ATP are standardized via the TPTP library, maintained by Geoff Sutcliffe since 1992, containing thousands of problems in first-order and higher-order formats for evaluating prover performance; annual competitions like CASC assess speed and success rates on TPTP suites. As of 2025, advances in neural-guided proving integrate large language models with search algorithms, as in Lean-based systems, improving premise selection and tactic generation for complex proofs; for instance, models such as LeanProgress predict proof progress to guide exploration, reporting a 3.8% improvement on proofs in the Mathlib4 library over baselines.
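At the propositional level, the resolution loop described above fits in a few lines. The following Python sketch saturates a clause set with binary resolution and reports unsatisfiability when the empty clause appears; the clause encoding (literals as strings, '~' for negation) and function names are illustrative assumptions, and no refinements such as set-of-support are applied.

```python
def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals; '~p' negates 'p')."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in c2:
            resolvents.append((c1 - {lit}) | (c2 - {comp}))
    return resolvents

def resolution_refutation(clauses):
    """Saturate a clause set with binary resolution; return True if the empty
    clause is derived (the set is unsatisfiable), False if saturation ends first."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a == b:
                    continue
                for r in resolve(a, b):
                    if not r:
                        return True          # empty clause: contradiction found
                    new.add(frozenset(r))
        if new <= clauses:
            return False                     # saturated without contradiction
        clauses |= new

# Prove Q from P and P -> Q by refuting {P, ~P v Q, ~Q}.
print(resolution_refutation([{"p"}, {"~p", "q"}, {"~q"}]))   # True
```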

Model Checking and Satisfiability Solving

Model checking and satisfiability solving are algorithmic techniques central to automated verification in computer science, focusing on finite-state systems and logical constraints. Satisfiability solving addresses the Boolean satisfiability problem (SAT), which asks whether there exists an assignment of truth values to variables in a propositional formula that makes the formula true. This problem underpins many verification tasks, as propositional logic forms the core of SAT instances. The Davis–Putnam–Logemann–Loveland (DPLL) algorithm, introduced in the early 1960s, provides a foundational procedure for solving SAT by systematically exploring partial assignments and using unit propagation and pure literal elimination to prune the search space. Modern SAT solvers build on DPLL with conflict-driven clause learning (CDCL), which analyzes conflicts during search to derive new clauses that constrain future explorations, enabling non-chronological backtracking and significantly improving efficiency on industrial benchmarks. CDCL was pioneered in solvers such as GRASP in the mid-1990s, leading to dramatic performance gains, with modern solvers handling millions of clauses in seconds. The NP-completeness of SAT, established by Stephen Cook in 1971, underscores its theoretical hardness, yet practical heuristics make it tractable for many real-world instances.

Extensions of SAT solving incorporate domain-specific theories, yielding satisfiability modulo theories (SMT) solvers that handle formulas combining propositional logic with theories like linear arithmetic. SMT solvers such as Z3, released by Microsoft Research in 2008, integrate a CDCL-based SAT engine with theory-specific solvers, using lazy clause generation to propagate constraints across theories efficiently. Similarly, CVC5, released in 2022 as the successor to CVC4, excels in arithmetic theories through techniques like bit-vector optimization and Fourier-Motzkin elimination for linear real arithmetic, achieving state-of-the-art performance on SMT-LIB benchmarks. These tools are widely used in software verification, where they check constraints involving numerical computations.

Model checking verifies whether a finite-state system satisfies a specification by exhaustively exploring its state space. Systems are modeled as Kripke structures—labeled transition systems representing states and atomic propositions—while specifications use branching-time logics like Computation Tree Logic (CTL) or linear-time logics like Linear Temporal Logic (LTL). CTL model checking, introduced by Clarke and Emerson in 1981 and independently by Queille and Sifakis in 1982, computes fixed points of temporal operators over the structure to confirm properties like invariance or reachability. LTL model checking reduces the problem to checking emptiness of a Büchi automaton derived from the formula, with PSPACE-completeness shown by Sistla and Clarke in 1985. To combat state explosion, symbolic representations using binary decision diagrams (BDDs), pioneered by Bryant in 1986, compactly encode state sets and transitions, enabling verification of systems with billions of states.

Abstraction techniques mitigate complexity by constructing approximate models that over-approximate the system's behavior, allowing initial checks on simplified structures. Counterexample-guided abstraction refinement (CEGAR), formalized by Clarke, Grumberg, and colleagues in 2000, iteratively refines these abstractions: if a counterexample is found, it is analyzed to determine whether it is spurious (an artifact of over-approximation); if so, the abstraction is refined by adding predicates to eliminate the spurious trace, and the check restarts until convergence or a real bug is confirmed. This method has proven effective for hardware and software verification, reducing manual effort while preserving soundness.
Prominent tools include NuSMV, a symbolic model checker supporting CTL and LTL over BDD- and SAT-based engines, with NuSMV 2 released in 2002 for flexible system modeling in the SMV language. SPIN, originating at Bell Labs in the 1980s, specializes in LTL model checking via on-the-fly exploration with partial-order reduction and simulation, widely applied in protocol verification. As of 2025, emerging quantum SAT solvers leverage Grover's algorithm and quantum annealers to explore search spaces quadratically faster for specific instances, though hardware limitations restrict them to small-scale problems compared to classical CDCL solvers.
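A stripped-down DPLL procedure shows the core loop that CDCL solvers refine with clause learning. The sketch below uses the DIMACS-style convention of integer literals and performs unit propagation followed by branching; it is an illustrative toy (the helpers dpll and simplify are assumptions), omitting pure-literal elimination, watched literals, and learning.

```python
def dpll(clauses, assignment=None):
    """A minimal DPLL sketch: clauses are lists of integer literals
    (positive = variable, negative = its negation), as in DIMACS CNF."""
    if assignment is None:
        assignment = {}

    # Unit propagation: repeatedly assign literals forced by unit clauses.
    changed = True
    while changed:
        changed = False
        simplified = simplify(clauses, assignment)
        if any(len(c) == 0 for c in simplified):
            return None                       # conflict under this assignment
        for lit in [c[0] for c in simplified if len(c) == 1]:
            assignment[abs(lit)] = lit > 0
            changed = True

    simplified = simplify(clauses, assignment)
    if not simplified:
        return assignment                     # all clauses satisfied
    var = abs(simplified[0][0])               # branch on an unassigned variable
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

def simplify(clauses, assignment):
    """Drop satisfied clauses and falsified literals under the partial assignment."""
    out = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue
        out.append([l for l in clause if abs(l) not in assignment])
    return out

# (x1 or x2) and (~x1 or x3) and (~x2) and (~x3 or x1)
cnf = [[1, 2], [-1, 3], [-2], [-3, 1]]
print(dpll(cnf))   # {2: False, 1: True, 3: True}
```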

Applications and Extensions

Logic in Artificial Intelligence

Logic plays a foundational role in artificial intelligence by providing formal mechanisms for reasoning, knowledge encoding, and decision-making under uncertainty. In AI, logical frameworks enable systems to represent complex relationships, infer new information from partial data, and plan actions toward goals, bridging symbolic computation with real-world applications. This integration has evolved from rule-based expert systems in the 1970s to contemporary hybrids that address limitations in pure neural approaches, such as lack of interpretability and generalization.

Knowledge representation in AI utilizes logical structures to organize and retrieve information efficiently. Frames, introduced by Marvin Minsky, structure knowledge as prototypical situations with fillable slots for attributes, defaults, and procedures, allowing AI systems to apply common-sense inferences to new instances. Semantic networks, developed by M. Ross Quillian, model knowledge as directed graphs where nodes represent concepts and labeled edges denote semantic relations, supporting associative reasoning and inheritance hierarchies. For more rigorous applications, logic-based ontologies like the Web Ontology Language (OWL), whose OWL 2 revision has been a W3C recommendation since 2009, employ description logics to define classes, properties, and axioms, enabling automated classification and consistency checking in knowledge-intensive domains such as biomedical informatics.

Automated planning leverages logic to model dynamic environments and generate optimal action sequences. The STRIPS formalism, created by Richard Fikes and Nils J. Nilsson, employs predicate logic to specify action preconditions, add-lists for positive effects, and delete-lists for negative effects, facilitating state-space search for goal achievement. Building on this, the Planning Domain Definition Language (PDDL), standardized by Drew McDermott and collaborators, uses logical predicates to express domains, problems, and temporal constraints, serving as a common input language for AI planning systems in international competitions. These representations allow planners to verify logical consistency and explore hypothetical scenarios.

Non-monotonic reasoning extends classical logic to accommodate defeasible inferences, essential for AI in uncertain or evolving contexts. Default logic, formalized by Raymond Reiter, introduces default rules that permit provisional conclusions unless contradicted, with semantics defined via fixed-point extensions to handle belief sets. Circumscription, proposed by John McCarthy, achieves non-monotonicity by minimizing the scope of predicates in models, formalizing principles like "all birds fly unless specified otherwise" through prioritized assumptions. Such mechanisms support belief revision, where new evidence retracts prior inferences without invalidating the entire knowledge base.

To manage uncertainty inherent in real-world data, AI incorporates fuzzy and probabilistic logics. Fuzzy logic, originated by Lotfi Zadeh, generalizes binary truth values to continuous degrees of membership, enabling graded reasoning in applications like control systems and decision support. Markov logic networks (MLNs), introduced by Matthew Richardson and Pedro Domingos, unify first-order logic with probabilistic graphical models by weighting logical formulas as soft constraints, allowing maximum a posteriori inference over grounded clauses for tasks like collective classification and entity resolution.

Neural-symbolic integration represents a frontier in AI as of 2025, merging deep learning's pattern recognition with logic's explanatory power through architectures such as differentiable logic programming.
These hybrids enable joint optimization of neural parameters and symbolic rules, improving generalization and transparency in domains such as visual reasoning, as evidenced by a systematic survey of 167 publications from 2020–2024. A seminal earlier application is the MYCIN expert system, built by Edward H. Shortliffe, which applied backward-chaining inference over approximately 500 production rules to diagnose bacterial infections and recommend antibiotic therapies, achieving performance comparable to human experts in controlled evaluations. Predicate logic underpins rule encoding in such systems, supporting precise variable binding and quantification.
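Rule-based expert systems of this era can be caricatured by a simple forward-chaining loop over propositional production rules. The sketch below is purely illustrative: the toy rule base is invented for the example and does not reproduce MYCIN's actual rules or its certainty-factor calculus.

```python
def forward_chain(facts, rules):
    """Naive forward chaining over propositional production rules.
    Each rule is (premises, conclusion); fire rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical toy rule base in the spirit of a diagnostic expert system.
rules = [
    (["gram_positive", "coccus", "clusters"], "staphylococcus"),
    (["staphylococcus", "infection_site_blood"], "consider_antistaphylococcal_therapy"),
]
facts = {"gram_positive", "coccus", "clusters", "infection_site_blood"}
print(forward_chain(facts, rules))
```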

Logic in Databases and Knowledge Systems

The relational model, introduced by Edgar F. Codd in 1970, provides a foundation for database management using mathematical relations to represent data, enabling operations that maintain integrity and consistency. The model supports procedural query formulation through relational algebra and non-procedural querying through relational calculus, a logical framework based on first-order predicate logic where queries specify what to retrieve rather than how. Structured Query Language (SQL), the standard for relational databases, embodies relational calculus by allowing declarative expressions of desired results via operations such as selections and joins, which align with logical conditions on tuples.

Datalog extends logic programming to databases, serving as a declarative language for expressing complex queries, particularly recursive ones, over relational data. It uses Horn clauses without function symbols, facilitating bottom-up evaluation where facts are iteratively derived from rules until a fixed point is reached, optimizing for recursive computations like transitive closures in graph-like structures. This approach leverages semi-naïve evaluation to avoid redundant computations, applying rules only to newly derived facts, which enhances efficiency for deductive query tasks.

Database integrity relies on logical constraints expressed as first-order conditions to enforce data consistency, such as primary keys ensuring uniqueness and foreign keys maintaining referential integrity through predicate checks on relationships between tables. These constraints, formalized in logic databases, prevent invalid states by validating updates against rules like denial constraints, which specify prohibited conditions.

In knowledge systems, description logics underpin the Web Ontology Language (OWL) for the Semantic Web, providing a decidable fragment of first-order logic to define classes, properties, and individuals in RDF graphs. OWL's underlying SROIQ description logic enables formal semantics for ontologies, supporting reasoning tasks like subsumption and consistency checking. RDF Schema (RDFS) extends RDF with entailment rules for inferring implicit knowledge, such as subclass hierarchies via transitive rdfs:subClassOf properties, allowing systems to derive new triples from existing RDF data.

Active databases incorporate event-condition-action (ECA) rules to trigger responses based on database events, enhancing reactivity in knowledge systems. An ECA rule detects an event (e.g., an insert operation), evaluates a condition as a side-effect-free query, and executes an action if the condition holds, with composite events built using operators like sequencing or disjunction for complex monitoring.

As of 2025, logical principles also influence NoSQL and graph databases, where declarative query languages like Cypher for Neo4j employ pattern matching and predicates to navigate relationships, mirroring relational calculus but optimized for non-tabular structures. This logical foundation supports scalable reasoning over interconnected data in domains like social networks, without rigid schemas.
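The recursive Datalog program for reachability, path(X,Y) :- edge(X,Y) and path(X,Y) :- edge(X,Z), path(Z,Y), can be evaluated bottom-up exactly as described. The Python sketch below performs semi-naïve iteration, joining the static edge relation only with newly derived path facts; the function name transitive_closure and the tuple encoding of facts are illustrative assumptions.

```python
def transitive_closure(edges):
    """Bottom-up evaluation of the Datalog program
       path(X, Y) :- edge(X, Y).
       path(X, Y) :- edge(X, Z), path(Z, Y).
    using semi-naive iteration: each round joins edges only with newly derived paths."""
    path = set(edges)
    delta = set(edges)
    while delta:
        new = {(x, y) for (x, z) in edges for (z2, y) in delta if z == z2} - path
        path |= new
        delta = new
    return path

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```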

References

  1. [1]
    [PDF] Logic in Computer Science 1 Introduction
    The role of logic in computer science has been compared to that of calculus in physics and engineering. This course focusses on the foundations of logic rather ...
  2. [2]
    COMP 409/509: Logic in Computer Science and Artificial Intelligence
    COMP 409/509 provides the student with a thorough introduction to computational logic, covering in depth the topics of syntax, semantics, decision procedures.
  3. [3]
    [PDF] LOGIC IN COMPUTER SCIENCE: Modelling and Reasoning about ...
    Formal methods have finally come of age! Specification languages, theorem provers, and model checkers are beginning to be used routinely in industry.
  4. [4]
    Propositional Logic - Stanford Encyclopedia of Philosophy
    May 18, 2023 · Propositional logic is the study of the meanings of, and the inferential relationships that hold among, sentences based on the role that a specific class of ...
  5. [5]
    [PDF] Lecture 1: Propositional Logic
    Propositional formulas are constructed from atomic propositions by using logical connectives. Connectives false true not and or conditional (implies).
  6. [6]
    3.1 Propositional Logic - Discrete Mathematics
    To verify that two statements are logically equivalent, you can make a truth table for each and check whether the columns for the two statements are identical.
  7. [7]
    [PDF] Logic: resolution - CS221 Stanford
    For general propositional logic, modus ponens is insufficient. In this lecture, we'll see that a more powerful inference rule, resolution, is complete for all ...
  8. [8]
    [PDF] Propositional Logic: - Computer Science, UWO
    Every truth table (Boolean function) can be written as either a conjunctive normal form. (CNF) or disjunctive normal form (DNF). • CNF is an ∧ of ∨s, ...
  9. [9]
    George Boole - Stanford Encyclopedia of Philosophy
    Apr 21, 2010 · He revolutionized logic by applying methods from the then-emerging field of symbolic algebra to logic. Where traditional (Aristotelian) logic ...
  10. [10]
    [PDF] First-order predicate logic with identity: syntax and semantics Syntax
    Sep 4, 2008 · First-order predicate logic with identity: syntax and semantics. Syntax. Vocabulary. Individual variables: u, v, w, x, y, z, u1, …, z1, u2 ...
  11. [11]
    [PDF] First-Order Logic - Syntax, Semantics, Resolution
    First-order logic is also called (first-order) predicate logic. Ruzica Piskac. First-Order Logic - Syntax, Semantics, Resolution. 3 / 125. Page 4. Syntax. 1.1 ...
  12. [12]
    [PDF] First Order Logic - Syntax and Semantics - West Virginia University
    Feb 6, 2013 · First-order Logic (FOL) extends Propositional Logic (PL) with predicates, functions and quantifiers. Subramani. First Order Logic. Page 11 ...
  13. [13]
    [PDF] First-Order Logic Prenex Normal Form - University of Iowa
    Prenex normal form is a formula with no quantifiers, or of the form Q_1 x_1 \ldots Q_n x_n\, P, where the Q_i are universal or existential quantifiers, the x_i are variables, and P is free of ...
  14. [14]
    [PDF] Semantics of First-Order Logic - CS@Cornell
    An undirected graph is connected if for all vertices u, v (u \neq v) there is a path from u to v. • A digraph is strongly connected if for all vertices ...
  15. [15]
    [PDF] On Herbrand's Theorem - UCSD Math
    Herbrand's theorem is one of the fundamental theorems of mathematical logic and allows a certain type of reduction of first-order logic to propositional logic.
  16. [16]
    [PDF] Undecidability of First-Order Logic - Computer Science
    Church's proof uses the fact that it is undecidable whether two expressions in λ-calculus are equivalent, and then reduces this to the decision problem. On the ...
  17. [17]
    Semantical Considerations on Modal Logic
    Semantic Scholar extracted view of "Semantical Considerations on Modal Logic" by Saul A. Kripke.
  18. [18]
    [PDF] The temporal logic of programs - Semantic Scholar
    The temporal logic of programs. @article{Pnueli1977TheTL, title={The temporal logic of programs}, author={Amir Pnueli}, journal={18th Annual Symposium on ...
  19. [19]
    [PDF] THREE MODELS FOR THE DESCRIPTION OF LANGUAGE
    We study the formal properties of a set of grammatical transformations that carry sentences with phrase structure into new sentences with derived phrase ...
  20. [20]
    [PDF] representation of events in nerve nets and
    In showing that each regular event is representable in the state of a finite automaton, the automaton we use is a McCulloch-Pitts nerve net. Thus their neurons ...
  21. [21]
    [PDF] Logic Meets Algebra: the Case of Regular Languages
    The best-known instance of such a result is Schützenberger's theorem: Theorem 3.3 ([Sch65]). A regular language is star-free if and only if its syntactic monoid.
  22. [22]
    [PDF] Context-Free Languages and Pushdown Automata
    In the bibliography, we have generally tried to retrieve the references to the original papers, in order to give some flavour of the chronological development ...
  23. [23]
    [PDF] The Pumping Lemma - Theorem of the Day
    This lemma was published in 1959 by Michael Rabin and Dana Scott, and, independently in a generalisation to context-free languages, by Yehoshua Bar-Hillel, ...
  24. [24]
    [PDF] Analyzing Ambiguity of Context-Free Grammars
    Ambiguity in context-free grammars is a recurring problem in language design and parser generation, as well as in applications where gram- mars are used as ...
  25. [25]
    [PDF] the myhill-nerode theorem - CS@Columbia
    Feb 5, 2017 · The key concept to the Myhill-Nerode theorem is the distinguishing extension. Definition 1.1. Let L ⊆ Σ∗ be any language over the alphabet Σ.
  26. [26]
    [PDF] ON COMPUTABLE NUMBERS, WITH AN APPLICATION TO THE ...
    The "computable" numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means.
  27. [27]
    The Church-Turing Thesis (Stanford Encyclopedia of Philosophy)
    Jan 8, 1997 · The Church-Turing thesis concerns the concept of an effective or systematic or mechanical method, as used in logic, mathematics and computer science.
  28. [28]
    [PDF] A Note on the Entscheidungsproblem Alonzo Church The Journal of ...
    Mar 3, 2008 · Received April 15, 1936. An unsolvable problem of elementary number theory, American journal of mathematics, vol. 58 (1936). Grundzüge der ...
  29. [29]
    [PDF] Cook 1971 - Department of Computer Science, University of Toronto
    Theorem 1: If a set S of strings is accepted by some nondeterministic Turing machine within polynomial time, then S is P-reducible to { DNF tautologies}.
  30. [30]
    [PDF] REDUCIBILITY AMONG COMBINATORIAL PROBLEMS
    The main contribution of this paper is the demonstration that a large number of classic difficult computational problems, arising in fields such as mathematical ...
  31. [31]
    [PDF] Word Problems Requiring Exponential Time - People | MIT CSAIL
    Word Problems Requiring Exponential Time: Preliminary Report. L.J. Stockmeyer and A.R. Meyer. Massachusetts Institute of Technology. I. INTRODUCTION. The ...
  32. [32]
    The polynomial-time hierarchy - ScienceDirect.com
    The polynomial-time hierarchy is a subrecursive analog of the Kleene arithmetical hierarchy where deterministic polynomial time plays the role of recursive  ...
  33. [33]
    Relativizations of the P = ? N P Question - SIAM Publications Library
    We investigate relativized versions of the open question of whether every language accepted nondeterministically in polynomial time can be recognized ...
  34. [34]
    P versus NP problem - Wikipedia
    The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly ...
  35. [35]
  36. [36]
    [PDF] A Symbolic Analysis of Relay and Switching Circuits
    These will include hand-operated switches,. Page 11. A Symbolic Analysis of Relay and Switching Circuits. 481 contacts on external relays, etc. Relays and other ...
  37. [37]
  38. [38]
    The Problem of Simplifying Truth Functions
    THE PROBLEM OF SIMPLIFYING TRUTH FUNCTIONS. W. V. QUINE, Harvard University. The formulas of the propositional calculus, or the logic of truth functions, are ...
  39. [39]
    [PDF] Minimization of Boolean Functions" - MIT
    Minimization of Boolean Functions". E. J. McCLUSKEY, Jr. (Manuscript received June 26, 1956). A systematic procedure is presented for writing a Boolean ...
  40. [40]
    [PDF] 2 Logic Gates and Combinational Logic - University of Oregon
    The basic logic gates are AND, OR, NAND, NOR, XOR, INV, and BUF. The last two are not standard terms; they stand for “inverter” and “buffer”, respectively. The ...
  41. [41]
    Logic Gates
    Table 1 defines the 16 logic gates, shows logical symbol, behavior in ... People are most familiar with the ones that are named: NAND, NOR, AND, OR, XOR.
  42. [42]
    Gates
    A logic gate is an input-output device that computes a Boolean operator (AND, OR, NOT, XOR, NAND, NOR, etc.) Here are the standard symbols for these gates:.
  43. [43]
    7.1 Combinational Logic Circuits - Robert G. Plantz
    For a different approach, we look at the definition of half adder. The sum is simply the XOR of the two inputs, and the carry is the AND of the two inputs. This ...
  44. [44]
    [PDF] Combinational Circuits in Computers (Examples) - UTK-EECS
    Full-Adder (2 Half-Adders). • Arithmetic circuit: 1-bit full-adder (2 half-adders with carry in). S = \bar{A}\bar{B}C + \bar{A}B\bar{C} + A\bar{B}\bar{C} + ABC = (\bar{A}\bar{B} + AB)C + (\bar{A}B + A\bar{B})\bar{C} = (A ...
  45. [45]
    combinational - Logic Gates
    The multiplexer circuit is typically used to combine two or more digital signals onto a single line, by placing them there at different times. Technically, this ...
  46. [46]
    [PDF] CHAPTER VIII FINITE STATE MACHINES (FSM)
    Flip-flops, registers, and latches that are enabled/controlled with a signal derived from clock form a synchronous sequential system. • Asynchronous sequential ...
  47. [47]
    Digital Electronics Part II : Sequential Logic
    These D flip-flops are used throughout digital designs to synchronize logic signals by aligning them with clocks and to store states in finite state machine ...
  48. [48]
    Sequential Logic - Stephen Marz
    Finite state machines contain two elements: (1) combinational logic and (2) state registers. The combinational logic takes the current state and other inputs ...
  49. [49]
    [PDF] An Introduction to Asynchronous Circuit Design
    The clock period is determined by adding the worst case propagation delay, the margin and the maximum clock skew. Clock skew is simply the maximum difference ...
  50. [50]
    [PDF] VHDL Logic Design Overview
    Hazards in Combinational Networks: each gate has 10 ns propagation delay (timing-diagram figure omitted).
  51. [51]
    [PDF] Asynchronous Design Methodologies: An Overview
    By setting the clock rate to a long enough period, all worries about hazards (undesired signal transitions) and the dynamic state of the circuit are removed.
  52. [52]
    [PDF] Introduction to Logic Synthesis using Verilog HDL
    Oct 14, 2006 · Introduction to Logic Synthesis Using Verilog HDL explains how to write accurate Verilog descrip- tions of digital systems that can be ...
  53. [53]
    [PDF] Hardware Description Languages
    A.1.2.2 Synthesis. Logic synthesis transforms HDL code into a netlist describing the hardware; e.g., logic gates and the wires connecting.
  54. [54]
    [PDF] Tutorial: Design of a Logic Synthesis System - CECS
    Optimization algorithms enable a logic synthesis design methodology. Naive translation from hdl to gates results in circuits which are too large and too slow ...
  55. [55]
    [PDF] arXiv:2501.02222v1 [quant-ph] 4 Jan 2025
    Jan 4, 2025 · The Toffoli gate is also widely used in several important quantum algorithms and protocols, such as Shor's algorithm [46, 47], Grover search ...
  56. [56]
    [PDF] A complete set of transformation rules for reversible circuits - arXiv
    Aug 24, 2025 · The Toffoli gate is a 3-bit reversible logic gate that has two control bits and one target bit: it flips the target bit iff both of the two ...
  57. [57]
    Practical Fidelity Limits of Toffoli Gates in Superconducting Quantum ...
    Sep 5, 2025 · Our results help bridge the gap between theoretical gate specifications and real-world implementation fidelity by providing detailed benchmarks ...
  58. [58]
    [PDF] Logic Programming Robert Kowalski MIT Encyclopaedia of ...
    Logic programming is the use of logic to represent programs and of deduction to execute programs in logical form. To this end, many different forms of logic ...
  59. [59]
    [PDF] Logic Programming - CMU School of Computer Science
    Aug 29, 2006 · Logic programming is a particular way to approach programming. Other paradigms we might compare it to are imperative programming or func- ...
  60. [60]
    [PDF] The birth of Prolog - Alain Colmerauer
    During the fall of 1972, the first Prolog system was implemented by Philippe in Niklaus Wirth's language Algol-W; in parallel, Alain and Robert Pasero created ...
  61. [61]
    Fifty Years of Prolog and Beyond
    May 17, 2022 · Automatic theorem proving made a big step forward in a seminal paper ... 1972) (the latter already present in Marseille Prolog). He used as ...
  62. [62]
    [PDF] Unification: A Multidisciplinary Survey - Rice University
    The unification problem and several variants are presented. Various algorithms and data structures are discussed. Research on unification arising in several ...
  63. [63]
    [PDF] A view of the origins and development of Prolog
    In this paper I present a review of the origins and development of Prolog based on a long-term association, both academic and personal, with Colmerauer, the ...
  64. [64]
    [PDF] introduction to logic programming - UT Computer Science
    SLD-resolution allows us to derive only positive statements. Chapter 5 deals with the other side of the coin - the derivability of the negative statements.
  65. [65]
    Datalog: concepts, history, and outlook - ACM Digital Library
    This chapter is a survey of the history and the main concepts of Datalog. We begin with an introduction to the language and its use for database definition ...
  66. [66]
    Constraint logic programming - ACM Digital Library
    Constraint logic programming (CLP) has been proposed as a declarative paradigm for merging constraint solving and logic programming. Recently, coinductive ...
  67. [67]
    [PDF] The Z Notation: - Rose-Hulman
    In the formal text of a Z specification, spaces are generally not considered to be significant, except where they serve to separate one symbol from the next ...
  68. [68]
    [PDF] An introduction to Z and formal specifications - People | MIT CSAIL
    This paper provides an introduction to the description of information systems using formal, mathematical specifications written in the Z notation, and to the ...
  69. [69]
    [PDF] Using the Vienna Development Method (VDM) to Formalize a ...
    The purpose of this document is to serve as an introduction to certain specification techniques and the specification language (Meta-IV) of the Vienna ...
  70. [70]
    The Vienna Development Method - Overture Tool
    VDM is based on a formal specification language VDM-SL, the semantics of which are given mathematically in an ISO Standard [6].
  71. [71]
    The temporal logic of programs | IEEE Conference Publication
    The main proof method suggested is that of temporal reasoning in which the time dependence of events is the basic concept.
  72. [72]
    The Boyer-Moore Theorem Prover (NQTHM) - UT Computer Science
    The Boyer-Moore theorem prover started in Edinburgh, Scotland, in 1971. It was originally a fully-automatic theorem prover for a logic based on a home-grown ...
  73. [73]
    ACL2 Version 8.6 - UT Computer Science
    ACL2 is a logic and programming language in which you can model computer systems, together with a tool to help you prove properties of those models.
  74. [74]
    [PDF] seL4: Formal Verification of an OS Kernel - acm sigops
    In this paper, we present the design of seL4, discuss the methodologies we used, and provide an overview of the approach used in the formal verification from ...
  75. [75]
    Fact Sheet - seL4
    Verification ... 0 violations of verified properties since proof completed in 2009; Continually maintained and updated; No code change without proof validation ...
  76. [76]
    [PDF] SeL4 Whitepaper [pdf]
    seL4's Verification Story. In 2009, seL4 became the world's first OS kernel with a machine-checked functional correctness proof at the source-code level.
  77. [77]
    [PDF] The Challenge of Software Verification - Computer Science Laboratory
    Issues: State space explosion, hybrid representations, model extraction from software, environment models, real-time systems. Challenges Checking functional ...
  78. [78]
    An Overview of Automated Theorem Proving - TPTP
    Automated Theorem Proving (ATP) deals with the development of computer programs that show that some statement (the conjecture) is a logical consequence of a set ...
  79. [79]
    [PDF] AUTOMATED REASONING SLIDES 9 SEMANTIC TABLEAUX ...
    Theorem Proving with Semantic Tableaux. Non-splitting (α rules): Splitting ... Semantic tableau methods provide an alternative to resolution for theorem ...
  80. [80]
    Vampire
    although now it can do much more! Its main focus is in proving theorems in first-order ...
  81. [81]
    Rocq Prover
    The Rocq Prover is an interactive theorem prover, or proof assistant. This means that it is designed to develop mathematical proofs, and especially to write ...
  82. [82]
    Isabelle
    Feb 3, 2025 · Isabelle is a generic proof assistant. It allows mathematical formulas to be expressed in a formal language and provides tools for proving those formulas in a ...
  83. [83]
    [PDF] Higher-Order Automated Theorem Provers - Freie Universität Berlin
    Theorem proving in these calculi works as follows: In order to prove that a (closed) conjecture formula c logically follows from a (possibly empty) set of ( ...
  84. [84]
  85. [85]
    [PDF] SEMANTIC MEMORY - DTIC
    semantic memory. Most important of these properties is the capability of the memory to be used inferentially; i.e., to allow for the answering of questions ...
  86. [86]
    OWL 2 Web Ontology Language Document Overview (Second Edition)
    Dec 11, 2012 · This document provides a non-normative high-level overview of the OWL 2 Web Ontology Language and serves as a roadmap for the documents that define and ...
  87. [87]
    [PDF] STRIPS: A New Approach to the Application of Theorem Proving to ...
    1. Nilsson, N. J. Problem-Solving Methods in Artificial Intelligence. McGraw-Hill Book Company, New York, New York, 1971.
  88. [88]
    [PDF] PDDL | The Planning Domain Definition Language
    This manual describes the syntax of PDDL, the Planning Domain Definition Language, the problem-specification language for the AIPS-98 planning competition. The ...
  89. [89]
    [PDF] A Logic for Default Reasoning - John Horty
    In this paper we propose a logic for default reasoning. We then specialize our treatment to a very large class of commonly occurring defaults. For this ...
  90. [90]
    [PDF] CIRCUMSCRIPTION - A FORM OF NON-MONOTONIC REASONING
    Feb 1, 1980 · February 1980. Computer Science Department. Report No. STAN-CS-80-788. CIRCUMSCRIPTION - A FORM OF NON-MONOTONIC REASONING. John McCarthy.
  91. [91]
    [PDF] Fuzzy Sets* - LA ZADEH - Annuaire du LIPhy
    We begin with several definitions involving fuzzy sets which are obvious extensions of the corresponding definitions for ordinary sets. A fuzzy set is empty if ...
  92. [92]
    [PDF] Markov Logic Networks - CSE Home
    Abstract. We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network ...
  93. [93]
    [PDF] Computer-Based Consultations in Clinical Therapeutics - Stacks
    However, a brief excerpt of a sample session is included here to illustrate the interaction between the clinician and the MYCIN System. An average consultation ...
  94. [94]
    A relational model of data for large shared data banks
    A relational model of data for large shared data banks. Author: E. F. Codd ... Published: 01 June 1970.
  95. [95]
    [PDF] Relational Calculus - Cornell: Computer Science
    ❖ Relational Completeness: Query language (e.g., SQL) can express every query that is expressible in relational algebra/calculus.
  96. [96]
    RDF Schema 1.1
    RDFS Entailment and Reasoning
  97. [97]
    [PDF] Composite Events for Active Databases: Semantics, Contexts and ...
    This paper focuses on the event component of the ECA (event-condition-action) rules used in active databases. An ECA rule consists, primarily, of three ...
  98. [98]
    A logical approach to graph databases - ScienceDirect
    In this sense, in this paper we first present a core query language, similar to Cypher or G-Core. Then, we define a simple logic whose formulas are ...