Classical logic
Classical logic is a formal system of deductive reasoning that forms the foundation of much of Western philosophy, mathematics, and science, characterized by its bivalent semantics—where every proposition is either true or false—and adherence to principles such as the law of excluded middle (every statement is either true or its negation is true) and the law of non-contradiction (no statement can be both true and false).[1][2] It employs propositional connectives like negation (¬), conjunction (∧), disjunction (∨), implication (→), and equivalence (↔), along with quantifiers (∀ for universal, ∃ for existential) in its predicate form, to construct valid arguments through rules such as modus ponens (from A and A → B, infer B).[1][3] This system assumes that truth is absolute and independent of verification, enabling proofs by contradiction and the principle of explosion (from a contradiction, anything follows).[4]

The origins of classical logic trace back to ancient Greece, particularly Aristotle (384–322 BCE), who developed syllogistic logic as a method for categorical reasoning using propositions of the form "All S are P," "No S are P," "Some S are P," and "Some S are not P," organized into valid syllogisms across four figures with 15 valid moods.[3] Aristotle's work, detailed in his Organon, emphasized deductive necessity and the square of opposition relating these proposition types as contraries, contradictories, or subcontraries, laying the groundwork for formal deduction despite blending material and logical elements.[3] Subsequent developments by the Stoics, including Chrysippus (c. 279–206 BCE), extended this to propositional logic with truth-functional connectives and a two-valued system, introducing rules like modus tollens (from A → B and ¬B, infer ¬A) and hypothetical syllogisms (transitivity of implication).[3][1]

In the modern era, classical logic was rigorously formalized in the 19th century by Gottlob Frege (1848–1925), who introduced predicate logic with quantifiers to handle relations and variables, enabling the expression of complex mathematical statements beyond syllogisms.[2] This evolution distinguished classical logic from alternatives like intuitionistic or constructive logics, which reject the law of excluded middle for unverified propositions and prioritize constructive proofs.[4]

Key semantic tools, such as truth tables for evaluating propositional formulas under all possible truth assignments, confirm validity: a formula is a tautology if true in every interpretation, underpinning applications in computer science, artificial intelligence, and automated theorem proving.[2] Despite critiques for its non-constructive nature—exemplified in 19th-century debates between Leopold Kronecker (constructivist) and David Hilbert (classical defender)—classical logic remains the default framework for rigorous argumentation due to its completeness and decidability in propositional form.[4]

Overview and Fundamentals
Definition and Scope
Classical logic is the standard formal system in Western philosophy and mathematics that assumes bivalence, whereby every proposition has exactly one of two truth values: true or false.[5] It adheres to fundamental principles such as the law of excluded middle, which states that for any proposition A, either A or its negation \neg A is true (A \lor \neg A), and the law of non-contradiction, which asserts that no proposition can be both true and false (\neg (A \land \neg A)).[6] These axioms ensure a sharp distinction between truth and falsity, forming the bedrock of classical reasoning.[5]

The scope of classical logic primarily encompasses propositional logic, which deals with the truth-functional combinations of atomic propositions using connectives like negation (\neg), conjunction (\land), disjunction (\lor), and implication (\to), and first-order predicate logic, which extends this framework to include quantifiers (\forall, \exists) over individuals in a domain, along with predicates and relations.[7] It excludes higher-order logics, which quantify over predicates or properties, as well as non-standard variants like modal or intuitionistic logics that deviate from bivalence or classical principles.[8] This delineation positions classical logic as the foundational paradigm for deductive inference in most mathematical and philosophical contexts.[5]

Central to classical logic are core assumptions that govern its inference rules and semantics, including material implication, where A \to B is equivalent to \neg A \lor B, treating implication as a truth-functional operation rather than a strict conditional.[6] Double negation elimination allows \neg \neg A to imply A, reinforcing the equivalence of a proposition and its double negation in bivalent systems.[6] Additionally, the explosion principle, or ex falso quodlibet, stipulates that from a contradiction (A \land \neg A), any proposition follows, underscoring the system's commitment to consistency as paramount.[9]

Classical logic emerged as the dominant framework in Western philosophy following Aristotle's foundational work on syllogistic reasoning, evolving through medieval scholasticism and into modern formalizations that solidified its status as the orthodox system for logical analysis.[8]
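These core assumptions are easy to check mechanically. The following minimal sketch (the function name is an illustrative assumption, not from the cited sources) verifies material implication and double negation elimination by brute force over the classical truth values:

```python
# A minimal sketch: verify material implication and double negation
# elimination over all classical truth assignments.
from itertools import product

def implies(a, b):
    """Material implication: A -> B defined as (not A) or B."""
    return (not a) or b

for a, b in product([True, False], repeat=2):
    # A -> B is false exactly when A is true and B is false.
    assert implies(a, b) == (not (a and not b))

for a in [True, False]:
    # Double negation elimination: not not A has the same value as A.
    assert (not (not a)) == a
```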
Key Characteristics

Classical logic is defined by its adherence to truth-functionality, a property whereby the truth value of any compound proposition is entirely determined by the truth values of its constituent atomic propositions. This means that logical connectives such as negation, conjunction, and disjunction operate as truth functions, mapping combinations of truth values (true or false) to a single output truth value for the entire formula. For instance, the truth of a conjunction A \land B holds if and only if both A and B are true, independent of any external context or meaning beyond these values.[10]

Central to classical logic is the principle of bivalence, which asserts that every proposition is either true or false, with no intermediate or indeterminate truth values possible. This binary assignment underpins the semantics of the system, ensuring that interpretations assign exactly one truth value to each formula, thereby supporting key theorems like the law of excluded middle (A \lor \neg A) and double negation elimination. Bivalence distinguishes classical logic from multivalued or fuzzy logics, where propositions might hold degrees of truth.[5]

Another defining feature is monotonicity, the property that permits the addition of new premises to a set without invalidating previously derived conclusions. Formally, if a set of premises \Gamma entails a formula \phi, then any superset \Gamma' \supseteq \Gamma also entails \phi. This ensures that the deductive power of classical logic accumulates reliably as knowledge expands, reflecting its suitability for rigorous, non-revisable inference but contrasting with defeasible reasoning in non-monotonic systems.[11]

Classical logic also exhibits distinct decidability properties depending on its scope. Propositional classical logic is decidable, as truth-table methods can systematically evaluate all possible assignments of truth values to atomic propositions (up to 2^n for n atoms), determining validity or satisfiability in finite steps. In contrast, first-order predicate logic is only semi-decidable: while valid formulas can be proven within the system (by the completeness theorem), there is no general algorithm to determine invalidity for all cases, a result established by Church, Gödel, and Turing in the 1930s.[10][5]

Finally, classical logic lacks paraconsistency, embracing instead the principle of ex falso quodlibet (from falsehood, anything follows), which states that a contradiction in the premises entails every possible statement. This explosive inference rule implies that inconsistent theories become trivial, as any proposition \psi derives from premises including both \theta and \neg \theta. Paraconsistent logics reject this to tolerate inconsistencies without collapse, but classical systems prioritize consistency as a foundational requirement for non-trivial deduction.[5][12]
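The decidability of the propositional fragment can be demonstrated directly. The sketch below (an illustration under assumed names, not drawn from the cited sources) decides tautologyhood by enumerating all 2^n assignments:

```python
# A minimal sketch: the truth-table decision procedure for propositional
# classical logic, enumerating all 2**n assignments to n atoms.
from itertools import product

def is_tautology(formula, n_atoms):
    """True iff `formula` (a boolean function of n_atoms arguments) is
    true under every one of the 2**n_atoms truth assignments."""
    return all(formula(*vals)
               for vals in product([True, False], repeat=n_atoms))

assert is_tautology(lambda a: a or not a, 1)          # excluded middle
assert is_tautology(lambda a: not (a and not a), 1)   # non-contradiction
assert not is_tautology(lambda a, b: a or b, 2)       # a contingent formula
```

No analogous finite enumeration exists for first-order validity, since interpretations there range over arbitrary, possibly infinite, domains.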
Historical Development
Ancient Foundations
The roots of classical logic can be traced to pre-Socratic philosophers in ancient Greece, particularly the Eleatic school, where early concerns about contradiction and consistency emerged. Parmenides of Elea (c. 515–450 BCE) emphasized the principle of non-contradiction, arguing that reality is unchanging and one, rejecting the possibility of opposites or change as illusory, which laid groundwork for logical coherence in philosophical inquiry.[13] His student Zeno of Elea (c. 490–430 BCE) further highlighted logical inconsistencies through paradoxes, such as the Dichotomy and Achilles and the Tortoise, which demonstrated contradictions in common assumptions about motion and plurality, thereby challenging opponents to resolve apparent logical flaws in their views.[13]

Aristotle (384–322 BCE) systematized these ideas into the foundational framework of classical logic in his collection of works known as the Organon. In the Categories, he outlined ten fundamental classifications of being, providing a basis for analyzing predicates and substances in logical arguments.[14] The Prior Analytics developed syllogistic logic, a deductive system where conclusions follow necessarily from two premises through valid moods, such as the Barbara syllogism ("All men are mortal; Socrates is a man; therefore, Socrates is mortal").[15] Complementing this, the Posterior Analytics addressed scientific demonstration, explaining how syllogisms could yield certain knowledge from first principles in demonstrative sciences.[16]

Central to Aristotle's system were the laws of thought, articulated primarily in Metaphysics Book Gamma: the law of identity (A is A), the law of non-contradiction (it is impossible for the same thing to belong and not belong to the same thing in the same respect), and the law of excluded middle (of two contradictories, one must be true and the other false).[17]

Following Aristotle, Hellenistic philosophers, especially the Stoics, extended logical foundations toward propositional logic. Chrysippus of Soli (c. 279–206 BCE), the third head of the Stoic school, advanced a theory of axiōmata (assertibles) as the basic units of logic, focusing on their truth values rather than terms.[18] Stoic logic incorporated connectives such as conjunction ("and," true only if both parts are true) and implication ("if...then," true unless the antecedent is true and the consequent false), forming the basis for arguments like the five indemonstrables, including modus ponens.[18] These developments complemented Aristotelian syllogistics by emphasizing compound propositions and their inferential relations, influencing later logical traditions.[18]

Medieval to Modern Evolution
During the early medieval period, classical logic was preserved primarily through the efforts of Boethius, who translated and commented on several works from Aristotle's Organon, including the Categories, On Interpretation, and Prior Analytics, thereby transmitting key elements of Aristotelian logic to Latin Europe.[19] These translations formed the foundation of logical study in monastic and cathedral schools until the 12th century.

In the Islamic world, Avicenna integrated Aristotelian logic into his comprehensive philosophical system, extending it with innovations in modal logic while emphasizing its role in scientific demonstration, as detailed in his extensive commentaries on the Organon. Similarly, Averroes provided detailed commentaries on Aristotle's logical corpus, defending its purity against Neoplatonic influences and influencing both Islamic and later Latin scholastic traditions.

Scholastic logic flourished in medieval universities such as Paris and Oxford from the 12th century onward, where Aristotle's works, bolstered by newly translated Arabic commentaries, became central to the arts curriculum, with logic at the core of the trivium, leading to developments like supposition theory for analyzing terms in context.[20] However, following the initial preservation by Boethius, the study of logic experienced a relative decline in Western Europe during the early Middle Ages, largely due to the dominance of theological priorities and the loss of Greek texts, limiting engagement to abbreviated commentaries until the 12th-century renaissance of translations.[20]

In the Renaissance and early modern periods, Petrus Ramus critiqued the complexity of Aristotelian scholastic logic, advocating a simplified, topic-based system focused on natural method and rhetorical utility, which gained popularity in Protestant universities but was later seen as diminishing deductive rigor. Gottfried Wilhelm Leibniz envisioned a characteristica universalis, a universal symbolic language for logic that would enable mechanical resolution of disputes through calculation, laying conceptual groundwork for later formal systems despite remaining unrealized in his lifetime.

The 19th century marked a revival through algebraic approaches, with George Boole developing the first explicit algebraic treatment of logic in his 1847 work The Mathematical Analysis of Logic and 1854's An Investigation of the Laws of Thought, representing propositions as binary variables and operations, thus bridging logic and mathematics.[21] Augustus De Morgan contributed foundational relations in this tradition, formulating De Morgan's laws—which state that the negation of a conjunction is the disjunction of negations and vice versa—essential for manipulating logical expressions in Boolean algebra.[21]

In the 20th century, classical logic underwent rigorous formalization, beginning with Gottlob Frege's 1879 Begriffsschrift, which introduced modern predicate logic notation and quantifiers, providing a precise symbolic framework for expressing mathematical inferences.
Alfred North Whitehead and Bertrand Russell advanced this in their multi-volume Principia Mathematica (1910–1913), attempting to derive all mathematics from logical axioms using a ramified type theory to avoid paradoxes, though it highlighted limitations in reducing arithmetic to pure logic.[22] Kurt Gödel's 1930 completeness theorem established that every valid first-order logical formula is provable within the system, confirming the adequacy of classical semantics for capturing deductive validity and solidifying the foundations of modern mathematical logic.

This progression from medieval preservation amid theological constraints to 19th- and 20th-century symbolic innovations revived classical logic as a formal discipline, transforming it from a rhetorical tool into a cornerstone of mathematics and philosophy.[21]

Logical Connectives and Syntax
Propositional Connectives
In classical propositional logic, atomic propositions form the foundational elements of the language. These are simple, indivisible units, typically denoted by lowercase letters such as p, q, or r, each of which can be assigned a truth value of true (T) or false (F) independently of others.[10]

The connectives, or logical operators, combine atomic propositions to build compound expressions. Classical propositional logic employs five primary connectives: negation (\neg), conjunction (\land), disjunction (\lor), implication (\to), and biconditional (\leftrightarrow). Their semantics are precisely defined via truth tables, which enumerate all possible truth-value assignments to the atomic components and determine the resulting truth value of the compound formula. For negation, a unary connective, \neg p is true if p is false, and false otherwise:

\begin{array}{c|c}
p & \neg p \\
\hline
\mathrm{T} & \mathrm{F} \\
\mathrm{F} & \mathrm{T} \\
\end{array}

Conjunction p \land q is true only when both p and q are true; disjunction p \lor q is true if at least one is true (inclusive or); implication p \to q is false solely when p is true and q is false (material conditional); and biconditional p \leftrightarrow q is true when both have the same truth value. The full truth tables for the binary connectives are as follows:

\begin{array}{c|c|c|c|c|c}
p & q & p \land q & p \lor q & p \to q & p \leftrightarrow q \\
\hline
\mathrm{T} & \mathrm{T} & \mathrm{T} & \mathrm{T} & \mathrm{T} & \mathrm{T} \\
\mathrm{T} & \mathrm{F} & \mathrm{F} & \mathrm{T} & \mathrm{F} & \mathrm{F} \\
\mathrm{F} & \mathrm{T} & \mathrm{F} & \mathrm{T} & \mathrm{T} & \mathrm{F} \\
\mathrm{F} & \mathrm{F} & \mathrm{F} & \mathrm{F} & \mathrm{T} & \mathrm{T} \\
\end{array}

These definitions ensure that the connectives capture intuitive notions of "not," "and," "or," "if...then," and "if and only if," respectively.[10][23]

Well-formed formulas (wffs), also known as sentences, are constructed recursively from atomic propositions using the connectives, with parentheses to ensure unambiguous grouping. The recursive definition proceeds as follows: (1) every atomic proposition is a wff; (2) if A is a wff, then \neg A is a wff; (3) if A and B are wffs, then (A \land B), (A \lor B), (A \to B), and (A \leftrightarrow B) are wffs; (4) nothing else qualifies as a wff. For instance, (p \land q) \to r and \neg (p \lor \neg q) are wffs, while incomplete expressions like p \land are not. This syntax guarantees that every wff has a unique parse tree, facilitating systematic evaluation and proof.[23][10]

Several equivalence laws hold among wffs, preserving truth values across all interpretations and enabling formula transformations. De Morgan's laws establish that \neg (p \land q) \leftrightarrow \neg p \lor \neg q and \neg (p \lor q) \leftrightarrow \neg p \land \neg q, allowing negation to be distributed over conjunction and disjunction. Distributive laws mirror arithmetic, with p \land (q \lor r) \leftrightarrow (p \land q) \lor (p \land r) and p \lor (q \land r) \leftrightarrow (p \lor q) \land (p \lor r). These and related tautological equivalences, such as the law of double negation \neg \neg p \leftrightarrow p, form the basis for algebraic manipulations in propositional logic.[10][23]

An axiomatic basis provides a minimal set of starting formulas and inference rules sufficient to derive all valid wffs, ensuring the system's completeness.
A standard Hilbert-style axiomatization for classical propositional logic uses three axiom schemas—all instances of which are wffs—and the modus ponens rule (from A and A \to B, infer B), with a sample derivation following the list:

- A \to (B \to A)
- (A \to (B \to C)) \to ((A \to B) \to (A \to C))
- (\neg A \to \neg B) \to (B \to A)
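To illustrate how these schemas combine under modus ponens, here is the standard five-step derivation of the theorem A \to A (a common textbook example, not drawn from the cited sources):

1. A \to ((A \to A) \to A), an instance of the first schema with B := A \to A.
2. (A \to ((A \to A) \to A)) \to ((A \to (A \to A)) \to (A \to A)), an instance of the second schema with B := A \to A and C := A.
3. (A \to (A \to A)) \to (A \to A), by modus ponens from 1 and 2.
4. A \to (A \to A), an instance of the first schema with B := A.
5. A \to A, by modus ponens from 4 and 3.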
Predicate Extensions
Predicate extensions in classical logic extend the syntax of propositional logic by incorporating terms, predicates, and quantifiers, enabling the expression of statements about objects and their relations in first-order predicate logic.[5] This augmentation allows for greater expressive power, permitting generalizations over domains of individuals rather than merely combining truth values.[5]

Terms form the basic building blocks for referring to objects within the language. They consist of individual constants, such as a or b, which denote specific objects; individual variables, such as x or y, which act as placeholders; and function symbols applied to terms, such as f(x), where f is a function of appropriate arity.[5] Constants and variables are the simplest terms, while functions compose more complex ones; a term is closed if it contains no variables and open otherwise.[5]

Predicates extend atomic formulas beyond propositional atoms by relating terms to properties or relations. A predicate symbol P of arity n combines with n terms to form an atomic formula, such as the unary P(x) for a property of x, or the binary R(x, y) for a relation between x and y.[5] The identity predicate = is a standard binary predicate, with t_1 = t_2 asserting equality between terms t_1 and t_2.[5] Zero-arity predicates correspond to propositional atoms, bridging to the propositional base.[5]

Quantifiers introduce binding over variables, with the universal quantifier \forall and existential quantifier \exists scoping over formulas. The universal quantifier \forall x \, \phi binds variable x in formula \phi, expressing that \phi holds for all values of x, while \exists x \, \phi asserts existence of some x making \phi true.[5] The scope of a quantifier is the subformula it governs, and binding occurs only within that scope, preventing conflicts with external variables.[5]

Well-formed formulas (wffs) in predicate logic are defined recursively, starting from atomic formulas and incorporating propositional connectives with quantified subformulas. For instance, \forall x (P(x) \to Q(x)) is a wff where the implication combines predicates under universal quantification.[5] Every first-order formula can be transformed into prenex normal form, where all quantifiers prefix a quantifier-free matrix, such as \forall x \exists y \, R(x, y).[24]

Variables in formulas are either free or bound: a variable is free if it appears without being in the scope of a quantifier binding it, and bound if captured by a \forall or \exists.[5] For example, in \forall x (A(x, y) \to B(x)), x is bound while y is free.[5] Substitution replaces free occurrences of a variable v in a formula \theta with a term t, denoted \theta(v \mid t), but must avoid capturing free variables in t by bound ones in \theta.[5] To handle variable renaming without altering meaning, \alpha-conversion renames bound variables consistently, such as transforming \forall x P(x) to \forall y P(y) by replacing all bound x with y, provided y is not free in the original scope.[5] This preserves logical equivalence and facilitates safe substitutions.[5]
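The bookkeeping of free variables, substitution, and \alpha-conversion can be made concrete in code. The following minimal sketch (the tuple encoding and every function name are assumptions made for illustration, not anything from the cited sources) computes free variables and performs capture-avoiding substitution for a fragment where terms are single variables:

```python
# A minimal sketch of free variables and capture-avoiding substitution.
# Encoding (an assumption for illustration): terms are variable names
# (strings); formulas are nested tuples such as ("pred", "A", "x", "y"),
# ("not", f), ("imp", f, g), ("forall", "x", f), ("exists", "x", f).
import itertools

_fresh = (f"z{i}" for i in itertools.count())  # supply of fresh variable names

def free_vars(phi):
    """Return the set of variables occurring free in phi."""
    tag = phi[0]
    if tag == "pred":
        return set(phi[2:])                    # argument variables are free
    if tag == "not":
        return free_vars(phi[1])
    if tag == "imp":
        return free_vars(phi[1]) | free_vars(phi[2])
    if tag in ("forall", "exists"):
        return free_vars(phi[2]) - {phi[1]}    # the quantifier binds phi[1]
    raise ValueError(f"unknown connective: {tag}")

def substitute(phi, v, t):
    """phi(v | t): replace free occurrences of variable v by term t,
    alpha-renaming a bound variable whenever it would capture t."""
    tag = phi[0]
    if tag == "pred":
        return (tag, phi[1]) + tuple(t if x == v else x for x in phi[2:])
    if tag == "not":
        return (tag, substitute(phi[1], v, t))
    if tag == "imp":
        return (tag, substitute(phi[1], v, t), substitute(phi[2], v, t))
    if tag in ("forall", "exists"):
        x, body = phi[1], phi[2]
        if x == v:         # v is bound here, so nothing below is free in v
            return phi
        if x == t:         # t is a single variable, so capture = name clash;
            y = next(_fresh)                   # alpha-convert x to a fresh y
            body, x = substitute(body, x, y), y
        return (tag, x, substitute(body, v, t))
    raise ValueError(f"unknown connective: {tag}")

# In forall x (A(x, y) -> B(x)), x is bound while y is free:
phi = ("forall", "x", ("imp", ("pred", "A", "x", "y"), ("pred", "B", "x")))
assert free_vars(phi) == {"y"}
# Substituting the term x for y triggers alpha-conversion of the bound x:
print(substitute(phi, "y", "x"))
# -> ('forall', 'z0', ('imp', ('pred', 'A', 'z0', 'x'), ('pred', 'B', 'z0')))
```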
Semantics
Truth-Value Semantics
In truth-value semantics for classical propositional logic, an interpretation consists of an assignment of truth values—true (T) or false (F)—to each atomic proposition in the language.[25] These assignments serve as the basic models for evaluating the truth of compound formulas built from atomic propositions using logical connectives such as negation (¬), conjunction (∧), disjunction (∨), implication (→), and equivalence (↔).[25] The semantics is bivalent, meaning every proposition receives exactly one of the two truth values under any given interpretation, reflecting the core principle of classical logic that propositions are either true or false without intermediate values.[25]

The valuation function extends these truth assignments recursively to all compound formulas, determining their truth values based on the semantics of the connectives. For instance, the truth value of a conjunction v(φ ∧ ψ) is T if and only if both v(φ) = T and v(ψ) = T; otherwise, it is F. This can be formalized numerically by assigning T = 1 and F = 0, where v(φ ∧ ψ) = min(v(φ), v(ψ)), v(φ ∨ ψ) = max(v(φ), v(ψ)), v(¬φ) = 1 - v(φ), v(φ → ψ) = 1 if v(φ) ≤ v(ψ) (i.e., F only when φ is T and ψ is F), and v(φ ↔ ψ) = 1 if v(φ) = v(ψ).[25] These recursive definitions ensure that the truth value of any formula depends solely on the truth values of its atomic components, making the semantics fully truth-functional.[25]

A formula is a tautology if it is true under every possible interpretation, meaning its valuation is T regardless of the truth assignments to its atomic propositions. The law of excluded middle, p ∨ ¬p, exemplifies this: for any atomic p, if p is T then ¬p is F and the disjunction is T; if p is F then ¬p is T and the disjunction is again T.[25] Tautologies capture the validities of classical propositional logic and form the semantic basis for theorems in proof systems.

Logical consequence is defined semantically such that a set of formulas Γ entails a formula φ (denoted Γ ⊨ φ) if every interpretation that satisfies all formulas in Γ also satisfies φ. In other words, there is no interpretation where all members of Γ are T but φ is F.[25] A formula φ is valid (a tautology) if the empty set entails it, i.e., ∅ ⊨ φ. Models of Γ are precisely the interpretations satisfying Γ.

The compactness theorem for propositional logic states that a set of formulas Γ is satisfiable if and only if every finite subset of Γ is satisfiable.[26] This means that if Γ has no satisfying interpretation, then some finite portion of Γ already lacks one, allowing satisfiability to be checked via finite means despite potentially infinite Γ. A standard proof extends a finitely satisfiable set through a chain of consistent extensions to a maximal finitely satisfiable set, from which a satisfying truth assignment can be read off.[26]

Truth tables provide a concrete method to compute valuations exhaustively for formulas with finitely many atomic propositions. Consider the biconditional (p → q) ↔ (¬p ∨ q), which equates implication to the material conditional. The full truth table is:

| p | q | ¬p | p → q | ¬p ∨ q | (p → q) ↔ (¬p ∨ q) |
|---|---|---|---|---|---|
| T | T | F | T | T | T |
| T | F | F | F | F | T |
| F | T | T | T | T | T |
| F | F | T | T | T | T |
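The numeric valuation clauses above translate directly into code. This minimal sketch (the function names are illustrative assumptions) encodes T = 1 and F = 0 and reproduces the table, confirming that the biconditional is a tautology:

```python
# A minimal sketch of the numeric valuation: T = 1, F = 0, with each
# connective implemented by the arithmetic clause given in the text.
def v_not(a): return 1 - a
def v_and(a, b): return min(a, b)
def v_or(a, b): return max(a, b)
def v_imp(a, b): return 1 if a <= b else 0   # F only when a = 1 and b = 0
def v_iff(a, b): return 1 if a == b else 0

# (p -> q) <-> (~p v q) evaluates to 1 on every row of the table.
for p in (1, 0):
    for q in (1, 0):
        assert v_iff(v_imp(p, q), v_or(v_not(p), q)) == 1
```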
Model-Theoretic Semantics
Model-theoretic semantics interprets formulas of classical predicate logic (first-order logic) using mathematical structures, extending the truth-value assignments of propositional logic to languages with quantifiers, predicates, functions, and constants by considering varying domains and interpretations. This approach, formalized by Alfred Tarski, defines truth and satisfaction relative to a structure, enabling the study of logical consequence as preservation across all models.

A structure \mathcal{M} for a first-order language L is a pair (D, I), where D is a non-empty set known as the domain of discourse, and I is an interpretation function. The function I maps each constant symbol c \in L to an element I(c) \in D, each n-ary function symbol f \in L to a function I(f): D^n \to D, and each n-ary predicate symbol P \in L to a relation I(P) \subseteq D^n. For example, if P is a unary predicate, then I(P) \subseteq D. This setup allows formulas to be evaluated relative to specific interpretations, contrasting with the fixed truth tables used for propositional connectives.

Satisfaction in a structure \mathcal{M} = (D, I) is defined recursively for a formula \phi under a variable assignment \nu: \text{Vars} \to D, denoted \mathcal{M} \models \phi[\nu]. For atomic formulas, such as P(t_1, \dots, t_n) where the t_i are terms, \mathcal{M} \models P(t_1, \dots, t_n)[\nu] holds if (d_1, \dots, d_n) \in I(P), with each d_i being the denotation of t_i under \nu and I. The definition extends to connectives: \mathcal{M} \models \neg \phi[\nu] if not \mathcal{M} \models \phi[\nu]; \mathcal{M} \models \phi \land \psi[\nu] if both \mathcal{M} \models \phi[\nu] and \mathcal{M} \models \psi[\nu]; and similarly for \lor, \to, and \leftrightarrow. For quantifiers, \mathcal{M} \models \forall x \, \phi[\nu] if for every d \in D, \mathcal{M} \models \phi[\nu[x/d]], where \nu[x/d] is \nu modified to assign d to x; dually, \mathcal{M} \models \exists x \, \phi[\nu] if there exists d \in D such that \mathcal{M} \models \phi[\nu[x/d]]. A sentence (closed formula) \phi is true in \mathcal{M} if \mathcal{M} \models \phi[\nu] for any \nu, since it lacks free variables.

Herbrand structures provide a specific class of models useful for analyzing satisfiability, particularly in connection with skolemization.[27] The Herbrand universe for a language with function symbols is the set of all ground terms (terms without variables) generated from the constants and functions.[27] A Herbrand structure takes this set of ground terms as its domain, with each term denoting itself, so that only the interpretations of the predicate symbols remain to be chosen; combined with skolemization, in which existential quantifiers are replaced by Skolem functions in a way that preserves satisfiability, this reduces the satisfiability of a first-order formula to the existence of a Herbrand model.[27]

Two formulas \phi and \psi are logically equivalent if they have exactly the same models, meaning for every structure \mathcal{M} and assignment \nu, \mathcal{M} \models \phi[\nu] if and only if \mathcal{M} \models \psi[\nu]. The Löwenheim-Skolem theorem establishes that if a first-order theory (a set of sentences) has an infinite model, then it has models of every infinite cardinality at least as large as the cardinality of the language; in particular, every countable theory with an infinite model has a countable model.

For example, the sentence \exists x \forall y \, R(x,y) is satisfiable: consider a structure with domain D = \{a, b\}, where I(R) = \{(a,a), (a,b)\}; assigning x = a satisfies the universal quantifier over all y \in D.
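This satisfiability claim can be checked mechanically for the finite structure. The sketch below (variable names are illustrative assumptions, not from the cited sources) evaluates \exists x \forall y \, R(x,y) by quantifying over the two-element domain:

```python
# A minimal sketch: evaluate the sentence  exists x forall y R(x, y)
# in the finite structure with domain D = {a, b} and I(R) as in the text.
D = {"a", "b"}                        # domain of discourse
R = {("a", "a"), ("a", "b")}          # I(R), a binary relation on D

def sat_exists_forall(domain, relation):
    """True iff some x in the domain is related to every y in the domain."""
    return any(all((x, y) in relation for y in domain) for x in domain)

assert sat_exists_forall(D, R)        # witnessed by x = a, as in the text
```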
Proof Theory
Deductive Systems
Deductive systems in classical logic provide formal methods for constructing proofs from premises using inference rules that mirror natural reasoning patterns. These systems ensure that derivations are valid relative to the syntax of the language, capturing the structural properties of logical connectives without direct reference to semantics. Two primary systems are natural deduction and sequent calculus, both developed by Gerhard Gentzen in the 1930s to facilitate the analysis of proofs and demonstrate key metatheoretic results.[28][29]

Natural deduction emphasizes the introduction and elimination of logical connectives through rules that correspond to their intuitive meanings. For conjunction (∧), the introduction rule (∧I) allows inferring φ ∧ ψ from premises φ and ψ, while the elimination rules (∧E) permit deriving φ (or ψ) from φ ∧ ψ. For implication (→), the introduction rule (→I) discharges a hypothesis φ to conclude φ → ψ if ψ follows from φ, and the elimination rule (→E), or modus ponens, derives ψ from φ and φ → ψ. Negation (¬) follows a similar pattern: ¬φ is introduced (¬I) by assuming φ and deriving a contradiction, and eliminated (¬E) by deriving an arbitrary formula from φ and ¬φ. These rules apply to propositional formulas, with extensions to predicates in first-order logic.[28]

A simple derivation in natural deduction illustrates these rules, such as proving p → p from no premises:

1. Assume p (hypothesis).
2. From 1, infer p (reiteration of the assumption).
3. Discharge the assumption to conclude p → p (→I).
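These rule applications can be modeled as small functions that build conclusions from premises. The following minimal sketch (the tuple encoding is an assumption made for illustration, not from the cited sources) mirrors the connective rules and the three-step derivation above:

```python
# A minimal sketch of natural-deduction rules as formula-building functions.
# Formulas are encoded (by assumption) as tuples: ("imp", A, B), ("and", A, B).

def imp_elim(phi, imp):
    """Modus ponens (->E): from phi and phi -> psi, conclude psi."""
    if imp[0] != "imp" or imp[1] != phi:
        raise ValueError("premises do not fit ->E")
    return imp[2]

def and_intro(phi, psi):
    """(^I): from phi and psi, conclude phi ^ psi."""
    return ("and", phi, psi)

def and_elim(conj, side):
    """(^E): from phi ^ psi, conclude phi (side=1) or psi (side=2)."""
    if conj[0] != "and":
        raise ValueError("premise is not a conjunction")
    return conj[side]

def imp_intro(hypothesis, conclusion):
    """(->I): discharge `hypothesis` used in deriving `conclusion`,
    concluding hypothesis -> conclusion."""
    return ("imp", hypothesis, conclusion)

# The three-step derivation of p -> p from the text:
p = "p"
step_1 = p                        # 1. assume p
step_2 = step_1                   # 2. reiterate p from step 1
theorem = imp_intro(p, step_2)    # 3. discharge the assumption (->I)
assert theorem == ("imp", "p", "p")

# Modus ponens in action: from p and p -> q, infer q.
assert imp_elim("p", ("imp", "p", "q")) == "q"
```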