Mathematical logic
Mathematical logic is the branch of mathematics devoted to the study of formal deductive reasoning and its foundations, employing mathematical methods to analyze the structure, methods, and validity of mathematical proofs.[1] It developed as a rigorous framework for formalizing mathematical language and inference, enabling the precise examination of concepts such as truth, provability, and definability within mathematical systems.[2]
The field encompasses several core branches that address distinct aspects of logical foundations. Model theory investigates the relationship between formal languages and their interpretations, focusing on structures that satisfy logical formulas and on properties such as completeness and categoricity.[3] Proof theory examines the syntactic structure of proofs, studying concepts such as consistency, normalization, and the lengths of derivations in formal systems.[3] Recursion theory, also known as computability theory, explores the limits of algorithmic computation and the hierarchy of unsolvable problems, and is foundational to theoretical computer science.[3] Set theory provides the axiomatic basis for mathematics, analyzing infinite sets, cardinalities, and forcing techniques to resolve questions such as the continuum hypothesis.[3]
Mathematical logic emerged in the late 19th century amid efforts to rigorize the foundations of mathematics, spurred by paradoxes in early set theory and the need for a secure basis for analysis and geometry.[4] Pioneering works include Gottlob Frege's Begriffsschrift (1879), which introduced modern predicate logic, and Giuseppe Peano's axiomatization of arithmetic (1889), which formalized the natural numbers. The early 20th century saw Alfred North Whitehead and Bertrand Russell's Principia Mathematica (1910–1913), an attempt to derive all of mathematics from logical axioms that was complicated by Russell's paradox (1901).[4] Landmark results in the 1930s transformed the field: Kurt Gödel's incompleteness theorems (1931) demonstrated that any sufficiently powerful axiomatic system is either inconsistent or incomplete, undermining David Hilbert's program for a finitary consistency proof of mathematics.[4] Alonzo Church and Alan Turing independently defined computability in 1936, laying the groundwork for recursion theory and establishing the undecidability of the Entscheidungsproblem.[4] These developments not only reshaped foundational mathematics but also influenced philosophy, linguistics, and computer science by clarifying the boundaries of formal systems and automated reasoning.
Scope and Subfields
Definition and Objectives
Mathematical logic is the study of mathematical reasoning through the development of abstract models and formal deductive systems that rigorize concepts of inference, truth, and provability in mathematics.[2] These systems aim to capture the essence of mathematical proof and argumentation in a precise, symbolic manner, enabling the analysis of the logical structures underlying mathematical theories.[5] The core objectives of mathematical logic include establishing solid foundations for mathematics by clarifying the nature of mathematical truth and proof, investigating the inherent limitations of formal axiomatic systems (such as their completeness and consistency), and elucidating the internal structure and interconnections of mathematical theories through tools like model theory and proof theory.[6] By formalizing reasoning processes, it seeks to resolve foundational questions about what can be proven within given systems and to provide meta-theoretical insights into the power and boundaries of mathematics itself.[7] In contrast to philosophical logic, which engages broader epistemological and metaphysical debates about reasoning and knowledge, mathematical logic prioritizes technical, mathematical analyses of formal constructs, treating logic as a branch of mathematics rather than as a philosophical inquiry.
Key concepts central to the discipline include deductive systems, comprising axioms and inference rules that generate theorems from premises; formal languages, which define the syntax and vocabulary for constructing mathematical expressions; and the distinction between validity, the semantic property of an argument holding under every possible interpretation, and soundness, the property of a deductive system guaranteeing that every derivable conclusion is indeed valid.[8] These elements form the bedrock for evaluating the reliability and scope of mathematical deductions.
Primary Branches
Mathematical logic is traditionally divided into four primary branches: set theory, model theory, computability theory, and proof theory. These branches collectively address foundational questions about the nature, structure, and limits of mathematical reasoning and objects.[9] Set theory serves as the foundational framework for mathematics by providing axiomatic systems that define sets and their properties, enabling the construction of all mathematical entities within a unified structure.[10] Model theory examines the structures that satisfy given formal theories, focusing on the relationships between linguistic expressions and their interpretations in various models.[11] Computability theory investigates the boundaries of what functions and problems can be effectively calculated or decided by algorithmic processes.[12] Proof theory analyzes the syntactic structure of formal proofs, including their derivation rules and their implications for consistency within logical systems.[13]
These branches are deeply interconnected, and insights in one frequently inform the others. For instance, model theory relies on set-theoretic constructions to build and analyze the structures that interpret logical theories.[11] Similarly, computability theory informs proof theory through results like Gödel's incompleteness theorems, which demonstrate inherent limitations in formal systems by linking provability to undecidable computational questions.[14] The scope of mathematical logic centers on these core branches, which emphasize rigorous mathematical foundations and their interconnections, while excluding areas like philosophical logic or logics applied in artificial intelligence unless they directly contribute to understanding mathematical structures and proofs.[12]
Historical Development
Ancient and Early Modern Foundations
The foundations of mathematical logic trace back to ancient Greece, where Aristotle developed syllogistic logic as a systematic framework for deductive reasoning. In his Organon, particularly the Prior Analytics, Aristotle outlined the syllogism as a form of argument consisting of two premises leading to a conclusion, such as "All men are mortal; Socrates is a man; therefore, Socrates is mortal." This structure emphasized categorical propositions and valid inference patterns, laying the groundwork for formal deduction. Aristotle's logic influenced mathematical practice by providing a model for rigorous argumentation, though it was primarily philosophical rather than quantitative.[15]
Euclid's Elements (c. 300 BCE) exemplified this deductive approach in geometry, building theorems from axioms and postulates through step-by-step proofs that echoed syllogistic reasoning without explicit symbolic notation. For instance, Euclid's proof of the Pythagorean theorem proceeds from prior propositions in a chain of implications, establishing a precedent for axiomatic deduction in mathematics. While not directly syllogistic, Euclid's method drew from the Aristotelian tradition of deriving conclusions from self-evident principles, influencing centuries of geometric rigor.[16] This informal yet structured proof style in geometry highlighted the need for logical consistency, setting the stage for later formalization.[17]
In the medieval period, Islamic scholars advanced logical inquiry, with Al-Farabi (c. 872–950 CE) making significant contributions to modal logic by extending Aristotle's framework to include notions of necessity and possibility. In works like his Commentary on Aristotle's Topics, Al-Farabi analyzed modal syllogisms, distinguishing between actual and potential predicates to refine deductive validity in contingent scenarios. His innovations preserved and expanded Aristotelian logic amid the translation movement, influencing subsequent thinkers.[18]
European Scholasticism, flourishing from the 12th to 15th centuries, integrated and refined this logical heritage through university curricula centered on Aristotle's texts. Scholastics like Peter Abelard and William of Ockham developed terminist logic, focusing on the semantics of terms in propositions to resolve ambiguities in syllogisms, as seen in Abelard's Dialectica. This emphasis on supposition theory, concerning how words refer in context, enhanced precision in deductive arguments, bridging philosophy and emerging scientific discourse. Scholastic logic thus maintained deductive rigor in theological and natural philosophy debates.[19]
During the early modern era, Gottfried Wilhelm Leibniz (1646–1716) envisioned a universal formal language to mechanize reasoning, known as the characteristica universalis. In essays like "On the Art of Combinations" (1666), Leibniz proposed a symbolic system in which concepts could be combined algebraically, akin to arithmetic, to resolve disputes through calculation rather than verbal debate: "Let us calculate" (Calculemus). This ambition for a "universal characteristic" anticipated symbolic logic by seeking an unambiguous notation for all thought, though it went unrealized in his lifetime.[20] These developments in informal deductive practices, particularly in geometry, underscored the limitations of natural language and intuitive proofs, paving the way for 19th-century symbolic innovations.[21]
19th-Century Innovations
The 19th century witnessed a profound transformation in mathematical logic, as mathematicians sought symbolic and algebraic frameworks to address the growing demands for rigor in analysis and the foundational upheavals from discoveries like non-Euclidean geometries, which challenged traditional assumptions about mathematical certainty.[22] These innovations responded to the need for precise deductive methods amid the expansion of calculus and geometry, enabling logic to serve as a tool for clarifying mathematical proofs and structures.[23]
George Boole's pamphlet The Mathematical Analysis of Logic, published in 1847, pioneered this algebraic turn by treating logical reasoning as a calculus akin to algebra, where propositions are expressed as equations involving variables representing classes or concepts.[24] Boole posited that the "laws of thought" could be formalized through operations like addition (disjunction) and multiplication (conjunction), with the empty class as zero and the universal class as unity, laying the groundwork for what became Boolean algebra.[25] This approach shifted logic from verbal syllogisms to symbolic manipulation, influencing subsequent developments in computer science and circuit design.[23]
Augustus De Morgan advanced this symbolic paradigm through his work on relational logic, introducing De Morgan's laws, which establish dualities between negation, conjunction, and disjunction: \neg (P \land Q) \equiv \neg P \lor \neg Q and \neg (P \lor Q) \equiv \neg P \land \neg Q.[23] In publications like his 1847 Formal Logic, De Morgan extended traditional syllogistics to handle relations between terms, such as "loves" or "is greater than," allowing for more expressive inferences beyond simple class inclusions.[26] His emphasis on quantification of the predicate and reform of deductive rules bridged Aristotelian logic with emerging algebraic methods.[26]
Gottlob Frege's 1879 Begriffsschrift revolutionized logical notation by inventing a two-dimensional symbolic language for predicate logic, incorporating quantifiers to bind variables universally (\forall) or existentially (derived as the negation of universal quantification).[27] This system enabled the formal representation of complex judgments, such as "All men are mortal," as nested functions and arguments, providing a precise alternative to the ambiguities of natural language.[27] Frege's notation facilitated the analysis of mathematical definitions and proofs, aiming to ground arithmetic in pure logic.[27]
Charles Sanders Peirce and Ernst Schröder built upon Boolean foundations to extend algebraic logic into polyadic relations and quantification. Peirce, in works like his 1880 and 1885 papers, developed the "logic of relatives" using algebraic symbols for binary relations and introduced summation (\Sigma) and product (\Pi) notations as proto-quantifiers, enhancing expressiveness for relational inferences.[23] Schröder, in his multivolume Vorlesungen über die Algebra der Logik (1890–1905), systematized these ideas, incorporating Peirce's relational algebra and providing a comprehensive treatment of quantifiers within Boolean frameworks, solidifying the algebraic tradition's role in modern logic.[23]
20th-Century Maturation and Paradoxes
The early 20th century marked a period of crisis in the foundations of mathematics, precipitated by paradoxes arising from naive set theory. Georg Cantor's diagonal argument, introduced in 1891 to prove the uncountability of the real numbers, had profound implications in the 1900s by highlighting the subtleties of transfinite cardinalities and paving the way for the set-theoretic paradoxes.[28] The Burali-Forti paradox, discovered by Cesare Burali-Forti in 1897, demonstrated that the set of all ordinal numbers leads to a contradiction, as it would both be an ordinal and be greater than itself under the usual ordering.[29] Bertrand Russell identified his eponymous paradox in 1901, showing that the set of all sets not containing themselves as members both contains and does not contain itself, exposing the inconsistency of unrestricted comprehension.
These paradoxes prompted immediate responses aimed at salvaging set theory. In 1908, Ernst Zermelo published the first axiomatic system for set theory, with axioms of extensionality, elementary sets (providing the empty set and pairs), separation, power set, union, infinity, and choice; the replacement schema was added later by Abraham Fraenkel and Thoralf Skolem.[30] This framework provided a consistent basis for Cantor's transfinite numbers while prohibiting self-referential constructions like Russell's set. Concurrently, David Hilbert launched his program in the early 1920s, advocating the formalization of mathematics in axiomatic systems and the pursuit of finitary consistency proofs to establish the reliability of infinite methods without invoking intuitionism.[31] Hilbert envisioned metamathematical tools to prove the consistency of systems like arithmetic and geometry, thereby securing the foundations against paradoxical threats.
Kurt Gödel's incompleteness theorems of 1931 revolutionized the field by revealing inherent limitations in formal systems. The first theorem states that in any consistent formal system powerful enough to describe basic arithmetic (such as Peano arithmetic), there exist statements that are true but neither provable nor disprovable within the system. Gödel achieved this through arithmetization: assigning unique natural numbers (Gödel numbers) to symbols, formulas, and proofs, enabling self-reference via a fixed-point (diagonal) construction. He constructed a sentence G asserting "G is not provable in the system," which, if provable, would falsify itself (leading to inconsistency), and if unprovable, would be true but unprovable, thus demonstrating incompleteness. The second theorem extends this, proving that no such system can demonstrate its own consistency, since a proof of consistency within the system would yield a proof of G itself, contradicting the first theorem.[32] These results shattered Hilbert's dream of absolute consistency proofs, implying that mathematics cannot be fully mechanized or reduced to a complete axiomatic foundation, and shifted focus toward relative consistency and interpretability.[31]
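Gödel's arithmetization step can be made concrete with a small sketch. The following Python snippet is a toy illustration (the symbol table and the particular coding scheme are my own choices, not Gödel's original assignment): it maps each symbol of a formal language to a number and encodes a finite symbol sequence as a product of prime powers, so that syntactic facts about formulas and proofs become arithmetical facts about numbers.
```python
# Toy Goedel numbering: encode a symbol sequence as a product of prime powers.
# The symbol codes below are illustrative choices, not Goedel's original ones.
SYMBOL_CODES = {"0": 1, "S": 2, "+": 3, "*": 4, "=": 5, "(": 6, ")": 7, "x": 8}

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (adequate for short formulas)."""
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def goedel_number(symbols):
    """Encode the sequence s1 s2 ... sk as 2^code(s1) * 3^code(s2) * 5^code(s3) * ..."""
    result = 1
    for p, s in zip(primes(), symbols):
        result *= p ** SYMBOL_CODES[s]
    return result

# The formula "S0 = S0" receives a unique natural number; by unique factorization,
# the original symbol sequence can be recovered from the exponents.
print(goedel_number(["S", "0", "=", "S", "0"]))  # 2^2 * 3^1 * 5^5 * 7^2 * 11^1 = 20212500
```
Because the coding is injective and mechanically invertible, predicates such as "n is the code of a proof of the formula coded by m" become arithmetical relations, which is what allows a sentence to speak, indirectly, about its own provability.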
The 1930s saw the maturation of mathematical logic through key foundational contributions that gave rise to its major branches. Alfred Tarski's 1933 semantic theory of truth addressed liar-paradox-like issues in formal languages by defining truth hierarchically: for a language L, truth is defined in a metalanguage with greater expressive power, satisfying the T-schema ("'P' is true if and only if P") for atomic sentences and extended recursively, ensuring consistency and material adequacy.[33] This work laid the groundwork for model theory and the formal treatment of truth predicates, influencing later developments in semantics and the philosophy of language. Independently, Alonzo Church in 1936 proposed the lambda calculus as a model of computation, while Alan Turing introduced his abstract machines, leading to the Church-Turing thesis: every effectively computable function is computable by a Turing machine (or, equivalently, is lambda-definable).[34] This thesis, though not formally provable, unified recursion theory, proof theory, and computability, providing a precise notion of algorithm that underpins modern computer science.[35]
Mathematical logic became institutionalized as a distinct discipline with the founding of the Association for Symbolic Logic in 1936 and its flagship Journal of Symbolic Logic, which began publishing in the same year to disseminate research in symbolic reasoning, set theory, and related areas.[36] The journal's establishment reflected the field's growing coherence after the crises of the early century, and in the decades following World War II it fostered international collaboration and rigorous standards that propelled logic's expansion into algebra, philosophy, and beyond.[37]
Formal Logical Systems
Syntax and Semantics
In formal logical systems, syntax specifies the structure of expressions without regard to their meaning, consisting of an alphabet of symbols and recursive rules that define which sequences qualify as well-formed formulas (wffs). The alphabet typically includes propositional variables (e.g., p, q, r, \dots), logical connectives such as negation (\neg), conjunction (\wedge), disjunction (\vee), implication (\to), and biconditional (\leftrightarrow), along with parentheses for grouping. A grammar then generates wffs inductively: propositional variables are wffs; if A is a wff, then \neg A is a wff; and if A and B are wffs, then (A \wedge B), (A \vee B), (A \to B), and (A \leftrightarrow B) are wffs.[38][39]
Semantics, in contrast, assigns meaning to syntactic expressions through interpretations that determine their truth values. For propositional logic, an interpretation is a truth valuation v that assigns true (T) or false (F) to each propositional variable and then extends recursively to compound formulas using the meaning of the connectives: v(\neg A) = \text{T} if v(A) = \text{F}, and \text{F} otherwise; v(A \wedge B) = \text{T} if both v(A) = \text{T} and v(B) = \text{T}, and \text{F} otherwise; similar clauses hold for \vee, \to (true unless A is true and B is false), and \leftrightarrow (true if A and B have the same truth value). A formula is satisfiable if some valuation makes it true, valid (a tautology) if it is true under every valuation, and a contradiction if it is false under every valuation. This framework draws from Alfred Tarski's 1933 semantic conception of truth, which defines truth for sentences in a formal language via satisfaction in models, ensuring that truth predicates are materially adequate (e.g., the T-schema: "'S' is true if and only if S") while avoiding paradoxes through the distinction between object language and metalanguage.[40][33]
Soundness and completeness theorems establish the tight connection between syntax and semantics: a deductive proof system is sound if every provable formula is semantically valid (i.e., a tautology), and complete if every semantically valid formula is provable. In propositional logic, standard Hilbert-style systems with modus ponens and axiom schemata such as A \to (B \to A), (A \to (B \to C)) \to ((A \to B) \to (A \to C)), and a schema governing negation (e.g., (\neg B \to \neg A) \to (A \to B)) achieve both properties, so syntactic provability coincides exactly with semantic truth under all valuations. Soundness follows by induction on proof length, verifying that the axioms are tautologies and that the rules preserve validity; completeness relies on methods like semantic tableaux or maximal consistent sets, showing that every consistent formula is satisfiable and hence that every valid formula is provable.[39][41][42]
A key tool for exploring propositional semantics is the truth table, which enumerates all possible valuations for a formula's atomic components and computes the resulting truth values column by column. For a formula with n distinct propositional variables, there are 2^n rows. Consider the formula (p \to q) \leftrightarrow (\neg p \vee q), with two variables p and q (thus 4 rows). The table is constructed as follows (a short programmatic sketch of the same evaluation appears after the table):
| p | q | \neg p | p \to q | \neg p \vee q | (p \to q) \leftrightarrow (\neg p \vee q) |
|---|---|---|---|---|---|
| T | T | F | T | T | T |
| T | F | F | F | F | T |
| F | T | T | T | T | T |
| F | F | T | T | T | T |
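The same evaluation can be carried out mechanically. The short Python sketch below is only an illustration (the helper names implies and iff are my own); it enumerates every valuation of p and q and confirms that (p \to q) \leftrightarrow (\neg p \vee q) is a tautology.
```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: false only when a is true and b is false."""
    return (not a) or b

def iff(a: bool, b: bool) -> bool:
    """Biconditional: true exactly when both sides have the same truth value."""
    return a == b

# Enumerate the 2^n valuations of the variables (here n = 2) and evaluate the formula.
results = []
for p, q in product([True, False], repeat=2):
    value = iff(implies(p, q), (not p) or q)
    results.append(value)
    print(f"p={p!s:5} q={q!s:5}  (p -> q) <-> (~p v q) = {value}")

# The formula is a tautology: it is true under every valuation.
assert all(results)
```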
First-Order Logic
First-order logic, also known as first-order predicate logic, extends propositional logic by incorporating predicates, functions, and quantifiers to express statements about objects and their relations within a domain. The language is defined over a signature consisting of a countable set of constant symbols, function symbols of various arities, and predicate symbols of various arities (a zero-arity predicate behaves as a propositional variable). Variables range over elements of the domain, and terms are formed recursively: variables and constants are terms, and if f is an n-ary function symbol and t_1, \dots, t_n are terms, then f(t_1, \dots, t_n) is a term. Atomic formulas are obtained by applying predicate symbols to terms: if P is an n-ary predicate and t_1, \dots, t_n are terms, then P(t_1, \dots, t_n) is an atomic formula. Well-formed formulas (sentences when closed) are built from atomic formulas using the connectives \neg, \land, \lor, \to, \leftrightarrow and the quantifiers \forall (universal) and \exists (existential), following standard formation rules: if \phi and \psi are formulas, then so are \neg\phi, (\phi \land \psi), (\phi \lor \psi), (\phi \to \psi), and (\phi \leftrightarrow \psi); if \phi is a formula and x a variable, then (\forall x \phi) and (\exists x \phi) are formulas. Free variables in formulas can be bound by quantifiers, and sentences have no free variables.[43]
Deductive systems for first-order logic provide rules to derive theorems from axioms, with soundness and completeness as the benchmark properties. Hilbert-style systems, named after David Hilbert, rely on a finite set of axiom schemas for propositional logic extended to predicate formulas, plus specific axioms for quantifiers such as \forall x (\phi \to \psi) \to (\phi \to \forall x \psi) (for x not free in \phi) and \forall x \phi \to \phi[t/x] (instantiation, where the term t is substitutable for the free occurrences of x), along with modus ponens and universal generalization as the inference rules. Natural deduction systems, introduced by Gerhard Gentzen, mimic informal proofs through introduction and elimination rules for each connective and quantifier; for instance, the universal generalization rule allows inferring \forall x \phi(x) from \phi(y) if y is an arbitrary (fresh) variable not occurring free in any undischarged assumption. These systems are equivalent in deductive power and prove the same theorems, though natural deduction is often preferred for its intuitive structure in theorem proving.[44][45]
A cornerstone result is Gödel's completeness theorem, which states that every consistent first-order theory is satisfiable, or equivalently, that a formula is provable if and only if it is valid (true in every model). In his 1929 doctoral dissertation, Kurt Gödel proved this by showing how to construct, for any consistent set of sentences, a model in which those sentences hold: he reduced the problem to formulas in prenex normal form and combined the completeness of propositional logic with a sequence of satisfiability-preserving transformations, implicitly establishing compactness and, for countable languages, avoiding any appeal to the axiom of choice. The argument shows that if a formula is not provable, its negation can be extended to a complete consistent theory from which a countermodel can be defined, so every valid formula is provable. This theorem bridges syntax (proofs) and semantics (models), confirming that Hilbert-style systems capture all first-order logical truths.
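The recursive formation rules above translate directly into a small abstract-syntax sketch. The Python below is a minimal illustration under my own naming choices (it is not a standard library API); it represents terms and formulas as nested objects and computes free variables, the notion that separates sentences from open formulas.
```python
from dataclasses import dataclass
from typing import Tuple

# --- Terms: variables and function applications (constants are 0-ary functions) ---
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Func:
    name: str
    args: Tuple = ()

# --- Formulas: atomic predicates, a connective, and a quantifier ---
@dataclass(frozen=True)
class Pred:
    name: str
    args: Tuple = ()

@dataclass(frozen=True)
class Not:
    sub: object

@dataclass(frozen=True)
class And:
    left: object
    right: object

@dataclass(frozen=True)
class ForAll:
    var: Var
    body: object

def free_vars(node):
    """Free variables of a term or formula; a quantifier binds its variable."""
    if isinstance(node, Var):
        return {node.name}
    if isinstance(node, (Func, Pred)):
        return set().union(*map(free_vars, node.args)) if node.args else set()
    if isinstance(node, Not):
        return free_vars(node.sub)
    if isinstance(node, And):
        return free_vars(node.left) | free_vars(node.right)
    if isinstance(node, ForAll):
        return free_vars(node.body) - {node.var.name}
    raise TypeError(f"unknown node: {node!r}")

# forall x (P(x) & Q(x, y)) has exactly one free variable, y, so it is not a sentence.
phi = ForAll(Var("x"), And(Pred("P", (Var("x"),)), Pred("Q", (Var("x"), Var("y")))))
print(free_vars(phi))  # {'y'}
```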
Regarding decidability, the validity problem for full first-order logic is undecidable, meaning no algorithm exists to determine whether an arbitrary formula is valid. Alonzo Church proved this in 1936 by reducing an undecidable problem about the lambda calculus (a formalism equivalent in power to Turing machines) to first-order validity, showing that a decision procedure for validity would yield one for that problem, which cannot exist. In contrast, monadic first-order logic, the fragment restricted to unary predicates with no function symbols or higher-arity predicates, is decidable: it has the finite model property, so satisfiability (and hence validity) can be settled by checking finitely many finite structures whose size is bounded in terms of the formula.[46]
The Löwenheim-Skolem theorem further highlights the model-theoretic character of first-order logic: if a first-order theory with a countable signature is consistent, it has a countable model. Proved by Leopold Löwenheim in 1915 and refined by Thoralf Skolem in 1920, the result follows by constructing a countable submodel using Skolem functions as witnesses for existential quantifiers; together with the upward Löwenheim-Skolem theorem it shows that a theory with an infinite model has models of every infinite cardinality yet always admits a countable one, underscoring the non-categorical nature of first-order theories.[47]
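The decidability of the monadic fragment can be illustrated by brute force. The Python sketch below is illustrative (the example sentence and the helper names are my own): it searches for a finite model of a monadic sentence by enumerating every interpretation of its unary predicates over small domains. For monadic formulas without equality, a classical bound states that a satisfiable formula with k predicate symbols has a model with at most 2^k elements, so a bounded search of this kind decides satisfiability for this fragment.
```python
from itertools import product

def find_finite_model(formula, num_preds, max_size):
    """Try every interpretation of num_preds unary predicates over domains of
    size 1..max_size; return a witnessing (size, tables) pair or None."""
    for n in range(1, max_size + 1):
        domain = range(n)
        for tables in product(product([False, True], repeat=n), repeat=num_preds):
            preds = [lambda x, t=t: t[x] for t in tables]
            if formula(domain, preds):
                return n, tables
    return None

# phi = (forall x (P(x) -> Q(x)))  and  (exists x P(x))  and  (forall x not Q(x))
def phi(domain, preds):
    P, Q = preds
    return (all((not P(x)) or Q(x) for x in domain)
            and any(P(x) for x in domain)
            and all(not Q(x) for x in domain))

# With k = 2 predicates and no equality, models of size up to 2^2 = 4 suffice,
# so the failed search shows phi is unsatisfiable (and hence its negation is valid).
print(find_finite_model(phi, num_preds=2, max_size=4))  # None
```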
Extensions and Non-Classical Variants
Higher-order logics extend first-order logic by allowing quantification over predicates, relations, and functions, thereby increasing expressive power to capture concepts like induction and continuity that are inexpressible in first-order terms.[48] In Alonzo Church's simple theory of types, introduced in 1940, logical expressions are assigned types to avoid paradoxes such as Russell's, with basic types for individuals (e.g., type i) and higher types for predicates (e.g., i \to o for unary predicates yielding truth values of type o).[49] This typed framework underpins higher-order unification and automated theorem proving, as seen in systems like HOL Light.[49]
Modal logic incorporates operators for necessity (\Box) and possibility (\Diamond), enabling the formalization of concepts such as epistemic knowledge, temporal progression, and deontic obligation beyond classical propositional or predicate logics.[50] Saul Kripke's 1963 semantics defines truth in terms of possible worlds connected by accessibility relations, where a formula \Box \phi holds at a world w if \phi holds at all worlds accessible from w, providing a relational model theory that resolves issues in earlier algebraic approaches.[50] This Kripke framework supports soundness and completeness for normal modal systems like K, T, S4, and S5, influencing applications in philosophy, computer science (e.g., model checking), and linguistics.[50]
Intuitionistic logic diverges from classical logic by rejecting the law of excluded middle (\phi \lor \neg \phi) and double negation elimination (\neg \neg \phi \to \phi), emphasizing constructive proofs in which existence claims require explicit constructions rather than mere derivations of a contradiction from non-existence.[51] L.E.J. Brouwer's intuitionism, developed from the early 1900s, motivated this rejection, viewing mathematics as mental construction rooted in time and intuition, and leading to the Brouwer-Heyting-Kolmogorov interpretation, under which an implication corresponds to a method transforming proofs of the antecedent into proofs of the consequent.[51] Arend Heyting formalized intuitionistic logic in 1930 as a Hilbert-style system, retaining only those classical theorems that are constructively valid and enabling applications in type theory and program verification via the Curry-Howard isomorphism.[51]
Many-valued logics generalize the two classical truth values to larger sets of values, addressing limitations of classical logic in handling vagueness, future contingents, and partial information.[52] Jan Łukasiewicz introduced three-valued logic in 1920, with truth values true (1), false (0), and indeterminate (1/2), defining implication so that v(p \to q) = \min(1, 1 - v(p) + v(q)), which equals 1 exactly when the value of p does not exceed that of q; the system was motivated by Aristotle's sea-battle argument about future contingents and the desire to avoid determinism.[52] Extensions to infinite-valued logics by Łukasiewicz and Tarski in the 1920s used the real interval [0,1] for truth degrees, inspiring fuzzy logic applications in engineering and AI, though differing from probabilistic logics by treating the values as intrinsic degrees of truth rather than epistemic uncertainty.[52]
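These three-valued truth functions are easy to state explicitly. The Python sketch below is only an illustration (values are represented as exact fractions and the function names are my own); it implements the Łukasiewicz connectives and shows that p \to p always receives the designated value 1, while the classical law of excluded middle p \lor \neg p can drop to 1/2.
```python
from fractions import Fraction

HALF = Fraction(1, 2)
VALUES = [Fraction(0), HALF, Fraction(1)]  # false, indeterminate, true

def neg(a):
    """Lukasiewicz negation: 1 - a."""
    return 1 - a

def conj(a, b):
    """Conjunction as minimum of the truth degrees."""
    return min(a, b)

def disj(a, b):
    """Disjunction as maximum of the truth degrees."""
    return max(a, b)

def impl(a, b):
    """Lukasiewicz implication: value 1 when a <= b, otherwise 1 - a + b."""
    return min(Fraction(1), 1 - a + b)

# p -> p is always designated (value 1), but p v ~p is only 1/2 when v(p) = 1/2.
for p in VALUES:
    print(f"v(p) = {p}:  p -> p = {impl(p, p)},  p v ~p = {disj(p, neg(p))}")
```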
Compared to first-order logic, higher-order variants offer greater expressive power; for instance, second-order logic can axiomatize the natural numbers up to isomorphism (categoricity), expressing the induction axiom fully via second-order quantification over subsets, whereas first-order Peano arithmetic admits non-standard models.[48] Modal and intuitionistic logics give up some classical theorems in exchange for specialized semantics, with intuitionistic logic embeddable in classical logic via Gödel's double-negation translation, while many-valued logics enlarge the space of truth values, trading bivalence for flexibility in non-binary domains.[48] Algebraic semantics, such as Heyting algebras for intuitionistic logic or Boolean algebras with operators for modal logic, unify these variants by interpreting connectives as lattice operations.[48]
Set Theory
Axiomatic Foundations
The development of axiomatic set theory was primarily motivated by the need to resolve paradoxes arising in naive set theory, such as Russell's paradox, which demonstrated that unrestricted comprehension leads to contradictions like the set of all sets that do not contain themselves. To avoid such issues, axiomatic systems impose restrictions, notably by prohibiting the existence of a universal set containing all sets and by permitting only the separation of definable subsets from sets already constructed. This approach, pioneered by Ernst Zermelo in 1908, laid the groundwork for formalizing set theory as a foundation for mathematics, emphasizing iterative construction from basic elements rather than arbitrary collections.
The standard axiomatic system today is Zermelo-Fraenkel set theory with the axiom of choice (ZFC), which refines Zermelo's original axioms through contributions from Abraham Fraenkel and Thoralf Skolem in the 1920s. The axioms of ZFC are as follows (formal first-order statements of several of them are given after the list):
- Axiom of Extensionality: Two sets are equal if and only if they have the same elements, formally \forall x (x \in A \leftrightarrow x \in B) \to A = B.
- Axiom of Empty Set: There exists a set with no elements, \exists \emptyset \forall x (x \notin \emptyset).
- Axiom of Pairing: For any two sets a and b, there exists a set \{a, b\} containing exactly them.
- Axiom of Union: For any set A, there exists a set \bigcup A whose elements are the elements of the members of A.
- Axiom of Power Set: For any set A, there exists a set \mathcal{P}(A) whose elements are all subsets of A.
- Axiom Schema of Separation: For any set A and formula \phi(x), which may contain parameters, there exists a set \{x \in A \mid \phi(x)\} consisting of exactly the elements of A satisfying \phi. This schema replaces naive unrestricted comprehension and thereby avoids the paradoxes.
- Axiom of Infinity: There exists an infinite set, specifically one containing the empty set and closed under the successor operation, such as the set of natural numbers.
- Axiom Schema of Replacement: For any set A and formula \phi(x, y) defining a functional relation, there exists a set \{y \mid \exists x \in A \, \phi(x, y)\}. This ensures sets can be replaced by their images under definable functions.
- Axiom of Foundation (Regularity): Every non-empty set A has an element x \in A such that x \cap A = \emptyset, preventing infinite descending membership chains.
- Axiom of Choice: For any set A of non-empty disjoint sets, there exists a set containing exactly one element from each member of A, formally enabling selection functions.
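For concreteness, several of the axioms stated above in prose can be written out as first-order sentences in the language whose only non-logical symbol is the membership relation \in. The renderings below follow standard textbook formulations; the variable names and exact grouping are incidental presentation choices.
```latex
\begin{align*}
\text{Pairing:} \quad & \forall a\,\forall b\,\exists c\,\forall x\,\bigl(x \in c \leftrightarrow (x = a \lor x = b)\bigr)\\
\text{Union:} \quad & \forall A\,\exists U\,\forall x\,\bigl(x \in U \leftrightarrow \exists y\,(y \in A \land x \in y)\bigr)\\
\text{Power Set:} \quad & \forall A\,\exists P\,\forall x\,\bigl(x \in P \leftrightarrow \forall z\,(z \in x \to z \in A)\bigr)\\
\text{Infinity:} \quad & \exists I\,\bigl(\emptyset \in I \land \forall x\,(x \in I \to x \cup \{x\} \in I)\bigr)\\
\text{Replacement:} \quad & \forall x\,(x \in A \to \exists! y\,\phi(x, y)) \to \exists B\,\forall y\,\bigl(y \in B \leftrightarrow \exists x\,(x \in A \land \phi(x, y))\bigr)
\end{align*}
```
In the Infinity and Replacement lines, \emptyset, x \cup \{x\}, and the schema variable \phi are the usual defined abbreviations; Replacement is stated for an arbitrary set A and a formula \phi that is functional on A, with one instance of the schema per formula.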