Axiomatic system
An axiomatic system is a formal structure in mathematics and logic comprising undefined terms (concepts accepted without explicit definition), axioms (unproven statements assumed to be true), defined terms (constructed from the undefined ones), and rules of inference that enable the logical derivation of theorems from the axioms.[1] The axiomatic method traces its origins to ancient Greece, particularly Euclid's Elements (circa 300 BCE), which organized geometry around a small set of postulates and common notions to deduce all subsequent propositions.[2] This approach provided a model for rigorous reasoning but contained implicit assumptions that later mathematicians sought to refine. In the late 19th and early 20th centuries, figures like David Hilbert advanced axiomatization across fields such as arithmetic and geometry, emphasizing formal rigor to address foundational crises in mathematics, such as those arising from infinite sets and paradoxes.[3] Central to the evaluation of axiomatic systems are several key properties: consistency, ensuring no contradictions can be derived (e.g., the system does not prove both a statement and its negation); independence, where each axiom cannot be logically derived from the others (demonstrated by finding a model satisfying all but one axiom); and completeness, meaning every valid statement in the system's language is either provable or disprovable from the axioms.[4] While these ideals guide system design, Kurt Gödel's incompleteness theorems (1931) reveal fundamental limitations: any consistent axiomatic system capable of expressing basic arithmetic is incomplete, as it contains true statements that cannot be proved within the system.[3] Axiomatic systems underpin diverse areas, from Euclidean and non-Euclidean geometries to set theory (e.g., Zermelo-Fraenkel axioms) and formal logic, serving as the bedrock for theorem-proving and theoretical computer science.[1]
Fundamentals
Definition
An axiomatic system, also referred to as a formal system, is a foundational structure in mathematics and logic comprising three essential elements: a formal language that defines the syntax for constructing statements, a set of axioms serving as unproven foundational assumptions, and a collection of rules of inference that enable the logical derivation of theorems from those axioms.[5] This setup allows for the systematic generation of all valid statements within the system through deductive processes.[6] Formal axiomatic systems emphasize symbolic rigor and mechanical precision, where statements are manipulated according to strict syntactic rules without reliance on external interpretations or intuitive meanings.[5] In contrast, informal axiomatic approaches, such as those employed in everyday reasoning, depend on contextual understanding and lack the formalized syntax and inference mechanisms, making derivations more susceptible to ambiguity.[6] Axiomatic systems play a crucial role in formalizing mathematical theories by providing a clear, unambiguous method for deriving conclusions, thereby eliminating ambiguities inherent in less structured argumentation and ensuring that every theorem follows logically from the established axioms.[5] This formalization supports the development of rigorous proofs and the verification of mathematical consistency.[6] At the core of any axiomatic system are the notions of logical implication, which denotes that a statement necessarily follows from preceding ones under the specified rules of inference, and deduction, the iterative application of these rules to build sequences of valid statements starting from the axioms.[5] These prerequisites enable the system to function as a self-contained deductive apparatus.[6]
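The generative character of such a system can be made concrete with a small sketch. The following illustrative Python fragment (the string encoding, the axiom set, and the helper names are assumptions for exposition, not drawn from the cited sources) closes a toy axiom set under modus ponens to enumerate its theorems:

```python
# Toy axiomatic system: formulas are plain strings, " -> " marks
# implication between whole formulas, and modus ponens is the only
# inference rule. Naive string matching like this would misparse
# nested implications; it is meant only to illustrate deduction.

AXIOMS = {"p", "p -> q", "q -> r"}

def step(theorems):
    """One round of modus ponens: from A and A -> B, infer B."""
    new = set(theorems)
    for a in theorems:
        prefix = a + " -> "
        for imp in theorems:
            if imp.startswith(prefix):
                new.add(imp[len(prefix):])
    return new

def deductive_closure(axioms):
    """Apply the rule until no new theorems appear (a fixed point)."""
    theorems = set(axioms)
    while True:
        extended = step(theorems)
        if extended == theorems:
            return theorems
        theorems = extended

print(sorted(deductive_closure(AXIOMS)))
# ['p', 'p -> q', 'q', 'q -> r', 'r']
```

Here the theorems q and r are not among the axioms but belong to the system's deductive closure, the set of all statements derivable from the axioms.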
Components
An axiomatic system is built upon a formal language that provides the syntactic foundation for expressing statements. This language consists of an alphabet comprising a finite set of primitive symbols, which are undefined basic elements such as propositional variables (e.g., p, q) and logical connectives (e.g., \land for conjunction, \lor for disjunction, \neg for negation).[7][8] The syntax of the language specifies rules for constructing well-formed formulas (wffs), which are valid expressions built recursively from the alphabet; for instance, p \land q is a wff, while p \land is not, as it violates the requirement for binary connectives to link complete formulas.[7][9] Axioms form the foundational assumptions of the system, consisting of a set of wffs—either finite or infinite—that are accepted as true without proof. These include undefined terms or primitives (e.g., basic predicates or constants not further defined) and defined terms derived from them via explicit definitions. Axiom schemas, such as the truth-functional tautologies F \supset (G \supset F) or quantificational rules like \forall u(F) \supset F' (where F' is a substitution instance of F), allow for the generation of infinitely many specific axioms from general patterns.[8][9] Rules of inference are finite mechanisms that permit the derivation of new wffs from existing ones, ensuring the system's deductive power. The primary example is modus ponens, which states that from wffs A and A \supset B, one may infer B; in logical form, if \Gamma \vdash A and \Gamma \vdash A \supset B, then \Gamma \vdash B. In predicate logic extensions, additional rules like universal generalization allow inferring \forall u F from F, provided u does not appear free in undischarged assumptions.[8][7][9] Theorems emerge as wffs that can be derived through finite sequences of applications of the rules of inference starting from the axioms, representing the logically provable statements within the system.[7][9]
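The recursive definition of well-formedness lends itself to a direct syntactic check. Below is a minimal sketch assuming a tuple encoding of formulas (the encoding and names are illustrative, not part of any standard library):

```python
# A recursive well-formedness checker for a tiny propositional language.
# An atom is a string like "p"; compound formulas are tuples such as
# ("not", F) or ("and", F, G), ("or", F, G), ("implies", F, G).

BINARY = {"and", "or", "implies"}

def is_wff(f):
    """Return True iff f is a well-formed formula in this encoding."""
    if isinstance(f, str):                      # propositional variable
        return f.isalnum()
    if isinstance(f, tuple) and len(f) == 2 and f[0] == "not":
        return is_wff(f[1])
    if isinstance(f, tuple) and len(f) == 3 and f[0] in BINARY:
        return is_wff(f[1]) and is_wff(f[2])
    return False

print(is_wff(("and", "p", "q")))   # True:  p ∧ q is a wff
print(is_wff(("and", "p")))        # False: p ∧ lacks a right operand
```

The two test cases mirror the example in the text: p \land q is well formed, while p \land is rejected because the binary connective lacks a second complete formula.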
Key Properties
Consistency
In an axiomatic system, consistency is defined as the property that no contradiction can be derived from the axioms using the specified inference rules. Formally, the system is consistent if there exists no formula \phi such that both \phi and \neg \phi are provable from the axioms.[10] This ensures the system's internal coherence, preventing the derivation of mutually contradictory statements. Consistency can be understood in two related but distinct ways: syntactic and semantic. Syntactic consistency means that there is no formal proof within the system of a falsehood, such as a contradiction like 0 = 1 or \phi \land \neg \phi for some formula \phi.[11] In contrast, semantic consistency requires that the axioms be satisfiable, meaning some interpretation makes all of them true simultaneously.[11] For systems with sound and complete proof theories, such as first-order logic, these notions coincide, as the absence of syntactic contradictions guarantees the existence of a satisfying model, and vice versa.[12] To detect inconsistency, one typically attempts to derive a falsehood directly from the axioms and rules; if successful, the system is inconsistent.[12] Alternatively, relative consistency proofs establish that a system's consistency follows from the consistency of a stronger or better-understood theory. For instance, the consistency of Peano arithmetic is relative to that of Zermelo-Fraenkel set theory with choice (ZFC), since Peano arithmetic can be interpreted within ZFC.[13] Gerhard Gentzen provided a seminal relative consistency proof for Peano arithmetic in 1936, reducing its consistency to a finitary base theory augmented with transfinite induction up to the ordinal \epsilon_0.[14] The implications of inconsistency are profound: an inconsistent axiomatic system allows the derivation of every possible statement via the principle of explosion, known as ex falso quodlibet ("from falsehood, anything follows").[10] This trivializes the system, as it loses all discriminatory power and becomes useless for mathematical reasoning, since both theorems and their negations would be provable.[10] Gödel's second incompleteness theorem ties directly to consistency by showing that, for any consistent axiomatic system powerful enough to formalize basic arithmetic (such as Peano arithmetic), the consistency of the system cannot be proved from within the system itself.
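For propositional axiom sets, semantic consistency (satisfiability) can be checked exhaustively by searching for a model among all truth-value assignments. A minimal sketch, with the axiom set chosen purely for illustration:

```python
# Semantic consistency as satisfiability: search every two-valued
# valuation for a model of a small, illustrative axiom set.
from itertools import product

axioms = [
    lambda v: (not v["p"]) or v["q"],   # p -> q
    lambda v: v["p"],                   # p
]

def find_model(axiom_set, atoms=("p", "q")):
    """Return a satisfying valuation if one exists, else None."""
    for values in product([False, True], repeat=len(atoms)):
        valuation = dict(zip(atoms, values))
        if all(axiom(valuation) for axiom in axiom_set):
            return valuation
    return None

print(find_model(axioms))
# {'p': True, 'q': True} -- a model exists, so the set is consistent

print(find_model(axioms + [lambda v: not v["q"]]))
# None -- adding "not q" makes the set unsatisfiable (inconsistent)
```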
Completeness
In axiomatic systems, completeness is a key property that ensures the system's deductive power aligns with its intended semantic scope. Semantic completeness holds when every formula valid in all models—meaning it is true under every possible interpretation—is provable from the axioms using the system's rules of inference.[15] Syntactic completeness, in contrast, requires that for every formula in the system's language, either the formula itself or its negation is provable, leaving no undecided statements within the syntax.[15] A foundational result in this area is Gödel's completeness theorem, which states that for first-order logic, every consistent axiomatic theory has a model, and every semantically valid formula is syntactically provable.[15] This theorem, established by Kurt Gödel in 1929, bridges syntax and semantics by guaranteeing that the axiomatic framework captures all logical necessities in classical first-order settings.[16] Despite these strengths, completeness faces fundamental limitations in expressive systems. Gödel's first incompleteness theorem demonstrates that any consistent axiomatic system powerful enough to formalize basic arithmetic, such as Peano arithmetic, is incomplete: there exist statements in the system's language that are true but neither provable nor refutable within it.[17] These undecidable statements, like the Gödel sentence constructed for the system, highlight an inherent boundary to axiomatization in arithmetic and related domains.[17] Completeness must be distinguished from decidability, as the former ensures all truths are in principle provable but does not imply an effective algorithm exists to determine provability for arbitrary formulas.[15] First-order logic exemplifies this: it is complete per Gödel's theorem yet undecidable, as shown by Church and Turing in 1936, meaning no general procedure can decide, for an arbitrary formula, whether it is provable.[15] In applications, completeness assures that axiomatic systems for classical logic fully encapsulate all intended truths, enabling reliable formal reasoning in mathematics and providing a basis for model-theoretic semantics without gaps in provability for valid inferences.[15] This property is particularly vital in ensuring the robustness of foundational theories, though it presupposes consistency to avoid deriving contradictions that would trivialize the system.[15]
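The contrast with decidability can be seen in the propositional case, where validity is decidable by enumerating valuations. The sketch below (tuple encoding assumed, as in the earlier examples) checks a formula against every assignment; no analogous terminating procedure exists for full first-order logic:

```python
# Propositional validity is decidable: enumerate all valuations.
from itertools import product

def atoms(f):
    """Collect the propositional variables occurring in f."""
    if isinstance(f, str):
        return {f}
    return set().union(*(atoms(arg) for arg in f[1:]))

def ev(f, v):
    """Evaluate formula f under valuation v (dict atom -> bool)."""
    if isinstance(f, str):
        return v[f]
    if f[0] == "not":
        return not ev(f[1], v)
    if f[0] == "implies":
        return (not ev(f[1], v)) or ev(f[2], v)
    raise ValueError(f[0])

def is_valid(f):
    """True iff f holds under every valuation (a tautology)."""
    names = sorted(atoms(f))
    return all(ev(f, dict(zip(names, vals)))
               for vals in product([False, True], repeat=len(names)))

# Peirce's law ((p -> q) -> p) -> p is classically valid:
peirce = ("implies", ("implies", ("implies", "p", "q"), "p"), "p")
print(is_valid(peirce))   # True
```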
Independence
In an axiomatic system, an axiom is independent if it cannot be logically derived from the remaining axioms using the system's rules of inference.[18] A set of axioms exhibits full independence if each individual axiom is independent relative to the others, ensuring no redundancy within the collection.[19] To establish the independence of an axiom, one standard method involves attempting to prove it solely from the other axioms; independence holds precisely when no such derivation exists, though establishing this requires a formal demonstration of non-derivability rather than a mere failure to find a proof.[20] Alternatively, independence can be shown by constructing a model that satisfies all axioms except the one under consideration, thereby proving that the axiom is not a necessary consequence of the rest.[18] Such models often involve alternative interpretations, like many-valued logics, where truth values are assigned to make the target axiom false while preserving the truth of the others.[18] Independent axiom sets are prized for their minimality, providing an elegant foundation that avoids superfluous elements and streamlines the deductive structure of the system.[21] In contrast, including redundant axioms—those derivable from the others—complicates the system without expanding its expressive or theorem-proving capabilities, potentially obscuring the logical essence.[21] For example, consider an axiomatic set S that derives a specific theorem T; augmenting S with T as an additional axiom yields a dependent set, since T is now provable from the original axioms alone.[20] This highlights how dependence emerges when an element is already captured by the system's deductive closure. The notion of axiom independence draws an intuitive parallel to linear independence in vector spaces: just as a basis comprises vectors where none is a linear combination of the others, forming a minimal spanning set, an independent axiom set serves as a foundational "basis" from which all theorems are derived without overlap.[22]
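The many-valued technique can be made concrete. In three-valued Łukasiewicz logic (truth values 0, 1/2, 1, with 1 designated), modus ponens preserves designation and the schema K: A \supset (B \supset A) is always designated, while the schema S: (A \supset (B \supset C)) \supset ((A \supset B) \supset (A \supset C)) is not; such a matrix therefore shows that S cannot be derived from K by modus ponens alone. A small verification sketch (names and layout are illustrative):

```python
# Independence via a three-valued matrix: K holds everywhere in
# Łukasiewicz logic, but S does not, so S is independent of {K}.
from itertools import product
from fractions import Fraction

VALUES = (Fraction(0), Fraction(1, 2), Fraction(1))   # 0, 1/2, 1

def imp(a, b):
    """Łukasiewicz implication: min(1, 1 - a + b)."""
    return min(Fraction(1), 1 - a + b)

def K(a, b):     # A -> (B -> A)
    return imp(a, imp(b, a))

def S(a, b, c):  # (A -> (B -> C)) -> ((A -> B) -> (A -> C))
    return imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c)))

# K is designated (value 1) under every valuation:
print(all(K(a, b) == 1 for a, b in product(VALUES, repeat=2)))   # True

# S is not: it takes value 1/2 at A = B = 1/2, C = 0, and modus
# ponens preserves the value 1, so S is not derivable from K.
print(S(Fraction(1, 2), Fraction(1, 2), Fraction(0)))            # 1/2
```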
Models and Semantics
Models
In the semantics of axiomatic systems, particularly those formalized in first-order logic, a model is a mathematical structure that provides an interpretation for the system's symbols, making all axioms true. Formally, a model \mathcal{M} consists of a non-empty domain M (the universe of discourse) and an interpretation function that assigns to each constant symbol an element of M, to each function symbol a function on M, and to each predicate symbol a relation on M, such that every axiom holds in \mathcal{M}.[23] This assignment validates the axioms semantically, distinguishing models from the purely syntactic manipulation of proofs within the system. For instance, in an axiomatic system for arithmetic with primitives like successor, addition, and multiplication, the standard model is the set of natural numbers \mathbb{N} equipped with the usual interpretations of these operations, where axioms such as commutativity of addition are satisfied.[24] A special class of models in first-order logic consists of the Herbrand models, which provide a canonical way to interpret sentences using only the language's own terms without invoking an external domain. The domain of a Herbrand model is the Herbrand universe, the set of all ground terms (terms without variables) formed from the function symbols and constants; interpretations then assign truth values to ground atoms (atomic formulas with ground terms) such that the axioms hold, avoiding the need for infinite or abstract domains beyond the language's syntax.[25] These models are particularly useful in automated theorem proving, as they enumerate all possible ground-level interpretations exhaustively. Two models are isomorphic if there exists a bijective mapping between their domains that preserves the interpretations of all constants, functions, and predicates, ensuring they are structurally identical up to relabeling.[23] In contrast, models are elementarily equivalent if they satisfy precisely the same set of first-order sentences, meaning no first-order formula can distinguish between them, even if they are not isomorphic; this equivalence captures shared semantic properties definable within the logic.[23] Non-standard models illustrate how axiomatic systems can admit interpretations diverging from intuitive or intended ones while still satisfying all axioms. For example, in non-standard analysis, the hyperreal extension \mathbb{R}^* of the real numbers includes infinitesimals (numbers smaller than any positive real but greater than zero) and infinite hyperreals, forming a model of the first-order axioms for the reals that enables rigorous infinitesimal calculus, as developed by Abraham Robinson.[23] The existence of models for axiomatic systems is guaranteed by key results in model theory: Gödel's completeness theorem establishes that a first-order theory is consistent (has no contradiction) if and only if it possesses a model.[23] Moreover, the Löwenheim-Skolem theorem ensures that if such a theory (in a countable language) has an infinite model, then it has models of every infinite cardinality, including countable ones, allowing for a range of semantic realizations beyond any particular size.[23]
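A finite example shows how an interpretation makes axioms true in a structure. The sketch below (an illustrative check, with names chosen ad hoc) verifies that the integers modulo 5 under addition form a model of the group axioms:

```python
# A concrete finite model: Z mod 5 under addition satisfies the group
# axioms. Each axiom's quantifiers range over the finite domain, just
# as an interpretation must make every axiom true in a structure.
DOMAIN = range(5)
op = lambda a, b: (a + b) % 5   # interpretation of the binary symbol
e = 0                           # interpretation of the identity constant

associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in DOMAIN for b in DOMAIN for c in DOMAIN)
identity    = all(op(e, a) == a and op(a, e) == a for a in DOMAIN)
inverses    = all(any(op(a, b) == e for b in DOMAIN) for a in DOMAIN)

print(associative, identity, inverses)   # True True True -> a model
```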
Soundness and Completeness Theorems
In axiomatic systems for first-order logic, the soundness theorem asserts that every formula provable from a set of axioms and inference rules is semantically valid, meaning it holds true in every model of the system.[15] This ensures that the syntactic derivations do not yield falsehoods under any interpretation. The proof proceeds by induction on the length of the proof: the base case verifies that all axioms are valid in every model, and the inductive step shows that each inference rule preserves validity, so if the premises are true in a model, the conclusion must also be true there.[26] The completeness theorem, established by Kurt Gödel in 1930, states that every semantically valid formula is provable within the axiomatic system.[27] This result implies that for classical first-order logic, the syntactic notion of provability coincides exactly with semantic validity, providing a tight correspondence between formal proofs and truth in all models. A standard proof of the theorem, such as Leon Henkin's (1949), constructs a canonical model from a maximal consistent extension of the theory. This shows that every consistent theory has a model, ensuring that semantic validity coincides with provability.[28] Central to these theorems is the concept of semantic entailment, where a set of formulas \Gamma semantically entails a formula \phi, denoted \Gamma \models \phi, if every model satisfying all formulas in \Gamma also satisfies \phi.[29] Soundness guarantees that provability implies semantic entailment, while completeness ensures the converse, so \Gamma \vdash \phi if and only if \Gamma \models \phi, making the axiomatization adequate for capturing all logical truths. These theorems apply specifically to classical first-order logic and do not hold in general for higher-order logics or non-classical systems. In higher-order logics, completeness fails because semantic validity cannot be fully captured by any recursive axiomatic system, as the expressive power allows for undecidable truths beyond first-order quantifiers.[27] In intuitionistic and modal logics, completeness holds only relative to alternative semantics, such as Kripke models, rather than classical two-valued models. Together, soundness and completeness establish that a system is both reliable and exhaustive for first-order reasoning.
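The inductive step of a soundness argument can be checked exhaustively in the propositional setting: modus ponens never leads from true premises to a false conclusion, so derivations preserve truth. A minimal sketch:

```python
# Exhaustive check of the inductive step of soundness for modus
# ponens: over all valuations where A and A -> B hold, B holds too.
from itertools import product

preserves_truth = all(
    b                                   # the conclusion B
    for a, b in product([False, True], repeat=2)
    if a and ((not a) or b)             # premises: A and A -> B hold
)
print(preserves_truth)   # True: the rule cannot produce a falsehood
```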
Axiomatization
Process
The process of constructing an axiomatic system begins with identifying primitive or undefined terms, which serve as the foundational elements that cannot be further defined without circularity, such as points, lines, and planes in geometry.[30] These primitives provide the basic vocabulary for the system. Next, axioms are formulated as statements assumed to be true that capture the essential properties of the primitives and their relations, ensuring they are independent—meaning no axiom can be derived from the others—and collectively sufficient to describe the intended theory.[31] Inference rules are then selected, typically including modus ponens and other logical principles, to allow the deduction of theorems from axioms and previously derived statements.[30] Finally, the system's properties, such as consistency (absence of contradictions), are verified through metamathematical analysis or by constructing models that satisfy all axioms without leading to inconsistencies.[31] David Hilbert's axiomatic method exemplifies this process by emphasizing a finite set of axioms that achieve complete axiomatization, providing rigorous foundations for mathematical theories like geometry. In his approach, axioms are grouped thematically—for instance, into categories of incidence, order, congruence, parallels, and continuity—to systematically build the structure while demonstrating their independence through counterexamples or alternative models.[31] This method prioritizes finitary reasoning, using concrete, intuitive symbols to avoid abstract infinities during construction, thereby ensuring the system's rigor and freedom from hidden assumptions.[30] Construction often involves iterative refinement, where axioms are added to encompass newly discovered theorems or revised to eliminate redundancies, as seen in the evolution of geometric systems through collaborative efforts.[30] Challenges arise in balancing expressiveness, which allows the system to model complex phenomena, with simplicity to maintain clarity and avoid introducing paradoxes, such as those involving self-reference or infinite regress.[30] The ultimate outcome is a deductive system in which all valid theorems logically follow from the axioms via the specified rules, forming a coherent framework for further mathematical development.[31]
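The final verification step can be illustrated by checking a candidate model against a small axiom set; since a satisfied axiom set cannot yield a contradiction, exhibiting such a model establishes consistency. The sketch below (domain and names chosen for illustration) verifies that divisibility on a finite set satisfies the partial-order axioms:

```python
# Model construction as a consistency check: divisibility on a small
# finite set satisfies the axioms of a partial order, so that axiom
# set has a model and is therefore consistent.
D = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0        # interpret <= as "a divides b"

reflexive     = all(leq(a, a) for a in D)
antisymmetric = all(not (leq(a, b) and leq(b, a)) or a == b
                    for a in D for b in D)
transitive    = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                    for a in D for b in D for c in D)

print(reflexive, antisymmetric, transitive)   # True True True
```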
Proofs and Theorems
In an axiomatic system, a theorem is defined as a formula that is derivable from the given axioms through a finite sequence of applications of the specified rules of inference, establishing its status as a logically entailed statement within the system.[32] This derivation process underscores the deductive foundation of axiomatic systems, where theorems extend the axiomatic basis without introducing unsubstantiated assumptions.[17] Axiomatic proofs are structured as finite chains of inferences, beginning with axioms or previously established theorems and proceeding step-by-step via inference rules to reach the desired conclusion. Direct proofs construct this chain affirmatively from premises to the theorem, relying on forward application of rules like modus ponens. In contrast, indirect proofs, also known as proofs by contradiction, assume the negation of the theorem and derive a logical inconsistency, thereby affirming the original statement through reductio ad absurdum.[33] These structures ensure that every step is justifiable within the system's rules, maintaining the proof's rigor.[34] Propositional logic and predicate calculus provide the foundational inference rules—such as modus ponens for implication and universal instantiation for quantification—that govern the derivation process in axiomatic systems and enable the chaining of formulas into proofs. Inference rules, as outlined in the components of axiomatic systems, form the syntactic backbone for these derivations, allowing systematic expansion from axioms to theorems.[35] Predicate logic extends propositional logic by incorporating quantifiers and relations, facilitating derivations in more expressive systems like those for arithmetic or set theory.[36] Despite the power of these deductive mechanisms, Gödel's first incompleteness theorem demonstrates that in any consistent axiomatic system capable of expressing basic arithmetic, there exist true statements that cannot be proved or disproved within the system, highlighting inherent undecidability. This limitation parallels the undecidability of the halting problem in computability theory, where no general algorithm can determine whether an arbitrary program will terminate on a given input, illustrating that axiomatic systems cannot mechanize all truth determinations.[17] Verification of proofs in axiomatic systems traditionally relies on peer review, where mathematicians scrutinize the logical steps for adherence to rules and absence of errors. In contemporary practice, formal proof checkers—implemented in interactive theorem provers such as Coq or Isabelle—automate this process by mechanically validating entire proof sequences against the axioms and inference rules, enhancing reliability for complex derivations. Soundness theorems ensure that such verified proofs correspond to semantic truth in models of the system.[37][38]
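A mechanical verifier of the kind mentioned above can be sketched for a Hilbert-style system with axiom schemas K: A \supset (B \supset A) and S: (A \supset (B \supset C)) \supset ((A \supset B) \supset (A \supset C)) and modus ponens; the tuple encoding and helper names are assumptions for exposition. The checked five-line derivation of p \supset p is a classical exercise:

```python
# Minimal Hilbert-style proof checker. Formulas are atom strings or
# nested tuples ("->", F, G); schema metavariables are "A", "B", "C".

def match(pattern, formula, subst):
    """Match a schema pattern against a formula, extending subst."""
    if isinstance(pattern, str):                 # schema metavariable
        if pattern in subst:
            return subst[pattern] == formula
        subst[pattern] = formula
        return True
    return (isinstance(formula, tuple) and len(formula) == 3
            and formula[0] == "->"
            and match(pattern[1], formula[1], subst)
            and match(pattern[2], formula[2], subst))

K = ("->", "A", ("->", "B", "A"))
S = ("->", ("->", "A", ("->", "B", "C")),
     ("->", ("->", "A", "B"), ("->", "A", "C")))

def is_axiom(f):
    return match(K, f, {}) or match(S, f, {})

def check(proof):
    """Each line must be an axiom instance or follow by modus ponens."""
    for i, line in enumerate(proof):
        ok = is_axiom(line) or any(
            proof[j] == ("->", proof[k], line)
            for j in range(i) for k in range(i))
        if not ok:
            return False
    return True

p = "p"
pp = ("->", p, p)
proof = [
    ("->", p, ("->", pp, p)),                                # K
    ("->", ("->", p, ("->", pp, p)),
     ("->", ("->", p, pp), ("->", p, p))),                   # S
    ("->", ("->", p, pp), ("->", p, p)),                     # MP 1,2
    ("->", p, pp),                                           # K
    ("->", p, p),                                            # MP 3,4
]
print(check(proof))   # True: p -> p is a theorem
```

As in the definition above, the checker accepts a proof exactly when every line instantiates a schema or follows from two earlier lines by modus ponens.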
Historical Development
Origins in Geometry and Logic
The axiomatic method in geometry originated with Euclid's Elements around 300 BCE, which presented the first systematic axiomatization of plane geometry through a set of definitions, five postulates, and five common notions intended as self-evident truths from which theorems could be deduced.[39] These postulates included basic assumptions about points, lines, and circles, while the common notions addressed general properties like equality and wholes exceeding parts, forming the foundational framework for rigorous geometric proofs.[39] However, the fifth postulate, known as the parallel postulate, stated that if a straight line falling on two straight lines makes the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side; this assumption proved contentious due to its non-intuitive nature and reliance on infinite extensions, prompting centuries of attempts to derive it from the other axioms without success.[40] In parallel, the roots of axiomatic systems in logic trace to Aristotle's syllogistic in the 4th century BCE, as developed in his Organon, which formalized deductive reasoning through syllogisms—arguments deriving conclusions from categorical premises, such as "All men are mortal; Socrates is a man; therefore, Socrates is mortal."[41] This system established inference rules based on universal and particular statements, treating premises as axioms from which valid conclusions follow necessarily, and it became the cornerstone of Western logical thought by emphasizing deduction over induction.[41] Aristotle's approach treated logical principles as indemonstrable starting points, akin to geometric postulates, influencing the structure of axiomatic reasoning for millennia.[42] Medieval scholastic logicians, building on Aristotle's framework from the 12th century onward, refined deductive methods through extensive commentary and extension of syllogistic theory, particularly in the works of figures like Peter Abelard and John Buridan.[43] These developments, centered in European universities, addressed gaps in Aristotle's system by introducing modal syllogisms and theories of supposition (how terms refer in context), enhancing the precision of inferences while maintaining axiomatic premises as foundational.[43] This scholastic tradition preserved and elaborated logical deduction, bridging ancient and modern axiomatic approaches. The transition to the modern era saw early efforts toward formal languages in mathematics by René Descartes and Gottfried Wilhelm Leibniz in the 17th century. Descartes, in his La Géométrie (1637), integrated algebraic symbols with geometric figures, creating a symbolic notation that allowed problems to be expressed as equations, thus laying groundwork for a more formal, language-like treatment of deduction in geometry. 
Leibniz envisioned a characteristica universalis, a universal symbolic language for all sciences where disputes could be resolved by calculation, combining logical axioms with mathematical notation to mechanize reasoning.[44] Despite these advances, early axiomatic systems remained informal, as exemplified by ambiguities in Euclid's Elements, where proofs often relied on unstated assumptions, such as the superposition of figures without explicit justification, and where the deferral of the parallel postulate until late in Book I underscored foundational concerns.[40] These limitations underscored the need for stricter rigor, as geometric deductions sometimes presupposed results or employed intuitive appeals not derivable from the axioms alone.[39]
19th and 20th Century Foundations
In the 19th century, the discovery of non-Euclidean geometries profoundly challenged the perceived universality of Euclid's axiomatic framework, particularly the parallel postulate. Mathematicians such as Nikolai Lobachevsky, who developed hyperbolic geometry in the late 1820s, and János Bolyai, who independently constructed a similar system in 1832, demonstrated that consistent geometries could exist without the Euclidean parallel axiom, while Bernhard Riemann's 1854 work on elliptic geometry further expanded alternatives.[45] These developments, spanning the 1830s to 1860s, highlighted the relativity of axioms and spurred efforts to rigorously distinguish between synthetic and analytic truths in geometry. In response to these developments and the need for greater rigor, David Hilbert published Grundlagen der Geometrie in 1899, providing a complete, independent, and consistent set of axioms for Euclidean geometry that eliminated implicit assumptions in Euclid's work.[45][40] By the late 19th century, axiomatic methods extended to arithmetic and logic through the works of Giuseppe Peano and Gottlob Frege. Frege's 1879 Begriffsschrift introduced a formal symbolic language for predicate logic, enabling precise axiomatization of logical inferences independent of natural language ambiguities.[46] Peano, building on this foundation though unaware of Frege's specifics, published his axioms for the natural numbers in 1889, providing a concise set of postulates that formalized arithmetic operations and induction.[46] These efforts in the 1880s and 1890s marked a shift toward symbolic rigor, laying groundwork for modern foundational studies. These advancements were soon challenged by paradoxes in set theory, most notably Bertrand Russell's paradox in 1901, which revealed contradictions in unrestricted comprehension principles underlying Frege's and Cantor's systems. To address this, Russell, collaborating with Alfred North Whitehead, produced Principia Mathematica (1910–1913), an extensive axiomatic framework deriving mathematics from pure logic using ramified type theory and axioms to circumvent paradoxes.[47][48] David Hilbert's contributions in the early 20th century elevated axiomatic systems to metamathematical scrutiny. At the 1900 International Congress of Mathematicians, Hilbert outlined 23 problems, several addressing foundational issues like the consistency of arithmetic and geometry.[30] His program, formalized in the 1920s, sought finitary proofs of consistency for axiomatic systems using metamathematical methods, aiming to secure mathematics against paradoxes through rigorous, content-free analysis.[30] Kurt Gödel's 1931 incompleteness theorems decisively impacted Hilbert's ambitions by proving that any consistent formal system capable of basic arithmetic contains undecidable propositions—true statements unprovable within the system.[49] This result shattered hopes for absolute consistency proofs, revealing inherent limitations in axiomatic completeness. Complementing this, Alonzo Church and Alan Turing in 1936 proved the undecidability of the Entscheidungsproblem (Turing also established the undecidability of the halting problem), which led to the Church-Turing thesis, positing that effective computability aligns with Turing machine capabilities, thus framing undecidability as a fundamental barrier in formal systems. Post-World War II advancements refined axiomatic foundations through semantic tools and standardized theories.
Alfred Tarski's work from the 1930s onward, culminating in his 1954 proposal of "model theory," provided a framework to interpret axiomatic systems via structures satisfying their sentences, enabling precise studies of isomorphism and elementary equivalence. Concurrently, axiomatic set theory standardized around Zermelo-Fraenkel axioms with choice (ZFC), evolving from Ernst Zermelo's 1908 system through refinements by Abraham Fraenkel and others in the 1920s, becoming the dominant foundation for mathematics by the mid-20th century due to its balance of expressive power and paradox avoidance.[50]
Examples
Peano Axioms for Natural Numbers
The Peano axioms provide a foundational axiomatic system for the natural numbers, defining their structure through a constant for zero, a successor function, and an induction principle. Originally formulated by Giuseppe Peano in 1889 in his work Arithmetices principia, nova methodo exposita, these axioms consist of nine postulates: four concerning equality and five proper axioms for arithmetic.[51] Peano's system built directly on Richard Dedekind's 1888 axiomatization of a "simply infinite system" in Was sind und was sollen die Zahlen?, refining it into a more systematic framework while acknowledging Dedekind's priority.[52] In modern presentations, the axioms use 0 as the starting point rather than Peano's original 1, and they are stated in the language of first-order logic with equality, including the constant symbol 0, the unary function symbol S (successor), and no other non-logical symbols for the basic version. The nine axioms, illustrated by a short computational sketch after the list, are as follows:
- Reflexivity of equality: For every natural number x, x = x.
- Symmetry of equality: For all natural numbers x and y, if x = y, then y = x.
- Transitivity of equality: For all natural numbers x, y, z, if x = y and y = z, then x = z.
- Substitution for equality: For all natural numbers a, b and any property P, if a = b, then P(a) if and only if P(b).
- Zero is a natural number: 0 is a natural number.
- Successor closure: For every natural number n, the successor S(n) is a natural number.
- No zero successor: For every natural number n, S(n) \neq 0.
- Injectivity of successor: For all natural numbers m, n, if S(m) = S(n), then m = n.
- Axiom of induction: If K is a set such that 0 \in K, and for every natural number n, n \in K implies S(n) \in K, then K contains every natural number.
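As a computational sketch of the structure these axioms describe (with addition defined by the usual recursion a + 0 = a and a + S(b) = S(a + b), which goes beyond the basic axioms listed above; the class and function names are illustrative), Peano numerals can be modeled directly:

```python
# Peano numerals: zero is a constant, every other number is built by
# the successor S, and addition is defined by recursion on the second
# argument, mirroring the successor-based structure of the axioms.

class Zero:
    def __repr__(self):
        return "0"

class S:
    def __init__(self, pred):
        self.pred = pred
    def __repr__(self):
        return f"S({self.pred!r})"

def add(a, b):
    """Recursive addition on Peano numerals."""
    if isinstance(b, Zero):
        return a                      # a + 0 = a
    return S(add(a, b.pred))          # a + S(b) = S(a + b)

zero = Zero()
two = S(S(zero))
print(add(two, two))   # S(S(S(S(0)))) -- that is, 4
```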