Consistency
Consistency is the quality of agreement, harmony, or uniformity among parts or features of a whole, often implying coherence, reliability, or the absence of contradictions across related elements or over time.[1] In logical and philosophical contexts, consistency refers to the property of a set of statements or beliefs where no contradictions arise, meaning it is possible for all elements to be true simultaneously without logical conflict.[2] For instance, a theory is consistent if there exists a model or interpretation under which all its axioms hold true.[3]

In psychology, consistency encompasses theories and principles related to cognitive and behavioral alignment, where individuals are motivated to maintain congruence among their attitudes, beliefs, and actions to reduce psychological discomfort.[4] Cognitive consistency theories, such as balance theory and cognitive dissonance, highlight how people strive for internal harmony in their mental states, viewing inconsistencies as sources of tension that drive attitude change or rationalization.[5] This drive extends to ethical decision-making, where consistency is seen as the absence of contradictions in moral reasoning, serving as a foundational principle for reliable ethical conduct.[6]

In computer science and databases, consistency denotes the correctness and uniformity of data states, ensuring that transactions adhere to predefined rules and that all replicas or copies of data remain identical across systems.[7] For example, in distributed systems, strong consistency guarantees that all reads reflect the most recent write, while eventual consistency allows temporary discrepancies that resolve over time.[8] In physics, consistency often manifests as dimensional consistency, a fundamental check where equations must balance in units (e.g., length, time, mass) on both sides to ensure physical validity.[9] These varied applications underscore consistency's role as a core concept in ensuring reliability and coherence across disciplines.
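The contrast between the two guarantees can be illustrated with a minimal, self-contained Python sketch; the two store classes and the explicit replicate() step are illustrative inventions, not the API of any particular database:

```python
class EventuallyConsistentStore:
    """Toy two-replica store: writes land on one replica and
    propagate to the other only when replicate() is called."""

    def __init__(self):
        self.replicas = [{}, {}]
        self.pending = []  # writes acknowledged but not yet propagated

    def write(self, key, value):
        self.replicas[0][key] = value      # acknowledged before replication
        self.pending.append((key, value))

    def read(self, replica_id, key):
        return self.replicas[replica_id].get(key)

    def replicate(self):
        for key, value in self.pending:
            self.replicas[1][key] = value
        self.pending.clear()


class StronglyConsistentStore(EventuallyConsistentStore):
    def write(self, key, value):
        super().write(key, value)
        self.replicate()                   # acknowledge only once all replicas agree


ec = EventuallyConsistentStore()
ec.write("x", 1)
assert ec.read(0, "x") == 1 and ec.read(1, "x") is None  # stale read on replica 1
ec.replicate()
assert ec.read(1, "x") == 1                              # discrepancy has resolved

sc = StronglyConsistentStore()
sc.write("x", 1)
assert sc.read(1, "x") == 1                              # every read sees the latest write
```

Until replicate() runs, the second replica serves a stale read; the strongly consistent variant acknowledges a write only after all replicas agree, trading latency for freshness.

General Concepts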
Definition and Historical Context
In formal logic, consistency is a core property of a formal system that ensures its axioms do not lead to contradictory derivations. A formal system is syntactically consistent if no formula \phi exists such that both \phi and its negation \neg \phi can be derived as theorems from the axioms using the system's rules of inference.[10] This definition emphasizes the syntactic aspect, focusing on what can be proved within the system itself without reference to external interpretations.[11] In contrast, semantic consistency requires that the axioms have a model—an interpretation where all axioms are true—thus precluding any contradiction in that structure.[12] These notions are equivalent in sound and complete systems such as first-order logic, where provability aligns with truth in all models.[13]

The historical development of consistency arose from foundational crises in mathematics during the late 19th and early 20th centuries, particularly in response to paradoxes that exposed flaws in informal reasoning about sets and infinity. Russell's paradox, discovered in 1901, exemplified this by showing that the assumption of a set containing all sets that do not contain themselves leads to a self-referential contradiction, undermining naive set theory.[14] This and similar paradoxes, such as those arising in Cantor's theory of transfinite numbers, highlighted the need for precise axiomatic systems to avoid such inconsistencies and restore rigor to mathematical foundations.[14]

David Hilbert's program, outlined in the 1920s, formalized the quest for consistency by advocating finitary proofs—relying only on concrete, finite methods—to demonstrate that axiomatic systems for mathematics are free of contradictions, thereby justifying the use of ideal infinite processes.[15] In the 1930s, this effort advanced through contributions from key figures: Wilhelm Ackermann, who collaborated with Hilbert on foundational texts exploring consistency in logical systems, and Kurt Gödel, whose incompleteness theorems exposed intrinsic limits on proving the consistency of sufficiently strong systems by finitary means.[11][16] These developments established consistency as a central concern in mathematical logic, influencing subsequent axiomatic frameworks.

Types of Consistency
In mathematical logic, consistency can be categorized along two primary axes: syntactic versus semantic, and absolute versus relative. Syntactic consistency, also known as proof-theoretic consistency, holds for a formal theory if there is no derivation of a contradiction, such as both a formula \phi and its negation \neg \phi, within the system's axioms and rules of inference.[13] Semantic consistency, or model-theoretic consistency, requires that the theory has a model—an interpretation in which all axioms are true—ensuring satisfiability without contradiction.[13] For first-order logic, Gödel's completeness theorem establishes the equivalence between these notions: a theory is syntactically consistent if and only if it is semantically consistent.[13]

Absolute consistency refers to a proof of a theory's consistency that does not depend on the assumed consistency of some other theory. However, Gödel's second incompleteness theorem demonstrates that any consistent formal system capable of expressing basic arithmetic, such as Peano arithmetic, cannot prove its own consistency; absolute consistency proofs for such systems therefore cannot be carried out within the systems themselves.[17] In contrast, relative consistency proofs establish that the consistency of one theory T implies the consistency of another theory S, often by embedding S into T or constructing a model of S from a model of T.[18] These proofs can be syntactic, showing that a contradiction in S would yield one in T, or semantic, demonstrating model existence for S given T's model.[18] A classic example is Gödel's relative consistency proof that the axiom of choice and the continuum hypothesis are consistent relative to Zermelo-Fraenkel set theory (ZF), via the constructible universe model.[17] Similarly, the consistency of ZF is provable in ZF plus the existence of a strongly inaccessible cardinal, since the cumulative hierarchy up to such a cardinal yields a model of ZF.

Beyond these distinctions, specialized notions of consistency arise in specific theories, such as ω-consistency in arithmetic systems like Peano arithmetic. A theory is ω-consistent if it is consistent and does not prove both an existential statement \exists x \, \phi(x) and the negation \neg \phi(n) for every standard natural number n, preventing the system from "falsely witnessing" its own existential claims.[19] This condition, strictly stronger than mere consistency, was used in Gödel's original proof of the first incompleteness theorem; Rosser's theorem later weakened the hypothesis to simple consistency.[16] In arithmetic, ω-consistency ensures that the theory proves no false \Sigma_1 sentences, providing a safeguard against certain pathological proofs while remaining relevant for analyzing provability predicates.[19]
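For the propositional fragment, where the equivalence of syntactic and semantic consistency also holds, consistency can be checked by exhaustive search over truth assignments. The following self-contained Python sketch uses an ad hoc tuple encoding of formulas; the encoding and function names are illustrative choices, not a standard library:

```python
from itertools import product

# Formulas as nested tuples:
#   ("var", "p") | ("not", f) | ("and", f, g) | ("or", f, g) | ("imp", f, g)

def evaluate(formula, assignment):
    """Truth value of a propositional formula under a truth assignment."""
    op = formula[0]
    if op == "var":
        return assignment[formula[1]]
    if op == "not":
        return not evaluate(formula[1], assignment)
    if op == "and":
        return evaluate(formula[1], assignment) and evaluate(formula[2], assignment)
    if op == "or":
        return evaluate(formula[1], assignment) or evaluate(formula[2], assignment)
    if op == "imp":
        return (not evaluate(formula[1], assignment)) or evaluate(formula[2], assignment)
    raise ValueError(op)

def variables(formula):
    """All propositional variables occurring in the formula."""
    if formula[0] == "var":
        return {formula[1]}
    return set().union(*(variables(sub) for sub in formula[1:]))

def is_consistent(theory):
    """Semantic check: a set of formulas is consistent iff some assignment satisfies all of them."""
    vars_ = sorted(set().union(set(), *(variables(f) for f in theory)))
    return any(all(evaluate(f, dict(zip(vars_, vals))) for f in theory)
               for vals in product([False, True], repeat=len(vars_)))

p, q = ("var", "p"), ("var", "q")
assert is_consistent([("imp", p, q), p, q])   # has a model: p = q = True
assert not is_consistent([p, ("not", p)])     # {p, ¬p} is inconsistent
```

The brute-force search is exponential in the number of atoms, so it is viable only for small theories; for first-order theories no such decision procedure exists, since first-order consistency is in general undecidable.

Consistency in First-Order Logic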
Notation and Syntax
In first-order logic, the standard notation includes a set of logical connectives to combine formulas: conjunction (\wedge), disjunction (\vee), negation (\neg), material implication (\to), and material equivalence (\leftrightarrow). Quantifiers are denoted by the universal quantifier (\forall) and the existential quantifier (\exists), which bind variables in their scope. The language also features variables (typically lowercase letters like x, y, z), constant symbols (lowercase letters like a, b, c or domain-specific names), function symbols (e.g., f, g with specified arity), and predicate symbols (e.g., P, Q with arity, representing relations). Equality is often included as a binary predicate =.[20][21]

The syntax defines terms recursively: a term is either a variable, a constant symbol, or a function symbol applied to terms of appropriate arity (e.g., f(t_1, \dots, t_n) where each t_i is a term). Atomic formulas consist of a predicate symbol applied to terms (e.g., P(t_1, \dots, t_n)) or equality between terms (t_1 = t_2). Well-formed formulas (wffs) are built recursively from atomic formulas using connectives and quantifiers: if \phi and \psi are wffs, then so are \neg \phi, (\phi \wedge \psi), (\phi \vee \psi), (\phi \to \psi), and (\phi \leftrightarrow \psi); additionally, if x is a variable, then (\forall x \, \phi) and (\exists x \, \phi) are wffs. Parentheses ensure unambiguous parsing, with standard precedence rules (e.g., \neg binds tightest, followed by quantifiers, then \wedge and \vee, then \to and \leftrightarrow). Sentences are wffs with no free variables.[20][21]

Variables in a wff are either free (occurring outside the scope of any quantifier binding that variable) or bound (within the scope of a matching \forall or \exists). For instance, in \forall x (P(x) \to Q(x)), x is bound, while in P(x) \to Q(y), both x and y are free; substituting a term t for a free variable x in a wff \phi yields \phi[t/x], provided no variable capture occurs (i.e., variables in t are renamed if necessary to avoid binding by quantifiers in \phi). This notation provides the formal basis for expressing and analyzing consistency in first-order theories.[20][21]
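These recursive definitions translate directly into code. The Python sketch below computes the free variables of a first-order formula under an illustrative tuple encoding; the tags and helper names are this article's inventions, not a standard library:

```python
# Terms:    ("var", name) | ("const", name) | ("fun", name, (t1, ..., tn))
# Formulas: ("pred", name, (t1, ..., tn)) | ("eq", t1, t2) | ("not", f)
#           | ("and" / "or" / "imp" / "iff", f, g) | ("forall" / "exists", x, f)

def free_vars(node):
    """Free variables of a term or formula, following the recursive syntax."""
    tag = node[0]
    if tag == "var":
        return {node[1]}
    if tag == "const":
        return set()
    if tag in ("fun", "pred"):
        return set().union(set(), *(free_vars(t) for t in node[2]))
    if tag == "eq":
        return free_vars(node[1]) | free_vars(node[2])
    if tag == "not":
        return free_vars(node[1])
    if tag in ("and", "or", "imp", "iff"):
        return free_vars(node[1]) | free_vars(node[2])
    if tag in ("forall", "exists"):
        return free_vars(node[2]) - {node[1]}  # the quantifier binds its variable
    raise ValueError(f"unknown tag: {tag}")

x, y = ("var", "x"), ("var", "y")
# In forall x (P(x) -> Q(x)), x is bound, so the sentence has no free variables.
assert free_vars(("forall", "x", ("imp", ("pred", "P", (x,)), ("pred", "Q", (x,))))) == set()
# In P(x) -> Q(y), both x and y occur free.
assert free_vars(("imp", ("pred", "P", (x,)), ("pred", "Q", (y,)))) == {"x", "y"}
```

Substitution \phi[t/x] would extend the same recursion, with the quantifier cases renaming bound variables where necessary to avoid capture.

Formal Definition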
In first-order logic, a theory T, consisting of a set of sentences, is consistent if there exists no sentence \phi such that both T \vdash \phi and T \vdash \neg \phi, where \vdash denotes syntactic derivability within the theory.[22] Equivalently, T is consistent if it does not derive a contradiction, such as \bot or \phi \land \neg \phi for some \phi.[23]

A theory T is inconsistent if it is not consistent, meaning there exists some \phi with T \vdash \phi and T \vdash \neg \phi. In such a case, by the principles of classical logic, T derives every sentence \psi; that is, if T is inconsistent, then for any \psi, T \vdash \psi.[22] This property, known as ex falso quodlibet (from falsehood, anything follows), renders an inconsistent theory trivial and incapable of distinguishing truths.[23]

Maximal consistent sets provide a useful extension of this notion: a consistent set T is maximal if no consistent set properly contains it; equivalently, T is closed under logical consequence and, for every sentence \phi, exactly one of \phi and \neg \phi belongs to T. Every consistent theory can be extended to a maximal consistent set, a fact known as Lindenbaum's lemma.[23]
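In the propositional case, where consistency is decidable, the extension to a maximal consistent set can be imitated computationally. The Python sketch below greedily decides each candidate sentence, mirroring the Lindenbaum construction in miniature; the tuple encoding and helper names repeat the illustrative conventions used earlier:

```python
from itertools import product

def holds(f, v):
    """Truth value of a propositional formula f under assignment v."""
    op = f[0]
    if op == "var":
        return v[f[1]]
    if op == "not":
        return not holds(f[1], v)
    if op == "and":
        return holds(f[1], v) and holds(f[2], v)
    if op == "or":
        return holds(f[1], v) or holds(f[2], v)
    raise ValueError(op)

def atoms(f):
    return {f[1]} if f[0] == "var" else set().union(*(atoms(g) for g in f[1:]))

def consistent(theory):
    """A propositional theory is consistent iff some assignment satisfies it."""
    names = sorted(set().union(set(), *(atoms(f) for f in theory)))
    return any(all(holds(f, dict(zip(names, bits))) for f in theory)
               for bits in product([False, True], repeat=len(names)))

def lindenbaum(theory, sentences):
    """Decide each sentence in turn: keep phi if that stays consistent,
    otherwise keep its negation (the Lindenbaum construction in miniature)."""
    decided = list(theory)
    for phi in sentences:
        decided = decided + [phi] if consistent(decided + [phi]) else decided + [("not", phi)]
    return decided

p, q = ("var", "p"), ("var", "q")
maximal = lindenbaum([("or", p, q)], [p, q])
assert consistent(maximal)       # the extension is still consistent
assert ("var", "p") in maximal   # ...and now decides p outright
```

In full first-order logic the same extension exists by Zorn's lemma, but no algorithm can play the role of consistent(), since first-order consistency is undecidable.

Basic Properties and Results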
In first-order logic, a theory T is consistent if and only if it does not prove the falsum \bot, meaning there exists no deduction from T to a contradiction.[24] This syntactic characterization ensures that the theory avoids deriving both a formula \phi and its negation \neg \phi for any \phi.[25]

A fundamental property is the deduction theorem, which establishes that for any theory T, formula \phi, and formula \psi, T \cup \{\phi\} \vdash \psi if and only if T \vdash \phi \to \psi.[24] This equivalence facilitates reasoning about hypothetical assumptions and preserves the structure of proofs across theory extensions.[26] From the deduction theorem follows the preservation of consistency under non-refuting extensions: if T is consistent and T \not\vdash \neg \phi, then T \cup \{\phi\} is also consistent.[27] Equivalently, T \cup \{\phi\} proves \bot only if T already derives \neg \phi, ensuring that adding a formula compatible with T maintains the theory's coherence.[24]

Every consistent theory extends to a complete consistent theory, a result known as Lindenbaum's lemma, via Zorn's lemma applied to the partially ordered set of consistent extensions of T.[27] In this construction, the maximal element is complete, deciding every sentence in the language, and remains consistent.[28] Such extensions underpin results like Henkin's theorem, which constructs models from consistent theories.[25]
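In the propositional case the deduction theorem can be spot-checked semantically by brute force, since there entailment coincides with provability by completeness. A minimal sketch, again with the illustrative tuple encoding:

```python
from itertools import product

def holds(f, v):
    """Truth value of a propositional formula f under assignment v."""
    op = f[0]
    if op == "var":
        return v[f[1]]
    if op == "not":
        return not holds(f[1], v)
    if op == "imp":
        return (not holds(f[1], v)) or holds(f[2], v)
    raise ValueError(op)

def atoms(f):
    return {f[1]} if f[0] == "var" else set().union(*(atoms(g) for g in f[1:]))

def entails(theory, goal):
    """Semantic consequence: every assignment satisfying `theory` satisfies `goal`."""
    names = sorted(set().union(*(atoms(f) for f in theory + [goal])))
    assignments = (dict(zip(names, bits))
                   for bits in product([False, True], repeat=len(names)))
    return all(holds(goal, v) for v in assignments
               if all(holds(f, v) for f in theory))

p, q, r = ("var", "p"), ("var", "q"), ("var", "r")
T = [("imp", p, q)]
# Deduction theorem, semantic form: T together with phi entails psi
# exactly when T entails phi -> psi.
for phi, psi in [(p, q), (q, r), (q, p)]:
    assert entails(T + [phi], psi) == entails(T, ("imp", phi, psi))
```

The loop verifies the equivalence on a few instances rather than proving it; the general statement is a theorem about the proof system.

Henkin's Theorem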
Henkin's theorem, a cornerstone of model theory, asserts that every consistent set of sentences in first-order logic has a model, thereby establishing the semantic completeness of the proof system. Formally, if \Gamma is a consistent set of closed formulas in a first-order language, then there exists a structure \mathcal{M} such that \mathcal{M} \models \phi for all \phi \in \Gamma. This result forms a key part of the broader completeness theorem, linking syntactic consistency directly to the existence of a satisfying interpretation.[29]

The proof relies on the Henkin construction, which extends a given consistent theory to a maximal consistent set enriched with witnesses for existential quantifiers. Beginning with a consistent set \Gamma, one enumerates all closed sentences of the language and extends \Gamma sentence by sentence, adding each sentence or its negation while preserving consistency. Interleaved with this extension, existential commitments are addressed by introducing witnesses: for each sentence of the form \exists x \, \psi(x) accepted into the theory, a fresh constant c is added along with \psi(c), an addition that provably preserves consistency. Iterating this process (transfinitely, if the language is uncountable) produces a "Henkin theory" T that is maximal consistent and satisfies the witness condition: for every existential sentence in T, some witness instantiation is also in T. From this enriched theory, a model is constructed by taking the witness constants (identified whenever T proves them equal) as the domain and reading off the interpretations of the symbols from the sentences of T.[29][30]

Leon Henkin introduced this construction in his 1947 PhD thesis, providing a more intuitive and general alternative to Kurt Gödel's original 1930 proof of completeness, which did not employ explicit witness constants. Henkin's approach proved particularly adaptable for extensions to higher-order logics and non-classical systems. A detailed proof sketch follows in the subsequent section.[29]

Proof Sketch for Henkin's Theorem
The proof of Henkin's theorem constructs a model for a consistent theory in first-order logic; assume for simplicity that the language is countable, so that its sentences can be enumerated.[31]

The first step enumerates all sentences of the language as \sigma_1, \sigma_2, \dots, and iteratively extends the original consistent theory \Gamma by adding witness constants for existential quantifiers. Specifically, for each sentence \sigma_n of the form \exists x \, \phi(x), a new constant c_n is introduced, and the axiom \phi(c_n) is added to the extension, ensuring the theory remains consistent at each finite stage.[30][32]

The second step extends the union of this countable chain to a maximal consistent theory T_m using the Lindenbaum lemma; T_m is consistent, closed under logical consequence, and contains witnesses for all the existential sentences it proves.[31][30]

In the third step, a canonical model is constructed from T_m: the domain consists of the closed terms of the expanded language (including the new constants), identified whenever T_m proves them equal, and the interpretations are defined via the theory's sentences (e.g., t_1 = t_2 holds in the model if T_m \vdash t_1 = t_2, and R(t_1, \dots, t_k) holds if T_m \vdash R(t_1, \dots, t_k)), forming a term model.[32][30]

Finally, satisfaction is verified by induction on the structure of sentences, showing that a sentence is true in this model if and only if it belongs to T_m (the truth lemma); since \Gamma \subseteq T_m, every sentence of the original theory is true in the model, which establishes the theorem.[31]
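The witness step of the construction is easy to mimic in code. The Python sketch below performs one round of Henkin witnessing on formulas in the tuple encoding used earlier; henkinize and substitute are illustrative names, and a real proof would interleave this step with the Lindenbaum extension:

```python
from itertools import count

fresh_constants = (f"c{i}" for i in count())  # c0, c1, ... assumed absent from the language

def substitute(f, x, t):
    """Replace free occurrences of variable x in formula f by term t."""
    tag = f[0]
    if tag == "var":
        return t if f[1] == x else f
    if tag == "const":
        return f
    if tag == "pred":
        return ("pred", f[1], tuple(substitute(a, x, t) for a in f[2]))
    if tag == "not":
        return ("not", substitute(f[1], x, t))
    if tag in ("and", "or", "imp"):
        return (tag, substitute(f[1], x, t), substitute(f[2], x, t))
    if tag in ("forall", "exists"):
        return f if f[1] == x else (tag, f[1], substitute(f[2], x, t))
    raise ValueError(tag)

def henkinize(sentences):
    """One round of witnessing: for each sentence ("exists", x, body),
    add a fresh constant c and the witness axiom body[c/x]."""
    theory = list(sentences)
    for s in sentences:
        if s[0] == "exists":
            c = ("const", next(fresh_constants))
            theory.append(substitute(s[2], s[1], c))
    return theory

exists_P = ("exists", "x", ("pred", "P", (("var", "x"),)))
theory = henkinize([exists_P])
print(theory[-1])  # ('pred', 'P', (('const', 'c0'),)) -- the witness axiom P(c0)
```

Nested existentials inside the added witness axioms would require further rounds, which is why the full construction iterates the step over the whole enumeration.

Consistency in Arithmetic and Set Theory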
Peano Arithmetic
Peano arithmetic (PA) is the canonical first-order axiomatization of the natural numbers, providing a formal foundation for arithmetic within first-order logic.[17] It consists of a language with symbols for zero (0), the successor function (S), addition (+), multiplication (×), logical connectives, quantifiers, and variables ranging over natural numbers. The axioms of PA fall into two categories: a finite set of basic axioms defining the structure of natural numbers and arithmetic operations, and an infinite axiom schema for mathematical induction.[17] The basic axioms are as follows (a short computational sketch of the recursive clauses appears after the list):

- ∀x (x ≠ 0 → ∃y (x = S(y))) (every natural number is either zero or a successor).
- ∀x ¬(S(x) = 0) (zero is not the successor of any natural number).
- ∀x ∀y (S(x) = S(y) → x = y) (the successor function is injective).
- ∀x (x + 0 = x) and ∀x ∀y (x + S(y) = S(x + y)) (recursive definition of addition).
- ∀x (x × 0 = 0) and ∀x ∀y (x × S(y) = (x × y) + x) (recursive definition of multiplication).
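The recursive clauses for addition and multiplication can be exercised directly on the standard model. A minimal Python sketch, using ordinary integers to stand in for numerals (the function names are illustrative):

```python
def S(n):
    """Successor on the intended model, the natural numbers."""
    return n + 1

def add(x, y):
    """Addition via the PA recursion: x + 0 = x and x + S(y) = S(x + y)."""
    return x if y == 0 else S(add(x, y - 1))

def mul(x, y):
    """Multiplication via the PA recursion: x × 0 = 0 and x × S(y) = (x × y) + x."""
    return 0 if y == 0 else add(mul(x, y - 1), x)

# The recursions agree with the built-in operations on an initial segment of the naturals.
assert all(add(a, b) == a + b and mul(a, b) == a * b
           for a in range(10) for b in range(10))
```

Each recursive call peels one successor off the second argument, exactly as the recursion axioms peel S(y).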