Compactness theorem

In mathematical logic, the compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of the set has a model. The theorem holds for classical first-order logic with standard semantics, including first-order logic with equality, and applies to both countable and uncountable sets of sentences. It is a cornerstone of model theory, bridging syntactic consistency (no contradictions derivable) and semantic satisfiability (existence of a structure interpreting the sentences). The theorem was first established by Kurt Gödel in 1930 for countable languages and theories, as part of his work on the completeness of first-order logic. Subsequent proofs, including Leon Henkin's 1949 construction using the completeness theorem, extended it to arbitrary languages and demonstrated its equivalence to weak forms of the axiom of choice, such as the ultrafilter lemma or the Boolean prime ideal theorem. Alfred Tarski introduced the name "compactness theorem" in 1952, drawing an analogy to the topological notion of compactness, in which every open cover of a space admits a finite subcover.

Among its key implications, the compactness theorem enables the construction of non-standard models, such as non-Archimedean ordered fields or expansions of the natural numbers with infinite elements in non-standard analysis. It also supports the upward direction of the Löwenheim–Skolem theorem, ensuring models of every sufficiently large cardinality for theories with infinite models, and facilitates applications in algebra (e.g., the existence of algebraically closed fields of infinite transcendence degree) and model theory (e.g., ultraproducts preserving first-order properties). Notably, compactness fails for stronger logics such as second-order logic and infinitary logics, highlighting a trade-off between compactness and expressive power.

Foundations

Statement

In first-order logic, a formal system for expressing statements about mathematical structures, the key concepts include sentences (closed formulas without free variables), models (structures consisting of a domain and interpretations of the language's symbols that make the sentences true), satisfiability (a set of sentences is satisfiable if there exists at least one structure satisfying all of them), and semantic entailment (denoted by ⊨, where a set Σ of sentences semantically entails a sentence ϕ if every model of Σ is also a model of ϕ). The compactness theorem states that for any set Σ of sentences, Σ has a model if and only if every finite subset Σ₀ ⊆ Σ has a model. This formulation highlights compactness as a finite character property: the satisfiability of an infinite set reduces to checking finite approximations, ensuring that global consistency follows from local consistency. Equivalently, Σ ⊨ ϕ holds if and only if there exists a finite Σ₀ ⊆ Σ such that Σ₀ ⊨ ϕ, linking semantic entailment to finite subtheories.
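In the decidable propositional case, the finite checks in this statement can be carried out mechanically. The following Python sketch (illustrative only; the encoding of sentences as boolean functions and the sample family are invented for this example) brute-forces satisfiability of finite subsets of a family of constraints; genuine compactness concerns infinite sets, which no finite computation can verify directly.

```python
from itertools import product

# Sentences over variables x0..x(n-1), encoded as functions from a truth
# assignment (a tuple of booleans) to a boolean.

def satisfiable(sentences, num_vars):
    """Brute-force check: does some assignment satisfy every sentence?"""
    return any(all(s(a) for s in sentences)
               for a in product([False, True], repeat=num_vars))

# A finitely satisfiable family: "at least k of x0..x9 are true" for k = 0..10.
# Every finite subset has a model (set all variables true), and the whole
# family does too, as compactness predicts.
family = [lambda a, k=k: sum(a) >= k for k in range(11)]

for size in range(1, len(family) + 1):
    assert satisfiable(family[:size], num_vars=10)
print("all finite initial segments (and the full family) are satisfiable")
```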

Prerequisites

First-order logic (FOL) is the foundational framework for the compactness theorem, providing the syntactic and semantic structures necessary for expressing mathematical theories and their interpretations. A first-order language consists of a signature, a collection of non-logical symbols comprising constant symbols (for specific objects), function symbols (of various arities, for operations on objects), and predicate symbols (of various arities, for relations between objects), together with logical symbols such as variables, connectives (negation \neg, conjunction \land, disjunction \lor, implication \to, biconditional \leftrightarrow), quantifiers (\forall for universal, \exists for existential), and equality (=) if included. Terms in the language are built recursively from variables and constants using function symbols; for example, if f is a unary function symbol and c a constant, then f(c) is a term. Formulas are finite expressions formed from atomic formulas (such as P(t_1, \dots, t_n) for a predicate P and terms t_i, or t_1 = t_2) by applying connectives and quantifiers binding variables; a formula is a sentence if it contains no free variables.

Semantically, a structure (or model) \mathcal{M} for a language interprets the signature over a non-empty domain D (the universe of discourse), assigning to each constant an element of D, to each n-ary function symbol a function from D^n to D, and to each n-ary predicate symbol a subset of D^n. The satisfaction relation \mathcal{M} \models \phi[\sigma] holds if the formula \phi is true in \mathcal{M} under a variable assignment \sigma mapping the free variables of \phi to elements of D; for sentences, assignments are irrelevant, so we write \mathcal{M} \models \phi. A model of a theory T (a set of sentences) is a structure \mathcal{M} such that \mathcal{M} \models \psi for every \psi \in T.

A theory T in FOL is, in some contexts, a set of sentences closed under logical consequence, but more generally any collection of sentences intended to axiomatize a class of structures. Consistency of T is a syntactic notion: T is consistent if there is no sentence \theta such that both \theta and \neg \theta are derivable from T using the rules of inference (e.g., modus ponens, quantifier rules). In contrast, satisfiability is semantic: T (or a sentence \phi) is satisfiable if there exists a model \mathcal{M} such that \mathcal{M} \models T (respectively, \mathcal{M} \models \phi). By Gödel's completeness theorem for FOL, a theory is consistent if and only if it is satisfiable, linking syntax and semantics.

Many important theories in mathematics are finitely axiomatizable, meaning they can be defined by a finite set of axioms from which all theorems follow via logical deduction; examples include the axioms for groups or fields. Other theories require infinitely many axioms to capture their intended models, such as the theory of algebraically closed fields of a fixed characteristic, which needs, for each degree, an axiom asserting that polynomials of that degree have roots. The compactness theorem addresses scenarios involving infinite theories or infinite sets of sentences, highlighting the interplay between finite subsets (which may be satisfiable) and the entire infinite collection. This distinction underscores why compactness is a non-trivial bridge between local (finite) and global (infinite) properties in FOL.
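The syntactic and semantic notions above can be made concrete in code. The following is a minimal Python sketch, with invented class and dictionary names, of terms, formulas, structures, and the satisfaction relation over a finite domain; it covers only a fragment (negation, conjunction, universal quantification) for brevity.

```python
from dataclasses import dataclass

# A minimal fragment of first-order syntax: terms are variables or constants;
# formulas are built from atomic predicates with Not, And, and ForAll.

@dataclass(frozen=True)
class Var: name: str
@dataclass(frozen=True)
class Const: name: str
@dataclass(frozen=True)
class Atom: pred: str; args: tuple
@dataclass(frozen=True)
class Not: sub: object
@dataclass(frozen=True)
class And: left: object; right: object
@dataclass(frozen=True)
class ForAll: var: str; body: object

def eval_term(t, structure, assignment):
    if isinstance(t, Var):
        return assignment[t.name]
    return structure["consts"][t.name]          # Const case

def satisfies(structure, phi, assignment=None):
    """The satisfaction relation M |= phi[sigma], by recursion on phi,
    for a structure with a finite domain."""
    a = assignment or {}
    if isinstance(phi, Atom):
        vals = tuple(eval_term(t, structure, a) for t in phi.args)
        return vals in structure["preds"][phi.pred]
    if isinstance(phi, Not):
        return not satisfies(structure, phi.sub, a)
    if isinstance(phi, And):
        return satisfies(structure, phi.left, a) and satisfies(structure, phi.right, a)
    if isinstance(phi, ForAll):
        return all(satisfies(structure, phi.body, {**a, phi.var: d})
                   for d in structure["domain"])

# A structure M = ({0,1,2}, <): the sentence "forall x, not x < x" holds.
M = {"domain": {0, 1, 2}, "consts": {},
     "preds": {"<": {(i, j) for i in range(3) for j in range(3) if i < j}}}
irreflexivity = ForAll("x", Not(Atom("<", (Var("x"), Var("x")))))
print(satisfies(M, irreflexivity))  # True
```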

History

Early Developments

The roots of the compactness theorem trace back to the early twentieth century, intertwined with David Hilbert's program for the foundations of mathematics and the associated Entscheidungsproblem. Hilbert's program, outlined in his lectures and writings from the 1920s, sought to establish the consistency of mathematical systems using finitary methods, while the Entscheidungsproblem posed the challenge of devising an algorithm to determine the truth or falsity of any mathematical statement in a formal system. These efforts emphasized the need to handle consistency and satisfiability in logical systems, particularly for infinite sets of axioms, laying groundwork for later developments in model theory.

Early contributions to model theory and satisfiability came from Leopold Löwenheim and Thoralf Skolem. In 1915, Löwenheim proved that if a sentence in a finite relational language is satisfiable, then it has a countable model, marking the first Löwenheim–Skolem theorem and highlighting the existence of models for consistent theories. Skolem extended this in 1920 with a simplified proof applicable to countable sets of sentences, using the axiom of choice to ensure models exist, which further underscored the limitations and possibilities of infinite structures in logic. These results provided initial insights into how finite checks could imply properties for larger systems, though they were restricted to specific language cardinalities.

Kurt Gödel's work in 1929–1930 represented a pivotal precursor to the compactness theorem. His doctoral dissertation, published in 1930, established the completeness theorem for first-order logic, demonstrating that every consistent set of sentences has a model, though this held only for countable languages and theories. Gödel explicitly included the compactness theorem for countable languages in the same paper, stating that a countably infinite set of formulas is satisfiable if and only if every finite subset is satisfiable. This addressed motivations from Hilbert's program by linking provability to semantic satisfiability, yet it left gaps for uncountable cases, motivating further exploration.

In the 1930s, motivations from semantics and set theory, particularly Alfred Tarski's investigations into truth and definability, highlighted challenges with infinite axiom sets. Tarski's 1933 work on the concept of truth in formalized languages demonstrated the undefinability of truth within sufficiently strong theories, exposing paradoxes arising from self-referential constructions and emphasizing the need for tools to manage infinite languages without collapsing into inconsistency. This work, alongside Tarski's contributions to set-theoretic models, underscored issues in axiomatizing infinite domains, paving the way for the realization that finite consistency could imply global consistency in broader contexts. By the late 1930s, these ideas converged in the recognition that compactness principles could extend finite verifications to infinite theories, influencing subsequent formalizations.

Formalization and Proofs

In the 1930s and 1940s, Alfred Tarski played a central role in formalizing the compactness theorem, initially expressing it through the properties of logical consequence and related concepts in deductive systems. Tarski's 1936 paper "On the Concept of Logical Consequence" provided the first precise semantic definition of consequence, stating that a sentence X follows from a set Γ of sentences if every model satisfying Γ also satisfies X, and explicitly noting that this relation satisfies compactness: if X follows from Γ, then X follows from some finite subset of Γ. This formulation established the theorem as a fundamental property of logical consequence, linking infinite sets of premises to finite approximations via filter-like conditions on the consequence operation.

Tarski's early proofs of compactness relied on syntactic methods, particularly in collaboration with Adolf Lindenbaum. In their 1938 work, Tarski outlined a proof using Lindenbaum's lemma, which asserts that any consistent set of sentences can be extended to a maximal consistent set, one that is complete and closed under logical consequence. The approach involves assuming a set Γ is finitely satisfiable but not satisfiable, extending it to a maximal consistent set via the lemma, and deriving a contradiction by showing that the maximal set's completeness implies the existence of a model, thereby confirming that finite satisfiability extends to the whole set without delving into model construction details. This syntactic proof highlighted the theorem's reliance on maximal consistent sets, which behave as ultrafilters in the algebra of formulas.

The name "compactness theorem" was adopted later, under Tarski's influence, in 1952, drawing an explicit analogy to topological compactness, where every open cover admits a finite subcover, paralleling the finite subset condition in satisfiability. During the 1940s, Anatoly Malcev advanced the theorem in algebra, proving compactness for infinite algebraic systems and extending it to uncountable languages in his 1941 investigations, which integrated the result into the study of quasivarieties and filter properties of congruences. Key publications from 1934 to 1949 solidified the theorem's place in mathematical logic: early extensions via Lindenbaum's lemma (developed c. 1926–1927); Tarski's 1936 consequence paper; the 1938 Lindenbaum–Tarski proof; Malcev's 1941 algebraic generalization; and the 1949 works of Leon Henkin and Abraham Robinson, whose completeness proofs for first-order calculi established the theorem for languages of arbitrary cardinality.

Proof Techniques

Henkin Construction

The Henkin construction offers a constructive proof of the compactness theorem in first-order logic, demonstrating that if a theory T in a countable language is finitely consistent (meaning every finite subset of T has a model), then T itself has a model. This method, developed by Leon Henkin, proceeds by iteratively expanding T into a larger theory that includes witnesses for all existential quantifiers while preserving consistency, ultimately yielding an explicit model. The approach is particularly suited to countable languages, where the process can be carried out in countably many steps.

Begin with a consistent theory T in a countable language L; note that finite satisfiability already implies consistency, since any derivation of a contradiction would use only finitely many sentences of T. Introduce a countable set C = \{c_n \mid n \in \mathbb{N}\} of new constant symbols, forming an expanded language L' = L \cup C. Enumerate all sentences of L' as \{\psi_n \mid n \in \mathbb{N}\}, noting that since L is countable, so is L'. Construct a sequence of theories T_n inductively, starting with T_0 = T. For each n, if T_n \cup \{\psi_n\} is consistent, set T_{n+1} = T_n \cup \{\psi_n\}; otherwise, set T_{n+1} = T_n. The resulting T_\omega = \bigcup_{n<\omega} T_n remains consistent, because any finite subset of T_\omega is contained in some T_n and each step preserves consistency by construction. This T_\omega is maximally consistent in L', meaning that for every sentence \phi of L', exactly one of \phi or \neg \phi belongs to T_\omega.

To ensure witnesses for existential sentences, modify the construction to form a Henkin theory: whenever an existential sentence \exists x \, \phi(x) is added at some stage, also add \phi(c) for a fresh constant c \in C not occurring in the theory so far. Consistency is maintained via the key witness lemma: if \Gamma is consistent, \exists x \, \phi(x) \in \Gamma, and the constant c does not occur in \Gamma, then \Gamma \cup \{\phi(c)\} is consistent. Indeed, if \Gamma \cup \{\phi(c)\} were inconsistent, then \Gamma \vdash \neg \phi(c); since c does not occur in \Gamma, the generalization rule (valid in first-order logic) yields \Gamma \vdash \forall x \, \neg \phi(x), contradicting the consistency of \Gamma together with \exists x \, \phi(x). This step-by-step addition ensures that in the final Henkin theory T_H, for every existential sentence \exists x \, \phi(x) \in T_H there is some constant c \in C such that \phi(c) \in T_H. The theory T_H remains maximally consistent and contains T.

With T_H in hand, construct a model \mathcal{M} = (M, I) explicitly. Let M = C / \sim, where c \sim d if and only if T_H \vdash c = d, and take the equivalence classes under this relation as the elements of the domain. Interpret each constant of C as its own equivalence class, and interpret the function and relation symbols of L according to provability in T_H (e.g., for a unary function symbol f, set f^\mathcal{M}([d]) = [c] whenever T_H \vdash f(d) = c). One then checks by induction on formula complexity that \mathcal{M} \models \phi if and only if \phi \in T_H, with the witness property ensuring that existential sentences are realized in M. Thus \mathcal{M} is a model of T, proving compactness for countable languages.
For uncountable languages, extend the construction transfinitely: if the language has cardinality \kappa, introduce \kappa-many new constants and build the Henkin theory over ordinals \xi < \kappa^+ (or a suitable regular cardinal), taking unions at limit stages and preserving consistency at each stage using the witness lemma generalized to the expanded language. The resulting T_H is consistent, and the same term-model construction yields a model of cardinality at most that of the expanded language (at most \aleph_0 in the countable case), ensuring compactness holds for languages of arbitrary cardinality. This extension appears in Henkin's work on non-denumerable languages. The method's advantage lies in its explicitness: unlike abstract proofs, it directly builds the model from the theory, facilitating applications in countable settings where the domain M is explicitly describable.
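The Lindenbaum extension step T_{n+1} = T_n \cup \{\psi_n\} can be illustrated in the propositional case, where consistency is decidable by brute force. The Python sketch below is an illustration only: the names are invented, semantic satisfiability stands in for syntactic consistency (they coincide by completeness), and the first-order witness step is omitted.

```python
from itertools import product

# Sentences are boolean functions of an assignment to variables x0..x(N-1).
N = 4  # number of propositional variables

def consistent(sentences):
    """Semantic stand-in for syntactic consistency: some assignment works."""
    return any(all(s(a) for s in sentences)
               for a in product([False, True], repeat=N))

def lindenbaum(T, enumeration):
    """Extend T by each enumerated sentence that keeps the set consistent,
    mirroring T_{n+1} = T_n + {psi_n} if consistent, else T_{n+1} = T_n."""
    current = list(T)
    for psi in enumeration:
        if consistent(current + [psi]):
            current.append(psi)
    return current

# Enumerate all literals x_i and not-x_i; the result decides every variable,
# the propositional shadow of maximal consistency.
literals = ([lambda a, i=i: a[i] for i in range(N)]
            + [lambda a, i=i: not a[i] for i in range(N)])
T0 = [lambda a: a[0] or a[1]]          # a starting "theory"
T_max = lindenbaum(T0, literals)
print(len(T_max), "sentences; still consistent:", consistent(T_max))
```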

Ultrafilter Method

Ultrafilters are maximal filters on a power set or Boolean algebra, playing a central role in proofs of the compactness theorem through their use in constructing consistent valuations and models. A filter \mathcal{F} on a set I is a non-empty collection of subsets of I that is closed under finite intersections and upward closed, containing I but not \emptyset. An ultrafilter U on I extends a filter by being maximal, satisfying the property that for every subset A \subseteq I, either A \in U or I \setminus A \in U. Principal ultrafilters are those generated by a single element j \in I, consisting of all subsets containing j; they correspond to "point masses" in the construction. Non-principal ultrafilters, in contrast, contain no finite sets and exist on infinite sets only granted a weak form of the axiom of choice; they enable "uniform" limits over infinite index sets without favoring particular elements. The ultrafilter lemma asserts that every filter on a Boolean algebra (or power set) can be extended to an ultrafilter, a consequence weaker than the full axiom of choice but equivalent to the Boolean prime ideal theorem. This lemma is pivotal for the ultrafilter method, as it guarantees the existence of maximal extensions needed to define consistent truth assignments in logic.

In propositional logic, the application proceeds by mapping sentences to the Lindenbaum algebra, the free Boolean algebra generated by the propositional variables modulo logical equivalence, with operations \land, \lor, and \lnot. Given a set \Gamma of propositional sentences where every finite subset is satisfiable, form the filter generated by the equivalence classes of finite conjunctions from \Gamma. By the ultrafilter lemma, extend this to an ultrafilter U on the Lindenbaum algebra B. This ultrafilter defines a homomorphism h: B \to \{0,1\} by h(b) = 1 if b \in U and h(b) = 0 otherwise, where \{0,1\} is the two-element Boolean algebra with 0 < 1. The homomorphism property follows from the closure of U under meets and complements, yielding a valuation that satisfies all sentences in \Gamma, since no finite conjunction maps to 0.

The topological perspective interprets this via Stone duality: the Stone space of the Lindenbaum algebra B is the compact Hausdorff space whose points are the ultrafilters on B, with topology generated by the clopen sets \{ U' \mid b \in U' \} for b \in B. Compactness of the Stone space follows from Tychonoff's theorem applied to the product \{0,1\}^B, ensuring every open cover has a finite subcover. Each point (ultrafilter) of this space corresponds to a homomorphism to \{0,1\}, hence to a model (truth assignment) of the algebra; the existence of such a point extending the filter generated by \Gamma yields a model of \Gamma, so the space's compactness mirrors the finite satisfiability condition.

For first-order logic, the key theorem adapts this idea: if every finite subset of a set \Gamma of sentences is satisfiable, an ultrafilter yields a model via an ultraproduct. Specifically, let I be an index set (e.g., the natural numbers for countable \Gamma), with structures M_i such that each finite subtheory is realized in sufficiently many M_i. Extending the Fréchet filter of cofinite sets to a non-principal ultrafilter U on I, the ultraproduct \prod_{i \in I} M_i / U is formed from the equivalence classes [f] = \{ g \mid \{ i \mid f(i) = g(i) \} \in U \}. Łoś's theorem states that for any first-order formula \phi(\bar{v}) and tuple [f_1], \dots, [f_n], \prod M_i / U \models \phi([f_1], \dots, [f_n]) \iff \{ i \in I \mid M_i \models \phi(f_1(i), \dots, f_n(i)) \} \in U.
For each \phi \in \Gamma (with \Gamma enumerated so that M_i satisfies the first i sentences), the set of indices where M_i \models \phi is cofinite, hence in U, so the ultraproduct satisfies all of \Gamma. This derives the model non-constructively from the ultrafilter, in contrast to explicit constructions such as Henkin's.
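Non-principal ultrafilters cannot be exhibited by any finite computation, but the defining axioms can be checked mechanically on a finite index set, where every ultrafilter is principal. The Python sketch below (an illustration with invented helper names, not a proof device) verifies the ultrafilter axioms for a principal ultrafilter and evaluates a Łoś-style condition, deciding a formula in a product of finite rings by whether the set of coordinates where it holds belongs to U.

```python
from itertools import chain, combinations

def powerset(I):
    s = list(I)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def is_ultrafilter(U, I):
    """Check the ultrafilter axioms on a finite index set I (where every
    ultrafilter is principal, so this only illustrates the definitions)."""
    U = {frozenset(A) for A in U}
    if frozenset() in U or frozenset(I) not in U:
        return False
    for A in U:
        for B in U:
            if A & B not in U:                      # closed under meets
                return False
    for A in powerset(I):
        up = any(A >= B for B in U)                 # upward-closure check
        if (A in U) != up:
            return False
        if (A in U) == (frozenset(I) - A in U):     # maximality: exactly one
            return False
    return True

I = {0, 1, 2}
U = {A for A in powerset(I) if 1 in A}   # principal ultrafilter at 1
print(is_ultrafilter(U, I))              # True

# Los-style condition with structures M_i = Z/(i+2): the equation 1 + 1 = 0
# holds in the U-product iff the set of coordinates where it holds is in U.
mods = [2, 3, 4]
holds = frozenset(i for i in I if (1 + 1) % mods[i] == 0)
print(holds in U)  # False: 1 + 1 != 0 in Z/3, the coordinate U selects
```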

Applications

Model Existence

The compactness theorem directly implies the existence of models for theories with infinitely many axioms, as long as the theory is finitely satisfiable, meaning every finite subset has a model. This is particularly significant for foundational theories like Peano arithmetic, which includes an infinite induction schema alongside finitely many other axioms; since any finite collection of these axioms is satisfied by the standard natural numbers, compactness guarantees that the full theory admits a model. Likewise, Zermelo–Fraenkel set theory with the axiom of choice (ZFC) features infinite schemas such as replacement and separation, but every finite subset of its axioms is satisfiable (by the reflection principle, each finite fragment holds in some level of the cumulative hierarchy V), so the entire theory has a model via compactness.

A representative example is the first-order theory of dense linear orders without endpoints, formulated in the language with a single binary relation symbol < and axiomatized by the finite set of sentences expressing totality, irreflexivity, transitivity, density (\forall x \forall y (x < y \to \exists z (x < z \land z < y))), and the absence of endpoints (\forall x \exists y (x < y) and \forall x \exists y (y < x)). Every finite subset of these axioms is satisfiable, as each is satisfied by the rational numbers \mathbb{Q} under the standard order, so compactness yields a model for the full theory; the real numbers \mathbb{R} also provide such a model under the standard order.

In the context of categoricity, compactness (together with the Löwenheim–Skolem theorems) ensures that no first-order theory with an infinite model is categorical outright: such a theory has models of different infinite cardinalities, hence non-isomorphic models, even though it may still be categorical in particular cardinalities. This property underscores the limitations of first-order logic in uniquely characterizing infinite structures. Moreover, compactness implies that every consistent first-order theory T with an infinite model, in a language of cardinality \lambda, has models of every cardinality \kappa \geq \lambda; this follows by extending the language with \kappa new constant symbols, adding sentences asserting their pairwise distinctness to force a large domain, and applying compactness to the resulting consistent theory, yielding a model whose underlying set can be taken to have size \kappa.
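As a worked check of the density axiom in the standard model \mathbb{Q}, the following Python sketch (illustrative; the sample set is arbitrary) verifies finitely many instances with exact rational arithmetic, the midpoint serving as the existential witness. Compactness itself, of course, concerns the full infinite theory rather than such finite spot-checks.

```python
from fractions import Fraction
from itertools import combinations

# Spot-check density in Q on a finite sample, with exact arithmetic:
# for each sampled pair x < y, the midpoint witnesses "exists z, x < z < y".
sample = sorted({Fraction(n, d) for n in range(-3, 4) for d in range(1, 4)})

for x, y in combinations(sample, 2):   # pairs of a sorted list satisfy x < y
    z = (x + y) / 2
    assert x < z < y
print("density verified on all sampled pairs of Q")
```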

Non-Standard Structures

The compactness theorem enables the construction of saturated models via ultraproducts, providing non-standard structures that realize all consistent types over parameter sets of appropriate cardinalities. For a first-order theory T in a language of cardinality \kappa, the compactness theorem implies the existence of \kappa^+-saturated models, as finite approximations of types can be consistently extended. Ultrapowers of a model M with respect to a non-principal ultrafilter on a countable index set are \aleph_1-saturated, and suitably chosen ("good") ultrafilters on larger index sets yield higher degrees of saturation.

In non-standard models of arithmetic, compactness constructs extensions of the natural numbers \mathbb{N} that include infinite elements while satisfying every first-order sentence true in \mathbb{N}. To obtain such a model, adjoin a new constant symbol c to the language and consider the theory consisting of the complete first-order theory of \mathbb{N} plus the infinite set of sentences \{c > \overline{n} \mid n \in \mathbb{N}\}, where \overline{n} denotes the standard numeral for n. Every finite subset of this theory is satisfiable in \mathbb{N} by choosing a sufficiently large standard interpretation for c, so compactness guarantees a model in which c is an infinite natural number. The transfer principle then asserts that a first-order formula \phi(x_1, \dots, x_n) with parameters from \mathbb{N} holds in the standard model if and only if it holds in the non-standard model: \mathbb{N} \models \phi(a_1, \dots, a_n) \iff {}^*\mathbb{N} \models \phi(a_1, \dots, a_n) for standard a_i \in \mathbb{N}.

Abraham Robinson developed non-standard analysis in the 1960s, employing compactness to rigorously formalize infinitesimals within first-order extensions of standard mathematical structures. This framework revives infinitesimal methods from early calculus by constructing models in which infinitesimals are non-zero elements smaller than every positive standard real number. A central example is the field of hyperreal numbers {}^*\mathbb{R}, obtained as an ultrapower of \mathbb{R} modulo a non-principal ultrafilter on \mathbb{N}. The hyperreals satisfy the first-order theory of real closed fields but contain infinite elements (larger than all standard reals) and infinitesimal elements (positive but smaller than all positive standard reals), enabling non-standard treatments of continuity, derivatives, and integrals. The transfer principle underpins non-standard analysis in the hyperreals: a first-order sentence \phi (possibly with parameters from \mathbb{R}) is true in \mathbb{R} if and only if it is true in {}^*\mathbb{R}. Saturation properties of {}^*\mathbb{R} ensure that every consistent type over a countable parameter set from {}^*\mathbb{R} is realized internally, facilitating the extension of standard theorems to non-standard contexts while preserving first-order equivalences.
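The finite satisfiability step in this construction is elementary enough to demonstrate directly. The following Python sketch (illustrative; the sampling scheme is arbitrary) interprets the new constant c in \mathbb{N} for randomly chosen finite subsets of \{c > \overline{n} \mid n \in \mathbb{N}\}, showing why every finite subset has a standard model even though the full set does not.

```python
import random

# Each finite subset of {c > n : n in N} is satisfiable in the standard model:
# interpret the new constant c as one more than the largest n mentioned.
# Only the full infinite set forces c to be an infinite (non-standard)
# element, which is exactly what compactness supplies.

def witness_for(finite_subset):
    """An interpretation of c in N satisfying every c > n in the subset."""
    return max(finite_subset, default=0) + 1

random.seed(0)
for _ in range(5):
    S = {random.randrange(10**6) for _ in range(random.randrange(1, 8))}
    c = witness_for(S)
    assert all(c > n for n in S)
print("every sampled finite subset of {c > n} has a standard model")
```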

Infinite Models

The compactness theorem plays a crucial role in the upward Löwenheim–Skolem theorem, enabling the construction of models of arbitrarily large infinite cardinality for theories with infinite models. Specifically, if a first-order theory T in a countable language has an infinite model, then for any infinite cardinal \kappa, T has a model of cardinality exactly \kappa. To achieve this, expand the language with \kappa new constant symbols \{c_\alpha \mid \alpha < \kappa\} and add the sentences c_\alpha \neq c_\beta for all \alpha \neq \beta. Every finite subset of this expanded theory is satisfiable in an infinite model of T by assigning distinct elements to the finitely many constants involved; thus, by compactness, the entire expanded theory has a model \mathcal{M}. Since the language now has cardinality \kappa, the downward Löwenheim–Skolem theorem yields an elementary submodel \mathcal{N} \preceq \mathcal{M} with |\mathcal{N}| = \kappa, which satisfies T and interprets the constants as distinct elements, ensuring exactly \kappa elements in total.

In the downward direction, every infinite model of a theory in a countable language admits countable elementary submodels. For an infinite structure \mathcal{M} \models T, one closes a countable subset of \mathcal{M} under witnesses for the first-order formulas (the Tarski–Vaught test); the resulting countable elementary submodel exists via the downward Löwenheim–Skolem theorem and preserves all first-order properties. This result underscores how compactness and the Löwenheim–Skolem theorems together facilitate embeddings and substructure preservation in infinite settings.

The omitting types theorem further illustrates compactness's role in controlling the properties of infinite models. For a countable complete theory T and a non-principal type p(\bar{x}) over T, there exists a countable model of T that omits p, meaning no tuple in the model realizes p. The proof, due to Henkin and Keisler, applies compactness within a Henkin-style construction, extending T consistently and adding at each stage sentences that prevent a given tuple of Henkin constants from realizing all the formulas of p, so the final model omits p. This theorem allows selective realization of types in countable models while ensuring infinite structures avoid unwanted behaviors.

A concrete application appears in the theory of algebraically closed fields (ACF). The theory ACF, axiomatized by the field axioms plus sentences ensuring that every polynomial of degree n \geq 1 has a root, admits models of any specified characteristic k (including 0) and any infinite transcendence degree \lambda over the prime field. To obtain such a model, add constants for \lambda algebraically independent elements and axioms forcing characteristic k (e.g., 1 + \cdots + 1 = 0 with p summands for k = p > 0); finite subsets are satisfiable in known algebraically closed fields such as \mathbb{C} or \overline{\mathbb{F}_p}, so compactness produces the desired model of transcendence degree \lambda and cardinality \lambda + \aleph_0. This demonstrates compactness's power in generating infinite models with precise algebraic dimensions. More generally, if a theory \Sigma has models of every infinite cardinality \mu \leq \kappa, then \Sigma has a model of cardinality exactly \kappa.
This follows by expanding \Sigma with \kappa new pairwise distinct constants (as in the upward Löwenheim–Skolem construction) and applying compactness to ensure consistency, then taking an elementary submodel of cardinality \kappa to match the target size precisely, leveraging the existence of smaller models to satisfy the finite approximations.
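The finite step of this argument is again mechanical. The following Python sketch (illustrative; the helper names are invented) assigns pairwise distinct domain elements to the finitely many new constants mentioned in a finite subtheory, which is all that a single application of the finite satisfiability condition requires.

```python
from itertools import count, islice

# The finite step behind the upward Löwenheim–Skolem argument: any finite set
# of distinctness axioms c_a != c_b among the new constants is satisfiable in
# an infinite model, by mapping the finitely many mentioned constants to
# pairwise distinct elements of the domain.

def interpret_constants(mentioned, domain):
    """Map the constants mentioned in a finite subtheory to distinct elements
    of an (assumed infinite) domain, given as an iterator."""
    return dict(zip(sorted(mentioned), islice(domain, len(mentioned))))

mentioned = {"c_0", "c_7", "c_42"}                 # constants in a finite subset
interp = interpret_constants(mentioned, count())   # N as the infinite domain
assert len(set(interp.values())) == len(interp)    # pairwise distinct
print(interp)
```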
