
Undecidable problem

In computability theory, an undecidable problem is a decision problem for which no algorithm exists that can determine, for every possible input, whether the answer is yes or no in a finite amount of time. These problems highlight fundamental limits of computation, as formalized by the Church-Turing thesis, which posits that Turing machines capture the full extent of what is algorithmically computable. The most famous example is the halting problem, which asks whether a given Turing machine will halt (terminate) on a specified input; Alan Turing proved its undecidability in 1936 by demonstrating that assuming a solution leads to a contradiction via a diagonalization argument. This result, building on Kurt Gödel's 1931 incompleteness theorems showing that formal systems like Peano arithmetic cannot prove all true statements about natural numbers, established that certain mathematical questions are inherently unresolvable by mechanical means.

Turing's work specifically addressed Hilbert's Entscheidungsproblem, proving that no general algorithm exists to determine the validity of all statements in first-order logic. Undecidable problems extend beyond theoretical models to practical implications in computer science, including the inability to automatically verify program termination or equivalence in general. Other notable examples include determining whether a Diophantine equation has integer solutions (Hilbert's tenth problem, proven undecidable by Yuri Matiyasevich in 1970) and checking if two context-free grammars generate the same language. These results underscore that while many problems are decidable within bounded resources, an infinite hierarchy of undecidability exists, influencing fields from program verification to the philosophy of mathematics.

Foundations and Definitions

Core Definition

In computability theory, an undecidable problem is a decision problem—a yes-or-no question about inputs—for which no algorithm exists that can correctly determine the answer for every possible input in a finite number of steps. Such problems are formalized using Turing machines, the abstract model of computation introduced by Alan Turing in 1936. Specifically, a decision problem is represented as a language L \subseteq \Sigma^*, where \Sigma is a finite alphabet and the strings in L encode the inputs yielding a "yes" answer. A language L is decidable if there exists a Turing machine M that halts on every input x \in \Sigma^* and accepts x if and only if x \in L; otherwise, L is undecidable. This formalization captures the intuitive notion that undecidability imposes fundamental limits on what can be algorithmically solved, distinguishing it from merely computationally hard (but decidable) problems like those in complexity classes such as NP-complete.

Undecidability in computability theory focuses on decision problems amenable to algorithmic resolution, in contrast to undecidable statements in mathematical logic, which are propositions neither provable nor disprovable within a consistent formal system capable of expressing basic arithmetic. For instance, Gödel's incompleteness theorems demonstrate the existence of such statements in systems like Peano arithmetic, but these concern provability rather than algorithmic termination or acceptance. The halting problem, which asks whether a given Turing machine halts on a specified input, exemplifies an undecidable decision problem in this sense.

Related to undecidability are computably enumerable (or recursively enumerable) sets, which provide insight into why some problems are semi-decidable but not fully decidable. A set S \subseteq \mathbb{N} is recursively enumerable if there exists a Turing machine that halts precisely on inputs in S (possibly looping forever on inputs not in S), meaning S is the domain of a partial recursive function. The complement of a recursively enumerable set is not necessarily recursively enumerable, and a set is decidable if and only if both it and its complement are recursively enumerable. Thus, many undecidable problems involve languages that are recursively enumerable but whose complements are not, allowing verification of "yes" instances while "no" instances may require unbounded search.
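
The last fact can be made concrete. The following Python sketch assumes hypothetical step-bounded recognizers semi_yes and semi_no standing in for semi-deciders of a set and its complement; dovetailing the two unbounded searches yields a procedure that always halts, which is the standard proof that a set is decidable exactly when it and its complement are both recursively enumerable.

```python
def decide_by_dovetailing(semi_yes, semi_no, x):
    """Decide membership of x in S, given semi-deciders for S and its complement.

    semi_yes(x, steps) / semi_no(x, steps) are step-bounded stand-ins for
    unbounded searches: each returns True once its search would have
    succeeded within `steps` steps. If S and its complement are both
    recursively enumerable, exactly one search eventually succeeds, so
    this loop always terminates.
    """
    steps = 1
    while True:
        if semi_yes(x, steps):
            return True       # witness found: x is in S
        if semi_no(x, steps):
            return False      # witness found: x is in the complement
        steps += 1            # dovetail: grant both searches a larger budget

# Toy stand-ins (assumed encodings): S = even naturals; each "search"
# succeeds only once its budget exceeds x, mimicking delayed halting.
semi_even = lambda x, steps: x % 2 == 0 and steps > x
semi_odd  = lambda x, steps: x % 2 == 1 and steps > x

print(decide_by_dovetailing(semi_even, semi_odd, 6))   # True
print(decide_by_dovetailing(semi_even, semi_odd, 7))   # False
```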

Historical Development

The roots of undecidable problems trace back to David Hilbert's formalist program for mathematics, which sought to establish a complete and consistent axiomatic foundation for all mathematical truths through finitary methods. In 1928, Hilbert and Wilhelm Ackermann formally posed the Entscheidungsproblem (decision problem), challenging mathematicians to devise an algorithm that could determine, for any given first-order logical statement, whether it was provable from a fixed set of axioms. This problem was central to Hilbert's vision of mechanizing mathematical reasoning, which assumed that all mathematical questions were in principle decidable. A precursor influence came from Kurt Gödel's 1931 incompleteness theorems, which demonstrated that sufficiently powerful formal systems cannot prove their own consistency and contain true but unprovable statements, thereby revealing inherent limitations in formal axiomatization.

Building on this, the decisive breakthroughs occurred in 1936, when Alonzo Church and Alan Turing independently proved the undecidability of the Entscheidungsproblem. Church, using his lambda calculus as a model of computation, showed in his paper "A Note on the Entscheidungsproblem" that no general decision procedure exists for the first-order functional calculus, establishing that lambda-definable functions capture effective computability but cannot resolve all questions of logical validity. Simultaneously, Turing's seminal paper "On Computable Numbers, with an Application to the Entscheidungsproblem" introduced the abstract Turing machine model to define computable real numbers and demonstrated that the Entscheidungsproblem is unsolvable, as a solution would require deciding the halting behavior of arbitrary machines—a task proven impossible via diagonalization. Turing thereby recast undecidability in explicitly computational terms, characterizing problems that admit no mechanical procedure of proof or disproof.

In the late 1930s, Stephen Kleene advanced these ideas through his development of recursion theory, formalizing general recursive functions in collaboration with Alonzo Church and building on Gödel's recursive functions to unify notions of effective computability. Kleene's work, including his 1936 paper on general recursive functions and subsequent papers, established the equivalence of general recursive functions, lambda-definability, and Turing computability, solidifying the theoretical framework for undecidability results. This period marked a pivotal shift from Hilbert's logical foundations toward the broader field of computability theory, as researchers like Church, Turing, and Kleene emphasized the limits of algorithmic processes over purely syntactic provability, influencing the foundations of modern computer science by the mid-20th century.

Key Examples in Computability Theory

Halting Problem

The halting problem, introduced by Alan Turing in 1936, asks whether there exists an algorithm that, given the description of an arbitrary Turing machine M and an input w, can determine if M halts (i.e., terminates) when run on w. Turing proved that no such general algorithm exists, establishing the halting problem as undecidable and providing the first concrete example of an undecidable problem in computability theory.

The proof proceeds by contradiction using a diagonalization argument. Assume there exists a Turing machine H that decides the halting problem: on input \langle M, w \rangle (the pair of encoded machine M and input w), H outputs "halt" if M halts on w, and "loop" otherwise. Construct a new machine D that, on input \langle x \rangle (the encoded description of some machine x), simulates H on \langle x, x \rangle; if H outputs "halt," then D enters an infinite loop, and if H outputs "loop," then D immediately halts. Now consider running D on its own description \langle D \rangle: if H says D halts on \langle D \rangle, then D loops (a contradiction); if H says D loops on \langle D \rangle, then D halts (again a contradiction). Thus, no such H can exist.

Formally, Turing machines can be encoded as finite strings over a fixed alphabet, allowing the description \langle M \rangle to serve both as a program and as input to other computations. The undecidability of the halting problem is closely tied to the acceptance problem for Turing machines: the set of pairs \langle M, w \rangle where M accepts w is recursively enumerable but not recursive, and the halting problem reduces to it via a computable mapping reduction, preserving undecidability. This result implies that no general-purpose algorithm can predict program termination for arbitrary programs and inputs, limiting the scope of automated verification in software engineering. A variant of the halting problem arises in the theory of partial recursive functions, where the domain of the e-th partial recursive function \phi_e is undecidable: there is no algorithm to determine, for given e and x, whether \phi_e(x) is defined (i.e., halts).
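
The diagonalization can be sketched in executable form. In the hedged Python sketch below, halts plays the role of the hypothetical decider H (assumed only for the sake of contradiction; Turing's theorem says no correct implementation exists), and D is the diagonal machine built from it.

```python
def halts(machine_src: str, inp: str) -> bool:
    """Hypothetical decider H for the halting problem -- assumed to exist
    only for the sake of contradiction; no total, always-correct
    implementation is possible."""
    raise NotImplementedError("no such decider can exist")

def D(machine_src: str) -> None:
    """The diagonal machine: do the opposite of whatever H predicts about
    running `machine_src` on its own description."""
    if halts(machine_src, machine_src):
        while True:            # H predicted "halts" -> loop forever
            pass
    # H predicted "loops" -> halt immediately

# Feeding D its own source exposes the contradiction: D halts on <D>
# exactly when H says it does not, so H cannot exist.
import inspect
source_of_D = inspect.getsource(D)
# D(source_of_D)   # uncommenting raises, since `halts` is impossible
```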

Rice's Theorem

Rice's theorem, proved by Henry Gordon Rice in 1951 and published in 1953, provides a powerful generalization of undecidability results in computability theory, showing that a wide class of problems concerning properties of programs or Turing machines are inherently undecidable. Specifically, the theorem states that for any non-trivial semantic property P of partial recursive functions—meaning P holds for some but not all partial recursive functions—the problem of deciding, given the index e of a partial recursive function \phi_e, whether \phi_e satisfies P is undecidable. This index set, formally defined as C = \{ e \mid \phi_e \text{ satisfies } P \}, is recursive only if P is trivial (true for all partial recursive functions or for none).

A key distinction in the theorem is between semantic and syntactic properties. Semantic properties depend solely on the input-output behavior of the function computed (i.e., the partial recursive function itself), independent of how it is implemented by a particular Turing machine or program. In contrast, syntactic properties concern the structure of the machine or code, such as the number of states in a Turing machine description, and are often decidable. Rice's theorem applies exclusively to non-trivial semantic properties, explaining why questions about "what a program computes" are typically undecidable, while superficial structural checks are not.

The proof proceeds by reduction from the halting problem, which is known to be undecidable. Assume, for the case where the nowhere-defined function does not satisfy P, that there exists some e_0 such that \phi_{e_0} satisfies P, and consider an arbitrary e and input x. Construct a new index e' for a partial recursive function that first simulates \phi_e(x); if this simulation halts, it then behaves identically to \phi_{e_0} (thus satisfying P), and otherwise it diverges. This construction is effective, so \phi_e(x) halts if and only if e' is in C. Since the halting set is undecidable, C must also be undecidable. If instead the nowhere-defined function satisfies P, the argument applies symmetrically to the complement property.

Examples illustrate the theorem's scope. The property of totality—whether \phi_e halts on all inputs (i.e., is a total recursive function)—is a non-trivial semantic property, so deciding totality for a given e is undecidable. Similarly, determining whether a Turing machine ever outputs a specific string like "hello" on some input is undecidable, as it depends on the computed function's range. In contrast, a trivial semantic property, such as being a partial recursive function (true of every index), yields a decidable index set (all indices). Properties like whether a program's source code contains the substring "hello" are syntactic and outside the theorem's purview, often decidable by simple string matching.
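
The syntactic/semantic divide is easy to exhibit in code. In this minimal Python sketch, the syntactic check is genuinely decidable by string matching, while the semantic check (totality) is left as a stub, since by Rice's theorem no correct decider for it can be written.

```python
import inspect

def mentions_hello(fn) -> bool:
    """Syntactic property: decided by inspecting the program text itself,
    so Rice's theorem does not apply."""
    return "hello" in inspect.getsource(fn)

def is_total(fn) -> bool:
    """Semantic property (totality): non-trivial, so by Rice's theorem no
    decider exists; a real tool must approximate or answer 'unknown'."""
    raise NotImplementedError("undecidable by Rice's theorem")

def greet():
    return "hello"

print(mentions_hello(greet))   # True -- simple string matching suffices
```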

Gödel's Incompleteness Theorems

Kurt Gödel's incompleteness theorems, published in his 1931 paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems," revealed fundamental limitations in formal axiomatic systems capable of expressing elementary arithmetic. The theorems apply to systems like Peano arithmetic, which formalizes the natural numbers using axioms for successor, addition, multiplication, and induction. The first incompleteness theorem states that any consistent formal system strong enough to describe basic arithmetic is incomplete: there exist statements in its language that are true but neither provable nor disprovable within the system. This incompleteness arises because the system's axioms and rules of inference cannot capture all arithmetical truths, leaving certain propositions beyond its deductive reach. The second incompleteness theorem extends this by showing that if the system is consistent, it cannot prove its own consistency using only its own axioms and rules. In other words, no internal proof can establish that the system avoids deriving contradictions, assuming consistency holds.

These results directly imply undecidability within such systems, as exemplified by the Gödel sentence G, a carefully constructed statement that asserts its own unprovability. If the system proves G, it leads to a contradiction under consistency; if it disproves G, then G (being true) would also be false, again yielding inconsistency. Thus, G remains undecidable: neither provable nor refutable.

Gödel's proof hinges on Gödel numbering, a method to encode the syntax of the formal system—including symbols, formulas, and proofs—as unique natural numbers, enabling the arithmetic language to represent and manipulate its own structure. This arithmetization allows self-referential statements through the diagonal lemma, which guarantees the existence of a sentence equivalent to any given formula applied to its own Gödel number. Specifically, G is built by applying the diagonal lemma to the negation of the provability predicate: it states, in the system's language, "I am not provable." The proof outline then runs: if the system were both complete and consistent, G would have to be either proved or disproved, yet proving G contradicts what G asserts, while disproving it makes the system prove a falsehood. This diagonalization technique mirrors the self-referential argument in Turing's halting problem proof, highlighting undecidability through inescapable loops of reference.
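
The construction can be summarized symbolically. Writing \mathrm{Prov}(x) for the arithmetized provability predicate and \ulcorner \varphi \urcorner for the Gödel number of \varphi (standard modern notation, not Gödel's original), the diagonal lemma and the resulting Gödel sentence read:

```latex
% Diagonal lemma: every formula psi(x) has a "fixed point" sentence phi
% whose equivalence with psi applied to its own code is provable in T.
T \vdash \varphi \leftrightarrow \psi(\ulcorner \varphi \urcorner)

% Instantiating psi(x) := not Prov(x) yields the Goedel sentence G,
% which asserts its own unprovability:
T \vdash G \leftrightarrow \neg \mathrm{Prov}(\ulcorner G \urcorner)
```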

Undecidability in Formal Systems

In formal systems, the Entscheidungsproblem, posed by David Hilbert and Wilhelm Ackermann in 1928, asks whether there exists an algorithm to determine, for any given first-order logical formula, whether it is valid (true in all models). Alonzo Church addressed this in his 1936 paper, proving that no such general decision procedure exists for first-order predicate logic by showing that the problem of determining the validity of sentences in the engere Funktionenkalkül (a restricted functional calculus) is undecidable, via lambda-definability arguments. Independently, Alan Turing demonstrated the same result later in 1936 by encoding the problem into computations on his abstract machine model, establishing that the halting problem's undecidability implies the non-existence of a decision algorithm for first-order validity. These results, known collectively as Church's theorem, confirm the undecidability of the Entscheidungsproblem for first-order logic.

Beyond first-order logic, undecidability extends to higher-order systems. Second-order logic, which quantifies over predicates and relations in addition to individuals, is also undecidable, as it subsumes first-order logic and thus inherits its undecidability through straightforward embedding of first-order formulas.

A prominent example in arithmetic-related formal systems is Hilbert's tenth problem, which seeks an algorithm to determine whether a given Diophantine equation (a polynomial equation with integer coefficients) has integer solutions. Yuri Matiyasevich resolved this negatively in 1970 by proving that every recursively enumerable set is Diophantine, completing a reduction from the halting problem to show that no such algorithm exists; this built on prior work by Martin Davis, Hilary Putnam, and Julia Robinson, demonstrating the undecidability of solvability for Diophantine equations.

Key formal techniques for establishing these undecidability results involve reductions from computational problems to logical validity problems, where known undecidable problems like the halting problem are encoded into logical formulas or Diophantine representations. For instance, Turing machine configurations can be translated into first-order sentences whose validity mirrors the halting behavior of the machine. In first-order logic, tools like Skolemization—replacing existentially quantified variables with functions depending on universal variables—and Herbrand's theorem, which reduces validity to checking ground instances over the Herbrand universe, provide semi-decision procedures but face inherent limitations due to the underlying undecidability: these procedures will confirm the validity of any valid formula after finitely many steps, but may loop indefinitely on invalid formulas, with no general termination guarantee. Such reductions highlight how self-referential constructions, inspired by foundational results like Gödel's incompleteness theorems, propagate undecidability across formal systems.
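
The semi-decidable character of Hilbert's tenth problem can be illustrated directly: the set of Diophantine equations with integer solutions is recursively enumerable even though, by Matiyasevich's theorem, it is not decidable. A minimal Python sketch of the witnessing search, with the polynomial passed as a callable (an assumed encoding chosen here for illustration):

```python
from itertools import count, product

def diophantine_witness(p, arity):
    """Semi-decider for Hilbert's tenth problem: search integer tuples in
    expanding boxes for a root of the polynomial p (a callable taking
    `arity` integer arguments).

    Halts with a witness iff p has an integer solution; otherwise the
    search runs forever -- exactly the r.e.-but-undecidable behavior
    established by Matiyasevich's theorem.
    """
    for bound in count(0):                                  # ever-larger boxes
        for xs in product(range(-bound, bound + 1), repeat=arity):
            if p(*xs) == 0:
                return xs                                   # "yes" verified
    # unreachable when no solution exists: the loop never ends

# Example: x^2 + y^2 - 25 = 0 has integer solutions; this search order
# finds (-5, 0) first.
print(diophantine_witness(lambda x, y: x * x + y * y - 25, 2))
```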

Undecidability Across Mathematical Domains

In Set Theory

In axiomatic set theory, particularly within Zermelo-Fraenkel set theory with the axiom of choice (ZFC), the continuum hypothesis (CH) represents a foundational example of undecidability. The CH states that there is no set whose cardinality is strictly between that of the integers (ℵ₀) and the real numbers (2^ℵ₀), implying that the cardinality of the continuum equals ℵ₁, the smallest uncountable cardinal. This hypothesis, first proposed by Georg Cantor in 1878, was proven independent of ZFC—meaning neither CH nor its negation can be derived from ZFC axioms—through complementary consistency results established by Kurt Gödel and Paul Cohen.

Gödel's 1938 contribution demonstrated the consistency of CH with ZFC by introducing the constructibility axiom, V = L, which posits that every set is constructible from simpler sets via a well-defined transfinite hierarchy. Under V = L, the universe of sets is highly structured, ensuring that the continuum has cardinality ℵ₁ and satisfying CH without contradiction to ZFC. This inner model construction showed that if ZFC is consistent, then ZFC + CH is also consistent, ruling out any formal proof of ¬CH within ZFC. Gödel's argument proceeds by analyzing the hierarchical buildup of constructible sets through transfinite recursion.

Complementing Gödel's work, Paul Cohen in 1963 developed the forcing technique to prove the consistency of ¬CH with ZFC. Forcing involves extending the universe of sets by adding generic elements that satisfy desired properties, such as a set of reals with cardinality strictly between ℵ₀ and 2^ℵ₀, thereby making the continuum larger than ℵ₁. Cohen's method constructs a forcing poset that preserves the ZFC axioms while violating CH, establishing that if ZFC is consistent, then ZFC + ¬CH is also consistent. This breakthrough not only settled Cantor's problem but introduced a powerful tool for independence proofs in set theory.

These results extend to broader undecidability in set theory, including the axiom of choice (AC), which asserts the existence of choice functions for any collection of nonempty sets. Gödel showed AC's consistency with ZF (ZFC without AC) using V = L in 1938, while Cohen's forcing in 1963–1964 proved the consistency of ¬AC with ZF, rendering AC undecidable in ZF. Similarly, the generalized continuum hypothesis (GCH), which extends CH to all infinite cardinals by stating 2^κ = κ⁺ for every infinite cardinal κ, was shown consistent with ZFC by Gödel in 1938 via V = L. These independences highlight profound implications for the structure of infinite sets, allowing models of ZF where cardinal arithmetic behaves flexibly, unconstrained by a single "true" value for the continuum or by choice principles.
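
In symbols, the two hypotheses shown independent of ZFC read as follows (standard notation, with κ⁺ denoting the successor cardinal of κ):

```latex
% Continuum hypothesis: the continuum is the least uncountable cardinal.
\mathrm{CH}:\quad 2^{\aleph_0} = \aleph_1

% Generalized continuum hypothesis: every power set attains the successor.
\mathrm{GCH}:\quad 2^{\kappa} = \kappa^{+}\ \text{for every infinite cardinal}\ \kappa
```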

In Algebra and Group Theory

In algebra and group theory, undecidable problems arise prominently in the study of decision problems for algebraic structures defined by presentations or theories. A key example is the word problem for groups, which asks whether, given a finite presentation G = \langle X \mid R \rangle with generators X and relations R, and two words w_1, w_2 over X, there exists a sequence of applications of the relations in R that transforms w_1 into w_2, meaning w_1 = w_2 in G. This problem is undecidable in general for finitely presented groups, as independently proven by Pyotr Novikov in 1955 and William W. Boone in 1959, establishing the Novikov-Boone theorem.

The proof of undecidability proceeds via a reduction from the halting problem, a classic undecidable problem in computability theory. Specifically, for any Turing machine M, one constructs a finitely presented group whose relations encode the possible computation histories of M; two words in this presentation are equal if and only if M halts on a given input, rendering the word problem unsolvable. This encoding leverages the group structure to simulate machine transitions while ensuring a finite presentation.

Similar undecidability extends to other algebraic domains. For semigroups, the word problem—determining equality of words under a finite presentation—is also undecidable, as demonstrated independently by Andrei Markov and Emil Post in 1947 through encodings of Turing machine computations into string-rewriting systems, where word equality mimics halting. In the context of rings, Julia Robinson showed in 1959 that the first-order theory of algebraic rings is undecidable, using interpretations of arithmetic within ring structures to embed undecidable predicates. Likewise, for fields, Julia Robinson proved in 1949 that the first-order theory of the rational field \mathbb{Q} is undecidable by defining the integers within \mathbb{Q} and thereby reducing to the undecidability of arithmetic.

Alfred Tarski's contributions highlight contrasts in decidability within these structures: while he established in 1948–1951 that the theory of real closed fields admits quantifier elimination and is thus decidable, broader algebraic theories like those of general fields remain undecidable. Tarski also posed the high school algebra problem, questioning whether a standard finite set of high-school-level identities suffices to prove all true equations involving addition, multiplication, and exponentiation over the positive integers; this was resolved negatively in 1980 by Alex Wilkie, who constructed an unprovable identity, underscoring incompleteness in algebraic identity systems.
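
Although the word problem is undecidable in general, equality of two words is still semi-decidable: one can enumerate all consequences of the relations and halt upon finding a derivation. The Python sketch below uses a toy encoding (relations as string-rewriting pairs, with an artificial step bound added so the demonstration terminates) to illustrate that search.

```python
from collections import deque

def words_equal(w1, w2, relations, max_steps=1000):
    """Semi-decider for the word problem of a finite presentation.

    `relations` is a list of pairs (u, v) meaning u = v in G; include both
    orientations of each relation. Breadth-first search over single
    substitutions finds a derivation w1 -> ... -> w2 iff one exists, but
    in general may run forever when w1 != w2 -- hence the artificial
    `max_steps` bound, after which the answer is unknown (None).
    """
    seen, frontier = {w1}, deque([w1])
    steps = 0
    while frontier:
        if steps >= max_steps:
            return None                       # budget exhausted: no verdict
        w = frontier.popleft()
        if w == w2:
            return True                       # derivation found
        for u, v in relations:                # apply each rewrite everywhere
            start = w.find(u)
            while start != -1:
                w_new = w[:start] + v + w[start + len(u):]
                if w_new not in seen:
                    seen.add(w_new)
                    frontier.append(w_new)
                start = w.find(u, start + 1)
        steps += 1
    return False                              # search space exhausted

# Toy presentation of Z/2Z = <a | aa = 1>: "aa" may be deleted or inserted.
rels = [("aa", ""), ("", "aa")]
print(words_equal("aaa", "a", rels))          # True: aaa -> a
```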

Broader Implications

In Computer Science and Algorithms

In computer science, the undecidability of problems like the halting problem imposes fundamental limits on algorithm design, particularly in automated verification and analysis of software. Static analysis tools, which aim to detect bugs or prove properties without executing code, cannot guarantee completeness for general programs, as determining non-trivial behavioral properties—such as whether a program terminates or accesses invalid memory—is impossible via any general algorithm. This limitation arises because the halting problem serves as the root cause, rendering perfect bug-finding undecidable. Rice's theorem extends this by proving that all non-trivial semantic properties of programs are undecidable, directly impacting the feasibility of exhaustive static checks.

To address these limits, researchers employ approximation techniques that trade completeness for decidability and scalability. Abstract interpretation provides a framework for over-approximating program semantics, enabling sound but conservative analyses of undecidable properties such as reachability or control flow. Heuristics, such as counterexample-guided abstraction refinement, iteratively refine approximations to balance precision and efficiency. For decidable subsets, model checking verifies temporal properties on finite-state systems exhaustively, succeeding in domains like hardware design where state spaces are bounded. These methods ensure practical utility despite theoretical barriers.

Real-world applications highlight these challenges. Virus detection is undecidable in general, as no algorithm can reliably distinguish arbitrary malware from benign code without risking false negatives or positives, a result stemming from reductions to the halting problem. Similarly, compiler optimizations face undecidability in tasks like phase ordering, where selecting the optimal sequence of transformations for code improvement cannot be algorithmically resolved for all inputs. Engineers mitigate this through empirical heuristics and profiling, accepting suboptimal but safe results.

Undecidability differs from but connects to complexity barriers like P versus NP; NP-complete problems, such as Boolean satisfiability, are decidable yet potentially intractable, underscoring that undecidability represents an even stricter impossibility than polynomial-time intractability. Restricting or parameterizing inputs offers a way to bypass undecidability by fixing bounds, rendering certain instances decidable—for example, analyzing halting for programs of bounded size or resources. In modern contexts, such as machine learning and artificial intelligence, undecidability affects learning theory; determining whether a learner consistently underfits data has been shown uncomputable, complicating guarantees for generalization.
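
The bounded-resource escape hatch can be made concrete. The following Python sketch (function names and the event budget are illustrative, not drawn from any particular tool) decides a restricted question, "does this call finish within k traced execution events?", and answers "unknown" once the budget runs out, which is how practical analyzers remain sound despite undecidability.

```python
import sys

def halts_within(fn, arg, max_events):
    """Decidable restriction of the halting problem: does fn(arg) finish
    within `max_events` traced execution events? Returns True for
    "definitely halts" and None for "unknown" -- sound but incomplete.
    """
    budget = [max_events]

    def tracer(frame, event, _arg):
        budget[0] -= 1
        if budget[0] <= 0:
            raise TimeoutError("event budget exhausted")
        return tracer                      # keep tracing line events too

    sys.settrace(tracer)
    try:
        fn(arg)
        return True                        # halted within budget
    except TimeoutError:
        return None                        # out of budget: no verdict
    finally:
        sys.settrace(None)

def loops_forever(_):
    while True:                            # one trace event per iteration
        pass

print(halts_within(lambda n: sum(range(n)), 10, 10_000))   # True
print(halts_within(loops_forever, 0, 10_000))              # None
```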

In Philosophy of Mathematics

The discovery of undecidable problems, particularly through Gödel's incompleteness theorems, posed a profound challenge to formalism in the philosophy of mathematics, as exemplified by Hilbert's program. Hilbert envisioned a complete and consistent axiomatic system that could mechanistically justify all mathematical truths using finitary methods, thereby securing the foundations of mathematics against paradoxes. However, Gödel demonstrated that any sufficiently powerful consistent formal system contains undecidable propositions—statements that are true but neither provable nor disprovable within the system—directly undermining the feasibility of such a complete formalization. This revelation shifted philosophical perspectives, revealing inherent limitations in formal systems and prompting debates on whether mathematics could ever be fully axiomatized without gaps.

In the philosophy of mathematics, undecidability has fueled tensions between Platonism and intuitionism, with Gödel emerging as a key proponent of the former. Gödel argued that undecidable truths exist objectively in an independent mathematical reality, accessible through human intuition rather than mere construction, thereby supporting mathematical realism, in which propositions possess determinate truth values beyond formal provability. This view contrasts with intuitionism, which emphasizes constructive proofs and rejects non-constructive existence; Gödel showed that classical arithmetic's strength exceeds intuitionistic limits, implying for him that undecidables affirm a platonistic ontology of abstract objects. Consequently, undecidability bolsters Platonism by suggesting that mathematical truth transcends human invention, residing in a discoverable realm.

The implications of undecidability extend to arguments against the mechanization of human reasoning, notably in the Lucas-Penrose thesis. John Lucas and Roger Penrose invoked Gödel's theorems to contend that since humans can recognize the truth of undecidable statements unprovable in any consistent formal system—such as the Gödel sentence for a given theory—the human mind cannot be fully simulated by a Turing machine or any algorithmic process. This challenges computationalism by positing that mathematical insight involves non-computable elements, highlighting limits to formalizing the mind and elevating human reasoning above mechanical computation.

Gregory Chaitin's incompleteness results in algorithmic information theory further illuminate undecidability's philosophical depth, linking it to randomness inherent in mathematics. Chaitin proved that in any formal system of fixed descriptive complexity, propositions asserting the algorithmic randomness (incompressibility) of sufficiently complex strings become undecidable, as their proofs would exceed the system's descriptive power. This ties incompleteness to information theory, suggesting that mathematics harbors unprovable truths due to informational limits, akin to Gödel's work but emphasizing information content over mere syntax. Philosophically, it underscores that progress in mathematics often requires adopting new axioms to resolve such gaps, mirroring empirical sciences rather than achieving absolute closure.

Broadly, undecidable problems have reshaped debates on mathematical truth, asserting that truth often lies beyond provability in formal systems. While platonists like Gödel view undecidables as objective facts in an eternal mathematical domain, alternative philosophies—such as formalism or constructivism—grapple with whether such statements possess truth values independent of frameworks or remain indeterminate. This has profound ramifications for mathematical practice, challenging the notion of a singular, complete body of mathematical knowledge and inviting pluralism in which multiple axiomatic extensions coexist without resolving all undecidables.
