
Computability

Computability theory, also known as recursion theory, is a branch of mathematical logic and theoretical computer science that investigates algorithms—formally defined as computable functions—and the inherent limits of what can be computed, including the classification of problems as solvable or unsolvable by mechanical means. Emerging in the 1930s, the field was pioneered through independent efforts by Alonzo Church, Alan Turing, Emil Post, and Stephen Kleene, who each proposed equivalent models of computation, such as lambda-definable functions, Turing machines, Post normal systems, and recursive functions, to formalize the intuitive notion of effective calculability. A foundational achievement of computability theory is the precise definition of computable functions via the Turing machine, an abstract device introduced by Turing in 1936 that consists of an infinite tape, a reading/writing head, and a finite set of states to simulate any algorithmic process. Turing's model demonstrated the undecidability of the halting problem: there is no general algorithm that can determine, for an arbitrary Turing machine and input, whether the machine will eventually halt or run indefinitely, establishing fundamental limits on automated reasoning and program verification. This result, along with Church's parallel proof of the unsolvability of the Entscheidungsproblem (the decision problem for first-order logic), underscored that certain mathematical problems lie beyond the reach of computation. The Church-Turing thesis, first articulated by Church in 1936 and later formalized by Kleene, conjectures that any function that can be effectively computed by a human following a finite set of instructions is computable by a Turing machine, unifying the various early models under a single standard of algorithmic power. Though unprovable in a strict mathematical sense, the thesis has been corroborated by the equivalence of diverse computational models and remains a cornerstone for understanding the boundaries of digital computation. Computability theory extends beyond basic decidability to explore degrees of unsolvability, computably enumerable sets, and relative computability, influencing areas such as program verification, software reliability, and the foundations of mathematics.

Foundations of Computability

Definition and Motivation

Computability theory is a branch of mathematical logic and theoretical computer science that investigates the fundamental limits of algorithmic processes, determining which functions and decision problems can be solved by mechanical means. A function or problem is computable if there exists an algorithm—a finite sequence of well-defined instructions—that produces the correct output for every valid input after a finite number of steps. This definition emphasizes the existence of such an algorithm, rather than its efficiency, and applies to problems over discrete domains like the natural numbers. While computability addresses whether a solution exists in principle, computational complexity theory examines the resources—such as time or space—required to compute it, classifying problems based on efficiency bounds like polynomial or exponential time. For instance, sorting a list of numbers is computable, as algorithms like merge sort can arrange elements in finite steps for any input size, demonstrating practical algorithmic solvability in everyday computing tasks. In contrast, determining whether an arbitrary Diophantine equation—a polynomial equation with integer coefficients—has integer solutions is undecidable, meaning no algorithm can reliably solve it for all cases, as proven through reductions to known undecidable problems like the halting problem. This distinction motivates the study of computability by revealing inherent boundaries in mathematics and computation, influencing fields from pure mathematics to software engineering. The Church-Turing thesis provides a foundational principle that unifies various models of computation, asserting that any effectively calculable function can be computed by a Turing machine—a theoretical device with an infinite tape and read-write head—and thus all reasonable notions of algorithm are equivalent in expressive power. Though unprovable in a strict mathematical sense, the thesis is supported by the equivalence of diverse formalisms like recursive functions and the lambda calculus to Turing machines, offering a benchmark for what constitutes "computable." Understanding computability requires prerequisites such as basic discrete mathematics, including the concepts of algorithms as step-by-step procedures and functions as mappings between sets, which provide the groundwork for analyzing solvability.
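The sorting example above can be made concrete. The procedure below is a minimal sketch (the algorithm choice is illustrative, not from a cited source): it halts on every finite input because each outer pass fixes one more element, which is exactly what makes the sorting problem computable in the strict sense.

```python
def insertion_sort(xs):
    """Total algorithm: halts on every finite input with the sorted result.
    Each outer iteration places one more element, guaranteeing termination."""
    out = list(xs)
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]   # shift larger elements one slot right
            j -= 1
        out[j + 1] = key
    return out

print(insertion_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```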

Historical Development

The foundations of computability theory were laid in the early 20th century amid efforts to formalize mathematics and resolve foundational crises. In 1928, David Hilbert and Wilhelm Ackermann posed the Entscheidungsproblem as part of Hilbert's broader program to establish the decidability of mathematical statements within formal systems, seeking an algorithm to determine the provability of any logical formula in first-order predicate calculus. This challenge aimed to mechanize proof verification, reflecting Hilbert's vision of mathematics as a finite, constructive discipline free from paradoxes. A pivotal shift occurred in 1931 when Kurt Gödel's incompleteness theorems demonstrated inherent limitations in formal axiomatic systems, showing that within sufficiently powerful arithmetical systems, there exist true statements that cannot be proved or disproved, thus linking incompleteness directly to notions of effective computability and foreshadowing undecidability results. The year 1936 marked a watershed for computability, with independent breakthroughs by Alonzo Church, Alan Turing, Emil Post, and Stephen Kleene defining equivalent models of computation. Church introduced the lambda calculus as a system for effective calculability, using it to prove the undecidability of the Entscheidungsproblem. Simultaneously, Turing devised the abstract Turing machine to characterize computable real numbers and resolve the same problem negatively. Post developed normal systems (also known as production systems), another equivalent formalism for computation. Kleene formalized general recursive functions, providing another precise notion of computability equivalent to the others. These concurrent inventions culminated in the Church-Turing thesis, conjecturing that their models capture the intuitive notion of effective computation. In the 1930s and 1940s, developments solidified recursive function theory through Kleene's extensions, including his work with J. Barkley Rosser on the equivalence of recursive and lambda-definable functions, establishing a robust framework for studying computable functions and predicates.
This era's contributions by Kleene and collaborators like Rosser emphasized hierarchies of recursive functions, bridging logic and computation. By the 1960s, the field evolved into modern theoretical computer science, with the formalization of computational complexity theory by Juris Hartmanis and Richard E. Stearns, who introduced measures of time and space resources for Turing machines, shifting focus from mere decidability to efficiency.

Computability Problems

Decidable and Semi-Decidable Problems

In computability theory, a decision problem is classified as decidable if there exists an algorithm that terminates on every input instance and correctly outputs "yes" if the instance belongs to the set and "no" otherwise. This ensures both soundness (correct acceptance or rejection) and completeness (coverage of all cases) with guaranteed termination. A classic example is primality testing: given a positive integer n, determine if n is prime. The AKS algorithm provides a deterministic polynomial-time solution, confirming that PRIMES is decidable. Formally, decidable sets, also known as recursive sets or languages, are those for which a Turing machine exists that acts as a decider: it halts in an accepting state for members of the set and a rejecting state for non-members, on all inputs. Recursive sets capture the intuitive notion of problems solvable by algorithms, encompassing all effectively computable predicates. For instance, recognizing whether a binary string has even length is decidable: a simple Turing machine scans the string once, counting symbols modulo 2, and halts with the result. In contrast, semi-decidable problems, or recursively enumerable sets, admit an algorithm that halts and accepts on "yes" instances (members of the set) but may run indefinitely without halting on "no" instances (non-members). There is no guarantee of termination for rejection, making these problems only partially solvable by computation. An example is verifying if a given program, on empty input, outputs a specific string: simulate the program's execution step by step until the string appears in the output (accept if it does) or continue forever if it does not. Every decidable set is semi-decidable, since a decider that always halts provides a semi-decider by ignoring the rejection case's termination. However, the converse does not hold: some semi-decidable sets are not decidable. For example, the set of Turing machines that halt on empty input is semi-decidable—simulate the given machine on empty input, accepting if and when it halts—but lacks a decider that terminates on all cases.
This asymmetry highlights the boundary between full and partial computability. Decidable sets exhibit strong closure properties: if A and B are decidable, then their union A \cup B and intersection A \cap B are also decidable, achieved by running deciders for A and B sequentially or in parallel as needed. Semi-decidable sets are closed under union (dovetailing the simulations of the two semi-deciders) and intersection (simulating both and accepting when both accept), but lack closure under complementation: the complement of a non-recursive semi-decidable set is never semi-decidable, since a set that is both semi-decidable and co-semi-decidable is decidable. These properties underscore the robustness of decidability relative to semi-decidability.
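The contrast between a decider and a semi-decider can be sketched in Python, with an ordinary program standing in for a Turing machine (an illustration; the function `f` below is an arbitrary total computable function supplied by the caller):

```python
def decides_even_length(s):
    """Decider: halts on every binary string with a yes/no answer."""
    return len(s) % 2 == 0

def semi_decides_range(y, f):
    """Semi-decider for {y : f(x) == y for some natural number x}.
    Accepts (returns True) if y is in the range of f; otherwise the
    search below runs forever, which is exactly semi-decidability."""
    x = 0
    while True:
        if f(x) == y:
            return True
        x += 1

print(decides_even_length("1011"))              # True (length 4)
print(semi_decides_range(49, lambda x: x * x))  # True (7*7 == 49)
```

Calling `semi_decides_range(50, lambda x: x * x)` would never return, illustrating why semi-deciders give no usable "no" answer.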

Examples of Undecidable Problems

One prominent example of an undecidable problem is the Post correspondence problem (PCP), introduced by Emil Post in 1946. The PCP asks whether, given a finite collection of "dominoes"—each consisting of two strings over a finite alphabet, one on the "top" and one on the "bottom"—there exists a finite sequence of these dominoes such that the concatenation of the top strings equals the concatenation of the bottom strings. Post demonstrated the undecidability of this problem for alphabets with at least two symbols via a reduction from the halting problem. Another foundational example arises in number theory: Hilbert's tenth problem, originally posed by David Hilbert in 1900 as part of his list of 23 unsolved problems. This problem seeks an algorithm that, for any given polynomial equation with integer coefficients (a Diophantine equation), determines whether it has integer solutions. In 1970, Yuri Matiyasevich proved the problem undecidable by showing that the set of Diophantine equations with solutions is not recursive, building on prior work by Martin Davis, Hilary Putnam, and Julia Robinson to establish that every recursively enumerable set is Diophantine. In the realm of geometry and combinatorics, the domino tiling problem—also known as the Wang tiling problem—provides yet another example of undecidability. Formulated by Hao Wang in the early 1960s, it asks whether a given finite set of square "tiles" (Wang tiles), each with colored edges that must match adjacent tiles, can tile the infinite plane without gaps or overlaps. Robert Berger proved in 1966 that this decision problem is undecidable, constructing over 20,000 tile types whose tiling behavior simulates arbitrary Turing machine computations, thereby reducing from the halting problem. Berger's result also implied the existence of aperiodic tile sets, which tile the plane only non-periodically.
These undecidable problems highlight the profound implications of computability limits for mathematics: for instance, Hilbert's tenth problem reveals that solvability in integers cannot be algorithmically decided, affecting fields from number theory to arithmetic geometry by showing that vast classes of Diophantine questions evade mechanical resolution. Similarly, the tiling problem underscores non-algorithmic aspects of spatial reasoning and combinatorics. Overall, such results demonstrate that broad swaths of mathematical inquiry, including core areas like number theory and geometry, harbor inherently non-algorithmic content. A common technique for establishing undecidability in these cases involves reduction: an instance of a known undecidable problem, such as the halting problem, is computationally mapped to an instance of the target problem so that a solution to the target would yield a solution to the original, implying that the target problem is undecidable as well whenever the mapping is effective.
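Although PCP is undecidable, it is semi-decidable: brute-force search over domino sequences halts exactly on solvable instances. The sketch below adds an artificial cutoff `max_len` so the demonstration terminates; the instance is a well-known small textbook example with a match of length five.

```python
from itertools import product

def pcp_search(dominoes, max_len):
    """Search for a sequence of domino indices whose top and bottom
    concatenations agree. With max_len unbounded this is a genuine
    semi-decider: it halts if and only if the instance is solvable."""
    for length in range(1, max_len + 1):
        for seq in product(range(len(dominoes)), repeat=length):
            top = "".join(dominoes[i][0] for i in seq)
            bottom = "".join(dominoes[i][1] for i in seq)
            if top == bottom:
                return list(seq)  # a witness sequence of indices
    return None  # inconclusive within the bound

# (top, bottom) pairs; both concatenations spell "abcaaabc".
dominoes = [("b", "ca"), ("a", "ab"), ("ca", "a"), ("abc", "c")]
print(pcp_search(dominoes, max_len=5))  # [1, 0, 2, 1, 3]
```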

Formal Models of Computation

Automata and Language Recognition

Finite automata serve as foundational models in automata theory for recognizing regular languages, which form the simplest class in the Chomsky hierarchy of formal languages. A deterministic finite automaton (DFA) is formally defined as a 5-tuple (Q, \Sigma, \delta, q_0, F), where Q is a finite set of states, \Sigma is the input alphabet, \delta: Q \times \Sigma \to Q is the transition function, q_0 \in Q is the start state, and F \subseteq Q is the set of accepting states; it accepts a string if, starting from q_0, the computation ends in a state in F after processing the input. A nondeterministic finite automaton (NFA) extends this to a 5-tuple (Q, \Sigma, \delta, q_0, F) with \delta: Q \times \Sigma \to 2^Q, allowing multiple possible transitions, and accepts if there exists at least one path to an accepting state. The languages recognized by DFAs and NFAs coincide, as any NFA can be converted to an equivalent DFA via the subset construction, which builds DFA states as subsets of NFA states and simulates all possible nondeterministic paths. This equivalence was established by Rabin and Scott in the context of decision problems for finite automata. Regular languages, the class accepted by these automata, are precisely those generated by regular expressions, as shown by Kleene's theorem, which proves mutual equivalence among regular expressions, NFAs, and DFAs through constructive conversions. A key property distinguishing regular languages is captured by the pumping lemma: if L is a regular language, there exists a pumping length p such that for any string w \in L with |w| \geq p, w = xyz where |xy| \leq p, |y| > 0, and xy^k z \in L for all k \geq 0. This lemma is used to prove non-regularity; for example, to show that L = \{ a^n b^n \mid n \geq 0 \} is not regular, assume it is and take w = a^p b^p, leading to a contradiction since pumping y (which lies entirely in the a-prefix) yields strings not in L. To recognize more expressive languages, pushdown automata (PDAs) incorporate a stack for memory.
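The DFA and NFA definitions above translate directly into code (a sketch; the dictionary encodings and the two example languages are my own illustrative choices). Note that the subset construction can even be run on the fly, tracking the set of reachable NFA states instead of building the full DFA first:

```python
def dfa_accepts(delta, q0, accepting, w):
    """Run a DFA: delta maps (state, symbol) -> state."""
    q = q0
    for ch in w:
        q = delta[(q, ch)]
    return q in accepting

def nfa_accepts(delta, q0, accepting, w):
    """Run an NFA via the on-the-fly subset construction:
    track every state reachable on the input read so far."""
    states = {q0}
    for ch in w:
        states = {r for q in states for r in delta.get((q, ch), ())}
    return bool(states & accepting)

# DFA: binary strings with an even number of 1s (state = count of 1s mod 2).
even_ones = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0}
print(dfa_accepts(even_ones, 0, {0}, "1011"))  # False: three 1s

# NFA: binary strings ending in "01" (guesses where the suffix starts).
ends_01 = {("A", "0"): {"A", "B"}, ("A", "1"): {"A"}, ("B", "1"): {"C"}}
print(nfa_accepts(ends_01, "A", {"C"}, "1101"))  # True
```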
A nondeterministic pushdown automaton (NPDA) is a 7-tuple (Q, \Sigma, \Gamma, \delta, q_0, Z_0, F), where \Gamma is the stack alphabet, Z_0 \in \Gamma is the initial stack symbol, and \delta: Q \times \Sigma_\epsilon \times \Gamma \to 2^{Q \times \Gamma_\epsilon} is the transition function (with \Sigma_\epsilon = \Sigma \cup \{\epsilon\} and \Gamma_\epsilon = \Gamma \cup \{\epsilon\}); it accepts by final state or by empty stack if some computation reaches acceptance after consuming the input. NPDAs recognize context-free languages (CFLs), which are generated by context-free grammars (CFGs) and correspond to type-2 grammars in the Chomsky hierarchy, a classification introducing levels of language complexity based on generative power. The equivalence between NPDAs and CFGs is established constructively: any CFG can be converted to an NPDA that simulates leftmost derivations using the stack, and vice versa, any NPDA can be converted to a CFG capturing its accepting computations. Membership in a CFL—determining if a string is generated by a CFG—is decidable via the CYK algorithm, a dynamic programming method that, for a CFG in Chomsky normal form, fills a table T[i,j] indicating the nonterminals deriving the substring from position i to i+j-1, achieving O(n^3) time for input length n. This algorithm, independently developed in the 1960s by Cocke, Younger, and Kasami, confirms the decidability of the word problem for CFLs. For both regular languages and CFLs, emptiness (whether the language is empty) and finiteness (whether the language is finite) are decidable: for regular languages via reachability analysis in the automaton's state graph, and for CFLs by checking productive nonterminals in the CFG and analyzing the grammar for cycles generating infinite languages. However, universality—whether the language equals \Sigma^*—is undecidable for CFLs, as it can encode undecidable problems like the Post correspondence problem through reductions showing no algorithm can always determine if a CFG generates all strings.
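The CYK procedure can be sketched in a few lines (a minimal implementation; the grammar below, a Chomsky-normal-form grammar for \{a^n b^n \mid n \geq 1\}, is an illustrative choice):

```python
def cyk(grammar, start, w):
    """CYK membership test for a grammar in Chomsky normal form.
    grammar: dict nonterminal -> list of bodies, each ('a',) for a
    terminal rule or ('B', 'C') for a binary rule.
    T[length][i] holds the nonterminals deriving w[i : i + length]."""
    n = len(w)
    if n == 0:
        return False  # CNF without epsilon rules cannot derive the empty string
    T = [[set() for _ in range(n)] for _ in range(n + 1)]
    for i, ch in enumerate(w):                 # substrings of length 1
        for A, bodies in grammar.items():
            if (ch,) in bodies:
                T[1][i].add(A)
    for length in range(2, n + 1):             # build up longer substrings
        for i in range(n - length + 1):
            for split in range(1, length):
                for A, bodies in grammar.items():
                    for body in bodies:
                        if (len(body) == 2
                                and body[0] in T[split][i]
                                and body[1] in T[length - split][i + split]):
                            T[length][i].add(A)
    return start in T[n][0]

# CNF grammar for { a^n b^n | n >= 1 }:  S -> AB | AT,  T -> SB,  A -> a,  B -> b
g = {"S": [("A", "B"), ("A", "T")], "T": [("S", "B")],
     "A": [("a",)], "B": [("b",)]}
print(cyk(g, "S", "aabb"), cyk(g, "S", "abab"))  # True False
```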

Turing Machines and Equivalents

A Turing machine consists of an infinite tape divided into cells that can hold symbols from a finite alphabet, a read/write head that scans one cell at a time, and a finite set of states including a start state and a halt state. The machine's operation is governed by a transition function, which, based on the current state and the scanned symbol, specifies a symbol to write on the tape, a direction to move the head (left or right), and a next state. This model, introduced by Alan Turing in 1936, formalizes mechanical computation by simulating the step-by-step process of an idealized human calculator working with unlimited paper and pen. Variants of the Turing machine, such as multi-tape machines with multiple tapes and independent heads, extend the basic model for convenience in analysis but do not increase its computational power. Any multi-tape Turing machine can be simulated by a single-tape machine through a systematic encoding of multiple tape contents onto one tape using markers to separate tracks, ensuring equivalence in the class of computable functions despite a quadratic increase in simulation time. Turing further defined a universal Turing machine that can simulate the execution of any other Turing machine, given as input the target machine's finite description (its states, symbols, and transition table) encoded as a string on the tape, along with the target machine's input. This universal machine operates by iteratively applying the encoded transitions to maintain and update the simulated machine's configuration, enabling the concept of a general-purpose programmable computer and facilitating arguments involving self-reference in computability. The partial functions computable by Turing machines correspond exactly to the partial recursive functions, which are generated from zero, successor, and projection functions via composition, primitive recursion, and the μ-operator. The μ-operator introduces unbounded search, returning the least value \mu y \, [f(x, y) = 0] at which a recursive f vanishes, allowing Turing machines to model the unbounded searches essential for defining non-total computable functions.
Turing machines subsume weaker automata models: a finite automaton can be simulated by restricting a Turing machine to read-only, left-to-right tape operations, while a pushdown automaton is simulated by using a portion of the tape as a stack. Conversely, Turing machines are provably equivalent in expressive power to other foundational models like partial recursive functions and the lambda calculus, as each can encode and compute the same class of partial functions through appropriate translations of definitions and reductions—an equivalence that underpins the Church-Turing thesis. A formal language over a finite alphabet is recursive if and only if it is decided by a Turing machine that always halts, accepting strings in the language and rejecting those not in it, thereby characterizing the decidable problems within the Turing machine framework.
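A single-tape machine of the kind defined above can be simulated in a few lines (a sketch; the dictionary encoding of the transition table and the binary-successor machine are my own illustrative choices):

```python
def run_tm(transitions, tape, state="q0", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.
    transitions: dict (state, symbol) -> (write, move in {'L','R'}, next_state).
    The machine halts when no transition applies; returns (state, tape)."""
    cells = dict(enumerate(tape))
    head, steps = 0, 0
    while (state, cells.get(head, blank)) in transitions and steps < max_steps:
        write, move, state = transitions[(state, cells.get(head, blank))]
        cells[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    lo, hi = min(cells), max(cells)
    return state, "".join(cells[i] for i in range(lo, hi + 1)).strip(blank)

# Binary successor: scan right to the end, then add 1 with a leftward carry.
succ = {
    ("q0", "0"): ("0", "R", "q0"),
    ("q0", "1"): ("1", "R", "q0"),
    ("q0", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}
print(run_tm(succ, "1011"))  # ('halt', '1100')  -- 11 + 1 = 12
```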

Hierarchy of Computational Power

Finite and Pushdown Automata

Finite automata, specifically deterministic and nondeterministic finite automata (DFA and NFA), recognize the class of regular languages, which occupy the lowest level (type-3) in the Chomsky hierarchy of formal languages. These languages are generated by regular grammars and can be described using regular expressions. A key feature of regular languages is their closure under fundamental operations: the union of two regular languages is regular, as is their concatenation and Kleene star (repetition). For instance, if L_1 and L_2 are regular, then L_1 \cup L_2, L_1 L_2 = \{xy \mid x \in L_1, y \in L_2\}, and L_1^* = \bigcup_{i=0}^\infty L_1^i (with L_1^0 = \{\epsilon\}) are also regular. These properties follow from constructions on automata or regular expressions, enabling the composition of complex patterns from simpler ones. The Myhill-Nerode theorem provides a precise characterization of regular languages and a basis for minimizing DFAs. It states that a language L over alphabet \Sigma is regular if and only if the equivalence relation \sim_L on \Sigma^*, defined by x \sim_L y if and only if for all z \in \Sigma^*, xz \in L \Leftrightarrow yz \in L, has finitely many equivalence classes. The number of classes equals the number of states in the minimal DFA recognizing L. This theorem, developed in the late 1950s, underpins algorithms for DFA minimization by partitioning strings into equivalence classes based on their future behavior in the language. Despite their versatility, regular languages have limitations, as demonstrated by the pumping lemma for regular languages. This lemma asserts that for any regular language L, there exists a pumping length p such that any string w \in L with |w| \geq p can be divided as w = xyz where |xy| \leq p, |y| > 0, and xy^iz \in L for all i \geq 0. The lemma proves non-regularity by contradiction; for example, the language \{a^n b^n \mid n \geq 0\} is not regular. Assume it is, with pumping length p. Take w = a^p b^p; then w = xyz with |xy| \leq p and |y| > 0, so y consists of only a's. Pumping with i=2 yields xy^2 z = a^{p + |y|} b^p \notin L, a contradiction.
Pushdown automata (PDA) extend finite automata with a stack, recognizing context-free languages (CFLs), which correspond to type-2 grammars in the Chomsky hierarchy. These grammars have productions of the form A \to \alpha, where A is a nonterminal and \alpha is a string of terminals and nonterminals. CFLs include non-regular languages like \{a^n b^n \mid n \geq 0\}, which a PDA can recognize by pushing a's onto the stack and popping one for each b. The pumping lemma for CFLs states that for any CFL L, there is a pumping length p such that any s \in L with |s| \geq p can be written as s = uvxyz where |vxy| \leq p, |vy| > 0, and uv^i x y^i z \in L for all i \geq 0. This tool identifies non-CFLs, such as \{a^n b^n c^n \mid n \geq 0\}, by showing no such division preserves membership under pumping. Certain properties of CFLs are decidable, while others are not. Membership—whether a string belongs to the language generated by a context-free grammar (CFG)—is decidable via the Cocke-Younger-Kasami (CYK) algorithm, which runs in O(n^3) time for a string of length n. Emptiness—whether a CFG generates any strings—is also decidable, by computing the productive nonterminals and checking whether the start symbol can derive a terminal string. However, equivalence—whether two CFGs generate the same language—is undecidable, as shown by reduction from the Post correspondence problem. Context-free grammars can be ambiguous, meaning some strings have multiple parse trees (leftmost derivations). Ambiguity may be accidental, resolvable by rewriting the grammar, or inherent, where every CFG for the language is ambiguous. The language of even-length palindromes, \{ww^R \mid w \in \{a,b\}^*\} (where w^R is the reverse of w), admits an unambiguous CFG, such as S \to aSa \mid bSb \mid \epsilon, which generates unique parses. In contrast, languages like \{a^i b^j c^k \mid i = j \text{ or } j = k, i,j,k \geq 1\} are inherently ambiguous, with no unambiguous CFG existing, as proven by analyzing the degree of ambiguity via generating functions.
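The stack discipline for \{a^n b^n\} can be sketched as a deterministic checker (an illustration; PDAs are nondeterministic in general, but this particular language needs no guessing):

```python
def pda_anbn(w):
    """Stack check for { a^n b^n | n >= 0 }: push a marker for each 'a',
    pop one for each 'b', and accept iff the stack empties exactly."""
    stack = []
    seen_b = False
    for ch in w:
        if ch == "a":
            if seen_b:
                return False   # an 'a' after a 'b': wrong shape
            stack.append("A")  # push
        elif ch == "b":
            seen_b = True
            if not stack:
                return False   # more b's than a's
            stack.pop()        # pop
        else:
            return False       # symbol outside the alphabet
    return not stack           # accept iff counts matched

print(pda_anbn("aaabbb"), pda_anbn("abab"))  # True False
```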

Recursive and Recursively Enumerable Sets

In computability theory, a set S \subseteq \mathbb{N} is recursive if there exists a Turing machine that, given any input x, halts and outputs 1 if x \in S and 0 otherwise. This ensures decidability of membership, as the machine always terminates with a correct yes/no answer. Recursive sets are closed under complement, union, and intersection, reflecting their computational tractability. A set is co-recursive if its complement is recursive; for recursive sets the distinction collapses, since complementation preserves decidability. Recursively enumerable (RE) sets, also known as semi-decidable sets, generalize recursive sets by relaxing the halting requirement on non-members. A set S is RE if there exists a Turing machine that halts and accepts on inputs in S, but may loop indefinitely on inputs not in S; equivalently, S is the domain of a partial computable function, or the language accepted by a (possibly nondeterministic) Turing machine. Every recursive set is RE, but the converse fails, establishing RE sets as a strictly broader class at the apex of the Chomsky hierarchy (type-0 languages). An index set is the collection of indices e such that the e-th Turing machine recognizes a language with a specific property, a notion central to analyzing RE sets. Rice's theorem provides a foundational undecidability result: any non-trivial semantic property of the partial recursive functions (i.e., an index set that is neither empty nor all of \mathbb{N}) is undecidable, meaning no algorithm can determine whether a given program satisfies the property. This theorem underscores the inherent limitations in verifying properties of computable functions beyond syntactic aspects. The distinction between RE and recursive sets is established via diagonalization, a technique tracing back to foundational work in recursion theory. To show that not all RE sets are recursive, consider the halting set K = \{ e \mid \phi_e(e) \downarrow \}, where \phi_e is the partial function computed by the e-th Turing machine and \downarrow denotes halting.
K is RE, as a machine can simulate \phi_e(e) and accept if it halts. Assume for contradiction that K is recursive, so membership is decidable by some total computable characteristic function \chi_K. Construct the partial computable function f with f(e) = 0 whenever \chi_K(e) = 0, and f(e) undefined (looping forever) whenever \chi_K(e) = 1. Then f = \phi_g for some index g, and f(g) is defined if and only if \chi_K(g) = 0, i.e., if and only if \phi_g(g) = f(g) is undefined, a contradiction. Thus, K is RE but not recursive. Illustrative examples highlight these classes. The halting set K exemplifies an RE set whose complement (the non-halting instances) is co-RE but not RE, as verifying non-halting requires checking all possible computations without a bounded search. Conversely, the totality problem, defined as \mathsf{TOT} = \{ e \mid \phi_e \text{ is total (halts on all inputs)} \}, is \Pi_2-complete in the arithmetical hierarchy: \mathsf{TOT} itself is not RE, residing higher in the hierarchy because membership requires universal quantification over all inputs, and its complement (machines that fail to halt somewhere) is \Sigma_2 and likewise not RE, since exhibiting an input on which a machine never halts cannot be semi-decided.
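The semi-decidability of the halting set can be illustrated by dovetailing: run every machine for 1 step, then 2, then 3, and so on, emitting an index as soon as its machine halts. In the sketch below (my own encoding), a "machine" is a Python generator that yields once per computation step and halts by returning:

```python
def dovetail(machines, want):
    """Round-robin simulation with growing step budgets. Emits the index
    of every halting machine eventually; never reports a non-halting
    index; diverges if fewer than `want` machines halt. This one-sided
    behavior is semi-decidability in action."""
    found, budget = [], 0
    while len(found) < want:
        budget += 1
        for e, make in enumerate(machines):
            if e in found:
                continue
            gen = make()                 # fresh run of machine e
            try:
                for _ in range(budget):  # advance at most `budget` steps
                    next(gen)
            except StopIteration:        # machine e halted within budget
                found.append(e)
    return sorted(found)

def halts_fast():
    yield                # one step, then halt

def loops_forever():
    while True:
        yield

def halts_slow():
    for _ in range(5):   # five steps, then halt
        yield

print(dovetail([halts_fast, loops_forever, halts_slow], want=2))  # [0, 2]
```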

Undecidability Results

The Halting Problem

The halting problem is a fundamental decision problem in computability theory, posed by Alan Turing in 1936: given a Turing machine M and an input string w, does M halt (terminate) when started on w? Formally, the language is H = \{\langle M, w \rangle \mid M halts on input w\}, where \langle M, w \rangle denotes an encoding of the machine and input as a string. Turing proved that no Turing machine can decide membership in H, establishing it as undecidable. This result demonstrates intrinsic limits on what can be computed algorithmically. To enable self-reference in the proof, Turing machines must be encodable as natural numbers, allowing a machine to operate on its own description. Turing introduced "description numbers" (D.N.), a systematic encoding where machine configurations, symbols (e.g., blank as 0, marks as 1 or 2), and transitions are represented numerically—for instance, a simple machine's D.N. might be 31332531173113353111731113322531111731111335317. This method, akin to Gödel numbering, permits constructing a universal machine that simulates any other machine given its D.N. and input. The undecidability proof proceeds by contradiction. Assume a decider \mathcal{D} exists that, on input \langle M, w \rangle, outputs "halt" if M halts on w and "loop" otherwise. Using a universal machine \mathcal{U}, construct a diagonalizer \mathcal{H} with D.N. K: on input x, \mathcal{H} computes \mathcal{D}(x, x)—asking whether the machine with D.N. x halts on input x—and does the opposite: if \mathcal{D} says "halt," then \mathcal{H} loops indefinitely; if "loop," then \mathcal{H} halts. Applying \mathcal{H} to its own D.N. K yields a contradiction: if \mathcal{D}(K, K) = "halt," then \mathcal{H} loops on K (contradiction); if "loop," then \mathcal{H} halts (contradiction). Thus, no such \mathcal{D} exists. This undecidability implies profound practical limitations: no general-purpose tool or "termination checker" can verify whether an arbitrary program terminates on a given input, complicating tasks like static analysis, virus detection, and formal verification in software engineering.
An important extension concerns the restricted halting problem on the empty tape: the set K = \{\langle M \rangle \mid M halts on the empty input \epsilon\} is recursively enumerable (semi-decidable, as a machine can simulate M until it halts) but not recursive. Moreover, K is complete for the class of recursively enumerable sets under many-one reductions, meaning any recursively enumerable set reduces to K, underscoring its centrality in computability.
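The diagonal argument can be dramatized in code (a sketch with programs modeled as step-yielding generators; the two naive "deciders" are placeholders, and any candidate decider would be refuted by the contrarian program built from it in the same way):

```python
def run_with_fuel(gen, fuel):
    """Observe a program (a step-yielding generator) for `fuel` steps;
    True iff it halted within the budget."""
    try:
        for _ in range(fuel):
            next(gen)
    except StopIteration:
        return True
    return False

def make_contrarian(claimed_halts):
    """Diagonalization: build a program that does the opposite of
    whatever the claimed halting decider predicts about it."""
    def contrarian():
        if claimed_halts(contrarian):
            while True:
                yield      # predicted to halt: loop forever instead
        else:
            return         # predicted to loop: halt immediately
    return contrarian

optimist = lambda prog: True     # claims every program halts
pessimist = lambda prog: False   # claims every program loops

c1 = make_contrarian(optimist)
c2 = make_contrarian(pessimist)
print(run_with_fuel(c1(), 10_000))  # False: it loops, refuting "halts"
print(run_with_fuel(c2(), 10_000))  # True: it halts, refuting "loops"
```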

Rice's Theorem and Generalizations

Rice's theorem, established by Henry Gordon Rice in 1951, asserts that every non-trivial semantic property of the partial recursive functions—or equivalently, of the recursively enumerable (RE) languages accepted by Turing machines—is undecidable. A semantic property concerns the behavior of the function or the language it defines, rather than syntactic aspects of the machine's description, and is non-trivial if there exists at least one RE language satisfying it and at least one that does not. Formally, for a set P of RE languages where \emptyset \neq P \neq \{L \mid L \text{ is RE}\}, the index set I_P = \{\langle M \rangle \mid L(M) \in P\}, with \langle M \rangle denoting an encoding of M and L(M) its accepted language, is undecidable. The proof relies on a reduction from the acceptance problem A_{TM} = \{\langle M, w \rangle \mid M \text{ accepts } w\}, which is known to be undecidable. Assume P is non-trivial and, without loss of generality, that the empty language \emptyset \notin P, so there exists a machine M_1 with L(M_1) \in P. To determine if an arbitrary M accepts input w, construct a new machine N_{M,w} that, on any input x, first simulates M on w; if and when that simulation accepts, N_{M,w} then simulates M_1 on x and accepts exactly when M_1 does. Thus L(N_{M,w}) = L(M_1) \in P if M accepts w, and L(N_{M,w}) = \emptyset \notin P otherwise, so \langle M, w \rangle \in A_{TM} if and only if \langle N_{M,w} \rangle \in I_P. If instead \emptyset \in P, the same argument applied to the complementary property (with the answer negated) preserves the reduction. This construction makes the machine's language depend on the outcome of the simulation, establishing the undecidability of I_P. Generalizations of Rice's theorem extend its undecidability results beyond standard RE languages. One such extension applies to co-recursively enumerable (co-RE) properties, where non-trivial semantic properties of languages whose complements are RE are similarly undecidable, following analogous reductions.
Further generalizations appear in computational complexity theory, which offers resource-bounded analogs: Rice-style results show that certain non-trivial properties of languages in classes such as NP are hard for levels of the polynomial hierarchy, mirroring Rice's sweep within resource bounds rather than full undecidability. Applications of Rice's theorem demonstrate its breadth in computability. The equivalence problem for Turing machines—determining whether L(M_1) = L(M_2) for given M_1 and M_2—is undecidable; for instance, fixing M_2 to be a machine accepting the empty language reduces equivalence to deciding the non-trivial semantic property L(M_1) = \emptyset, an index set covered by Rice's theorem. Similarly, regularity testing—deciding if L(M) is a regular language—is undecidable, since the set of regular languages forms a proper non-empty subset of the RE languages, making regularity a non-trivial semantic property.
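The reduction machine N_{M,w} from the proof can be sketched with the same generator convention used above, where halting counts as accepting (an illustrative encoding, not the formal construction; here M_1 is specialized to a machine that accepts everything, so L(N) is either all inputs or none):

```python
def run_with_fuel(gen, fuel):
    """Observe a machine for `fuel` steps; True iff it halted (accepted)."""
    try:
        for _ in range(fuel):
            next(gen)
    except StopIteration:
        return True
    return False

def build_gadget(M, w):
    """Rice-style gadget: N accepts every input if M halts on w, and
    accepts nothing otherwise, so deciding the non-trivial property
    'L(N) is nonempty' would decide whether M halts on w."""
    def N(x):
        for _ in M(w):   # phase 1: mirror M's run on w, step by step
            yield
        return           # M halted: accept x by halting immediately
        yield            # unreachable; keeps N a generator function
    return N

def halts_at_once(w):
    return
    yield                # unreachable; marks this as a generator function

def loops_forever(w):
    while True:
        yield

N_yes = build_gadget(halts_at_once, "w")
N_no = build_gadget(loops_forever, "w")
print(run_with_fuel(N_yes("anything"), 100))  # True: N accepts everything
print(run_with_fuel(N_no("anything"), 100))   # False: N accepts nothing
```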

Alternative and Concurrent Models

Lambda Calculus and Recursive Functions

Lambda calculus, introduced by Alonzo Church in the early 1930s, serves as a formal system for expressing computation through function abstraction and application, providing a foundation for functional models of computability equivalent to Turing machines. The syntax consists of three kinds of terms: variables (e.g., x), abstractions (denoted \lambda x.M, where M is a term and x a variable), and applications (denoted (M N), where M and N are terms). Computation proceeds via beta-reduction, the primary reduction rule: (\lambda x.M) N \to M[x := N], substituting N for free occurrences of x in M, with additional rules for eta-conversion and congruence ensuring term equivalence. This untyped system captures higher-order functions, where functions can take other functions as arguments or return them as results, enabling the encoding of complex data and operations. To represent data, Church developed encodings using pure lambda terms. Church numerals encode natural numbers: the numeral for n is \overline{n} = \lambda f. \lambda x. f^n x, where f^n x denotes n-fold application of f to x (e.g., \overline{0} = \lambda f. \lambda x. x, \overline{1} = \lambda f. \lambda x. f x, \overline{2} = \lambda f. \lambda x. f (f x)). Booleans are encoded as \mathbf{true} = \lambda x. \lambda y. x and \mathbf{false} = \lambda x. \lambda y. y, allowing conditional selection via application (e.g., \mathbf{true} \, M \, N \to M). Pairs are encoded as \langle M, N \rangle = \lambda z. z M N, with projections \mathbf{fst} = \lambda p. p \, \mathbf{true} and \mathbf{snd} = \lambda p. p \, \mathbf{false}, enabling the construction of lists and other structures from these primitives. Arithmetic operations, such as successor (\mathbf{succ} = \lambda n. \lambda f. \lambda x. f (n f x)) and addition (\mathbf{plus} = \lambda m. \lambda n. \lambda f. \lambda x. m f (n f x)), follow naturally from these encodings, demonstrating the lambda calculus's expressive power for numerical computation.
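These encodings translate directly into Python lambdas (an illustration; Python's strict evaluation is harmless here because none of these terms diverge):

```python
# Church numerals: n is "apply f n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: m(n(f))

# Church booleans and pairs.
true = lambda x: lambda y: x           # selects its first argument
false = lambda x: lambda y: y          # selects its second argument
pair = lambda a: lambda b: lambda z: z(a)(b)
fst = lambda p: p(true)
snd = lambda p: p(false)

to_int = lambda n: n(lambda k: k + 1)(0)   # decode: count applications of f

two = succ(succ(zero))
three = succ(two)
print(to_int(plus(two)(three)), to_int(mult(two)(three)))  # 5 6
print(fst(pair("L")("R")))  # L
```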
Recursion in lambda calculus, which lacks built-in looping, is enabled by the fixed-point theorem, stating that for any term F, there exists a term X such that F X = X (i.e., X is a fixed point of F). This is realized via the Y combinator, Y = \lambda f. (\lambda x. f (x x)) (\lambda x. f (x x)), which satisfies Y F = F (Y F), allowing recursive definitions like the factorial: \mathbf{fact} = Y \, (\lambda r. \lambda n. \mathbf{iszero} \, n \, \overline{1} \, (\mathbf{mult} \, n \, (r \, (\mathbf{pred} \, n)))), where \mathbf{iszero} = \lambda n. n \, (\lambda x. \mathbf{false}) \, \mathbf{true}. The theorem, rooted in Kleene's work on recursion, underpins the ability to define self-referential functions without explicit recursion primitives. Mu-recursive functions, formalized by Stephen Kleene in 1936, extend primitive recursive functions—built from zero, successor, and projection via composition and primitive recursion—with the minimization operator \mu, yielding partial functions. Primitive recursion defines total functions like addition (add(0, y) = y, add(s(x), y) = s(add(x, y))), while \mu y \, \phi(x, y) = z finds the smallest z such that \phi(x, z) = 0, diverging if none exists; this captures unbounded search and enables computation of all partial recursive functions. Kleene's normal form theorem asserts that every mu-recursive function \phi_e(x_1, \dots, x_n) can be expressed as \phi_e(\mathbf{x}) = U(\mu y \, T(e, \mathbf{x}, y)), where T and U are fixed primitive recursive predicates and functions, with e indexing the function (often via Gödel numbering). This normal form provides a uniform representation, showing mu-recursive functions compute exactly the partial functions definable in the system.
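Both ideas can be sketched in Python, with one standard adjustment: under strict evaluation the Y combinator as written loops forever, so the eta-expanded variant (the Z combinator) is used instead. The \mu operator is rendered as unbounded search, making its partiality visible as a possibly non-terminating loop.

```python
# Under Python's strict evaluation the textbook Y combinator diverges, so we
# use its eta-expanded variant Z -- a standard adjustment, not a term from
# the text above.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial via a fixed point: no `def fact` self-reference anywhere.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

def mu(phi, x):
    """Kleene's minimization: least y with phi(x, y) == 0.
    The while-loop is an unbounded search -- it never terminates when no
    such y exists, which is exactly why mu yields partial functions."""
    y = 0
    while phi(x, y) != 0:
        y += 1
    return y

# Example: integer square root as mu y. [ (y+1)^2 > x ], encoded with the
# convention that 0 means "predicate holds".
isqrt = lambda x: mu(lambda x, y: 0 if (y + 1) ** 2 > x else 1, x)
```

Here `fact(5)` recurses purely through the fixed point, and `isqrt` shows \mu as a search primitive layered on top of total (primitive-recursive-style) arithmetic.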
The equivalence of lambda calculus and mu-recursive functions with Turing machines was established through mutual simulations: lambda terms encode Turing machine configurations and transitions, allowing beta-reduction to mimic tape operations, while mu-recursive functions simulate Turing machine steps via primitive recursion for deterministic evolution and minimization for halting. Church demonstrated in 1936 that the lambda-definable functions coincide with the recursive ones, and Turing's 1937 paper proved that lambda calculus computes precisely the Turing-computable functions by constructing a lambda interpreter for Turing machines. Thus, these models define the same class of partial recursive functions, confirming their status as equivalent formalizations of mechanical computation.

Concurrency-Based Models

Concurrency-based models of computation extend traditional sequential paradigms by incorporating multiple processes that execute simultaneously, enabling the study of parallel and distributed systems within the framework of computability theory. Basic concurrency involves the interleaving of actions from independent processes, where the overall behavior emerges from the non-deterministic merging of individual execution sequences. This nondeterminism arises from choices in process scheduling or synchronization points, allowing multiple possible outcomes for the same initial state without violating determinism at the individual process level. Such models capture real-world concurrent phenomena while remaining within the bounds of Turing-equivalent computability. A foundational concurrency-based model is Communicating Sequential Processes (CSP), introduced by Tony Hoare in 1978. CSP employs a syntax of guarded commands, where each command is prefixed by a guard that determines its executability, facilitating structured nondeterministic choice among alternatives. Communication occurs via handshake protocols, where an output command from one process synchronizes with a matching input command from another, ensuring synchronized and reliable data exchange without intermediate buffering. This design emphasizes synchronization to avoid races, modeling concurrency as a network of sequential processes interacting through channels. The π-calculus, developed by Robin Milner and colleagues in the early 1990s, advances concurrency modeling by introducing mobility through name passing. In this calculus, processes are defined using names that serve as both channels and data, allowing links between processes to be dynamically created or altered during execution. For instance, a process can send a name (representing a channel) to another, enabling reconfiguration of communication topologies on the fly. The syntax includes constructs like output (\bar{x}\langle y \rangle.P), input (x(y).P), and parallel composition (P \mid Q), with semantics based on labeled transitions that support such dynamic interactions.
This mobility captures evolving system structures, such as those found in distributed networks. Regarding expressiveness, both CSP and the π-calculus are Turing-complete, capable of simulating Turing machines through encodings of sequential computation within concurrent settings, though they introduce additional undecidability results beyond the halting problem. For example, deadlock detection—determining whether a system can reach a state where all processes are blocked awaiting communication—is undecidable in the asynchronous polyadic π-calculus, since known undecidable problems, such as Post's correspondence problem, reduce to it. These models differ from sequential ones in their emphasis on behavioral equivalence: while fair scheduling can align their languages with Turing-equivalent power, bisimulation provides a finer-grained notion, equating processes only if they match step-by-step, including internal synchronizations, as defined in the labeled-transition semantics.
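The interleaving nondeterminism described above can be made concrete with a small Python sketch that enumerates every scheduler-dependent merge of two sequential action lists. This is a toy model of interleaving semantics, not CSP or π-calculus syntax.

```python
# All possible interleavings of two processes' action sequences: each result
# is one schedule a nondeterministic scheduler could produce.
def interleavings(p, q):
    if not p:
        return [q]
    if not q:
        return [p]
    # Either process p's next action fires first, or process q's does.
    return ([[p[0]] + rest for rest in interleavings(p[1:], q)] +
            [[q[0]] + rest for rest in interleavings(p, q[1:])])

runs = interleavings(["a1", "a2"], ["b1", "b2"])
# Two two-step processes admit C(4, 2) = 6 schedules. Every schedule
# preserves each process's internal order (a1 before a2, b1 before b2):
# nondeterminism lives only in the merge, never inside a process.
```

The count C(m+n, m) grows combinatorially, which is the root of the state-explosion problem faced by tools that verify concurrent systems by exploring all interleavings.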

Extensions Beyond Standard Computability

Oracle Machines

Oracle machines extend the Turing machine model by incorporating an "oracle," which provides instantaneous answers to queries about membership in a fixed, possibly undecidable set A \subseteq \mathbb{N}. Formally, an oracle machine M^A is a Turing machine augmented with an oracle tape and special query states that allow M^A to write a number x on the oracle tape and receive an immediate response of 1 if x \in A or 0 otherwise, without expending computational steps. This setup enables the study of relative computability, where the power of the machine is measured relative to the oracle set A. The concept was introduced by Alan Turing in his 1939 work on ordinal logics to analyze limitations in formal systems. The arithmetic hierarchy organizes subsets of natural numbers according to the quantifier complexity of their arithmetic definitions, with levels \Sigma_n^0 and \Pi_n^0 for n \geq 1. A set is in \Sigma_n^0 if it can be defined by a formula with n alternating unbounded quantifiers beginning with an existential one, and similarly for \Pi_n^0 starting with a universal one. Equivalently, these classes correspond to oracle computations: the \Sigma_1^0 sets are the recursively enumerable sets (computable with the empty oracle), while the higher levels \Sigma_n^0 consist of the sets recursively enumerable relative to the (n-1)-th iterate of the halting oracle. This oracle-based characterization, linking the hierarchy to iterated applications of the halting problem oracle, was developed by Stephen Kleene and Emil Post. Turing degrees provide a partial order on the equivalence classes of sets under Turing reducibility, where two sets B and C have the same degree if each is computable from the other via an oracle machine (i.e., B \leq_T C and C \leq_T B). The degrees form an upper semi-lattice under this order, capturing the relative "strength" or unsolvability of oracles.
Central to this structure is the jump operator, which maps a set A to its jump A', the degree of the halting problem relative to A—that is, the set of indices of oracle machines with oracle A that halt on the empty input. The jump of the empty set, denoted 0', is precisely the degree of the standard halting problem. This framework was formalized by Emil Post, building on Turing's oracle ideas. A key insight from oracle machines is that undecidability results relativize: the halting problem is undecidable for machines with the empty oracle but becomes decidable with access to the halting oracle itself, as the oracle directly answers halting queries. More broadly, oracle machines reveal barriers in proof techniques for complexity separations. In particular, the Baker-Gill-Solovay theorem constructs oracles A and B such that \mathrm{P}^A = \mathrm{NP}^A while \mathrm{P}^B \neq \mathrm{NP}^B, showing that relativizing proofs cannot resolve the \mathrm{P} versus \mathrm{NP} question.
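Relative computability can be illustrated with a minimal Python sketch in which the oracle is a membership-query callable. The sets here are decidable toys standing in for an arbitrary A, since a genuine halting oracle cannot be implemented; the pattern, not the particular sets, is the point.

```python
# An "oracle machine" modeled as a function that consults its oracle only
# through membership queries. EVEN stands in for an arbitrary oracle set A;
# with an undecidable A the same query pattern defines sets of higher degree.

def odd_decider(x, oracle):
    """Decide x in ODD with a single query to an oracle for EVEN.
    This witnesses the Turing reduction ODD <=_T EVEN; the symmetric
    reduction EVEN <=_T ODD holds the same way, so the two sets share a
    Turing degree."""
    return not oracle(x)

even_oracle = lambda n: n % 2 == 0   # membership test for the oracle set
# odd_decider never inspects the oracle's implementation -- only its answers,
# which is exactly the access discipline an oracle tape enforces.
```

Replacing `even_oracle` with a (hypothetical) oracle for the halting set 0' would, by the same discipline, let the machine decide every set recursively enumerable relative to the empty oracle.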

Hypercomputation Models and Limits

Hypercomputation refers to theoretical models of computation that purportedly surpass the capabilities of Turing machines by allowing infinitary processes or non-discrete operations within finite physical resources. These models challenge the Church-Turing thesis by exploring scenarios where undecidable problems, such as the halting problem, become solvable, often by extending time, space, or representational precision beyond standard limits. While physical realizability remains speculative, these constructs provide insights into the boundaries of effective computability in mathematical and physical contexts. Infinite-time Turing machines (ITTMs), introduced by Hamkins and Lewis, extend the operation of standard Turing machines into transfinite ordinal time, where computation proceeds through limit stages defined by ordinal numbers. In this model, the machine's tape and state are updated at successor ordinals, while at limit ordinals each tape cell takes the limit superior of its earlier values and the machine enters a designated limit state. Halting occurs when the machine enters a special halting state, potentially after transfinitely many steps, enabling ITTMs to compute functions beyond Turing machines, such as the class of writable reals, which properly extends the computable reals. Notably, ITTMs can solve the halting problem for ordinary Turing machines by simulating them over ordinal time until stabilization. Malament-Hogarth (MH) spacetimes offer a relativistic framework for hypercomputation, where general relativity permits spacetimes in which one observer experiences infinite proper time along a worldline whose entire history is visible to another observer within finite time. In such geometries, a computer traveling along the infinite worldline can perform supertasks, completing infinitely many steps before signaling a result to the finite-time observer, potentially deciding undecidable problems like the halting problem for Turing machines.
Etesi and Németi demonstrated that MH spacetimes enable non-Turing computations, including deciding Diophantine questions undecidable in standard models, by leveraging the causal structure of such spacetimes. However, the physical feasibility depends on exotic but mathematically valid solutions to Einstein's field equations. Analog computation models, as analyzed by Pour-El and Richards, explore continuous dynamical systems like differential equations solved via physical processes, which can yield non-Turing-computable outputs from Turing-computable inputs. For instance, the unique solution to the wave equation with computable initial data on a compact domain may take non-computable values, suggesting that analog devices could "compute" functions outside the recursive hierarchy. This challenges the notion that physical systems are inherently limited to Turing-equivalent computation, though practical realizations face issues of precision and noise. Extensions of the Church-Turing thesis, such as the physical Church-Turing thesis proposed by David Deutsch, posit that any finitely realizable physical system can be simulated to arbitrary precision by a universal computing machine, incorporating physical law to bound hypercomputational claims. In quantum and relativistic settings, this thesis suggests limits: while quantum computers efficiently solve certain problems, they remain within Turing bounds, and relativistic effects in MH spacetimes may not evade energy or stability constraints in realistic physics. These models achieve hypercomputation in theory—e.g., ITTMs deciding the halting problem and MH setups deciding arithmetic statements—but their limits highlight tensions with empirical physics, where infinite precision or time remains unattainable.
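The ITTM limit-stage rule for tape cells can be illustrated with a toy Python sketch. A real ITTM history is transfinite, so we assume, purely for illustration, that a cell's value history is eventually periodic and given as a finite prefix plus a repeating block; under that assumption the limit superior becomes trivially computable.

```python
# Toy model of the ITTM limit rule: at a limit ordinal, each tape cell takes
# the limit superior (limsup) of its earlier 0/1 values. For an eventually
# periodic history prefix + cycle * infinity, the limsup is 1 exactly when
# the repeating block contains a 1 (the cell equals 1 cofinally often).
# The eventually-periodic encoding is an assumption of this sketch, not a
# feature of ITTMs.

def limit_cell(prefix, cycle):
    if not cycle:
        raise ValueError("the repeating block must be non-empty")
    # Any finite prefix is irrelevant to a limsup; only the tail matters.
    return 1 if 1 in cycle else 0

# A cell flickering 0, 1, 0, 1, ... stabilizes to 1 at the limit stage,
# while a cell that is eventually always 0 stabilizes to 0.
```

The flickering case is the characteristic ITTM phenomenon: a cell that never converges in the classical sense still receives a definite value at the limit, which is what lets ITTM simulations of ordinary Turing machines "read off" divergence.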

References

  1. [1]
    [PDF] an introduction to computability theory - UChicago Math
    Aug 29, 2014 · Computability theory is a branch of mathematical logic that focuses on algo- rithms, formally known in this area as computable functions, and ...
  2. [2]
    [PDF] ON COMPUTABLE NUMBERS, WITH AN APPLICATION TO THE ...
    By A. M. TURING. [Received 28 May, 1936. —Read 12 November, 1936.] The "computable" numbers may be described briefly as the real numbers whose expressions as a ...
  3. [3]
    [PDF] What is the Church-Turing Thesis?
    Abstract We aim to put some order to the multiple interpretations of the Church-. Turing Thesis and to the different approaches taken to prove or disprove it.Missing: authoritative | Show results with:authoritative
  4. [4]
    Computability and Complexity - Stanford Encyclopedia of Philosophy
    Jun 24, 2004 · A mathematical problem is computable if it can be solved in principle by a computing device. Some common synonyms for “computable” are “solvable”, “decidable”, ...
  5. [5]
    [PDF] Computability theory - Berkeley Math
    Feb 25, 2024 · Key to proving Gödel and Turing's theorems was making a precise definition of a computable function.
  6. [6]
    The Church-Turing Thesis (Stanford Encyclopedia of Philosophy)
    Jan 8, 1997 · The Church-Turing thesis concerns the concept of an effective or systematic or mechanical method, as used in logic, mathematics and computer science.Missing: authoritative | Show results with:authoritative
  7. [7]
    Grundzüge der Theoretischen Logik - SpringerLink
    Free delivery 14-day returnsDownload chapter PDF · Einleitung. D. Hilbert, W. Ackermann. Pages 1-2. Der Aussagenkalkül. D. Hilbert, W. Ackermann. Pages 3-42. Der Klassenkalkül. D. Hilbert, ...
  8. [8]
    Grundzüge der theoretischen Logik : Hilbert, David, 1862-1943
    Sep 5, 2019 · Grundzüge der theoretischen Logik. by: Hilbert, David, 1862-1943 ... DOWNLOAD OPTIONS. No suitable files to display here. IN COLLECTIONS.
  9. [9]
    [PDF] Alonzo Church Source: American Journal of Mathematics, Vol. 58, No.
    An Unsolvable Problem of Elementary Number Theory. Author(s): Alonzo Church. Source: American Journal of Mathematics, Vol. 58, No. 2 (Apr., 1936), pp. 345-363.Missing: primary | Show results with:primary
  10. [10]
    General recursive functions of natural numbers
    About this article. Cite this article. Kleene, S.C. General recursive functions of natural numbers. Math. Ann. 112, 727–742 (1936). https://doi.org/10.1007 ...Missing: paper | Show results with:paper
  11. [11]
    ON THE COMPUTATIONAL COMPLEXITY OF ALGORITHMS
    I. Introduction. In his celebrated paper [1], A. M. Turing investigated the computability of sequences (functions) by mechanical procedures and showed.
  12. [12]
    S. C. Kleene. General recursive functions of natural numbers ...
    S. C. Kleene. General recursive functions of natural numbers. Mathematische Annalen, Bd. 112 (1935–1936), S. 727–742. - Volume 2 Issue 1.
  13. [13]
    Decidability - Brown CS
    May 4, 2016 · A problem is decidable if there exists an algorithm to solve it that is sound, complete, and terminating.
  14. [14]
    [PDF] PRIMES is in P - Microsoft
    A. Kalai, A. Sahai, and M. Sudan, Notes on primality test and analysis of AKS,. Private communication, August 2002.
  15. [15]
    [PDF] Turing Machines Recursive/Recursively Enumerable Languages
    A language is Turing-decidable (decidable, or recursive) if some. Turing machine decides it. (NTU EE). Turing Machines. Spring 2024. 10 / 40. Page 11. Turing- ...
  16. [16]
    Decidable and Undecidable Problems in Theory of Computation
    Oct 1, 2024 · Examples, Problems like checking if a number is even or odd, or if a string belongs to a regular language (like finding a match in a search).<|separator|>
  17. [17]
    [PDF] (8) Suppose that the following input is given to yacc:
    Run M on the even length strings in Σ* in lexicographic order, interleaving the computations. As soon as two such computations have rejected, halt. Proof not in ...
  18. [18]
    [PDF] 1 More Unsolvable Problems - UNC Computer Science
    The blank tape halting problem asks if a Turing machine halts on a blank tape. It is unsolvable because it is related to the unsolvable halting problem.
  19. [19]
    [PDF] 1 Closure Properties - 1.1 Decidable Languages
    Proposition 1. Decidable languages are closed under union, intersection, and complementation. Proof. Given TMs M1, M2 that decide languages L1, and L2.
  20. [20]
    [PDF] 5.2: Closure Properties of Recursive and Recursively Enumerable ...
    In this section, we will see that the recursive and recursively enumerable languages are closed under union, concatenation, closure and intersection.
  21. [21]
    [PDF] Undecidability in number theory1 - MIT Mathematics
    The undecidability of the halt- ing problem gave us a listable set that is not com- putable. By the DPRM theorem, having this is the same as having a ...
  22. [22]
    [PDF] Hilbert's Tenth Problem for Fixed d and n - UMD Computer Science
    Translation in Soviet Math Dok- lady, Vol 11, 1970. [21] Yuri Matiyasevich. Hilbert's Tenth Problem. MIT press, Cambridge, 1993. [22] Yuri Matiyasevich and ...Missing: citation URL
  23. [23]
    [PDF] Finite Automata and Their Decision Proble'ms#
    Abstract: Finite automata are considered in this paper as instruments for classifying finite tapes. Each one- tape automaton defines a set of tapes, ...
  24. [24]
    [PDF] representation of events in nerve nets and
    McCulloch and Pitts in their original paper give a theory for nerve nets without circles [Part II of their paper] and a theory for arbitrary nerve nets.
  25. [25]
    [PDF] TIIKEE MODELS FOR TIE DESCRIPTION OF LANGUAGE
    We study the formal properties of a set of grammatical trans- formations that carry sentences with phra.se structure into new sentences with derived phrase.
  26. [26]
    [PDF] Great Algorithms: CYK - cs.wisc.edu
    Sep 1, 2013 · Abstract. The CYK algorithm, named after Cocke, Younger, and Kasami, is an algorithm for deciding if a string is in a context-free language.
  27. [27]
    [PDF] Introduction to Automata Theory, Languages, and Computation
    Hopcroft, John E., 1939-. Introduction to automata theory, languages, and computation / John E. Hopcroft, Rajeev Motwani, Jeffrey D. Ullman.-2nd ed. p. cm. ISBN ...
  28. [28]
    Turing machines - Stanford Encyclopedia of Philosophy
    Sep 24, 2018 · Turing's original paper is concerned with computable (real) numbers. A (real) number is Turing computable if there exists a Turing machine ...Definitions of the Turing Machine · Computing with Turing Machines
  29. [29]
    [PDF] 4/20/20 Equivalence of 1 and Multi-tape Turing Machines
    Our simulation of a multi-tape TM on a single-tape TM will be dif- ferent. The single-tape TM will have to take many steps to simulate each step of the multi- ...
  30. [30]
    Recursive Functions - Stanford Encyclopedia of Philosophy
    Apr 23, 2020 · The recursive functions are a class of functions on the natural numbers studied in computability theory, a branch of contemporary ...
  31. [31]
  32. [32]
    [PDF] Undecidable Problems for Context-free Grammars
    We discuss some basic undecidable problems for context-free lan- guages, starting from Valid and invalid computations of TM's: a tool for prov- ing CFL problems ...
  33. [33]
    The Independence of Inherent Ambiguity From Complementedness ...
    The two theorems presented in this paper lay the suspicion to rest by providing (1) an inherently ambiguous language with context-free complement and (2) an ...Missing: original | Show results with:original
  34. [34]
    [PDF] Recursive and Recursively Enumerable Sets
    Definition: A set (or relation) is recursive (or computable or decidable) if it is computable as a total 0-1 valued function. NOTE: The terms “recursive” ...
  35. [35]
    [PDF] 1 A Characterization of Recursively Enumerable Sets - UCSD Math
    Apr 16, 2012 · The Halting problem H defined by. H := {pMq | M halts on input } is ... H is many-one complete for the r.e. sets. Example 2.8 ...
  36. [36]
    CLASSES OF RECURSIVELY ENUMERABLE SETS AND THEIR ...
    H. G. RICE. 1. Introduction. In this paper we consider classes whose elements are re- cursively enumerable sets of non-negative integers.
  37. [37]
    [PDF] 1 Rice's Theorem
    Rice's Theorem states that if a property of languages is non-trivial, then it is undecidable. It does not apply to properties of Turing machines.
  38. [38]
    [PDF] Rice's Theorem
    Nov 29, 2012 · In English, some r.e. language has the property and some r.e. language does not. All the example properties we listed above are nontrivial with ...
  39. [39]
    [PDF] Lecture 22: Rice's Theorem Proof
    The proof is in the form of a reduction from the halting problem ATM to LP . We are able to make a 'generic' reduction for any P by taking one TM M1 such that ...
  40. [40]
    [PDF] 2nd Rice Theorem
    This TM accepts (M, w) if and only if M' accepts a language in S, or equivalently, if and only if M does not accept w. As such a TM does not exist, we know My ...<|control11|><|separator|>
  41. [41]
    A second step towards complexity-theoretic analogs of Rice's Theorem
    Rice's Theorem states that every nontrivial language property of the recursively enumerable sets is undecidable. Borchert and Stephan (1997) initiated the ...
  42. [42]
    [PDF] An Unsolvable Problem of Elementary Number Theory Alonzo ...
    Mar 3, 2008 · An example of such a problem is the problem to find a means of de- termining of any given positive integer n whether or not there exist positive.Missing: computable | Show results with:computable
  43. [43]
    [PDF] THE CALCULI OF LAMBDA-CONVERSION
    The Calculi of Lambda-Conversion, by ALONZO CHURCH. 7 Finite Dimensional Vector Spaces, by PAUL R. HALMOS. 10. Topics in Topology, by SOLOMON LEFSCHETZ. 11 ...Missing: primary source
  44. [44]
    General recursive functions of natural numbers - Semantic Scholar
    General recursive functions of natural numbers · S. Kleene · Published 1 December 1936 · Mathematics · Mathematische Annalen.
  45. [45]
    General recursive functions of natural numbers. - EuDML
    Kleene, S.C.. "General recursive functions of natural numbers.." Mathematische Annalen 112 (1936): 727-742. <http://eudml.org/doc/159849>.Missing: paper | Show results with:paper
  46. [46]
    Communicating sequential processes - ACM Digital Library
    This paper suggests that input and output are basic primitives of programming and that parallel composition of communicating sequential processes is a ...
  47. [47]
    A calculus of mobile processes, I - ScienceDirect.com
    We present the π-calculus, a calculus of communicating systems in which one can naturally express processes which have changing structure.Missing: expressiveness computability
  48. [48]
    [PDF] with a focus on the expressivity of process calculi - Pure
    Jun 11, 2018 · specification of Reactive Turing Machines in the π-calculus is proposed and verified. In Section 5.3, we discuss the executability of ...
  49. [49]
    (PDF) On detecting deadlock in the Pi-Calculus. - ResearchGate
    Apr 27, 2016 · Based on the proposed definition, we show that detecting deadlock in the asynchronous polyadic π-calculus is usually an undecidable problem.
  50. [50]
    [PDF] Systems of logic based on ordinals (Proc. Lond. Math. Soc., series 2 ...
    Turing considered several natural ways in which ordinal logics could be constructed: (i) A p, obtained by successively adjoining statements directly overcoming ...
  51. [51]
    Relativizations of the P = ? N P Question - SIAM Publications Library
    We investigate relativized versions of the open question of whether every language accepted nondeterministically in polynomial time can be recognized ...
  52. [52]
    Does general relativity allow an observer to view an eternity in a ...
    Nov 11, 1991 · I investigate whether there are general relativistic spacetimes that allow an observerμ to collect in a finite time all the data from the worldline of another ...
  53. [53]
    Infinite Time Turing Machines - jstor
    Notice that the natural input for these machines is an infinite binary string x c 2". Thus, the infinite time computable functions are partial functions on ...
  54. [54]
    Non-Turing Computations Via Malament–Hogarth Space-Times
    We investigate the Church–Kalmár–Kreisel–Turing theses theoretical concerning (necessary) limitations of future computers and of deductive sciences, ...