
Theory of computation

The theory of computation is a foundational branch of computer science and mathematics that studies the nature of computation, including what problems can be solved by algorithms, the resources required to solve them, and the abstract models that formalize computational processes. It addresses fundamental questions about the capabilities and limitations of computers, such as whether all problems are computable, how to model computational machines, and whether some problems are inherently more difficult than others. The field is typically divided into three main branches: automata theory, computability theory, and computational complexity theory. Automata theory explores abstract computing devices, known as automata, and the formal languages they recognize, establishing hierarchies of computational power from simple finite automata for regular languages to more advanced models like pushdown automata for context-free languages. Computability theory, building on the Church-Turing thesis, investigates which functions are computable using Turing machines and reveals undecidable problems, such as the halting problem, which cannot be solved by any algorithm. Complexity theory classifies computable problems based on the time and space resources needed, introducing key classes like P (problems solvable in polynomial time) and NP (problems verifiable in polynomial time), and addressing challenges like the P versus NP question. Originating in the 1930s with seminal work by Alan Turing on computable numbers and the Entscheidungsproblem, the theory of computation provides the mathematical underpinnings for modern computing, influencing areas from algorithm design and programming languages to artificial intelligence and cryptography. Its principles ensure rigorous analysis of computational bounds, guiding the development of efficient technologies while highlighting inherent impossibilities in computation.

Introduction

Definition and Scope

The theory of computation is the branch of computer science and mathematics that studies the nature of computation, abstract machines, and the fundamental limits of what problems can be computed or solved algorithmically. It addresses the inherent capabilities and limitations of computational processes, independent of specific physical implementations. This field formalizes concepts of computation to determine which classes of problems are solvable and under what constraints, providing a mathematical foundation for understanding algorithmic processes. The scope of the theory of computation encompasses mechanical processes that can be executed step-by-step, the notion of effective calculability—which refers to procedures that can be carried out mechanically by a human or a machine using finite means—and the study of algorithmic solvability for mathematical problems. Effective calculability, as originally conceptualized, involves symbolic manipulations that yield results for functions on natural numbers through rigorous, finite methods. Algorithmic solvability focuses on whether there exists a systematic procedure to resolve every instance of a given problem, emphasizing universality across equivalent models of computation. Unlike practical computer engineering, which deals with hardware architectures, software systems, and real-world implementations constrained by resources like time and memory, the theory of computation examines idealized, abstract models to explore what is computable in principle, without regard to physical feasibility. This abstraction allows for proofs about the boundaries of computation, distinguishing theoretical solvability from the practical challenges of deploying solutions on actual devices. Central terminology in the field includes the algorithm, defined as a well-defined computational procedure that takes input values and produces corresponding outputs through a finite sequence of steps. A computable function is one for which an algorithm exists that, given any input from its domain, halts and produces the correct output, capturing the essence of mechanically realizable mappings. A decision problem, in turn, is a formal question with a yes-or-no answer, such as whether a given input satisfies a specified property, serving as the basic unit for analyzing solvability. These terms, pioneered by figures such as Alan Turing and Alonzo Church, underpin the field's exploration of computation's frontiers.
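To make these definitions concrete, the following Python sketch (an illustration added here, not drawn from the literature) treats primality testing as a decision problem; the function is a decider in the sense above because it halts on every natural-number input and returns a yes-or-no answer.

```python
# Illustrative sketch (not from the source): the decision problem "is n prime?"
# together with a decider that halts on every input with a yes/no answer,
# which is what makes the problem decidable.

def is_prime(n: int) -> bool:
    """Decider for the decision problem PRIME = {n : n is a prime number}."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # trial division always terminates,
        if n % d == 0:         # so the procedure halts on every input
            return False
        d += 1
    return True

if __name__ == "__main__":
    for n in [1, 2, 9, 17, 91, 97]:
        print(n, "is prime" if is_prime(n) else "is not prime")
```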

Fundamental Questions

The theory of computation seeks to address core questions about the nature and limits of mechanical processes for solving mathematical problems. A primary inquiry is what functions are computable, meaning whether there exists a systematic procedure that can produce the correct output for any valid input to a given problem. This question distinguishes between problems that admit algorithmic solutions and those that are inherently unsolvable by any finite means. Another fundamental question is how efficiently problems can be solved, examining the minimum resources—such as time, space, or parallel operations—required to obtain solutions within practical bounds. Additionally, the field probes the limits of mechanization, exploring what aspects of reasoning, proof, or calculation can be fully mechanized versus those that require human insight or infinite processes. Significant progress has been made on the equivalence of computational models, establishing that diverse formalisms—such as Turing machines, recursive functions, and the lambda calculus—define the same class of computable functions, as encapsulated in the Church-Turing thesis. This resolution provides a unified foundation for understanding computation, though the thesis itself remains a hypothesis rather than a proven theorem. These insights delineate the boundaries of what models like automata can achieve in recognizing patterns, a topic explored in automata theory. Open problems continue to challenge the field, particularly regarding the structure of complexity classes. The organization of complexity classes, such as the polynomial hierarchy, raises questions about whether its levels are strictly separated or prone to collapse; for example, if \Sigma_k^p = \Pi_k^p for some k, the entire hierarchy would collapse to the k-th level. The nature of NP-complete problems remains elusive, as resolving whether they lie in P would imply the equality P = NP—a flagship open question with implications for optimization and verification tasks. These unresolved issues highlight ongoing debates about the fine-grained separation of computational power. The answers to these fundamental questions play a crucial role in guiding practical algorithm design by identifying infeasible problems and inspiring efficient approximations, while also informing hardware limitations through models that predict scalability barriers in real-world systems. For instance, complexity results help developers choose between exact and heuristic methods for resource-constrained environments. This theoretical framework ensures that advancements in computing respect inherent boundaries, fostering innovations in software and hardware.

Historical Development

Precursors and Early Ideas

The Euclidean algorithm, described around 300 BCE in Euclid's Elements, represents one of the earliest known systematic procedures for computation, providing a step-by-step method to determine the greatest common divisor of two integers through repeated division and remainder operations. This mechanical process exemplified algorithmic thinking by breaking down a mathematical problem into a finite sequence of deterministic steps, laying informal groundwork for later notions of effective procedures in computation. In the 17th century, Gottfried Wilhelm Leibniz envisioned a calculus ratiocinator, a universal logical calculus that would mechanize reasoning by reducing complex arguments to symbolic manipulations akin to arithmetic operations. Building on this, 19th-century developments included Charles Babbage's Analytical Engine, conceptualized in 1837 as a programmable mechanical device capable of performing arbitrary calculations using punched cards for input and control, separating storage from processing in a manner prescient of modern computers. Concurrently, George Boole's 1847 work The Mathematical Analysis of Logic introduced Boolean algebra, formalizing logical reasoning through binary operations on variables representing true or false, which provided the algebraic foundation for designing logic circuits in computational systems. Philosophical efforts to mechanize mathematical reasoning gained momentum in the late 19th and early 20th centuries, with roots in David Hilbert's 1900 list of problems that sought a finitary basis for proofs and a decision procedure for mathematical statements. Hilbert's later program, articulated in 1928, aimed to formalize all of mathematics as a consistent, complete axiomatic system amenable to mechanical verification, reflecting broader aspirations to automate proof-checking and eliminate paradoxes through rigorous symbol manipulation. However, these precursors suffered from a fundamental limitation: the absence of a precise, formal definition of "algorithm" or "effective procedure," leaving concepts of mechanical procedures intuitive rather than rigorously delimited until Kurt Gödel's 1931 incompleteness theorems exposed inherent barriers to full mechanization.
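Returning to the opening example, the Euclidean algorithm translates directly into a few lines of modern code; the Python sketch below simply iterates the division-with-remainder step described above.

```python
# The Euclidean algorithm as a short sketch: repeated division with remainder
# until the remainder vanishes, yielding the greatest common divisor.

def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b   # replace (a, b) by (b, a mod b)
    return a

if __name__ == "__main__":
    print(gcd(1071, 462))  # prints 21
```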

Foundational Era (1930s–1940s)

The theory of computation emerged in the 1930s as a direct response to foundational crises in mathematics, particularly the paradoxes arising in set theory and the limitations of formal systems exposed by earlier work such as Bertrand Russell's paradox in 1901 and the development of axiomatic approaches like Zermelo-Fraenkel set theory. These issues prompted David Hilbert's formalist program, outlined in his 1928 address and book with Wilhelm Ackermann, which sought to establish mathematics on a secure, finitary basis through complete axiomatization and proof theory. Central to this was Hilbert's Entscheidungsproblem, posed in 1928, which asked whether there exists an algorithm to determine the provability of any mathematical statement within a formal system, thereby aiming to mechanize mathematical reasoning and resolve undecidability concerns. A pivotal blow to Hilbert's ambitions came from Kurt Gödel's incompleteness theorems, published in 1931, which demonstrated inherent limitations in formal systems capable of expressing basic arithmetic. Gödel's first theorem showed that any consistent formal system containing basic arithmetic is incomplete, meaning there exist true statements within its language that cannot be proved or disproved inside the system; the second theorem extended this by proving that such a system cannot prove its own consistency. These results, derived through Gödel numbering—a technique to encode syntactic objects as numbers—revealed that no single formal system could capture all mathematical truths, undermining the completeness goal of Hilbert's program and shifting focus toward the boundaries of what is computable. The Entscheidungsproblem was decisively resolved in the negative during 1936 by Alonzo Church and Alan Turing, who independently established the undecidability of algorithmic verification of logical validity. Church, building on his lambda calculus—a formal system for expressing functions introduced in the early 1930s—defined "effective calculability" via λ-definability and proved in his 1936 paper that no general decision procedure exists for first-order logic. Concurrently, Turing introduced the abstract concept of a Turing machine, a device capable of simulating any algorithmic process through a finite set of states and symbols on an infinite tape, and demonstrated that the halting problem for these machines is undecidable, directly answering Hilbert's question. These works not only refuted the Entscheidungsproblem but also provided concrete models of computation, highlighting the limits of mechanical procedures in mathematics. Complementing these developments, Stephen Kleene formalized general recursive functions in 1936, extending the primitive recursive functions employed by Kurt Gödel in 1931 and building on examples like Wilhelm Ackermann's 1928 function, which is total recursive but not primitive recursive. Kleene's framework defined a class of functions computable by procedures involving unbounded minimization, proving their equivalence to Church's λ-definable functions and Turing's computable functions, thus unifying disparate notions of algorithm under recursion theory. This convergence in 1936 marked the crystallization of computability theory, laying the groundwork for understanding the scope and limitations of computation as a response to the era's foundational challenges.

Postwar Expansion

Following World War II, the theory of computation experienced significant institutional and theoretical expansion, building on prewar foundational models to address emerging computational needs in programming languages and machine design. In the 1950s and 1960s, automata theory advanced prominently, with Michael Rabin and Dana Scott introducing nondeterministic finite automata and decision problems for them in their seminal 1959 paper, enabling formal analysis of computational acceptance for finite inputs. Concurrently, Noam Chomsky developed a hierarchy classifying formal grammars and languages in 1956, providing a framework for understanding syntax relevant to early compiler design. These contributions diversified the field beyond pure computability, fostering applications in language processing and verification. The establishment of dedicated conferences marked the field's maturation as a distinct discipline. The first Symposium on Switching Circuit Theory and Logical Design, now known as the IEEE Symposium on Foundations of Computer Science (FOCS), convened in 1960 in Chicago, serving as a primary venue for theoretical advancements in automata and computational complexity. Complementing this, the Association for Computing Machinery (ACM) launched the Symposium on Theory of Computing (STOC) in 1969, sponsored by its Special Interest Group on Algorithms and Computation Theory (SIGACT), which had formed in 1968 to promote theoretical research. These events institutionalized discourse among researchers, accelerating idea exchange. In the 1960s, the field expanded into computational complexity, with Juris Hartmanis and Richard E. Stearns formalizing time and space measures as functions of input size in 1965, laying groundwork for complexity classes that bound efficient computation. Later influential textbooks synthesized these complexity results for broader adoption. The 1970s saw further growth with Cook's 1971 theorem demonstrating the NP-completeness of Boolean satisfiability, catalyzing analysis of intractable problems and algorithm efficiency. Institutionalization solidified through ACM's influence and integration into computer science curricula. ACM's curricular guidelines from the late 1960s onward recommended core theory courses, embedding automata theory, computability, and complexity in undergraduate programs to train practitioners in rigorous analysis. This integration, alongside SIGACT's support for conferences and publications, transformed the theory of computation from a niche pursuit into a cornerstone of education and research by the late 20th century.

Core Branches

Automata Theory

Automata theory is the study of abstract computing devices, known as automata, and the computational problems that can be solved using them. These devices model the behavior of systems that process inputs according to predefined rules, focusing on their ability to recognize patterns in strings over an alphabet. The field examines the power and limitations of such machines in accepting or rejecting languages, providing a foundation for understanding language recognition at various levels of computational power. Central to automata theory is the Chomsky hierarchy, which classifies formal languages based on the types of grammars that generate them and the corresponding automata that recognize them. Proposed by Noam Chomsky, this hierarchy organizes languages into four levels: Type 3 (regular languages), Type 2 (context-free languages), Type 1 (context-sensitive languages), and Type 0 (recursively enumerable languages). Each level represents increasing expressive power, with regular languages being the simplest and recursively enumerable languages the most general. Type 3 grammars consist of productions of the form A \to aB or A \to a, where A and B are non-terminals and a is a terminal; Type 2 allows A \to \alpha, where \alpha is any string of terminals and non-terminals; Type 1 restricts productions to \alpha A \beta \to \alpha \gamma \beta with |\gamma| \geq 1, so derivations never shorten a string; and Type 0 imposes no restrictions beyond the standard replacement rules. This classification highlights the correspondence between grammar complexity and the resources needed for recognition. Key automata in this hierarchy include finite automata for regular languages, pushdown automata for context-free languages, and linear bounded automata for context-sensitive languages. Finite automata are the simplest, consisting of a finite set of states, an input alphabet, a transition function, a start state, and a set of accepting states. Deterministic finite automata (DFA) have a unique transition for each input symbol from each state, while nondeterministic finite automata (NFA) allow multiple or no transitions, including \epsilon-transitions that consume no input; despite this, NFAs recognize exactly the same class of languages as DFAs. These models were formalized by Michael Rabin and Dana Scott, who proved the equivalence between NFAs and DFAs via the subset construction. Pushdown automata (PDA) extend finite automata by incorporating a stack, enabling recognition of context-free languages through operations like push, pop, and \epsilon-moves; a PDA is defined by states, an input alphabet, a stack alphabet, a transition function, a start state, and a notion of acceptance (either by final state or by empty stack). This model was developed in work by Marcel-Paul Schützenberger and others, linking PDAs directly to context-free grammars. Linear bounded automata (LBA) are nondeterministic Turing machines whose tape is bounded linearly by the input length, and they recognize the context-sensitive languages. Pumping lemmas provide tools to prove that certain languages are not regular or context-free by showing they violate the structural properties of these classes. For regular languages, the pumping lemma states that if L is regular, there exists a pumping length p such that any string w \in L with |w| \geq p can be divided as w = xyz where |xy| \leq p, |y| > 0, and xy^iz \in L for all i \geq 0. This lemma, proven by Rabin and Scott, implies that sufficiently long strings in regular languages contain repeatable substrings whose repetition does not alter membership.
For context-free languages, a similar lemma holds: if L is context-free, there exists p such that any w \in L with |w| \geq p can be written w = uvxyz where |vxy| \leq p, |vy| > 0, and uv^ixy^iz \in L for all i \geq 0. Established by Yehoshua Bar-Hillel, Micha A. Perles, and Eli Shamir, this lemma captures the recursive structure inherent in context-free derivations. In practice, automata theory underpins compiler design, particularly in lexical analysis and parsing. Lexical analysis, or scanning, uses finite automata to tokenize source code by recognizing patterns like identifiers and keywords as regular languages, enabling efficient matching via DFA implementations. Parsing employs pushdown automata to analyze syntactic structure according to context-free grammars, facilitating the construction of parse trees for valid programs. These applications demonstrate how automata enable systematic language processing in software tools.
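As a concrete illustration of the lowest level of the hierarchy, the following Python sketch (an added example with an invented transition table) encodes a small DFA that accepts the regular language of binary strings whose value is divisible by 3; it processes each input symbol with a single table lookup, the same mechanism a DFA-based scanner uses.

```python
# Hypothetical illustration: a DFA whose three states track the value of the
# input, read as a binary number, modulo 3. It accepts exactly the binary
# strings denoting multiples of 3, which form a regular language.

DFA = {
    "start": 0,
    "accept": {0},
    # delta[state][symbol] -> next state; reading bit b sends remainder s to (2*s + b) % 3
    "delta": {s: {"0": (2 * s) % 3, "1": (2 * s + 1) % 3} for s in (0, 1, 2)},
}

def accepts(dfa: dict, w: str) -> bool:
    state = dfa["start"]
    for symbol in w:
        state = dfa["delta"][state][symbol]   # one table lookup per input symbol
    return state in dfa["accept"]

if __name__ == "__main__":
    for w in ["0", "11", "110", "111", "1001"]:
        print(w, accepts(DFA, w))   # values 0, 3, 6, 7, 9 -> True, True, True, False, True
```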

Computability Theory

Computability theory, a central branch of the theory of computation, examines the fundamental limits of what can be computed by mechanical procedures, focusing on the solvability of decision problems and the inherent undecidability of certain questions. It establishes that there exist well-defined mathematical problems for which no algorithm can provide a correct yes-or-no answer in finite time for all inputs, revealing absolute barriers beyond which computation cannot proceed regardless of available resources. The field originated in the 1930s through efforts to formalize the notion of effective calculability and has profound implications for understanding the boundaries of algorithmic solvability in mathematics and computer science. In computability theory, problems are often formalized as languages, which are sets of strings over a finite alphabet. A language is decidable, or recursive, if there exists an algorithm that, for any input string, halts and correctly outputs whether the string belongs to the language. In contrast, a language is semi-decidable, or recursively enumerable, if there exists an algorithm that halts and accepts every string in the language but may loop indefinitely on strings not in the language. The class of recursive languages is strictly contained within the class of recursively enumerable languages: every recursive language is recursively enumerable, but the converse does not hold. These definitions align with the Church-Turing thesis, which posits that recursive functions capture all effectively computable functions. A seminal result in computability theory is the undecidability of the halting problem, which asks whether a given program will halt on a given input. Alan Turing proved this in 1936 using a diagonalization argument: assume a halting decider exists, construct a program that behaves oppositely on its own description, and a contradiction follows. This shows the halting problem is not recursive, establishing the first explicit example of an undecidable problem and demonstrating that no general algorithm can predict program termination. Later proofs of undecidability typically work by reduction, showing that solvability of a given problem would imply solvability of the halting problem; since the latter is impossible, so is the former. Rice's theorem generalizes this undecidability, stating that any non-trivial semantic property of the functions computed by Turing machines—meaning any property that holds for some but not all such functions—is undecidable. Formulated by Henry Gordon Rice in 1953, the theorem applies to properties of recursively enumerable languages, proving that questions like "Does this program compute an even function?" or "Is the range of this function finite?" cannot be decided algorithmically. The proof reduces the halting problem to the property in question, showing that if the property were decidable, the halting problem would be too, which is impossible. This theorem underscores that undecidability pervades non-trivial questions about program behavior. To prove undecidability, computability theory employs reductions that map one problem to another while preserving solvability. A many-one reduction from language A to language B is a total computable function f such that x ∈ A if and only if f(x) ∈ B; introduced by Emil Post in 1944, it shows that if B is decidable, then A is too. A Turing reduction, also rooted in Post's 1944 work but formalized via the oracle machines of Turing's 1939 analysis, allows an algorithm for A to query an oracle for B multiple times, using adaptive computations. Turing reductions are more general than many-one reductions, as every many-one reduction is a Turing reduction but not vice versa; they are used to establish relative undecidability, for example showing that the complement of the halting problem Turing-reduces to the halting problem itself. These techniques enable chaining undecidability proofs across problems. The hierarchy of sets in computability theory distinguishes recursive sets, which are decidable, from recursively enumerable sets, which include the recursive ones but extend to semi-decidable cases like the halting set.
Stephen Kleene formalized general recursive functions in 1936, showing that the ranges of partial recursive functions yield exactly the recursively enumerable sets, while the sets with total recursive characteristic functions are the recursive sets. The inclusion of recursive sets among recursively enumerable sets is proper: the complement of a recursive set is recursive, but the complement of a non-recursive recursively enumerable set is not recursively enumerable. The hierarchy also highlights that while certain subclasses of the recursive sets, such as the regular languages, can be recognized by finite automata, the full recursively enumerable class requires more powerful models and admits undecidable membership questions.
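The diagonalization behind the halting problem's undecidability, described above, can be sketched informally in code; the Python fragment below is purely illustrative, since the hypothetical decider halts cannot actually be implemented.

```python
# Sketch of Turing's diagonalization argument. Assume, for contradiction, that
# halts(program_source, input_data) were a total, always-correct halting decider.
# The function `paradox` below then behaves oppositely on its own source code,
# so no such decider can exist. (The body of `halts` is only a placeholder; the
# point of the argument is that *no* correct body can ever be written.)

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical decider: returns True iff the program halts on the input."""
    raise NotImplementedError("cannot exist, by Turing's 1936 argument")

def paradox(source: str) -> None:
    if halts(source, source):   # if the decider claims "halts on itself"...
        while True:             # ...loop forever,
            pass
    # ...otherwise halt immediately.

# Feeding paradox its own source yields a contradiction either way:
# if halts(p, p) were True, paradox(p) would loop forever; if False, it would halt.
```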

Computational Complexity Theory

Computational complexity theory is a branch of the theory of computation that studies the resources required to solve computational problems, focusing on time and space as functions of input size, under the assumption that the problems are decidable. It classifies problems based on the efficiency of algorithms that solve them, using asymptotic measures to abstract away constant factors and lower-order terms. This field emerged to quantify the inherent difficulty of computation, distinguishing between tractable problems solvable in polynomial time and intractable ones requiring superpolynomial resources. Seminal work in this area formalized complexity classes and established foundational results on their separations and completeness. The deterministic time complexity class DTIME(f(n)) consists of all languages decidable by a deterministic Turing machine in at most O(f(n)) steps on inputs of length n, where f(n) is a time-constructible function. Similarly, DSPACE(f(n)) includes languages decidable using O(f(n)) space. Nondeterministic counterparts are defined analogously: NTIME(f(n)) for languages accepted by nondeterministic Turing machines in O(f(n)) time, and NSPACE(f(n)) for those using O(f(n)) space. These classes capture the power of nondeterminism, where machines can "guess" solutions and verify them efficiently. For polynomial-bounded resources, P = \bigcup_{k \geq 0} DTIME(n^k) denotes problems solvable in polynomial time, while NP = \bigcup_{k \geq 0} NTIME(n^k) includes problems verifiable in polynomial time. The relationship between P and NP remains an open question, posed by Stephen Cook in 1971, with profound implications: if P = NP, many optimization problems across science and industry would become efficiently solvable, revolutionizing algorithm design; conversely, P \neq NP would confirm the intractability of certain real-world challenges. A cornerstone of the field is NP-completeness, which identifies the hardest problems in NP. The Cook-Levin theorem, established independently by Stephen Cook in 1971 and Leonid Levin in 1973, proves that the Boolean satisfiability problem (SAT)—determining whether a Boolean formula has a satisfying assignment—is NP-complete, making it the first such problem identified via polynomial-time reductions from arbitrary NP problems. Subsequent work by Richard Karp in 1972 demonstrated that 21 combinatorial problems, including vertex cover, are NP-complete, for example by reducing 3-SAT (a restricted version of SAT with clauses of three literals) to them. For instance, the reduction from 3-SAT to vertex cover constructs a graph whose vertices represent literals and clause occurrences, ensuring that a satisfying assignment corresponds to a vertex cover whose size is determined by the numbers of variables and clauses. This completeness implies that solving any NP-complete problem efficiently would collapse P and NP. Hierarchy theorems establish strict separations among complexity classes, showing that more resources enable solving strictly more problems. The time hierarchy theorem states that if f(n) is time-constructible and g(n) \log g(n) = o(f(n)), then DTIME(g(n)) \subsetneq DTIME(f(n)); a diagonalization argument constructs a language decidable in O(f(n)) time but not in any such smaller time bound. The space hierarchy theorem provides a tighter bound: for space-constructible s(n) \geq \log n and g(n) = o(s(n)), DSPACE(g(n)) \subsetneq DSPACE(s(n)), again via diagonalization against a universal simulator. These results, following the approach of Juris Hartmanis and Richard Stearns in 1965, underpin the belief in a rich structure of complexity classes beyond P and NP. Asymptotic analysis employs Big-O notation to describe growth rates, where a function T(n) = O(g(n)) if there exist constants c > 0 and n_0 such that T(n) \leq c \cdot g(n) for all n \geq n_0.
Popularized by Donald Knuth in the analysis of algorithms, this notation facilitates comparing efficiencies without exact constants. For example, comparison-based sorting requires \Omega(n \log n) time in the worst case, a bound that algorithms like mergesort attain with O(n \log n) running time, highlighting why linear-time comparison sorting is impossible under this model. Such notations enable precise classification of problems, as in distinguishing polynomial-time from exponential-time classes.
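The verification-based view of NP described above can be made concrete; the following Python sketch (an added illustration using a DIMACS-style clause encoding as an assumption) checks a candidate satisfying assignment for a CNF formula in time linear in the formula's size, which is exactly the kind of efficient certificate checking that membership in NP requires.

```python
# Hypothetical sketch of NP-style verification: checking a candidate satisfying
# assignment for a CNF formula takes time linear in the formula size, even though
# *finding* such an assignment is not known to be possible in polynomial time.

# A CNF formula is a list of clauses; each clause is a list of nonzero integers,
# where k means variable k and -k means its negation (DIMACS-style convention).
Formula = list[list[int]]

def verify(formula: Formula, assignment: dict[int, bool]) -> bool:
    """Return True iff the assignment (the 'certificate') satisfies every clause."""
    for clause in formula:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False   # some clause has no true literal
    return True

if __name__ == "__main__":
    # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
    phi = [[1, -2], [2, 3], [-1, -3]]
    print(verify(phi, {1: True, 2: True, 3: False}))   # True
    print(verify(phi, {1: True, 2: False, 3: True}))   # False
```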

Formal Models of Computation

Turing Machines

A Turing machine (TM) is a mathematical model of computation that formalizes the notion of an algorithm operating on discrete data. Introduced by Alan Turing in 1936, it consists of an infinite, one-dimensional tape divided into cells, each capable of holding a single symbol from a finite alphabet \Gamma, which includes a blank symbol. A read/write head moves along the tape, reading or writing symbols and shifting left or right one cell at a time. The machine's control is governed by a finite set of states Q, with a designated start state q_0 and accept/reject states. The behavior is defined by a partial transition function \delta: Q \times \Gamma \to Q \times \Gamma \times \{L, R\}, where for a current state q \in Q and tape symbol \sigma \in \Gamma, \delta(q, \sigma) = (q', \sigma', D) specifies the next state q', the symbol \sigma' to write, and the direction D (left L or right R) to move the head; if undefined, the machine halts and rejects. Computation begins with the input string on the tape (rest blank), head at the leftmost cell, and machine in q_0; it halts accepting if it enters the accept state, rejecting if it enters the reject state, or may loop indefinitely. Several variants extend the basic single-tape deterministic TM while preserving computational power. A multi-tape TM employs k \geq 2 infinite tapes, each with its own read/write head, and a transition function \delta: Q \times \Gamma^k \to Q \times \Gamma^k \times \{L, R, S\}^k, where S stays in place; any multi-tape TM can be simulated by a single-tape TM with at most quadratic time overhead, establishing equivalence. A nondeterministic TM (NTM) allows \delta to map to a finite set of possible next states, symbols, and directions, enabling branching computations; a language is accepted if at least one computation path accepts, and NTMs are equivalent to deterministic TMs in recognizability, though simulation incurs exponential time in the worst case. The universal Turing machine (UTM), also from Turing's 1936 work, simulates any other TM given its description (encoded as a finite string) and the original input on the tape; it uses a fixed transition table to interpret and execute the encoded machine's steps, demonstrating that a single machine can perform any computable function by appropriately encoding the program. Turing machines capture the intuitive notion of algorithmic computation, as posited by the Church-Turing thesis, which asserts that any effectively calculable function on natural numbers is computable by a TM. This model subsumes weaker automata: a finite automaton can be simulated by a TM that ignores the tape beyond the input length and uses only its finite control for transitions, effectively padding the tape with blanks to mimic finite memory. TMs also feature prominently in undecidability proofs, such as showing certain problems insoluble by any algorithm. As a concrete example, consider a deterministic TM recognizing the language of palindromes over \{0,1\} (strings reading the same forwards and backwards). The TM iteratively marks and matches symbols from both ends of the input, using special symbols to track progress. Let \Gamma = \{0, 1, \text{blank}, X, Y\}, where X marks a matched 0 and Y marks a matched 1; states include q_0 (start, at the leftmost unmarked symbol), q_l (scan right to the end after marking the left symbol), q_r (check and mark the right end), q_{back} (return to the left end), q_a (accept), and q_{rj} (reject). The process handles both even and odd lengths by continuing until the head reaches the middle without a mismatch.
Key transitions (simplified; some routine moves over blanks and marks are omitted for brevity):
  • From q_0 at the leftmost unmarked symbol: \delta(q_0, 0) = (q_l, X, R); \delta(q_0, 1) = (q_l, Y, R); if the cell holds a blank or an already marked symbol, go to q_a (the input is empty or fully matched).
  • In q_l (scan to the right end): \delta(q_l, 0) = (q_l, 0, R); \delta(q_l, 1) = (q_l, 1, R); \delta(q_l, X) = (q_l, X, R); \delta(q_l, Y) = (q_l, Y, R); on a blank, \delta(q_l, \text{blank}) = (q_r, \text{blank}, L), leaving the head at the rightmost non-blank cell.
  • In q_r (match the rightmost remaining symbol against the left mark): the state remembers which symbol was marked on the left, e.g., substate q_{r0} after marking a 0. Skipping leftward over already marked cells, the machine matches the first unmarked symbol it finds: \delta(q_{r0}, 0) = (q_{back}, X, L), while a mismatch such as \delta(q_{r0}, 1) = (q_{rj}, 1, R) halts and rejects; if only marked cells remain, every symbol has been matched and the machine accepts.
The machine then shuttles the head back toward the left end in q_{back}, moving left over unmarked symbols: \delta(q_{back}, 0) = (q_{back}, 0, L); \delta(q_{back}, 1) = (q_{back}, 1, L); on reaching a marked symbol (X or Y), it moves one cell right and re-enters q_0 to mark the next leftmost unmarked symbol. When no unmarked symbols remain and no mismatch has occurred, the machine accepts. This construction runs in O(n^2) steps because of the repeated traversals of the input.
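A complete, runnable version of this idea is easy to simulate in software. The Python sketch below (an added illustration, not part of the original construction) implements a closely related palindrome decider that erases matched symbols with the blank instead of marking them with X and Y, which keeps the transition table short while preserving the O(n^2) shuttling behavior.

```python
# A runnable sketch of a single-tape Turing machine deciding the language of
# palindromes over {0,1}. Strategy: erase the leftmost symbol, remember it in
# the state, erase a matching rightmost symbol, and repeat. '_' is the blank.

BLANK = "_"

# delta[(state, symbol)] = (new_state, written_symbol, direction)
delta = {
    ("q0", "0"): ("q1_0", BLANK, "R"), ("q0", "1"): ("q1_1", BLANK, "R"),
    ("q0", BLANK): ("accept", BLANK, "R"),
    # scan right to the first blank, remembering the erased left symbol
    ("q1_0", "0"): ("q1_0", "0", "R"), ("q1_0", "1"): ("q1_0", "1", "R"),
    ("q1_0", BLANK): ("q2_0", BLANK, "L"),
    ("q1_1", "0"): ("q1_1", "0", "R"), ("q1_1", "1"): ("q1_1", "1", "R"),
    ("q1_1", BLANK): ("q2_1", BLANK, "L"),
    # compare the rightmost remaining symbol with the remembered one
    ("q2_0", "0"): ("q3", BLANK, "L"), ("q2_0", "1"): ("reject", "1", "L"),
    ("q2_0", BLANK): ("accept", BLANK, "R"),
    ("q2_1", "1"): ("q3", BLANK, "L"), ("q2_1", "0"): ("reject", "0", "L"),
    ("q2_1", BLANK): ("accept", BLANK, "R"),
    # return to the leftmost remaining symbol
    ("q3", "0"): ("q3", "0", "L"), ("q3", "1"): ("q3", "1", "L"),
    ("q3", BLANK): ("q0", BLANK, "R"),
}

def run(word: str) -> bool:
    tape = {i: c for i, c in enumerate(word)}   # sparse tape, blank elsewhere
    state, head = "q0", 0
    while state not in ("accept", "reject"):
        state, tape[head], move = delta[(state, tape.get(head, BLANK))]
        head += 1 if move == "R" else -1
    return state == "accept"

if __name__ == "__main__":
    for w in ["", "0", "0110", "0100", "10101"]:
        print(repr(w), run(w))   # True, True, True, False, True
```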

Alternative Models

Alternative models of computation offer diverse formalisms that capture the same class of computable functions as the Turing machine, providing alternative lenses through which to understand effective calculability. These models emphasize different aspects, such as functional abstraction, recursive definitions, register operations, string rewriting, or axiomatic production rules, and their equivalence to Turing machines underscores the robustness of the underlying concept of computability. The untyped lambda calculus, developed by Alonzo Church, is a formal system for expressing functions and their applications using abstraction and application as primitive operations. Terms are built from variables, abstractions of the form \lambda x.M (where x is a variable and M is a term), and applications of the form M N. Computation proceeds via beta-reduction, the key reduction rule that substitutes the argument N for free occurrences of x in M, denoted (\lambda x.M)\, N \to M[N/x]. Church encodings represent data structures, such as natural numbers, within this pure functional framework; for example, the Church numeral for n is \lambda f.\lambda x.f^n x, where f^n x applies f iteratively n times to x. This model highlights a functional perspective on computation, where all operations are expressed as higher-order functions. μ-recursive functions, formalized by Stephen Kleene, define computable functions starting from basic functions and closing under composition, primitive recursion, and minimization. The base functions include the zero function Z(n) = 0, the successor function S(n) = n+1, and the projection functions \pi_i^k(n_1, \dots, n_k) = n_i. Primitive recursion builds functions like addition via schemes such as f(0, \vec{y}) = g(\vec{y}) and f(n+1, \vec{y}) = h(n, \vec{y}, f(n, \vec{y})). The μ-operator introduces minimization: \mu n \, g(n, \vec{y}) = z, where z is the least natural number such that g(z, \vec{y}) = 0, if such a z exists. An example is the predecessor function, \text{pred}(0) = 0 and \text{pred}(n+1) = n, definable by primitive recursion. This arithmetic-based model emphasizes recursive construction of functions on natural numbers. Register machines, as defined by John Shepherdson and Howard Sturgis, consist of a finite number of registers holding non-negative integers and a program of instructions that manipulate these registers. Basic instructions include increment R_i \leftarrow R_i + 1, decrement R_i \leftarrow R_i - 1 (if R_i > 0, else no-op), and zero-test JZ \, i \, l (jump to line l if R_i = 0). Computation starts with inputs in designated registers and halts when reaching a stop instruction. These machines simulate arbitrary recursive functions, for example by encoding Turing-machine tapes into registers using arithmetic codings such as Gödel numbering. This imperative model illustrates computation via simple arithmetic operations on unbounded counters. Markov algorithms, introduced by Andrey Andreyevich Markov, are string rewriting systems operating on words over a finite alphabet. A Markov algorithm is an ordered sequence of production rules of the form \alpha \to \beta, where \alpha and \beta are strings; at each step the first rule in the ordering whose left side occurs in the current word is applied, replacing the leftmost occurrence of \alpha with \beta, and the process then restarts from the beginning of the rule list, unless the applied rule is marked as terminal (written \alpha \to \cdot\, \beta), in which case it halts. Rules are applied deterministically in this fashion until no applicable rule remains or a terminal rule fires. This model formalizes computation as successive substitutions on symbolic strings, akin to a generalized production system.
Post systems, developed by Emil Post, are axiomatic production systems that model computation through rules for generating strings over a finite alphabet. A system specifies initial strings (axioms) and production rules; in the tag-system variant, a rule of the form g_i P \to Q means that when the current string begins with the symbol g_i, a fixed number of leading symbols is deleted and the string Q associated with g_i is appended. Starting from an axiom, derivations generate new strings by applying rules, with computation corresponding to theorem-proving in this semi-Thue-like framework. Tag systems thus restrict productions to depend only on the initial symbol, yet they suffice for universal computation, with halting or non-halting derivations encoding the result. This combinatorial model views computation as a generative process in formal languages.
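The Church encodings described above can be imitated directly in any language with first-class functions; the following Python sketch is an informal illustration (Python's eager evaluation stands in for beta-reduction) of Church numerals and arithmetic on them.

```python
# Illustrative sketch: Church numerals expressed with Python lambdas, mirroring
# the untyped lambda calculus encoding described above.

zero  = lambda f: lambda x: x                        # \f.\x. x
succ  = lambda n: lambda f: lambda x: f(n(f)(x))     # \n.\f.\x. f (n f x)
plus  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
times = lambda m: lambda n: lambda f: m(n(f))

def to_int(church) -> int:
    """Decode a Church numeral by applying it to the ordinary successor and 0."""
    return church(lambda k: k + 1)(0)

if __name__ == "__main__":
    two, three = succ(succ(zero)), succ(succ(succ(zero)))
    print(to_int(plus(two)(three)))    # 2 + 3 = 5
    print(to_int(times(two)(three)))   # 2 * 3 = 6
```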

Church-Turing Thesis

Statement and Implications

The Church-Turing thesis posits that every effectively calculable function on the natural numbers can be computed by a Turing machine. This conjecture equates the informal notion of "effective calculability"—a process that can be carried out by a human clerk following explicit, finite instructions—with the formal power of Turing machines as defined by Alan Turing in his 1936 paper. Independently, Alonzo Church arrived at an equivalent formulation using the λ-calculus in his 1936 work, proposing that a function is effectively calculable if it can be obtained through a series of effective operations from basic functions. The thesis carries profound implications for computer science, asserting that in classical settings no device can perform "super-Turing" computations, such as solving the halting problem for arbitrary Turing machines. This establishes that the limits of Turing machines define the boundaries of algorithmic solvability, guiding algorithm design by emphasizing problems within this computable realm and highlighting undecidable issues like the Entscheidungsproblem. It also unifies diverse formal models of computation, such as recursive functions and register machines, under Turing equivalence. Variants of the thesis distinguish between a weak form, focused on mathematical functions and effective procedures in logic, and a strong form, which extends the claim to assert that any physical process in the universe is Turing-computable. The weak version aligns closely with the original 1936 formulations, while the strong version, sometimes called the physical Church-Turing thesis, implies that the laws of physics do not permit hypercomputational devices. Criticisms of the thesis often center on hypercomputation proposals, such as machines that access non-computable oracles to exceed Turing limits, or models involving infinite computations in finite time; however, these remain theoretical constructs without empirical support or demonstrated physical feasibility. In artificial intelligence, the thesis underscores limits on mechanical reasoning, indicating that Turing-based systems cannot achieve computations beyond effectively calculable functions, thereby constraining the scope of algorithmic intelligence.

Equivalence of Models

The equivalence of major models of computation demonstrates that they possess the same expressive power for defining computable functions, meaning any function computable in one model can be simulated by another, albeit potentially with overhead in time or space. This equivalence underpins the Church-Turing thesis by showing that diverse formalisms—such as Turing machines (TMs), the λ-calculus, and μ-recursive functions—capture the same class of partial recursive functions. Simulations between these models typically involve encoding the state and data structures of one into the primitives of the other, ensuring that the simulation preserves computability while bounding resource overheads. A key result is the mutual simulation between TMs and the λ-calculus. Alan Turing proved that every λ-definable function is computable by a TM and vice versa, by encoding λ-terms and their reductions directly onto the TM tape. The proof represents λ-terms as sequences of tape symbols, using a Gödel-like numbering to encode variables, abstractions, and applications as finite strings; the TM then simulates β-reduction by scanning and rewriting these encodings, mimicking the substitution rules of the λ-calculus. Conversely, a TM can be encoded as a λ-term that interprets its transition function, with the tape modeled as a Church-encoded list. This bidirectional simulation establishes that the λ-calculus and TMs define the same partial functions, with each simulated step incurring overhead polynomial in the size of the encodings. Similarly, TMs are equivalent to μ-recursive functions, the class of partial functions obtained from basic functions (zero, successor, projection) via composition, primitive recursion, and the μ-operator (unbounded minimization). A TM can simulate μ-recursive functions by iteratively applying their definitions: it encodes the function's schema on the tape, executes composition and recursion steps deterministically, and handles the μ-operator by searching for the least value satisfying the minimization condition. The reverse direction follows Kleene's normal form theorem: the step-by-step behavior of a TM is captured by a primitive recursive predicate, and the μ-operator searches for a halting computation, so every TM-computable function is μ-recursive. This shows that the partial functions computed by TMs exactly coincide with the μ-recursive functions, preserving partiality (i.e., functions may not terminate on all inputs). Total recursive functions, those defined on every input, form a proper subclass, but the full equivalence holds for partial functions. Within the TM model, variants like multi-tape and single-tape TMs are equivalent, but simulations incur resource costs. A multi-tape TM can be simulated by a single-tape TM in quadratic time: the single tape interleaves the contents of all tapes, separated by markers, and simulates each multi-head move by shifting the relevant segments left or right, which requires O(n) steps per original step on inputs of length n, yielding O(n²) time overall. The reverse simulation is trivial, as a multi-tape TM can simply ignore its extra tapes. Deterministic TMs (DTMs) are a special case of nondeterministic TMs (NTMs), so NTMs simulate them directly; the converse—simulating NTMs with DTMs—takes exponential time by brute-force search, while the space cost is far smaller. Savitch's theorem states that for any space bound s(n) ≥ log n, NSPACE(s(n)) ⊆ DSPACE(s(n)²), proved by a deterministic recursive procedure that checks reachability in the NTM's configuration graph using divide-and-conquer on pairs of configurations, storing O(s(n)) bits at each of the O(s(n)) levels of midpoint recursion, for O(s(n)²) space in total.
This at-most-quadratic space blowup shows that nondeterminism yields at most a polynomial space advantage over determinism; in particular, PSPACE = NPSPACE.
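The recursion at the heart of Savitch's argument can be sketched concretely; the Python fragment below is an added illustration over a toy configuration graph (real applications recompute successor configurations on demand rather than materializing the graph, which is what keeps the space usage low).

```python
# Sketch of the divide-and-conquer reachability idea behind Savitch's theorem.
# canreach(u, v, t) asks whether v is reachable from u in at most 2**t steps,
# recursing on a guessed midpoint configuration and reusing space across the
# two half-length subproblems.

def canreach(succ, u, v, t, nodes) -> bool:
    """succ(x) yields the successors of configuration x; nodes is the universe."""
    if t == 0:
        return u == v or v in succ(u)        # at most one step
    return any(                              # try every midpoint configuration m
        canreach(succ, u, m, t - 1, nodes) and canreach(succ, m, v, t - 1, nodes)
        for m in nodes
    )

if __name__ == "__main__":
    graph = {0: [1], 1: [2], 2: [3], 3: []}   # toy configuration graph: 0->1->2->3
    succ = lambda x: graph[x]
    print(canreach(succ, 0, 3, 2, graph))     # path of length 3 <= 2**2 -> True
    print(canreach(succ, 3, 0, 2, graph))     # no path back -> False
```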

Connections to Other Fields

Applications in Computer Science

The theory of computation provides foundational principles for compiler design, particularly in lexical analysis and syntax analysis. Lexical analyzers, or scanners, recognize tokens in source code using regular expressions, which are equivalent to the regular languages accepted by finite automata. This equivalence allows efficient implementation of lexers as deterministic finite automata (DFAs), enabling linear-time scanning of input streams. For syntax analysis, parsers employ context-free grammars (CFGs) to structure programs hierarchically, with pushdown automata modeling the parsing process that validates syntactic correctness. Seminal work formalized these techniques, showing that LR parsers, derived from CFGs, can handle a broad class of programming languages in linear time.
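To ground the lexical-analysis step, the sketch below (added for illustration; the token names and the toy input are invented) uses Python regular expressions, which denote regular languages, to tokenize a line of source text in a single left-to-right scan.

```python
# Hypothetical illustration: a tiny lexer driven by regular expressions, which
# correspond to finite automata by Kleene's theorem. Python's `re` module
# serves as the pattern matcher here; the token names below are assumptions
# made only for this example.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source: str):
    """Yield (kind, text) pairs in a single left-to-right scan of the input."""
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":          # discard whitespace tokens
            yield match.lastgroup, match.group()

if __name__ == "__main__":
    print(list(tokenize("x1 = 42 + rate")))
    # [('IDENT', 'x1'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'rate')]
```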
In algorithm design, complexity theory guides the selection and analysis of efficient solutions, emphasizing polynomial-time algorithms within the class P to ensure scalability. Designers analyze time and space bounds using asymptotic notation, prioritizing algorithms solvable in O(n^k) time for practical inputs, as exponential-time methods become infeasible for large n. This approach underpins the design of core algorithms, where complexity classes such as NP and NP-hard inform approximations for intractable problems; for example, the traveling salesman problem is tackled with heuristics that find near-optimal tours because computing exact solutions is NP-hard.
Software verification leverages automata theory for model checking, an automated technique to exhaustively explore system states against specifications. By constructing a product automaton from the system's finite-state model and a representation of the property, verifiers detect violations such as deadlocks or safety failures in concurrent software. This method, often linear in the size of the explored state space, has verified protocols and hardware and software systems, scaling through symbolic representations like binary decision diagrams to handle millions of states.
Cryptography relies on complexity-theoretic assumptions, particularly one-way functions, which are easy to compute but hard to invert on average and which underpin secure primitives like encryption and digital signatures. Public-key systems assume that problems such as integer factorization remain intractable for efficient adversaries, which in particular requires P \neq NP. For instance, the security of RSA encryption rests on the hardness of factoring large composites, a problem believed to lie outside P yet not known to be NP-complete, allowing efficient encryption while resisting known polynomial-time attacks.
Modern programming languages bridge theoretical gaps through type systems inspired by the lambda calculus, enabling safe polymorphism and type inference without runtime overhead. The Hindley-Milner system, a restriction of System F, infers principal types for expressions using unification, supporting let-polymorphism in languages like ML and Haskell. This allows generic functions, such as map over lists of any element type, to be type-checked statically, preventing errors like type mismatches while preserving much of the expressiveness of untyped lambda terms.
The theory of computation intersects deeply with mathematical logic through Kurt Gödel's incompleteness theorems, which establish fundamental limits on formal systems. The first incompleteness theorem states that any consistent formal system capable of expressing basic arithmetic is incomplete, meaning there exist true statements within the system's language that cannot be proved or disproved from its axioms. This incompleteness directly informs computability by highlighting undecidable propositions, as the theorem constructs a self-referential sentence asserting its own unprovability, which mirrors the structure of undecidable problems like the halting problem. The second incompleteness theorem further asserts that such a system's consistency cannot be proved within itself, assuming consistency, thereby linking logical provability to non-computable notions of truth. Alan Turing drew explicit inspiration from Gödel's work in his 1936 paper, formalizing computation via Turing machines and proving the halting problem undecidable, thus bridging incompleteness to the limits of algorithmic decidability.
Intuitionistic logic, pioneered by L. E. J. Brouwer in the early 20th century, rejects the law of excluded middle and emphasizes constructive proofs, where a statement's truth requires an explicit construction rather than mere non-contradiction. This paradigm aligns closely with the theory of computable functions, as intuitionistic mathematics demands that mathematical objects be effectively constructible. Stephen Kleene developed a realizability interpretation in the 1940s, showing that principles of intuitionistic arithmetic can be justified by recursive functions, where a proof of an existential statement provides a computable witness. Kleene's function realizability maps intuitionistic proofs to recursive functionals, ensuring that intuitionistic theorems about numbers correspond to computable procedures, thus embedding constructive mathematics within the recursive-function framework. This connection extends to higher-order intuitionistic systems, where Kleene and Vesley's axiomatization of Brouwer's intuitionistic analysis uses realizability to validate its principles via partial recursive functions.
In set theory and higher recursion theory, computability manifests through computable ordinals and hyperarithmetic sets, extending the arithmetical hierarchy beyond its finite levels. Computable ordinals, introduced by Alonzo Church and Stephen Kleene, are order types of well-orderings of the natural numbers that can be recognized and traversed by Turing machines; the supremum of all such ordinals is the Church-Kleene ordinal \omega_1^{CK}, which marks the boundary of ordinal computability. Kleene's system of ordinal notations, detailed in his 1938 work, provides a recursive system of names for these ordinals up to \omega_1^{CK}, enabling the formalization of transfinite recursion in computable terms. Building on this, Martin Davis defined hyperarithmetic sets in 1950 as those obtainable by transfinite iterations of the Turing jump along computable ordinals, forming the hyperarithmetic hierarchy \mathbf{H}_\alpha for \alpha < \omega_1^{CK}. These sets capture definability in second-order arithmetic, with the hyperarithmetic sets coinciding with the \Delta_1^1 sets of the analytical hierarchy, linking set-theoretic definability to relative computability.
Category theory provides a structural lens on computation through Cartesian closed categories (CCCs), which model the simply typed lambda calculus. Joachim Lambek established that CCCs—categories with finite products and exponential objects—are precisely the categorical semantics for typed lambda terms, where objects represent types, morphisms represent terms, and the exponential B^A encodes function spaces. In a CCC, currying corresponds to the natural isomorphism between morphisms A \times B \to C and morphisms A \to C^B, mirroring lambda abstraction and application and allowing categorical proofs of properties like beta-reduction. This correspondence, part of the Curry-Howard-Lambek correspondence, equates proofs in intuitionistic logic with lambda terms and categorical morphisms, facilitating abstract reasoning about computable functions.
Modern connections to proof theory and type theory further integrate computability with verified computation. Gerhard Gentzen's sequent calculus and cut-elimination theorem (Hauptsatz) from 1934 provide a foundation for ordinal analysis, where consistency proofs of arithmetic systems are reduced to transfinite induction up to ordinals like \epsilon_0, directly tying proof strength to computable ordinals.
Homotopy type theory (HoTT), developed since the early 2010s, extends Martin-Löf type theory with the univalence axiom and higher inductive types, interpreting types as topological spaces and identities as paths, enabling machine-checked proofs of homotopy-theoretic results. HoTT's computational content supports verified computation in proof assistants such as Coq and Agda, where equality proofs are paths and synthetic homotopy theory allows formal verification of computational properties, such as those arising in type-safe programming. This framework addresses gaps in classical foundations by providing constructive, computable interpretations of higher-dimensional structures.
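The flavor of machine-checked proof that such assistants support can be conveyed by a minimal example; the Lean 4 fragment below (an added illustration, not specific to HoTT) proves a simple arithmetic identity by induction, with every step verified by the proof checker.

```lean
-- Minimal illustration: a machine-checked proof by induction in Lean 4.
-- The kernel verifies each step, so the theorem needs no external trust.
theorem zero_add_eq (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```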

    Unlike deterministic Turing machines, there is a fundamental asymmetry between the acceptance and rejection criteria for nondeterministic Turing machines. Let N ...Missing: introduction | Show results with:introduction
  51. [51]
    The Church-Turing Thesis (Stanford Encyclopedia of Philosophy)
    Jan 8, 1997 · The Church-Turing thesis concerns the concept of an effective or systematic or mechanical method, as used in logic, mathematics and computer science.The Case for the Church... · The Church-Turing Thesis and...
  52. [52]
    Turing machine
    Example machine: Palindromes. The transition function for a Turing Machine to recognize palindromes. (0,a) = (1,x,R) /* State 0 (start state) */ (0,b) = (2,x ...
  53. [53]
    [PDF] THE CALCULI OF LAMBDA-CONVERSION
    The Calculi of Lambda-Conversion, by ALONZO CHURCH. 7 Finite Dimensional ... (1936), pp. 345 - 363. 10. Alonzo Church, Me.thema.tica.l logic ...
  54. [54]
    Computability of Recursive Functions | Journal of the ACM
    Computability of Recursive Functions. Authors: J. C. Shepherdson. J. C. ... First page of PDF. Formats available. You can view the full content in the ...
  55. [55]
    Introduction to a General Theory of Elementary Propositions
    Mar 19, 2013 · Post, Emil L. Publication date: 1921-07-01. Publisher: American Journal ... PDF download · download 1 file · SINGLE PAGE PROCESSED JP2 ZIP ...
  56. [56]
    The Church-Turing Thesis: Logical Limit or Breachable Barrier?
    Jan 1, 2019 · In its original form, the Church-Turing thesis concerned computation as Alan Turing and Alonzo Church used the term in 1936—human computation.
  57. [57]
    [PDF] Hypercomputation: computing more than the Turing machine - arXiv
    I then explain how the Church-Turing Thesis is commonly misunderstood and present new theses which better describe the possible limits on computability.
  58. [58]
    [PDF] From the Chinese Room Argument to the Church-Turing Thesis
    It's no exaggeration to say that the Church-Turing thesis has constituted the funda- mental inspiration behind AI, the reason for thinking that elec- tronic ...
  59. [59]
    [PDF] Equivalence of Turing Machine and μ-Recursive Functions
    µ-recursive function can be computed on a Turing machine. By definition, a µ-recursive function is any function that can be obtained from 0, σ, and πk i by ...
  60. [60]
    [PDF] 6 Turing Machines - Jeff Erickson
    proved that all general recursive functions are definable in the λ-calculus and vice versa. ... Turing machine as a "Turing machine interpreter written in Turing ...<|separator|>
  61. [61]
    Relationships between nondeterministic and deterministic tape ...
    The amount of storage needed to simulate a nondeterministic tape bounded Turingmachine on a deterministic Turing machine is investigated.
  62. [62]
    [PDF] Introduction to Automata Theory, Languages, and Computation
    The "lexical analyzer” of a typical compiler, that is, the compiler com- ponent that breaks the input text into logical units, such as identifiers, keywords ...
  63. [63]
    [PDF] ALFRED V. AHO JEFFREY D. ULLMAN - The Theory of Parsing ...
    Organizes and systematizes the en- tire field of compiler theory into a unified terminology, orientation and methodology. • Covers parser optimization tech-.
  64. [64]
    [PDF] On Basing One-Way Functions on NP-Hardness
    Nov 22, 2005 · One-way functions are functions that are easy to compute but hard to invert, where the hardness condition refers to the average-case complexity ...