Turing machine

A Turing machine is an abstract model of computation that formalizes algorithmic processes through a simple, idealized device consisting of an infinite tape divided into discrete cells, a read-write head that scans and modifies symbols on the tape, and a finite control with a set of rules that dictates transitions based on the current state and scanned symbol. Invented by the British mathematician Alan Turing in 1936, the model was introduced in his seminal paper "On Computable Numbers, with an Application to the Entscheidungsproblem," where it served to precisely define the notion of computability by simulating the step-by-step procedures of a human "computer" performing calculations. The Turing machine operates deterministically: at each step, given its current internal state and the symbol under the head, it follows a fixed transition rule to write a new symbol (or erase by writing a blank), move the head one cell left or right, and enter a new state, continuing until it reaches a halting state or loops indefinitely. This setup allows it to perform any computation that can be expressed as a finite sequence of such elementary operations, including reading input on the tape, processing it, and producing output. Turing also described a universal Turing machine, a single device capable of simulating any other Turing machine given its description as input, laying the groundwork for programmable computers and the concept of software. In computability theory, Turing machines provide a foundational framework for understanding the limits of computation, proving key results such as the undecidability of the halting problem, which demonstrates that no general algorithm exists to determine whether an arbitrary Turing machine will halt on a given input. The model's equivalence to other formal systems of computation, including Alonzo Church's lambda calculus and recursive functions, underpins the Church-Turing thesis, a widely accepted hypothesis stating that any effectively calculable function can be computed by a Turing machine, though it remains unprovable as it bridges formal mathematics with intuitive notions of effective calculability.
Despite their theoretical nature, Turing machines have profoundly influenced modern computer science, serving as the basis for complexity classes like P and NP, and highlighting the boundaries between solvable and unsolvable problems.

Overview

Informal description

A Turing machine is an abstract computational device that models the process of algorithmic computation through simple mechanical operations. It consists of an infinite tape divided into discrete cells, each capable of holding a single symbol from a finite alphabet, such as 0, 1, or a blank symbol. A read/write head moves along the tape, scanning one cell at a time, reading the symbol in that cell, and either writing a new symbol or leaving it unchanged. The machine operates in one of a finite number of internal states, and its behavior is governed by a fixed set of instructions, often conceptualized as a transition table, which specifies the next action (writing a symbol, moving the head left or right, and changing to a new state) based solely on the current state and the scanned symbol. The tape extends infinitely in both directions, allowing unlimited storage, though only a finite portion is used in any computation. The head starts on a designated cell with the input encoded as a finite string of symbols on the tape, surrounded by blanks. At each step, the machine consults its transition table to determine its response, effectively simulating a step-by-step procedure without external memory limits beyond the tape itself. Computation proceeds until the machine enters a special halting state, at which point it stops, and the final configuration of the tape represents the output. This halting mechanism ensures that the machine terminates for valid inputs, producing a result encoded on the tape. To illustrate, consider a simple Turing machine that adds two unary numbers, where numbers are represented by strings of 1's separated by a 0 (e.g., the tape initially holds 11101111, encoding 3 + 4). The machine begins in an initial state with the head on the leftmost 1. It first moves right through the first group of 1's until it reaches the separator 0, which it overwrites with a 1, merging the two groups into a single block of 1's. 
It then continues right through the second group until it reaches the first blank, steps back onto the last 1, and erases it. Overwriting the separator added one 1 while erasing the final 1 removed one, so the tape is left holding 1111111 (seven 1's). The machine then moves to a halting state, leaving the tape with the unary representation of 7 as output.
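The unary adder described above can be sketched as an executable transition table. This is a minimal illustration, not a formal specification from the article: the state names (q0, q1, q2, halt) and the blank symbol "_" are implementation choices.

```python
# Sketch of the unary adder: merge the two groups by overwriting the
# separator 0 with 1, then erase the final 1. State names are illustrative.
BLANK = "_"

# delta: (state, symbol) -> (write, move, next_state); missing keys halt
delta = {
    ("q0", "1"): ("1", "R", "q0"),      # skip the first group of 1's
    ("q0", "0"): ("1", "R", "q1"),      # turn the separator into a 1
    ("q1", "1"): ("1", "R", "q1"),      # skip the second group
    ("q1", BLANK): (BLANK, "L", "q2"),  # step back onto the last 1
    ("q2", "1"): (BLANK, "R", "halt"),  # erase one 1 and halt
}

def run(tape, state="q0", head=0):
    cells = dict(enumerate(tape))        # sparse tape, blanks implicit
    while (state, cells.get(head, BLANK)) in delta:
        write, move, state = delta[(state, cells.get(head, BLANK))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(BLANK)

print(run("11101111"))  # 3 + 4 in unary -> 1111111
```

Running it on "11101111" (3 + 4) yields "1111111", the unary representation of 7, in a single left-to-right pass.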

Physical analogy

The Turing machine can be intuitively understood through a physical analogy proposed by Alan Turing, likening it to a human clerk performing computations on an infinite strip of paper. In this setup, the clerk uses a long paper tape divided into squares, each capable of holding a symbol from a finite set, such as digits or marks, with the tape extending infinitely in both directions to represent unlimited memory. The clerk employs a typewriter-like head to read the symbol on the current square, erase or write a new symbol as needed, and move the tape left or right by one square, all while consulting a fixed rulebook that dictates the next action based on the observed symbol and the clerk's current "state of mind." This analogy maps the Turing machine's core elements directly to familiar objects: the infinite tape serves as both input medium and workspace, the head as the focused reading/writing tool, and the rulebook as a deterministic set of instructions ensuring each step follows mechanically without ambiguity. The clerk's finite memory, limited to recalling only the current state and scanned symbol, corresponds to the machine's finite set of states, enforcing a structured, step-by-step procedure akin to rote calculation. The rulebook's rigidity highlights the deterministic nature, where no choice exists; for each state-symbol pair, a unique action (write, move, change state) is prescribed, mirroring how the machine avoids ambiguity in execution. While the analogy breaks down in practicality (the human clerk cannot physically handle an infinite tape, requiring idealized assumptions about an endless paper supply), it effectively illustrates the sequential, local processing at the heart of the model. This visualization aids non-experts in grasping how complex computations emerge from simple, repetitive operations on a linear medium, bridging the abstract formalism to tangible experience without implying real-world buildability.

Formal definition

Components and symbols

A Turing machine is formally defined by a collection of basic components that enable it to perform computations through symbol manipulation on an infinite tape. These components include a finite set of states, an infinite tape, a read/write head, and a transition function that dictates behavior based on the current state and scanned symbol. This model, originally conceived by Alan Turing to formalize the notion of mechanical computation, has been standardized in modern terms as a 7-tuple M = (Q, \Sigma, \Gamma, \delta, q_0, q_{\text{accept}}, q_{\text{reject}}). The finite set of states Q represents the possible internal configurations or "m-configurations" of the machine, capturing its control logic at any step. This set is finite, ensuring that the machine's internal memory is bounded in size. It includes a distinguished initial state q_0 \in Q, from which computation begins, and halting states such as q_{\text{accept}} and q_{\text{reject}} for recognizing languages (or a single halting state in some variants for general computation). In Turing's original formulation, these states correspond to the machine's "state of mind," determining the sequence of operations. The tape serves as the machine's unbounded memory, modeled as a bi-infinite one-dimensional array of cells extending in both directions. Each cell holds exactly one symbol from the finite tape alphabet \Gamma, which includes a special blank symbol, often denoted \square or B, representing empty cells. The input alphabet \Sigma is a proper subset of \Gamma (excluding the blank), consisting of the symbols that may appear in the initial input string placed contiguously on the tape starting from some reference position, with all other cells initially blank. This setup allows the machine to store and modify data arbitrarily during computation. A single read/write head provides the interface between the finite control (states) and the tape, always positioned over exactly one cell. The head can read the symbol in the current cell, erase or overwrite it with another symbol from \Gamma, and then move to an adjacent cell, either left (L) or right (R). Initially, the head is positioned on the leftmost symbol of the input or at a designated starting cell. 
This mechanism enables inspection and modification of the tape contents. The transition function \delta: Q \times \Gamma \to Q \times \Gamma \times \{L, R\} is the core rule set that defines the machine's deterministic behavior. For each pair consisting of the current state q \in Q and scanned symbol a \in \Gamma, \delta(q, a) specifies a triple (q', b, D), where q' \in Q is the next state, b \in \Gamma is the symbol to write in the current cell (replacing a), and D \in \{L, R\} directs the head movement. The function is partial: for halting states, no transitions are defined, causing the machine to stop. These transitions collectively determine the evolution of the machine's configuration over time.
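The 7-tuple can be written down directly as plain data. The following sketch is illustrative only: the concrete machine shown (one that flips 0s and 1s) is a hypothetical example chosen to show the shapes of the components, not a machine from the article.

```python
# Sketch of M = (Q, Sigma, Gamma, delta, q0, q_accept, q_reject) as data.
Q = {"q0", "q_accept", "q_reject"}
Sigma = {"0", "1"}            # input alphabet, excludes the blank
Gamma = Sigma | {"_"}         # tape alphabet with blank "_"
q0, q_accept, q_reject = "q0", "q_accept", "q_reject"

# Partial transition function delta: Q x Gamma -> Q x Gamma x {L, R}.
# No entries for halting states, so lookups there fail and the machine stops.
delta = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
    ("q0", "_"): ("q_accept", "_", "R"),
}

# Sanity checks that delta respects the formal signature
assert set(delta) <= {(q, a) for q in Q for a in Gamma}
assert all(q in Q and b in Gamma and d in {"L", "R"}
           for (q, b, d) in delta.values())
```

Representing δ as a dictionary keyed by (state, symbol) pairs makes the partiality of the function explicit: a missing key is exactly an undefined transition.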

Configurations and transitions

A configuration of a Turing machine captures its complete state at any instant during computation, consisting of the current internal state, the contents of the tape, and the position of the read/write head. Formally, an instantaneous description (ID) is represented as a string of the form P q_i S_j Q, where P is the (possibly empty) sequence of symbols to the left of the head, q_i is the current state, S_j is the symbol scanned by the head, and Q is the (possibly infinite) sequence of symbols to the right of the head, with trailing blanks implied. This notation encodes the machine's progress without ambiguity. The computation begins with an initial configuration: the machine starts in its initial state q_0, the tape holds the input flanked by blanks on both sides, and the head is positioned on the leftmost symbol of the input. Subsequent configurations are generated by applying the transition function \delta, which, given the current state and scanned symbol, specifies a new symbol to write, a direction to move the head (left or right), and a next state. This process yields a sequence of configurations until the machine halts. The output, or yield, of the computation is the finite non-blank portion of the tape in the halting configuration. Halting occurs if the transition function \delta is undefined for the current state and scanned symbol, or if the machine enters a designated halting state with no further transitions. However, computations may enter non-halting loops, continuing indefinitely without reaching a halt, as seen in machines designed to simulate unending processes. Consider a Turing machine that recognizes the language of palindromes over {0,1}, strings that read the same forwards and backwards, such as "101". 
The machine operates by repeatedly matching and erasing the first and last symbols: it reads the leftmost symbol, remembers it in its state, erases it to a blank, moves right to the end of the (current) input, checks if the rightmost symbol matches the remembered one, erases it if so, and returns leftward to repeat until the tape is effectively empty (accepting if all matches succeed) or a mismatch occurs (rejecting). For input "101" on a tape initially flanked by blanks (B), the computation proceeds as follows (using simplified ID notation with the state embedded and head position indicated by underlining; trailing blanks omitted for brevity):
  • Initial: \underline{q_0} 1 0 1 (head on first 1, state q_0 start).
  • After step 1: B \underline{q_1} 0 1 (read 1, wrote B, moved right to state q_1 remembering 1; head on 0).
  • After step 2: B 0 \underline{q_2} 1 (moved right, now at end in state q_2 to check match; head on last 1).
  • After step 3: B \underline{q_3} 0 B (matched 1, wrote B on last 1, moved left to state q_3 to return; head on 0).
  • After step 4 (continuing): \underline{q_4} B B B (the machine returns left to the start, reads the remaining middle 0, erases it, moves right, and finds only blanks; a lone middle symbol needs no partner, so it halts in the accept state).
This trace illustrates how configurations evolve through targeted symbol manipulation and head movement, confirming the palindrome property after O(n^2) steps for length n.
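The erase-and-match strategy above can be sketched as a complete transition table and run directly. This is an illustrative reconstruction: the state names (s for start, f0/f1 for scanning right while remembering a symbol, c0/c1 for checking the rightmost symbol, back for returning) are not from the article's trace.

```python
BLANK = "_"

def run(delta, tape, state="s"):
    cells, head = dict(enumerate(tape)), 0
    while (state, cells.get(head, BLANK)) in delta:
        write, move, state = delta[(state, cells.get(head, BLANK))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return state  # accept iff the machine halts in "accept"

# Transition table for the erase-and-match palindrome checker
delta = {("s", BLANK): (BLANK, "R", "accept")}     # empty tape: accept
for b, f, c in (("0", "f0", "c0"), ("1", "f1", "c1")):
    delta[("s", b)] = (BLANK, "R", f)              # erase leftmost, remember b
    for x in "01":
        delta[(f, x)] = (x, "R", f)                # scan right to the end
        delta[("back", x)] = (x, "L", "back")      # return to the left edge
    delta[(f, BLANK)] = (BLANK, "L", c)            # step onto the last symbol
    delta[(c, b)] = (BLANK, "L", "back")           # match: erase it
    delta[(c, BLANK)] = (BLANK, "R", "accept")     # odd middle symbol: accept
delta[("back", BLANK)] = (BLANK, "R", "s")         # restart on shrunken input

print(run(delta, "101"))  # -> accept
print(run(delta, "100"))  # halts outside accept (mismatch rejects)
```

A mismatch leaves the machine in a checking state with no applicable transition, so it halts without accepting, matching the article's "undefined transition means halt" convention.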

Visualization and implementation

State diagrams

State diagrams provide a graphical method to model the behavior of a Turing machine, representing it as a directed graph where nodes depict the machine's states. The initial state is typically indicated by an incoming arrow, while halting states are often shown with double circles to distinguish them from active states. Directed edges connect these nodes, each labeled with a transition triple consisting of the symbol read from the tape, the symbol written to the tape, and the direction of head movement (L for left or R for right), such as "1/0,R" to denote reading a 1, overwriting it with 0, and moving right. To construct a state diagram, begin by creating a node for each state in the machine's finite control, as defined by the transition function δ. Then, for every possible input symbol and current state, add an outgoing edge to the next state, labeling it according to δ's output: the written symbol and movement direction. This process ensures the diagram fully captures the machine's deterministic behavior without ambiguity. A representative example is a 3-state Turing machine for incrementing a unary number, where the input is a string of 1's representing the value (e.g., 111 for 3), and the output appends an additional 1 (e.g., 1111 for 4). The states include q₀ (start, scanning right over 1's to the blank), q₁ (writing 1 on the blank and moving left), and q₂ (halt). Transitions include: from q₀ on 1/1,R to q₀ (skip existing 1's); from q₀ on B/1,L to q₁ (write 1 and return); from q₁ on 1/1,L to q₁ (move left over 1's); and from q₁ on B/B,R to q₂ (halt after returning to start). This diagram visually traces the computation from scanning to modification and termination. State diagrams offer key advantages in design and analysis by clearly visualizing control flow and state transitions, which facilitates detecting infinite loops or unreachable states during machine development. They are instrumental in theoretical proofs, such as those for the busy beaver problem, where enumerating and graphing all possible n-state machines (e.g., for n=5) aids in identifying maximal runtime configurations before halting. 
However, state diagrams have limitations, as they abstract away the tape's content and head position, providing no direct view of memory evolution; thus, they must be supplemented with configuration traces or simulations for thorough verification of computations.
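The edges of the 3-state increment diagram above translate one-for-one into a transition table, which can then be executed to supply exactly the configuration trace that the diagram alone omits:

```python
# The 3-state unary increment machine from the diagram, as a table:
# (state, read) -> (write, move, next_state). "B" is the blank.
BLANK = "B"
delta = {
    ("q0", "1"): ("1", "R", "q0"),      # skip existing 1's
    ("q0", BLANK): ("1", "L", "q1"),    # append a 1 and turn around
    ("q1", "1"): ("1", "L", "q1"),      # move left over 1's
    ("q1", BLANK): (BLANK, "R", "q2"),  # back at the start: halt in q2
}

def run(tape):
    cells, head, state = dict(enumerate(tape)), 0, "q0"
    while (state, cells.get(head, BLANK)) in delta:
        write, move, state = delta[(state, cells.get(head, BLANK))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(BLANK)

print(run("111"))  # -> 1111
```

Since q₂ has no outgoing edges in the diagram, it has no table entries here, and the loop terminates exactly when the diagram would.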

Tape and head mechanics

In practical implementations of Turing machines, the theoretically infinite tape is simulated using finite but expandable data structures to approximate unbounded memory. A standard approach employs two stacks (or lists): one represents the tape contents to the left of the read-write head (with the top of the stack holding the symbol immediately left of the head), and the other represents the contents from the current position under the head to the right (with the top holding the symbol under the head). This two-stack model efficiently handles bidirectional extension by treating blanks as implicit beyond the stacks' bounds. Head movement in this model is managed by transferring symbols between the stacks. Reading peeks at the top of the right stack (or uses blank if empty) to get the current symbol under the head. Writing replaces the top of the right stack with the new symbol (pop and push if not empty, or push if empty). For moving right, pop the top (written symbol) from the right stack and push it onto the left stack. For moving left, pop the written symbol from the right stack, pop from the left stack (or use blank if empty) as the new symbol under the head, push the old written symbol onto the right stack, then push the new symbol onto the right stack. This avoids fixed-size limitations and simulates the infinite tape described in the original model, where the tape extends indefinitely with blanks. Although theoretically unbounded, real-world implementations face memory constraints, as stacks or lists grow with computation length, potentially leading to allocation failures for extensive runs. However, these limits do not alter the model's equivalence to the ideal machine, as the tape is presumed arbitrarily extensible in theory, allowing computation to proceed until practical resources are exhausted. Transition lookup during simulation may reference a state diagram for clarity, but the function δ is typically implemented as a table mapping the current state and symbol to the next action. 
The following pseudocode illustrates a single simulation step using the two-stack model, assuming stacks left_tape and right_tape with methods empty(), peek(), pop(), push(), and the head position encoded such that right_tape top is under head:
def single_step(current_state, left_tape, right_tape, delta, blank):
    # Read current symbol (peek top of right_tape, or blank if empty)
    if right_tape.empty():
        current_symbol = blank
    else:
        current_symbol = right_tape.peek()

    # Apply transition function δ (state diagram or table lookup)
    (new_state, write_symbol, direction) = delta(current_state, current_symbol)

    # Write the new symbol (replace top of right_tape, or extend it)
    if right_tape.empty():
        right_tape.push(write_symbol)
    else:
        right_tape.pop()
        right_tape.push(write_symbol)

    # Move head
    if direction == 'R':
        # Transfer the written symbol to the left stack
        transferred = right_tape.pop()
        left_tape.push(transferred)
    elif direction == 'L':
        # Pop the written symbol, fetch the new symbol under the head
        # from the left stack (or blank), and rebuild the right stack
        old_current = right_tape.pop()
        if left_tape.empty():
            new_current = blank
        else:
            new_current = left_tape.pop()
        # Reconstruct right_tape: push old_current, then new_current (top)
        right_tape.push(old_current)
        right_tape.push(new_current)

    return new_state, left_tape, right_tape
This step reads the symbol under the head, applies the transition to determine the new state, written symbol, and direction, then updates the tape and head position accordingly.
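A runnable driver around this step function can use plain Python lists as the two stacks (append/pop operate on the end, which serves as the top). The sample machine fed to it below is the unary increment machine described in the state-diagram section, chosen because it exercises both head directions; the halt-state convention is an implementation choice.

```python
# Two-stack tape simulation: right[-1] is the symbol under the head.
BLANK = "B"

def single_step(state, left, right, delta):
    symbol = right[-1] if right else BLANK
    state, write, direction = delta[(state, symbol)]
    if right:
        right.pop()
    right.append(write)                  # write under the head
    if direction == "R":
        left.append(right.pop())         # written symbol moves to left stack
    else:  # "L"
        old = right.pop()                # the symbol just written
        new = left.pop() if left else BLANK
        right.append(old)                # rebuild right stack: old below,
        right.append(new)                # new symbol under the head on top
    return state

def run(delta, tape, state, halt_states):
    left, right = [], list(reversed(tape))   # right[-1] = leftmost symbol
    while state not in halt_states:
        state = single_step(state, left, right, delta)
    return "".join(left + list(reversed(right))).strip(BLANK)

delta = {  # unary increment: delta returns (next_state, write, direction)
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", BLANK): ("q1", "1", "L"),
    ("q1", "1"): ("q1", "1", "L"),
    ("q1", BLANK): ("q2", BLANK, "R"),
}
print(run(delta, "111", "q0", {"q2"}))  # -> 1111
```

Reassembling the final tape is just the left stack bottom-to-top followed by the right stack top-to-bottom, which is why the reversed copies line up.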

Variants

Nondeterministic Turing machines

A nondeterministic Turing machine (NTM) is a variant of the Turing machine that permits multiple possible transitions from a given configuration, enabling the computation to branch into several paths simultaneously. Unlike the deterministic transition function δ: Q × Γ → Q × Γ × {L, R}, the nondeterministic version defines δ: Q × Γ → finite subsets of (Q × Γ × {L, R}), where each subset contains zero or more possible next states, tape symbols to write, and head directions (left or right). This allows the machine to "choose" among options at each step, modeling computations that involve guessing or exploration of alternatives. The concept of choice machines, precursors to modern NTMs, was first described by Alan Turing in his foundational paper on computable numbers. Formally, an input is accepted by an NTM if at least one computation path reaches an accepting state, even if other paths reject or loop indefinitely; rejection occurs only if all paths reject. Nondeterministic and deterministic Turing machines are equivalent in computational power, recognizing the same class of recursively enumerable languages. An NTM can simulate any deterministic Turing machine step for step by treating each deterministic transition as a singleton choice set, requiring no additional overhead. Conversely, a deterministic Turing machine can simulate an NTM by exhaustively exploring all possible branches in a depth-first or breadth-first manner, maintaining a stack or queue of current configurations to track the computation tree; this simulation expands the state space to represent sets of possible NTM states (via a construction analogous to the subset construction for finite automata), but incurs an exponential time cost proportional to the maximum branching factor raised to the number of simulated steps. To illustrate nondeterminism, consider an NTM designed to guess and verify the middle position of an even-length input string w of length n over alphabet {0,1}, say to prepare for palindrome checking. 
Starting in initial state q_0 with the head at the leftmost symbol, the machine nondeterministically branches to guess the midpoint by moving right k steps (where k = n/2) while marking guessed positions, then reverses to compare symbols symmetrically. A sample computation trace for input w = 0101 (n=4, middle between positions 2 and 3) might proceed as follows:
  • Configuration 1: q_0, ... _ 0 1 0 1 _, head at first 0.
  • Branch A (wrong guess, moves right once): δ(q_0, 0) includes (q_mark, 0, R) → Configuration 2A: q_mark, ... _ 0 1 0 1 _, head at first 1; then loops or rejects after mismatch.
  • Branch B (correct guess, moves right twice): From Config 1, δ(q_0, 0) also includes a path that moves R twice → Configuration 2B: q_mid, ... _ 0 1 | 0 1 _, head at the third symbol with the midpoint marked by |; the machine then branches to verify left/right matches, accepting if symmetric.
This branching allows efficient "guessing" of the middle along accepting paths, without deterministically trying every possible split point. NTMs model idealized parallel computation, where branches represent simultaneous exploration of possibilities, akin to unbounded nondeterministic choice in search problems. They underpin the complexity class NP, defined as the set of decision problems solvable by an NTM in polynomial time, highlighting the P vs. NP question of whether deterministic polynomial-time solvability matches this nondeterministic power.
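The breadth-first deterministic simulation of an NTM described above can be sketched directly: δ maps each (state, symbol) pair to a set of options, and a queue of configurations tracks the computation tree. The toy machine here is a hypothetical illustration (not the midpoint guesser from the article): it nondeterministically "commits" at some position and accepts exactly when that position holds a 1.

```python
from collections import deque

BLANK = "_"
# Nondeterministic delta: (state, symbol) -> set of (state, write, move)
delta = {
    ("q0", "0"): {("q0", "0", "R")},                      # keep scanning
    ("q0", "1"): {("q0", "1", "R"), ("acc", "1", "R")},   # scan on, or commit
}

def accepts(tape, limit=10_000):
    start = ("q0", 0, tuple(tape))
    queue, seen = deque([start]), {start}
    while queue and limit:
        limit -= 1                       # guard against non-halting paths
        state, head, cells = queue.popleft()
        if state == "acc":
            return True                  # one accepting path suffices
        sym = cells[head] if 0 <= head < len(cells) else BLANK
        for nstate, write, move in delta.get((state, sym), ()):
            new = list(cells)
            if 0 <= head < len(new):
                new[head] = write
            cfg = (nstate, head + (1 if move == "R" else -1), tuple(new))
            if cfg not in seen:          # subset-style deduplication
                seen.add(cfg)
                queue.append(cfg)
    return False                         # all paths halted without accepting

print(accepts("0010"))  # -> True
print(accepts("000"))   # -> False
```

The step budget is a practical stand-in for the fact that a real deterministic simulator cannot in general know when all NTM branches have been exhausted.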

Multitape Turing machines

A multitape Turing machine is a variant of the standard Turing machine that employs multiple infinite tapes, each equipped with an independent read/write head that can move left or right. Formally, for a machine with k tapes, the components include a finite set of states Q, a tape alphabet \Gamma (including a blank symbol \sqcup), an initial state q_0 \in Q, a set of accepting states F \subseteq Q, and a transition function \delta: Q \times \Gamma^k \to Q \times \Gamma^k \times \{L, R\}^k. The input string is placed on the first tape with the head starting at the leftmost symbol, while the remaining tapes are initialized to blanks with their heads at the leftmost cell. At each step, the machine reads the symbols under all k heads, writes new symbols on each tape, moves each head left or right, and transitions to a new state based on \delta. Despite the added tapes, multitape Turing machines recognize the same class of languages as single-tape machines, as any multitape machine can be simulated by a single-tape machine. The simulation encodes the contents of all k tapes onto a single tape using a larger alphabet, where segments for each tape are separated by special markers (e.g., #), and the positions of the heads are indicated by distinct symbols overlaid on the tape contents. To simulate one step of the multitape machine, the single-tape head scans the entire encoded tape to read the current symbols under the virtual heads, computes the next move, and updates the encoding by rewriting the relevant parts. This process incurs overhead, as each scan traverses a length proportional to the current tape usage. Regarding complexity, if a multitape machine decides a language in time T(n), its single-tape simulator runs in O(T(n)^2) time, since each of the T(n) multitape steps requires O(T(n)) single-tape steps to scan and update the encoding. Space complexity remains equivalent up to constants, as the simulation uses space linear in the multitape space usage. 
Conversely, multitape machines can simulate single-tape machines in linear time by dedicating one tape to the original tape's content. An illustrative example is a two-tape Turing machine that copies its input from the first tape to the second tape. Starting with the input w on tape 1 (head at the left end) and tape 2 blank (head at the start), the machine enters a copying loop: it reads the symbol on tape 1, writes the same symbol on tape 2, moves both heads right, and repeats until encountering a blank on tape 1. Upon reaching the blank, it moves both heads left to the beginning of w on tape 2 (or halts if verification is not needed), effectively duplicating the string in a single linear pass, which highlights the efficiency gain from parallel read/write operations unavailable in single-tape models.
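The two-tape copier can be sketched with a transition function over symbol pairs, mirroring the formal signature \delta: Q \times \Gamma^2 \to Q \times \Gamma^2 \times \{L, R\}^2. The state names and the halt convention below are implementation choices for illustration.

```python
BLANK = "_"

# delta over symbol pairs: (state, (a1, a2)) -> (state, (b1, b2), (d1, d2))
def delta(state, syms):
    a1, a2 = syms
    if state == "copy" and a1 != BLANK:
        return ("copy", (a1, a1), ("R", "R"))   # write a1 on tape 2, both R
    return ("halt", (a1, a2), ("R", "R"))       # blank on tape 1: stop

def run(w):
    tapes = [dict(enumerate(w)), {}]            # tape 1 holds w, tape 2 blank
    heads, state = [0, 0], "copy"
    while state != "halt":
        syms = tuple(t.get(h, BLANK) for t, h in zip(tapes, heads))
        state, writes, moves = delta(state, syms)
        for i in range(2):                      # update all tapes in one step
            tapes[i][heads[i]] = writes[i]
            heads[i] += 1 if moves[i] == "R" else -1
    return "".join(tapes[1].get(i, BLANK) for i in range(len(w))).strip(BLANK)

print(run("0110"))  # -> 0110
```

Each loop iteration is one multitape step touching both tapes at once, which is exactly the parallelism a single-tape machine must pay a scan of the whole encoding to reproduce.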

Extensions

Oracle machines

An oracle machine, also known as an o-machine, extends the standard Turing machine by incorporating an external "oracle" that provides instantaneous answers to queries about membership in a fixed, possibly undecidable set, enabling the exploration of relative computability. Introduced by Alan Turing to address limitations in solving certain number-theoretic problems, such as those equivalent to the halting problem, the oracle acts as a black box that decides predicates beyond the reach of ordinary computation. In Turing's formulation, the machine enters a designated query state (e.g., q_4) while the tape encodes a problem instance; the oracle then determines its "duality" (truth value) and transitions the machine to a true state (q_2) or false state (q_3). In the modern standardization, an oracle machine includes a dedicated oracle tape alongside the standard work tape and control mechanism. The transition function \delta is augmented to allow writing a query (typically an index e representing a decision instance) onto the oracle tape, followed by entering a query state; the oracle then appends a single bit (0 or 1) indicating the answer, which the machine reads to proceed. This setup preserves the Turing machine's finite states and tape alphabet but adds non-computable decision power, allowing the machine to simulate any computation relative to the oracle set while querying it as needed. Configurations extend standard ones by including the oracle tape contents, with transitions unchanged except during oracle interactions. Turing degrees formalize the comparative difficulty of sets under this relative computability, partitioning sets of natural numbers into equivalence classes where two sets A and B have the same degree if each is computable from an oracle machine using the other (A \equiv_T B). A set C is Turing reducible to B (C \leq_T B) if an oracle machine with oracle B can compute C's characteristic function; the collection of degrees forms an upper semi-lattice under this reducibility, with the computable sets at \mathbf{0}. 
The halting problem set K = \{ e \mid \text{the } e\text{-th Turing machine halts on the empty tape} \} defines the first non-trivial degree \mathbf{0}', the Turing jump of \mathbf{0}; while K is undecidable by any standard Turing machine, an oracle machine with oracle K can decide membership in K. A concrete example illustrates the power of the halting oracle: consider an oracle machine M that, on input e (encoding a Turing machine T_e), writes e to the oracle tape, enters the query state, and reads the oracle's response bit b \in \{0,1\} indicating whether T_e halts on blank tape (b=1) or not (b=0); M then accepts if b=1 and rejects otherwise. This single query suffices to decide the halting problem for blank tapes, a task undecidable without the oracle. The iterated application of oracles generates the Turing jump hierarchy, where the jump of a degree \mathbf{d} is \mathbf{d}', the degree of the halting problem relative to an oracle of degree \mathbf{d}. This yields a strict hierarchy \mathbf{0} < \mathbf{0}' < \mathbf{0}'' < \cdots, with each level capturing increasingly complex undecidability. The arithmetic hierarchy, classifying subsets of natural numbers by quantifier complexity in first-order arithmetic, aligns with this: a set is in \Sigma_n^0 (existential quantifiers dominating after n-1 alternations) if and only if it is recursively enumerable relative to the oracle \mathbf{0}^{(n-1)}, the (n-1)-th jump of \mathbf{0}; dually, \Pi_n^0 sets are co-recursively enumerable relative to the same oracle. This correspondence, established through oracle computations, reveals how finite oracle iterations capture the full class of arithmetical sets, beyond which lies the hyperarithmetic hierarchy.
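The query mechanics of the machine M above can be sketched in code, with one essential caveat: the true halting oracle K is uncomputable, so the sketch substitutes a finite lookup table K_TABLE as a stand-in. Everything here is illustrative scaffolding for the query protocol, not a real decision procedure.

```python
# Stand-in for the uncomputable set K: pretend the e-th machine halts
# iff K_TABLE[e] is True. These values are fabricated for illustration.
K_TABLE = {0: True, 1: False, 2: True}

class OracleMachine:
    def __init__(self, oracle):
        self.oracle = oracle          # membership function for the oracle set

    def decide_halting(self, e):
        # Write e on the oracle tape, enter the query state,
        # then read back the single answer bit appended by the oracle.
        bit = 1 if self.oracle(e) else 0
        return bit == 1               # accept iff the oracle answered 1

m = OracleMachine(lambda e: K_TABLE.get(e, False))
print(m.decide_halting(0))  # -> True
print(m.decide_halting(1))  # -> False
```

The point of the sketch is structural: the machine itself stays an ordinary program, and all of its extra power lives behind the single membership call.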

Universal Turing machines

A universal Turing machine is a single Turing machine capable of simulating the behavior of any other Turing machine, given an appropriate encoding of that machine and its input. Introduced by Alan Turing in 1936, it serves as a foundational model demonstrating that all computable functions can be executed by one fixed device, provided the description of the target computation is supplied as input. This universality underscores the uniformity of computation, where the "program" for any task is treated as data on the machine's tape. The encoding of a target Turing machine M and input w for the universal machine U typically uses a systematic scheme to represent states, symbols, transitions, and the input string in a finite alphabet, often binary or unary for compactness. States are numbered sequentially (e.g., from 0 to n-1 for n states) and represented in binary, prefixed with identifiers like "q" for regular states or specific markers for start and halt states. Symbols from the tape alphabet are similarly encoded in binary after a prefix such as "a", with the blank symbol as the zero representation. Transitions, which form the core of M's behavior, are encoded as quadruples or quintuples capturing the rule \delta(q, a) = (p, b, D), where q is the current state, a the read symbol, p the next state, b the write symbol, and D the head direction (left or right); these are serialized as concatenated binary strings separated by delimiters like 1's or blocks of 0's, often prefixed with unary counts for the total number of states, symbols, and transitions to define the machine's structure. The input w is appended after the machine description, separated by a marker such as #. Turing himself employed description numbers (D.N.) derived from sequences of symbols like D, A, and C to encode complete configurations, enabling numerical representation of states and instructions. 
In the simulation process, U receives the encoded pair \langle M, w \rangle on its tape and interprets it step-by-step to mimic M's execution on w. U maintains an internal representation of M's tape, current state, and head position—often using multiple tracks on its own tape for separation—and iteratively scans the encoded transitions to find the matching rule based on M's current state and scanned symbol. Upon matching, U updates M's simulated tape by writing the specified symbol, moves the simulated head in the indicated direction, changes the state, and repeats until M reaches a halting state, at which point U also halts. If M enters a non-halting loop, U runs indefinitely. This direct emulation ensures U outputs the same result as M would, with the simulation typically requiring a constant factor more steps than M's original computation. For efficiency, multitape variants of U can accelerate the search for transitions by parallelizing tape access. The existence of a universal Turing machine has profound implications for computability theory, establishing that a single machine suffices for all partial recursive functions and enabling key results like the undecidability of the halting problem through self-referential constructions, where a machine can simulate itself on encoded inputs. This universality also laid the groundwork for modern programmable computers, where software encodes the "machine" to be simulated.
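The simulation loop described above can be sketched as a tiny interpreter: a fixed program U that takes a machine description (here, a list of quintuples standing in for Turing's binary description numbers) plus an input string as data. The encoded machine below, a hypothetical bit-flipper, is chosen only to demonstrate the machine-as-data idea.

```python
BLANK = "_"

def U(quintuples, w, start="q0", halt="halt"):
    # Decode the description into a lookup table, mirroring how a real
    # universal machine scans the encoded transitions for a match.
    delta = {(q, a): (p, b, d) for (q, a, p, b, d) in quintuples}
    tape, head, state = dict(enumerate(w)), 0, start
    while state != halt and (state, tape.get(head, BLANK)) in delta:
        state, write, move = delta[(state, tape.get(head, BLANK))]
        tape[head] = write                       # emulate M's write
        head += 1 if move == "R" else -1         # emulate M's head move
    return "".join(tape[i] for i in sorted(tape)).strip(BLANK)

# Description of a machine that flips every bit, fed to U purely as data:
flip = [("q0", "0", "q0", "1", "R"),
        ("q0", "1", "q0", "0", "R")]
print(U(flip, "1001"))  # -> 0110
```

The same U runs any machine supplied in this format; only the data changes, which is the essence of the program-as-data insight.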

Equivalence to other models

Relation to lambda calculus

The Church-Turing thesis posits that any effectively computable function on the natural numbers can be computed by a Turing machine or equivalently defined in the lambda calculus, establishing these models as interchangeable formalizations of computation. This thesis, proposed independently by Alonzo Church and Alan Turing in 1936, underscores their mutual capacity to capture the intuitive notion of mechanical procedures, with both systems defining the class of partial recursive functions. To demonstrate equivalence, Turing machines can be encoded within the lambda calculus by representing the tape as a pair of lists (for left and right portions), symbols via Church encodings (e.g., booleans for binary alphabets), and states as higher-order functions that process the current configuration. The transition function δ is simulated through lambda terms that apply β-reduction to update the tape (using operations like cons for symbol insertion, successor for head movement, and function application for state changes), thereby mimicking each machine step in a finite number of reductions. Conversely, lambda expressions are encoded as strings on a Turing machine tape, with β-reduction simulated via tape manipulations that match and substitute subterms, allowing the machine to evaluate any lambda term. The formal proof of equivalence relies on mutual interpretability: Turing showed in 1937 that every λ-definable function is computable by a Turing machine, and every Turing-computable function is λ-definable, with simulations preserving computability in finite steps. This aligns both models with the partial recursive functions, as established by Church and Kleene, confirming they enumerate the same class of effective procedures.
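The Church encodings mentioned above can be sketched with Python lambdas: a numeral n is the function that applies its argument n times, so arithmetic becomes pure function composition, with no numbers or state anywhere in the terms themselves.

```python
# Church numerals as iterated application: n = lambda f: lambda x: f^n(x)
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode by applying "add one" n times to 0
    return n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
four = succ(three)
print(to_int(plus(three)(four)))  # -> 7
```

Python's strict evaluation stands in for β-reduction here, so this is a sketch of the encoding idea rather than a faithful reduction engine, but it shows concretely how a purely functional calculus expresses the same arithmetic a Turing machine computes on a tape.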

Post-Turing models

Post-Turing models encompass several abstract computational frameworks developed subsequent to Alan Turing's 1936 formulation, each proven to capture precisely the same class of computable partial functions as the Turing machine, thereby reinforcing the Church-Turing thesis without extending beyond its limits. These models vary in their conceptual foundations—ranging from imperative register-based operations to functional constructions—yet all demonstrate mutual simulability, often through encoding techniques like Gödel numbering. Key examples include register machines, abacus machines, and the recursive function classes, which provide alternative lenses for understanding effective computability. Register machines operate with a finite set of registers storing non-negative integers, supporting instructions to increment or decrement a register by 1 (the latter only if the register is non-zero), clear a register to zero, and conditionally branch based on whether a register is zero. Marvin Minsky proved in 1967 that even a machine with just two registers suffices to simulate any Turing machine, establishing Turing completeness for this model. The simulation encodes the Turing machine's tape contents as the exponent of the prime 2 (i.e., as 2^k, where k represents the tape symbols in binary) and the head position as the exponent of 3, using prime factorization properties to perform read/write/move operations via register arithmetic; for instance, extracting or modifying exponents mimics tape access without auxiliary storage beyond the registers. Abacus models, formalized by John C. Shepherdson and Howard E. Sturgis in 1963 as unlimited register machines, extend the register paradigm by allowing an unbounded number of registers, each capable of holding arbitrarily large natural numbers. Basic instructions include incrementing a register, decrementing it if positive, zeroing it, copying contents from one register to another, and jumping conditionally on zero or unconditionally.
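A minimal register-machine interpreter makes the imperative model above concrete. The three-instruction set (`INC`, `DECJZ`, `HALT`) and the `run` helper are illustrative assumptions rather than any canonical formulation; `DECJZ` combines "decrement if positive" with "jump on zero", which suffices for the model's other instructions.

```python
# A minimal register-machine sketch with an assumed instruction set:
#   ("INC", r)        increment register r
#   ("DECJZ", r, j)   if r > 0 decrement it, else jump to instruction j
#   ("HALT",)         stop

def run(program, registers):
    """Execute a list of instructions; registers is a dict of counters."""
    pc = 0
    while True:
        op = program[pc]
        if op[0] == "HALT":
            return registers
        if op[0] == "INC":
            registers[op[1]] += 1
            pc += 1
        elif op[0] == "DECJZ":
            if registers[op[1]] > 0:
                registers[op[1]] -= 1
                pc += 1
            else:
                pc = op[2]

# Addition by draining register "b" into register "a".
add = [
    ("DECJZ", "b", 3),   # 0: if b == 0 jump to HALT, else b -= 1
    ("INC", "a"),        # 1: a += 1
    ("DECJZ", "z", 0),   # 2: unconditional jump via always-zero register z
    ("HALT",),           # 3
]
result = run(add, {"a": 2, "b": 3, "z": 0})
# result["a"] == 5 and result["b"] == 0
```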
These machines compute exactly the partial recursive functions, with equivalence to Turing machines shown through direct simulation: a Turing machine's tape and state can be represented in registers, and vice versa, preserving halting behavior. Time and space bounds in such simulations align closely with Turing machine resource usage, as register operations correspond linearly to tape steps in the encodings. The recursive functions framework, developed by Stephen C. Kleene, builds the class of general recursive functions from base functions—zero, successor, and projections—using composition and primitive recursion to form the primitive recursive functions, then adjoining the μ-operator for unbounded minimization to handle non-total cases. Turing machines simulate recursive functions by encoding the function's definition (as a tree of compositions and recursions) into the tape alphabet and executing it via a step-by-step interpreter that applies the schemata; conversely, any Turing-computable function can be expressed recursively. This equivalence underscores the functional model's alignment with imperative computation. Partial recursive functions extend the recursive class to include those where minimization may diverge, defining the domain as the set of inputs yielding defined outputs—precisely the partial functions computable by Turing machines. Equivalence follows from Kleene's enumeration theorem, which provides a universal partial recursive function \phi_e(x) that, given an index e encoding a function and an input x, computes it; this mirrors the universal Turing machine, allowing simulation in either direction via index-based encoding of programs.
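The μ-operator at the heart of the partial recursive functions can be sketched in one short function; the name `mu` and the demo predicate are illustrative. The unbounded loop is exactly the source of partiality: when no witness exists, the search diverges.

```python
# A sketch of the mu-operator (unbounded minimization): mu(p) returns the
# least n with p(n) true, and diverges when no such n exists.

def mu(p):
    n = 0
    while not p(n):      # may loop forever: minimization is unbounded
        n += 1
    return n

# Least n with n*n >= 20:
# mu(lambda n: n * n >= 20) == 5
```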

Computability and limitations

Undecidability results

The halting problem asks whether there exists a Turing machine H that can decide, for any given Turing machine M and input w, whether M halts on w; its unsolvability yields a negative answer to the general Entscheidungsproblem. No such Turing machine H exists, as proven by a diagonalization argument. Assume for contradiction that H exists and correctly outputs "halt" or "loop" for every pair \langle M, w \rangle, where \langle M, w \rangle is an encoding of M and w suitable for a universal Turing machine. Construct a new machine M' that simulates H on \langle M', \epsilon \rangle (where \epsilon denotes the empty tape): if H says "halt," then M' loops forever; if "loop," then M' halts. This leads to a contradiction, as H cannot correctly classify M' on \epsilon. Thus, the halting problem is undecidable. Rice's theorem generalizes this undecidability to semantic properties of Turing machines. It states that for any non-trivial property P of the language recognized by a Turing machine—meaning P holds for some but not all recursively enumerable languages—there is no Turing machine that can decide whether an arbitrary Turing machine recognizes a language satisfying P. The proof reduces the halting problem to deciding P: given M and w, construct a machine M' that ignores its input, simulates M on w, and if that simulation halts, accepts a fixed non-empty language L_0 with property P; otherwise, it accepts the empty language (assumed, without loss of generality, to lack P). Then M' has property P if and only if M halts on w, making the decision undecidable. For example, determining whether the language of a Turing machine is infinite is undecidable by Rice's theorem. The Busy Beaver function \Sigma(n) exemplifies a non-computable function arising from Turing machine limitations. Defined here as the maximum number of steps taken by any halting Turing machine with n states and two symbols started on a blank tape (this step-counting variant is often written S(n)), \Sigma(n) grows faster than any computable function.
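The diagonalization can be mirrored in code: if a total halting decider existed, the following sketch would defeat it on its own source. The names `halts` and `troublemaker` are illustrative, and `halts` is deliberately unimplementable, which is the point of the argument.

```python
# The diagonal construction, sketched in Python. A total halts(func, arg)
# decider is assumed for contradiction; no such function can be written.

def halts(func, arg):
    """Hypothetical halting decider: assumed, not implementable."""
    raise NotImplementedError("no such total decider exists")

def troublemaker():
    # Ask the oracle about ourselves, then do the opposite of its answer.
    if halts(troublemaker, None):
        while True:          # oracle said "halt" -> loop forever
            pass
    return                   # oracle said "loop" -> halt immediately
```

Either answer `halts` could give about `troublemaker` is contradicted by `troublemaker`'s behavior, so `halts` cannot exist as a total, correct program.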
To see its non-computability, note that a machine computing \Sigma(n) could solve the halting problem for n-state machines by simulating each such machine for up to \Sigma(n) steps, which is impossible. Thus \Sigma(n) is uncomputable, highlighting the intrinsic bounds on what Turing machines can quantify about their own class. Known values include \Sigma(1) = 1, \Sigma(2) = 6, \Sigma(3) = 21, \Sigma(4) = 107, and \Sigma(5) = 47{,}176{,}870, but values for larger n remain unknown, and determining them runs up against the same undecidability. The Post correspondence problem (PCP) provides another fundamental undecidable problem, simpler in formulation yet equivalent in power to the halting problem. Given two lists of strings over an alphabet such as \{a, b\}, say a top list u_1, \dots, u_k and a bottom list v_1, \dots, v_k, the PCP asks whether there exists a sequence of indices i_1, \dots, i_m (with repetitions allowed) such that the concatenations satisfy u_{i_1} \cdots u_{i_m} = v_{i_1} \cdots v_{i_m}. PCP is undecidable, proven by reduction from the halting problem: for a machine M and input w, encode the computation history of M on w into string lists where a solution corresponds to a halting computation matched symbol by symbol; if M never halts, no solution exists. This yields a purely combinatorial undecidable problem, convenient for transferring undecidability to other domains, such as formal language theory, without invoking Turing machine encodings directly.
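Although the unbounded PCP is undecidable, a length-bounded brute-force search is easy to write and clarifies what counts as a solution. The `pcp_solution` helper and the length cap are illustrative assumptions; the cap is essential, since without it the search could run forever on unsolvable instances.

```python
# Brute-force search for PCP solutions up to a length bound (a sketch;
# the unbounded problem is undecidable, so the bound is essential).
from itertools import product

def pcp_solution(tops, bottoms, max_len=6):
    """Return indices i_1..i_m with equal concatenations, or None."""
    k = len(tops)
    for m in range(1, max_len + 1):
        for seq in product(range(k), repeat=m):
            top = "".join(tops[i] for i in seq)
            bot = "".join(bottoms[i] for i in seq)
            if top == bot:
                return seq
    return None

# Classic solvable instance over {a, b}:
tops    = ["a",   "ab", "bba"]
bottoms = ["baa", "aa", "bb"]
# pcp_solution(tops, bottoms) == (2, 1, 2, 0):
#   bba + ab + bba + a == bb + aa + bb + baa == "bbaabbbaa"
```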

Complexity classes

Complexity classes in the theory of Turing machines focus on the resources required for decidable computations, particularly time and space bounds on deterministic and nondeterministic machines. These classes categorize problems based on the computational effort needed to solve them, providing a framework for understanding the relative difficulty of decision problems within the realm of decidable languages. Resource-bounded Turing machines limit the number of steps (time) or tape cells (space) used during computation, enabling the study of efficient versus intractable problems among those that halt. Time complexity classes measure the steps a Turing machine takes to decide a language. The class DTIME(f(n)) consists of all languages decidable by a deterministic Turing machine in O(f(n)) steps on inputs of length n, where f is a time-constructible function. The class P, or polynomial time, is the union over all constants k of DTIME(n^k), capturing problems solvable in time polynomial in the input size. Space complexity classes analogously bound the tape usage. DSPACE(f(n)) includes languages decidable by a deterministic Turing machine using O(f(n)) space. PSPACE is the union over k of DSPACE(n^k), encompassing problems solvable with polynomial space, though potentially exponential time. Nondeterministic time and space classes extend these definitions to nondeterministic Turing machines, which, as described in the section on nondeterministic Turing machines, allow multiple computation paths. NTIME(f(n)) comprises languages accepted by such machines in O(f(n)) steps on some accepting path. NPSPACE is the polynomial-space analog for nondeterministic machines. By Savitch's theorem, NPSPACE equals PSPACE, showing that nondeterminism adds at most a quadratic overhead for space-bounded computations. Hierarchy theorems establish strict inclusions among these classes, demonstrating that more resources enable solving strictly harder problems.
The deterministic time hierarchy theorem states that if f(n) \log f(n) = o(g(n)) for time-constructible f and g, then DTIME(f(n)) is a proper subset of DTIME(g(n)); for example, DTIME(n) \subsetneq DTIME(n^2). Similar hierarchies hold for space and for nondeterministic time. These results imply separations like P \subsetneq EXPTIME, but leave open whether P = NP; if nondeterministic polynomial time equaled deterministic polynomial time, the polynomial hierarchy would collapse to P.
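The asymmetry underlying P versus NP can be made concrete with subset-sum: checking a proposed certificate takes polynomial time, while the obvious search examines exponentially many subsets. The function names and the example instance below are illustrative.

```python
# P vs NP in miniature: verifying a certificate for subset-sum is fast,
# while the naive search below inspects all 2^n subsets.
from itertools import combinations

def verify(nums, target, certificate):
    """Polynomial-time check of a proposed subset (indices into nums)."""
    return sum(nums[i] for i in certificate) == target

def search(nums, target):
    """Exponential-time brute force over every subset of indices."""
    for r in range(len(nums) + 1):
        for subset in combinations(range(len(nums)), r):
            if verify(nums, target, subset):
                return subset
    return None

nums, target = [3, 34, 4, 12, 5, 2], 9
cert = search(nums, target)
# verify(nums, target, cert) is True (cert picks 4 + 5)
```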

Comparison to real computation

Digital computers

Digital computers, particularly those following the von Neumann architecture, exhibit key conceptual similarities to the Turing machine model. The central processing unit (CPU) operates as a finite state control mechanism, akin to the Turing machine's finite set of states, determining the next action based on the current state and input symbol. Similarly, random-access memory (RAM) approximates the Turing machine's tape by providing a linear addressable space for storing and retrieving data and instructions, though with finite capacity. Programs executed on these computers can be regarded as encoded versions of the Turing machine's transition table, specifying state changes, symbol writes, and head movements in a deterministic manner. Despite these parallels, significant differences arise from the theoretical nature of Turing machines versus the practical constraints of hardware. A standard Turing machine assumes an infinite tape for storage, enabling computations of arbitrary length without resource exhaustion, whereas digital computers rely on finite memory, which imposes hard limits on the size and duration of executable programs. Additionally, access to the Turing machine's tape is strictly sequential, requiring the read/write head to move step by step along the tape, in contrast to the random-access capability of von Neumann architectures, where data can be addressed and retrieved directly regardless of position. The notion of Turing completeness bridges these models by establishing that any sufficiently powerful digital computer can emulate a Turing machine. General-purpose programming languages such as C and Python are Turing-complete, meaning they possess the expressive power to simulate any Turing machine computation, provided unlimited time and memory are available—though real implementations are bounded by hardware. A universal Turing machine further underscores this equivalence, as it can interpret and execute the description of any other Turing machine, mirroring how digital computers run arbitrary software.
In contemporary computer science, Turing machines retain relevance by abstracting away hardware specifics, enabling the theoretical analysis of algorithms and their complexity without regard to implementation details like processor speed or memory layout. This focus on computational limits and universality makes the model indispensable for understanding what is fundamentally computable on digital systems.

Hypercomputation critiques

Hypercomputation refers to theoretical models of computation that purportedly exceed the capabilities of Turing machines by solving undecidable problems, such as the halting problem. These models often invoke idealized physical or mathematical constructs to achieve super-Turing power, but they face significant critiques regarding their physical realizability. Critics argue that such systems violate fundamental physical laws, including those governing time, energy, and information processing, thereby extending the Church-Turing thesis to assert that no physically feasible device can surpass Turing computability. One prominent example is the infinite-time Turing machine (ITTM), which extends standard Turing machines by allowing computation to proceed through transfinite ordinal time, performing limit steps at limit ordinals to update tape cells based on previous values. Introduced by Hamkins and Lewis, ITTMs can compute functions beyond Turing machines, such as the truth predicate for first-order arithmetic. However, critiques highlight their physical impossibility, as they require infinite duration or unattainable precision in state tracking, contradicting relativistic and thermodynamic constraints on time and energy. Malament-Hogarth spacetimes propose hypercomputation via relativistic effects, where a computer follows a worldline allowing infinite proper time (experienced by the device) while an observer's elapsed proper time remains finite, potentially enabling acausal signaling near black holes or wormholes. Earman and Norton explored this in the context of supertasks, suggesting it could solve undecidable problems through infinitely many steps completed in finite external time. Yet such spacetimes are deemed physically unrealizable due to violations of global hyperbolicity, the cosmic censorship conjecture, and the absence of configurations permitting infinite computation without singularities or energy divergences. Analog computers, particularly those modeled as continuous dynamical systems with real-number inputs acting as oracles, represent another proposal.
Siegelmann's analog neural network models, using real-valued weights and activations, claim to compute non-recursive functions by exploiting the uncountable precision of real numbers. Critiques, notably from Martin Davis, contend that these rely on unphysically precise real-number oracles, which cannot exist due to noise, finite energy, and the no-cloning theorem preventing perfect measurement or replication of quantum states encoding reals. Zeno machines, which perform infinitely many steps in finite time by accelerating computations supertask-style (e.g., halving each successive time interval, as in Zeno's paradoxes), offer a related model of hypercomputation. Potgieter reviewed their formal properties, showing potential for non-Turing outputs. Physical critiques emphasize impossibility: achieving infinitely many steps demands unbounded acceleration, violating relativistic limits, and requires infinite energy or precision, bounded by thermodynamic and quantum constraints. The Church-Turing thesis, when extended to physical systems, posits that effective computation is limited to Turing-equivalent processes, reinforced by the lack of empirical evidence for hyperdevices despite extensive physical exploration. No observed phenomena, from particle accelerators to cosmological data, suggest mechanisms enabling super-Turing operations. An ongoing debate concerns quantum computing, which provides probabilistic speedups (e.g., via Shor's algorithm) but remains equivalent in computational power to probabilistic Turing machines, not hypercomputers, due to unitary evolution and measurement constraints.

Historical development

Precursors and Entscheidungsproblem

In the 19th century, early precursors to modern computational concepts emerged through mechanical and logical innovations. Charles Babbage's Analytical Engine, designed starting in 1837, represented a programmable mechanical device capable of performing arbitrary calculations via punched cards for input and control, analogous to a rudimentary Turing machine in its separation of storage and processing. Complementing this, George Boole developed an algebraic treatment of logic in works such as The Mathematical Analysis of Logic (1847) and An Investigation of the Laws of Thought (1854), where he represented logical propositions using symbolic equations with operations like addition for disjunction and multiplication for conjunction, treating logic as a branch of algebra governed by idempotent laws such as x^2 = x. These ideas gained formal momentum in the early 20th century amid efforts to rigorize mathematics. In 1900, David Hilbert presented 23 problems at the International Congress of Mathematicians in Paris; his 10th problem asked for an algorithm to decide the solvability of Diophantine equations, an early demand for a general decision procedure. The Entscheidungsproblem itself, the challenge of devising an algorithm to determine the validity of any statement in predicate logic so that mathematics could be mechanized and all questions resolved finitely, was formulated in Hilbert and Wilhelm Ackermann's 1928 textbook, which asked whether a given first-order formula is universally valid or satisfiable, decidable through finitely many steps; it underscored Hilbert's program for a complete, consistent foundation of mathematics. Early attempts to address such foundational issues included Thoralf Skolem's work on primitive recursive functions in the 1920s. In his 1923 paper, Skolem informally defined a class of functions built from basic operations (successor, addition, the constant zero) via composition and primitive recursion, such as the predecessor function p(0) = 0, p(Sx) = x, and relations like divisibility, aiming to formalize arithmetic without impredicative definitions. These functions are all total and always terminate, yet the class excludes some total computable functions, such as the Ackermann function, and thus captures only a subclass of the effectively calculable operations.
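The primitive recursion schema behind Skolem's definitions can be sketched in Python: `prim_rec` (an illustrative name) builds f with f(x, 0) = g(x) and f(x, n+1) = h(x, n, f(x, n)), from which addition and the predecessor mentioned above follow.

```python
# A sketch of the primitive recursion schema.

def prim_rec(g, h):
    """Return f with f(x, 0) = g(x) and f(x, n+1) = h(x, n, f(x, n))."""
    def f(x, n):
        acc = g(x)
        for i in range(n):          # unfold the recursion bottom-up
            acc = h(x, i, acc)
        return acc
    return f

# Addition from successor: plus(x, 0) = x, plus(x, n+1) = S(plus(x, n)).
plus = prim_rec(lambda x: x, lambda x, i, acc: acc + 1)

# Predecessor, as in Skolem: p(0) = 0, p(Sn) = n (x is unused here).
pred = prim_rec(lambda _x: 0, lambda _x, i, acc: i)
# plus(2, 3) == 5 and pred(None, 4) == 3
```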
A pivotal blow to Hilbert's optimism came with Kurt Gödel's 1931 incompleteness theorems, which employed diagonalization to reveal inherent limitations in formal systems. In his paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems," Gödel constructed self-referential sentences via arithmetization, showing that any consistent, effectively axiomatized system extending Robinson arithmetic contains undecidable statements, such as a Gödel sentence asserting its own unprovability: G \leftrightarrow \neg \mathrm{Prov}(\ulcorner G \urcorner). The second theorem extended this to prove that such a system cannot establish its own consistency, profoundly influencing later undecidability results by demonstrating that no single formal system could capture all mathematical truths.

Turing's 1936 paper

Alan Turing's seminal paper, "On Computable Numbers, with an Application to the Entscheidungsproblem," was received by the London Mathematical Society on 28 May 1936 and read on 12 November 1936, appearing in the Proceedings of the London Mathematical Society in 1937. Alonzo Church reviewed the paper positively in the Journal of Symbolic Logic in 1937, noting its rigorous formalization of computation. The work was motivated by David Hilbert's Entscheidungsproblem, which asked whether there exists an algorithm to determine the validity of any statement in first-order logic; Turing reduced this to a question about the behavior of his machines, showing undecidability. The paper's structure begins with an analogy to human computation, likening a human calculator to a machine with a limited set of states and an unlimited paper tape for recording symbols. Turing introduces "a-machines" (automatic machines), which are deterministic devices operating on an infinite tape divided into squares, each holding a symbol from a finite alphabet. These machines have a finite number of "m-configurations" representing internal states and move left or right based solely on the current m-configuration and scanned symbol, ensuring deterministic behavior. Computable real numbers are defined as those whose decimal expansions can be generated by a circle-free a-machine, one that goes on printing digits indefinitely rather than becoming stuck, allowing arbitrary precision through the tape's unbounded extent. Key innovations include the infinite tape, which enables handling sequences of arbitrary length without predefined bounds, contrasting with finite automata. Turing sketches the concept of a "universal computing machine" capable of simulating any other a-machine given its table of instructions as input on the tape, laying groundwork for programmable computers.
To prove undecidability, he employs a diagonal argument on the enumeration of all possible a-machines: assuming a machine D that decides whether another machine is circle-free leads to a contradiction, since one can construct a machine that does the opposite of what D predicts when applied to its own description. This demonstrates that not all reals are computable and resolves the Entscheidungsproblem negatively.
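Turing's first example a-machine, which prints 0 and 1 on alternate squares forever, can be sketched as follows. The dictionary encoding, the step cap, and the function name are modern conveniences for illustration; the four m-configurations b, c, e, f follow the paper's example table.

```python
# Turing's first example a-machine (sketch): four m-configurations that
# print the sequence 0 1 0 1 ... on alternate squares of the tape.

def print_alternating(n_steps=8):
    """Run the b, c, e, f machine for a fixed number of steps."""
    tape, head, config = {}, 0, "b"
    rules = {          # config -> (symbol to print or None, move, next)
        "b": ("0", +1, "c"),   # print 0, move right
        "c": (None, +1, "e"),  # skip a square
        "e": ("1", +1, "f"),   # print 1, move right
        "f": (None, +1, "b"),  # skip a square, start over
    }
    for _ in range(n_steps):
        write, move, nxt = rules[config]
        if write is not None:
            tape[head] = write
        head += move
        config = nxt
    return "".join(tape.get(i, " ") for i in range(max(tape) + 1))

# print_alternating(8) == "0 1 0 1"
```

The step cap stands in for the machine's circle-free behavior: left to itself, it would print the expansion of 0.010101... forever.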

Post-1936 impacts

Following Alan Turing's seminal 1936 paper, his ideas profoundly influenced practical computing developments during and after World War II. From 1939 to 1945, Turing worked at Bletchley Park, where he contributed to codebreaking efforts using early electromechanical devices, gaining hands-on experience with automated computation that informed his later designs. In 1945, shortly after the war, Turing proposed the Automatic Computing Engine (ACE) at the National Physical Laboratory, an electronic stored-program computer directly inspired by his universal machine concept and wartime machinery like the Colossus. This design emphasized simplicity and efficiency, aiming to execute any algorithm via programmable instructions, and laid groundwork for subsequent British computers such as the Pilot ACE, operational by 1950. John von Neumann's 1945 "First Draft of a Report on the EDVAC" similarly reflected Turing's theoretical framework, incorporating ideas of universal computation in its architecture for a stored-program discrete variable computer, though without explicit reference to Turing machines; Turing himself cited the report in his ACE proposal, bridging abstract theory and engineering practice. These efforts marked the transition from abstract models to realizable hardware, establishing Turing machines as a foundational reference point for digital computer design in the late 1940s. In the 1950s and 1960s, Turing machines became central to the emerging field of computability theory, particularly through formalizations in recursion theory. Stephen Kleene's 1952 book Introduction to Metamathematics equated recursive functions with Turing-computable ones, providing a rigorous framework for effective computability and influencing subsequent developments in mathematical logic and theoretical computer science. Hartley Rogers Jr.'s 1967 monograph Theory of Recursive Functions and Effective Computability further systematized the field, using Turing machines to define degrees of unsolvability and explore the structure of the recursive functions, solidifying their role in analyzing algorithmic limitations.
During this period, researchers also investigated minimal universal configurations, such as Marvin Minsky's 1967 demonstration of a 7-state, 4-symbol universal Turing machine, and later efforts toward even smaller variants, exemplified by the 2-state, 3-symbol machine proposed by Stephen Wolfram in 2002 and proven universal by Alex Smith in 2007. These constructions highlighted the compactness of computation, spurring studies on the boundaries of mechanical describability. From the 1970s onward, Turing machines permeated algorithms education and advanced theoretical extensions. Standard textbooks, such as Michael Sipser's Introduction to the Theory of Computation (first edition 1997), dedicate chapters to Turing machines as the canonical model for computability and complexity, using them to introduce undecidability and nondeterminism. Similarly, Thomas Cormen et al.'s Introduction to Algorithms (first edition 1990) references Turing machines in discussions of computational models, underscoring their ubiquity in curricula. In theoretical advancements, David Deutsch's 1985 paper introduced the quantum Turing machine, an extension allowing superposition and entanglement to model quantum computation, which preserves classical universality while enabling exponential speedups for certain problems. Concurrently, Tibor Radó's 1962 busy beaver function—defined via the maximum steps or output of halting n-state Turing machines—spawned ongoing competitions to compute its values, illustrating non-computable growth rates and challenging automated verification; for instance, the 5-state case was resolved in 2024 after exhaustive analysis confirming 47,176,870 steps as the maximum, while the 6-state case remains unresolved, with the record exceeding 10 \uparrow\uparrow 15 steps. In recent decades, Turing machines have informed cryptographic proofs and alternative computing paradigms without supplanting the classical model.
Shafi Goldwasser, Silvio Micali, and Charles Rackoff's 1985 paper defined zero-knowledge proofs using interactive Turing machines, enabling verifiers to confirm statements (e.g., graph non-isomorphism) without gaining additional knowledge, a cornerstone of secure protocols in modern cryptography. Security definitions in such systems are phrased in terms of efficient Turing-machine simulators. No fundamental paradigm shifts have emerged, but refinements appear in unconventional substrates; for example, Andrew Currin et al.'s 2016 design and in vitro implementation of a nondeterministic universal Turing machine using DNA strand displacement and polymerase chain reactions demonstrates potential for solving NP-complete problems biochemically, bridging theoretical models with molecular computing. These developments reaffirm Turing machines' enduring analytical power across domains.

References

  1. [1]
    [PDF] ON COMPUTABLE NUMBERS, WITH AN APPLICATION TO THE ...
    The "computable" numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means.
  2. [2]
    [PDF] Turing Machines and Computability - OSU Math Department
    Jul 6, 2010 · Turing machines were invented by Alan Turing in 1936, as an attempt to axioma- tize what it meant for a function to be “computable.
  3. [3]
    [PDF] 6 Turing Machines
    4 A simple formal model of mechanical computation now known as Turing machines. 4 A description of a single universal machine that can be used to compute any ...
  4. [4]
  5. [5]
    Turing machines - Stanford Encyclopedia of Philosophy
    Sep 24, 2018 · Turing machines, first described by Alan Turing in Turing 1936–7, are simple abstract computational devices intended to help investigate the extent and ...Definitions of the Turing Machine · Computing with Turing Machines
  6. [6]
    Turing Machines: Examples
    Mar 25, 2022 · 4.2 Unary Form Integer Addition ... Suppose that a tape contains pair of integers m,k in unary form separated by a single 'x'. Construct a TM to ...
  7. [7]
    [PDF] Introduction to the Theory of Computation, 3rd ed.
    ... Introduction to the Theory of Computation first appeared as a Preliminary Edition in paperback. The first edition differs from the Preliminary Edition in ...
  8. [8]
    [PDF] Formal Languages, Automata and Computation Turing Machines
    FORMAL DEFINITION OF A TURING MACHINE. A TM is 7-tuple M = (Q,Σ,Γ, δ,q0,qaccept ,qreject ) where Q,Σ,Γ are all finite sets. 1. Q is the set of states,. 2. Σ is ...
  9. [9]
  10. [10]
    [PDF] Turing Machines
    This is the Turing machine's finite state control. It issues commands that drive the operation of the machine. This is the Turing machine's finite state ...
  11. [11]
    Turing Machines and State Transition Diagrams
    Mar 27, 2009 · State Transition Diagram · Circle represents a state, · Arrows represent state transitions. · Each arrow also represents one instruction. · Arrow is ...
  12. [12]
    [PDF] Turing machines Group work - compsci.ie
    The operation of the Turing Machine is controlled by the finite-state machine (controller). ... Turing Machines – Example (unary increment). Test input: 111 (3) …
  13. [13]
    The Busy Beaver Challenge - bbchallenge
    Space-time diagrams provide a condensed way to visualise the behavior of Turing machines. ... To this day, no 5-state Turing machine is known to halt after more ...
  14. [14]
    [PDF] 1 Turing Machines and Effective Computability
    Nov 4, 2020 · itively, the computation of a one-tape TM can be simulated with two stacks by storing the tape contents to the left of the head on one stack ...
  15. [15]
    [PDF] An ACL2 Proof of the Turing Equivalence of M1 J Strother Moore
    So, like him, we represent a tape as two lists of 0s and 1s, representing ... “Turing machine tm (starting in st) on tape runs forever” means. ∀ n ...
  16. [16]
    [PDF] Computability
    The Turing machine can be simulated by two pushdown tapes. The movement of the head in one direction can be simulated by popping the top item of one stack and ...Missing: arrays | Show results with:arrays
  17. [17]
    [PDF] 9 Nondeterministic Turing Machines - Jeff Erickson
    In his seminal 1936 paper, Turing also defined an extension of his “automatic machines” that he called choice machines, which are now more commonly known as ...
  18. [18]
    [PDF] Turing Machines - Stanford University
    Nov 8, 2013 · To simulate the NTM N with a DTM D, we construct D as follows: · On input w, D converts w into an initial ID for N starting on w.
  19. [19]
    [PDF] The Complexity of Theorem-Proving Procedures
    A method of measuring the complexity of proof procedures for the predicate calculus is introduced and discussed. Throughout this paper, a set of strings1 means ...
  20. [20]
    [PDF] Lecture 13: Time Complexity - People | MIT CSAIL
    Theorem: Let t : ℕ → ℕ satisfy t(n) ≥ n, for all n. Then every t(n) time multi-tape TM has an equivalent O(t(n)2) time one-tape TM. Our simulation of multitape ...Missing: source | Show results with:source
  21. [21]
    Turing Machines: Examples
    Sep 10, 2025 · Leave the 2nd tape head where it is, and move the first tape head to the right until we hit the end of the second string. Now start moving ...Missing: pseudocode | Show results with:pseudocode
  22. [22]
    [PDF] Systems of logic based on ordinals (Proc. Lond. Math. Soc., series 2 ...
    Turing considered several natural ways in which ordinal logics could be constructed: (i) A p, obtained by successively adjoining statements directly overcoming ...
  23. [23]
    [PDF] arXiv:2112.03677v7 [cs.CC] 12 Nov 2023
    Nov 12, 2023 · Definition 2. ([AB09], Oracle Turing machines) An oracle Turing machine is a. Turing machine M that has a special read-write tape we call ...
  24. [24]
    Recursively enumerable sets of positive integers and their decision ...
    8. A set of positive integers is said to be recursively enumerable if there is a recursive function f{x) of one positive integral variable whose values, for ...
  25. [25]
    [PDF] The Turing Degrees: Global and Local Structure - Cornell Mathematics
    May 20, 2015 · This paper was the seminal paper on the structure of the Turing degrees. Kleene and Post proved, among others, Theorem. KP. 5.1.1, the ...
  26. [26]
    [PDF] RECURSIVE PREDICATES AND QUANTIFIERSC1) - PhilPapers
    This paper presents a theorem on quantifying recursive predicates, stating that for each predicate form, there's a non-expressible predicate, and that a ...<|control11|><|separator|>
  27. [27]
    [PDF] Turing Machine Variations - Rose-Hulman
    May 7, 2018 · To define the Universal Turing Machine U we need to: 1. Define an encoding scheme for TMs. 2. Describe the operation of U when it is given input.
  28. [28]
    [PDF] An Unsolvable Problem of Elementary Number Theory Alonzo ...
    Mar 3, 2008 · Observe that the formulas 1,2,3,. . . are all in principal normal form. Alonzo Church and J. B. Rosser, "Some properties of conversion,'? ...
  29. [29]
    [PDF] λ-Calculus: The Other Turing Machine
    Jul 25, 2015 · In 1932, Alonzo Church at Princeton described his λ-calculus as a formal system for mathematical logic,and in 1935 argued that any function on ...
  30. [30]
    [PDF] COMPUTABILITY AND λ-DEFINABILITY - Cloudinary
    Volume 2, Number 4, December 1937. COMPUTABILITY AND λ-DEFINABILITY. A. M. TURING. Several definitions have been given to express an exact meaning correspond ...
  31. [31]
    [PDF] Recursive Unsolvability of Post's Problem of "Tag" and other Topics ...
    May 14, 2007 · Recursive Unsolvability of Post's Problem of "Tag" and other Topics in Theory of. Turing Machines. Marvin L. Minsky. The Annals of Mathematics ...
  32. [32]
    Computability of Recursive Functions | Journal of the ACM
    First page of PDF. Formats available. You can view the full content in the ... Index Terms. Computability of Recursive Functions. Computing methodologies.
  33. [33]
    Introduction to Metamathematics: Stephen Cole Kleene, Michael ...
    Introduction to Metamathematics. ISBN-13: 978-0923891572, ISBN-10: 0923891579 ... recursion equations, and developed the modern machinery of partial recursive ...
  34. [34]
    [PDF] On Non-Computable Functions - Gwern.net
    The construction of non-computable functions used in this paper is based on the principle that a finite, non-empty set of non-negative integers has a.Missing: original | Show results with:original
  35. [35]
    A VARIANT OF A RECURSIVELY UNSOLVABLE PROBLEM
    A VARIANT OF A RECURSIVELY UNSOLVABLE PROBLEM. EMIL L. POST. By a string on a, 6 we mean a row of a's and 6's such as baabbbab. It may involve only a, or 6 ...
  36. [36]
    On the Computational Complexity of Algorithms
    J. Hartmanis and R. E. Stearns, Transactions of the American Mathematical Society, Vol. 117 (May 1965), pp. 285-306.
  37. [37]
    A Short History of Computational Complexity - Lance Fortnow
    Notes that the Hartmanis-Stearns paper laid out the definitions of quantified time and space complexity on multitape Turing machines and showed the first results of the theory.
  38. [38]
    Relationships between nondeterministic and deterministic tape complexities
    The amount of storage needed to simulate a nondeterministic tape-bounded Turing machine on a deterministic Turing machine is investigated.
  39. [39]
    Turing Machines - ArielOrtiz.info
    The machine consists of a finite control (equivalent to a CPU in modern-day computers), which can be in any of a finite set of states, together with a tape.
  40. [40]
    Introduction: Course Overview and Machine Model - cs.wisc.edu
    For given t : N → N and s : N → N, DTIME(t(n)) is defined as the class of languages accepted in O(t(n)) time by a random-access Turing machine, and DSPACE(s(n)) as the class accepted in O(s(n)) space.
  41. [41]
    Chapter 1. Models of Computation - Zoo | Yale University
    The Turing machine and the RAM are equivalent from many points of view; most importantly, the same functions are computable on both models.
  42. [42]
    Lecture 5: Computational Models - Motivation and Turing Machines
    Gives an informal definition: an infinite tape with symbols on it, scanned by a head. Notes that a circuit C of size s can be evaluated by a Turing machine in time polynomial in s.
  43. [43]
    Turing Completeness
    A programming language is said to be Turing complete or computationally universal if it can be used to simulate arbitrary Turing machines.
  44. [44]
    Hypercomputation and the Physical Church-Turing Thesis
    We review the main approaches to computation beyond Turing definability ('hypercomputation'): supertask, non-well-founded, analog, quantum, and retrocausal computation.
  45. [45]
    Practical intractability: a critique of the hypercomputation movement
    I present a more mathematically concrete challenge to hypercomputability, and will show that one is immediately led into physical impossibilities.
  46. [46]
    Infinite Time Turing Machines: Supertask Computation - arXiv
    Infinite time Turing machines extend the operation of ordinary Turing machines into transfinite ordinal time, providing a natural model of supertask computation.
  47. [47]
    The Extent of Computation in Malament-Hogarth Spacetimes
    Shows that MH spacetimes can compute far beyond the arithmetic, up to effectively Borel statements.
  48. [48]
    Analog computation via neural networks - ScienceDirect
    We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research.
  49. [49]
    The Myth of Hypercomputation - ResearchGate
    Under the banner of "hypercomputation" various claims are being made for the feasibility of modes of computation that go beyond what is permitted by Turing computability.
  50. [50]
    [cs/0412022] Zeno machines and hypercomputation - arXiv
    This paper reviews the Church-Turing Thesis (or rather, theses) with reference to their origin and application.
  51. [51]
    Quantum Hypercomputation: Hype or Computation?
    Quantum algorithms may outperform classical algorithms in some cases, but so far they retain the classical (recursion-theoretic) notion of computability.
  52. [52]
    Charles Babbage: His Life and Contributions
    He called it the Analytical Engine, and it was the first machine ever designed with the idea of programming. Babbage started working on this engine when work on the Difference Engine stalled.
  53. [53]
    George Boole - Stanford Encyclopedia of Philosophy
    His algebraic method of analyzing hypothetical syllogisms was to transform each of the hypothetical premises into an elective equation.
  54. [54]
    Mathematical Problems
    Hilbert's 1900 address; a reprint appears in Mathematical Developments Arising from Hilbert Problems, edited by Felix Browder, American Mathematical Society, 1976.
  55. [55]
    The Rise and Fall of the Entscheidungsproblem
    The Entscheidungsproblem is solved once we know a procedure that allows us to decide, by means of finitely many operations, whether a given logical expression is universally valid.
  56. [56]
    Recursive Functions - Stanford Encyclopedia of Philosophy
    An equivalent question can also be formulated in terms of the partial recursive functions; discusses the Turing equivalence of the definitions.
  57. [57]
    Gödel's Incompleteness Theorems
    The article was published in January 1931 (Gödel 1931); helpful introductions to Gödel's original paper are Kleene 1986 and Zach 2005.
  58. [58]
    Alan Turing, Enigma, and the Breaking of German Machine Ciphers
    Bletchley Park-designed replicas of the German Tunny machines could be configured with newly discovered settings, allowing cryptanalysts to decipher the intercepted traffic.
  59. [59]
    Alan Turing - Computer Designer
    At Bletchley Park, Turing saw what was needed to manage a data-processing operation at scale. The ACE went through many design cycles after his original proposal.
  60. [60]
    Turing's Main Hardware Design After World War II: the Automatic Computing Engine
    Various successful implementations of the ACE design were produced; the electronic computer design that Turing proposed was essentially the same as what later became standard.
  61. [61]
    First Draft of a Report on the EDVAC (1945) - IEEE Xplore
    Alan Turing cites this report, but not his own theoretical work, in his plan for an Automatic Computing Engine (Turing, 1945).
  62. [62]
    Lessons from a 1946 Computer Design: Turing and ACE
    Though Turing had seen von Neumann's draft report on the EDVAC, the ACE design was the world's first complete design for a stored-program electronic computer.
  63. [63]
    Kleene's Amazing Second Recursion Theorem
    Kleene's Second Recursion Theorem states that for a set V, if certain conditions hold, then every recursive partial function f has a fixed point.
  64. [64]
    Four Small Universal Turing Machines - ResearchGate
    We present universal Turing machines with state-symbol pairs of (5, 5), (6, 4), (9, 3) and (15, 2). These machines simulate a new variant of tag system.
  65. [65]
    Lecture 1: Turing Machines - UMD Computer Science
    A Turing machine is defined by an integer k ≥ 1, a finite set of states Q, an alphabet Γ, and a transition function.
  66. [66]
    Quantum theory, the Church-Turing principle and the universal quantum computer
    I have described elsewhere (Deutsch 1985; cf. also Albert 1983) how it would be possible to make a crucial experimental test of the Everett ('many-universes') interpretation of quantum theory.
  67. [67]
    Implementing a Nondeterministic Universal Turing Machine Using DNA
    We experimentally demonstrate a Nondeterministic Universal Turing Machine (NUTM); NUTMs have an exponential speedup over conventional and quantum computers.