A Turing reduction, also known as Turing reducibility, is a preorder relation in computability theory that compares the relative computational complexity of decision problems or sets of natural numbers by allowing one problem to be solved using an oracle for the other. Formally, a set A is Turing reducible to a set B, denoted A \leq_T B, if there exists a Turing machine equipped with an oracle for B—a hypothetical device that instantly answers membership queries about B—that can decide membership in A for any input. This concept enables the classification of problems based on how much "computational power" is required to solve them relative to each other, distinguishing it from stricter reductions like many-one reducibility by permitting multiple, adaptive oracle queries during computation.[1][2]

Introduced by Alan Turing in his 1938 PhD thesis Systems of Logic Based on Ordinals, the notion arose from his exploration of oracle machines as a way to extend the limits of standard Turing machines beyond undecidable problems like the halting problem, aiming to analyze hierarchies of formal logical systems. Turing reducibility forms the basis for Turing degrees, equivalence classes of sets under the relation A \equiv_T B (where A \leq_T B and B \leq_T A), which structure the "degrees of unsolvability" in a partial order; the minimal degree is that of the recursive (computable) sets, denoted \mathbf{0}. A key operation is the jump A', the Turing degree of the halting problem relative to A, satisfying A <_T A' and enabling the construction of increasingly complex degrees, as developed further by Emil Post and Stephen Kleene in the 1940s.[3][1][2]

The importance of Turing reductions lies in their role in proving undecidability: if A \leq_T B and A is undecidable, then so is B, providing a systematic method to establish the unsolvability of problems like Hilbert's tenth problem or the word problem for groups by reducing the halting problem to them.
This framework has profoundly influenced recursion theory, with results like the Friedberg-Muchnik theorem (1956) showing the existence of incomparable recursively enumerable degrees, and continues to underpin modern areas such as descriptive set theory and reverse mathematics. Turing reducibility is reflexive (A \leq_T A) and transitive, but it does not preserve recursive enumerability downward: if A \leq_T B and B is recursively enumerable, A need not be recursively enumerable, since a Turing reduction may exploit negative as well as positive answers about B.[1][2]
Definition and Formalism
Formal Definition
A Turing reduction from a decision problem A to a decision problem B is a procedure computable by a Turing machine that determines whether an input x \in A by making queries to an oracle for B, such that x \in A if and only if the oracle reports affirmatively on the relevant instances of B.[4] This procedure effectively transforms the problem of deciding A into a series of decisions about B, establishing the computability of A relative to B.[5]

An oracle Turing machine formalizes this reduction: it is a standard Turing machine augmented with an additional oracle tape and the ability to query it.[4] Upon entering a designated query state q_Q, the machine writes a string w on the oracle tape; the oracle then instantaneously replaces w with 1 if w \in B or 0 if w \notin B, after which computation resumes from the next state.[4] Formally, if M^B denotes the oracle Turing machine M equipped with oracle B, then A is Turing reducible to B, denoted A \leq_T B, if there exists such an M where M^B decides A.[5]

Turing reducibility is transitive: if A \leq_T B and B \leq_T C, then A \leq_T C.[6] To see this, suppose M_1^B decides A and M_2^C decides B; construct M_3^C that simulates M_1 on input x but, whenever M_1 would query its oracle for B on some y, instead runs M_2^C on y to obtain the answer from the C-oracle and feeds it back to the simulation of M_1.[4] Since simulations of Turing machines are computable, M_3^C decides A, establishing the reduction.[6]
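The transitivity argument can be sketched in code. The following is a toy model, not the formal construction: "oracle machines" are Python functions taking an input and an oracle (a membership predicate), and the example sets A, B, C—multiples of 8, 4, and 2—are illustrative stand-ins.

```python
# Toy model of the transitivity proof: an "oracle machine" is a Python
# function taking an input and an oracle, i.e. a membership predicate.

def compose_reductions(m1, m2):
    """Given M1^B deciding A and M2^C deciding B, build M3^C deciding A:
    every query M1 would put to its B-oracle is answered by running M2^C."""
    def m3(x, c_oracle):
        b_oracle = lambda y: m2(y, c_oracle)  # simulate the B-oracle via M2^C
        return m1(x, b_oracle)
    return m3

# Example sets: C = multiples of 2, B = multiples of 4, A = multiples of 8.
c_oracle = lambda n: n % 2 == 0               # oracle for C
m2 = lambda y, c: c(y) and c(y // 2)          # decides B given a C-oracle
m1 = lambda x, b: b(x) and b(x // 2)          # decides A given a B-oracle

m3 = compose_reductions(m1, m2)               # decides A given a C-oracle
```

Note that m3 never consults an oracle for B directly; every such query is replaced by a complete run of m2 against the C-oracle, mirroring the simulation step in the proof.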
Relation to Oracle Machines
An oracle Turing machine extends the standard model of a Turing machine by incorporating an additional oracle tape and a corresponding oracle head, along with designated query states that allow the machine to consult an external oracle for decision problems.[7] This augmentation enables the machine to solve problems beyond its native computational power by treating the oracle as a black-box subroutine that decides membership in a fixed language.[8]

To make a query, the oracle Turing machine first writes the instance—typically a string representing the input to the oracle set—onto the oracle tape, positioning the oracle head at the beginning of this string. The machine then enters a query state, at which point the oracle instantaneously inspects the tape content and provides a yes/no answer by either overwriting a designated symbol (such as replacing a query marker with '1' for yes or '0' for no) or transitioning the machine to one of two response states, allowing computation to continue based on the result.[9][7] This process models the subroutine call in a Turing reduction, where the oracle acts as an idealized decider for the target problem without revealing its internal mechanism.

The concept of Turing reducibility, denoted A \leq_T B, is precisely captured by oracle Turing machines: a language A is Turing reducible to B if there exists an oracle Turing machine that, with B as its oracle, decides membership in A.[9] This equivalence holds because any Turing reduction procedure, which involves a finite number of adaptive queries to B followed by local computation, can be simulated by the oracle machine's tape operations and state transitions, ensuring the machine halts with the correct output on every input.[8]

Relativization refers to the practice of interpreting computability and complexity results relative to an arbitrary oracle set, where classes like the recursively enumerable languages or recursive languages are defined with respect to the oracle's power.[9] For instance, the class \text{RE}_A consists of languages accepted by oracle Turing machines with oracle A, highlighting how oracle models preserve structural properties of computation independently of the specific oracle chosen.[8]
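The query protocol described above—write a string on the oracle tape, enter the query state, receive an overwritten answer bit—can be modeled in a few lines. This is a minimal sketch with invented names; the oracle set B (strings of even length) and the reduced set A are toy examples.

```python
# Minimal sketch of the oracle-tape query protocol (illustrative names).

class OracleTape:
    def __init__(self, oracle):
        self.oracle = oracle   # membership predicate for the oracle set B
        self.content = ""

    def write(self, w):
        self.content = w       # the machine writes the query string w

    def query(self):
        # Entering the query state: the oracle instantaneously overwrites
        # the tape with the answer bit, and computation resumes.
        self.content = "1" if self.oracle(self.content) else "0"
        return self.content == "1"

# Toy oracle for B = strings of even length; decide A = strings both of
# whose halves lie in B, using two queries through the protocol above.
tape = OracleTape(lambda w: len(w) % 2 == 0)

def decide_A(x):
    tape.write(x[: len(x) // 2])
    first = tape.query()
    tape.write(x[len(x) // 2 :])
    second = tape.query()
    return first and second
```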
Examples
Canonical Example with Halting Problem
The halting problem, denoted H, is the set of all pairs \langle P, x \rangle, where P is the description of a Turing machine and x is a finite input string, such that P halts on input x.[10]

A basic illustration of a Turing reduction involves reducing H to itself, which is trivial: an oracle Turing machine M^H decides membership in H by querying its oracle for H directly on the input \langle P, x \rangle and outputting the oracle's yes or no response.[11] This single-query procedure demonstrates how access to an oracle for a problem enables its own solution via Turing reduction.

Another canonical construction reduces the general halting problem H to the blank-tape halting problem H_\emptyset = \{ \langle P \rangle \mid P halts on the empty tape \}, showing H \leq_T H_\emptyset. On input \langle P, x \rangle, an oracle Turing machine M^{H_\emptyset} proceeds as follows:
1. Compute the encoding \langle Q \rangle of a new Turing machine Q, constructed such that Q, when started on any input (including the blank tape), first erases the tape, then writes the fixed string x onto it (hardcoding x into Q's finite description, which is possible since x is given and finite), positions the tape head appropriately, and finally simulates P starting from that configuration.
2. Query the oracle for H_\emptyset on \langle Q \rangle.
3. If the oracle returns yes (Q halts on blank tape), accept (P halts on x); otherwise, reject.
This algorithm uses one oracle query and halts on all inputs, correctly deciding H because Q on blank tape replicates the computation of P on x and thus halts if and only if P does.[12]

Such reductions underscore that H cannot be Turing reducible to any decidable set D. If H \leq_T D for decidable D, the oracle machine deciding H relative to D could be transformed into a standard Turing machine by replacing each oracle query to D with a direct simulation using D's decider (which always halts); this would yield a decider for H, contradicting the undecidability of H.[10][11]
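The transformation \langle P, x \rangle \mapsto \langle Q \rangle can be sketched in a toy model. Everything here is illustrative: a "machine" is a Python generator function whose exhaustion stands in for halting, and approx_blank_oracle is only a bounded stand-in, since the true H_\emptyset oracle is uncomputable.

```python
# Toy model: a "machine" is a Python generator function; it halts iff its
# generator is eventually exhausted.

def reduce_halting_to_blank(P, x, blank_oracle):
    # Step 1: build Q, which ignores its (blank) input, hardcodes x,
    # and simulates P on x.
    def Q():
        yield from P(x)
    # Steps 2-3: one oracle query on <Q>; answer exactly as the oracle does.
    return blank_oracle(Q)

def approx_blank_oracle(Q, budget=1000):
    """Bounded stand-in for the uncomputable blank-tape halting oracle:
    run Q for at most `budget` steps and report whether it finished."""
    g = Q()
    for _ in range(budget):
        try:
            next(g)
        except StopIteration:
            return True
    return False

def P(x):
    # Sample machine: halts immediately on even x, loops forever on odd x.
    while x % 2 == 1:
        yield
```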
Example in Post's Correspondence Problem
Post's Correspondence Problem (PCP) is a decision problem introduced by Emil Post in 1946. Given two finite lists of strings U = (u_1, \dots, u_k) and V = (v_1, \dots, v_k) over an alphabet \Sigma with |\Sigma| \ge 2, the question is whether there exists a sequence of indices i_1, i_2, \dots, i_n (with n \ge 1 and repetitions allowed) such that the concatenation u_{i_1} u_{i_2} \dots u_{i_n} = v_{i_1} v_{i_2} \dots v_{i_n}.

A Turing reduction from the halting problem to PCP provides a concrete example of how such reductions establish undecidability. The reduction constructs a Turing machine R that, on input \langle M, w \rangle (where M is a Turing machine and w is a string), computes a PCP instance P encoding the possible computations of M on w, and then queries an oracle for PCP on P. If the oracle indicates that P has a solution, R outputs that M halts on w; otherwise, it outputs no. Since the halting problem is undecidable, this reduction shows that PCP is undecidable.[13]

The detailed construction encodes the PCP instance P so that a matching sequence of tiles corresponds exactly to a valid computation history of M on w that reaches the halting state. The tiles are designed as follows: The alphabet includes symbols for tape contents, states, and head position. There is an initial tile representing the starting configuration of M on w (top and bottom match the initial instantaneous description, or ID). For each possible transition rule of M (read symbol a, write b, move direction D, new state q), there are corresponding tiles that "advance" the computation: the top string encodes the current ID, while the bottom encodes the subsequent ID after applying the transition. Additional tiles handle the head movement and symbol writing to ensure alignments only occur for valid steps. A special set of tiles allows matching only upon reaching a halting configuration, with no further extendable match possible otherwise.
A solution to P thus exists if and only if there is a finite computation path from the initial ID to a halting ID, i.e., M halts on w. This encoding was originally developed by Dana Scott.[13]
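A bounded search illustrates the combinatorial core of PCP. Since the problem is undecidable there is no total solver; the sketch below merely enumerates index sequences up to a fixed length. The instance in the comment is a standard small solvable example, not one produced by the construction above.

```python
from itertools import product

def pcp_solution(U, V, max_len=6):
    """Brute-force search for a PCP match using at most max_len tiles.
    Only a bounded semi-decision sketch: PCP itself is undecidable."""
    for n in range(1, max_len + 1):
        for seq in product(range(len(U)), repeat=n):
            if "".join(U[i] for i in seq) == "".join(V[i] for i in seq):
                return list(seq)   # a matching index sequence
    return None                    # no match within the bound

# Solvable instance: U = ("a", "ab", "bba"), V = ("baa", "aa", "bb") has the
# solution 2,1,2,0 (0-based): bba+ab+bba+a = bb+aa+bb+baa = "bbaabbbaa".
```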
Properties
Closure Properties
The Turing degrees are the equivalence classes of sets of natural numbers under Turing equivalence, denoted A \equiv_T B, which holds if and only if A \leq_T B and B \leq_T A. This partitions the power set of the natural numbers into degrees that measure relative computational difficulty.[14]

The relation \leq_T on sets induces a partial order on the Turing degrees that is reflexive, as every set is Turing reducible to itself via the identity functional, and antisymmetric, since mutual Turing reducibility implies equivalence and thus membership in the same degree.

The poset of Turing degrees forms an upper semi-lattice under the join operation: for degrees \mathbf{a} = \deg_T(A) and \mathbf{b} = \deg_T(B), the least upper bound is \mathbf{a} \vee \mathbf{b} = \deg_T(A \oplus B), where A \oplus B = \{2n \mid n \in A\} \cup \{2n+1 \mid n \in B\} is the effective join coding A and B on the even and odd numbers respectively. It is an upper bound because A \leq_T A \oplus B and B \leq_T A \oplus B, each component being recoverable from its coded range; it is least because if A \leq_T D and B \leq_T D, then A \oplus B \leq_T D, as one can decide membership of n in A \oplus B from D by applying the Turing functional for A or for B according to the parity of n. Oracle machines facilitate this by dispatching each query to the appropriate component.[14]

The Turing degrees do not form a full lattice, as not every pair of degrees admits a greatest lower bound (meet). This failure appears even among the recursively enumerable (r.e.) degrees: there exist pairs of r.e. degrees with no infimum, as well as minimal pairs of nonzero r.e. degrees whose only common lower bound is \mathbf{0}.
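The coding behind the join can be sketched directly. This is a minimal illustration in which oracles are membership predicates and the example sets are arbitrary stand-ins:

```python
# Sketch of the effective join A ⊕ B = {2n : n in A} ∪ {2n+1 : n in B},
# realizing the least upper bound deg(A) ∨ deg(B).

def join(a_oracle, b_oracle):
    """Membership predicate for A ⊕ B: even codes carry A, odd codes B."""
    return lambda n: a_oracle(n // 2) if n % 2 == 0 else b_oracle(n // 2)

def project_A(join_oracle):
    """Witness A <=_T A ⊕ B: query only the even codes."""
    return lambda n: join_oracle(2 * n)

def project_B(join_oracle):
    """Witness B <=_T A ⊕ B: query only the odd codes."""
    return lambda n: join_oracle(2 * n + 1)

# Example stand-ins: A = multiples of 3, B = multiples of 5.
a_oracle = lambda n: n % 3 == 0
b_oracle = lambda n: n % 5 == 0
joined = join(a_oracle, b_oracle)
```

Recoverability of each component from its coded range is exactly what makes \deg_T(A \oplus B) an upper bound of both degrees.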
Degrees and Comparability
The Turing degrees, denoted \mathcal{D}, form an upper semi-lattice under the partial order induced by Turing reducibility, where the degree of a set X, written \mathbf{deg}(X), represents the equivalence class of all sets Turing equivalent to X. This structure classifies the relative computational complexity of unsolvable problems, with \mathbf{0} as the least degree containing all recursive sets.[15]

The arithmetic hierarchy organizes sets into levels \Sigma_n^0 and \Pi_n^0 based on the number of alternations of quantifiers in arithmetic formulas defining them, corresponding directly to Turing degrees via iterated jumps of the empty set. Specifically, Post's theorem establishes that a set A belongs to \Sigma_{n+1}^0 if and only if A is recursively enumerable relative to \emptyset^{(n)}, the nth Turing jump of the empty set, and consequently A \in \Delta_{n+1}^0 (that is, A \in \Sigma_{n+1}^0 \cap \Pi_{n+1}^0) if and only if A \leq_T \emptyset^{(n)}. These levels form increasing chains in the Turing degrees, with \mathbf{0}^{(n)} = \mathbf{deg}(\emptyset^{(n)}) strictly above \mathbf{0}^{(n-1)}, capturing the progressive increase in unsolvability. The union of all finite levels yields the arithmetic sets, all of which are Turing reducible to some finite jump \emptyset^{(k)}.

Extending beyond the arithmetic hierarchy, the hyperarithmetic hierarchy classifies sets through transfinite iterations of the Turing jump up to the Church-Kleene ordinal \omega_1^{CK}, the smallest ordinal not representable by a computable well-ordering. A set is hyperarithmetic if it is Turing reducible to \emptyset^{(\alpha)} for some computable ordinal \alpha < \omega_1^{CK}, with levels defined analogously to the arithmetic case but relativized to ordinal notations. Kleene introduced this hierarchy to capture the effective analogue of projective sets, showing that hyperarithmetic sets coincide with the \Delta_1^1 sets in descriptive set theory, all lying below the hyperjump, the degree of Kleene's \mathcal{O}.
This structure reveals a countable hierarchy of degrees denser than the arithmetic one but still properly contained in the full \mathcal{D}.

The Turing degrees exhibit incomparability, meaning the partial order is not a total order; for any nonzero degree \mathbf{a}, there exists \mathbf{b} such that neither \mathbf{a} \leq_T \mathbf{b} nor \mathbf{b} \leq_T \mathbf{a}. The Friedberg-Muchnik theorem establishes incomparability even among the recursively enumerable degrees: using the finite-injury priority method, it constructs two r.e. sets A and B such that \mathbf{deg}(A) and \mathbf{deg}(B) are incomparable, resolving Post's problem on the existence of r.e. degrees strictly between \mathbf{0} and \mathbf{0}'. Minimal degrees—nonzero degrees with no degree strictly between them and \mathbf{0}—also exist, as shown by Spector, and further illustrate the branching structure.

The r.e. degrees are dense (Sacks density theorem): for any r.e. degrees \mathbf{a} < \mathbf{b}, there exists an r.e. degree \mathbf{c} such that \mathbf{a} < \mathbf{c} < \mathbf{b}. The full poset \mathcal{D}, by contrast, is not dense, as the existence of minimal degrees shows. These results, established through priority constructions and forcing techniques, underscore the intricate, non-linear nature of the degrees, contrasting with linear hierarchies like the arithmetic one.[15]
Applications
Role in Undecidability Proofs
Turing reductions are instrumental in undecidability proofs within computability theory, enabling the transfer of undecidability from a known undecidable problem to another. The core strategy is to reduce the halting problem H to the problem A under study: if H \leq_T A and H is undecidable, then A must also be undecidable; for if A were decidable, one could construct a decider for H by composing the decider for A with the reduction, yielding a contradiction.[10] This contraposition leverages the constructive nature of Turing reductions, where queries to an oracle for A suffice to simulate the computations needed to decide H. The halting problem H, defined as the set of pairs (M, w) where Turing machine M halts on input w, serves as the foundational undecidable set in this framework.[10]

A landmark application appears in Alan Turing's 1936 proof of the undecidability of the Entscheidungsproblem, Hilbert's question of whether there exists a general algorithm to determine the provability of statements in first-order logic. Turing reduced a problem he had already shown undecidable by diagonalization—whether a machine ever prints a specific symbol—to the Entscheidungsproblem, encoding machine computations as logical formulas so that the provability of a formula U_n corresponds exactly to whether machine n eventually prints that symbol. A decision procedure for provability would therefore decide the printing problem, which is impossible, thus establishing the result.[10]

Rice's theorem extends this reduction technique to a broad class of problems, stating that for any non-trivial property C of the languages recognized by Turing machines (i.e., C contains some but not all recursively enumerable sets), the index set \{e \mid L(\phi_e) \in C\} is undecidable.
The proof constructs a Turing reduction from the halting problem to this index set: given a pair (M, w), one effectively builds a machine that first simulates M on w and, upon halting, behaves like a fixed machine whose language lies in C; decidability of the index set would therefore decide halting.[16]

Another significant application is the undecidability of Hilbert's tenth problem, which asks whether there exists an algorithm to determine if a given Diophantine equation (polynomial equation with integer coefficients) has integer solutions. In 1970, Yuri Matiyasevich, building on work by Martin Davis, Hilary Putnam, and Julia Robinson, proved this by showing that every recursively enumerable set is Diophantine, allowing a Turing reduction from the halting problem to the solvability of such equations.[17]

The word problem for groups—determining whether a given word in the generators of a finitely presented group represents the identity element—was also shown undecidable using Turing reductions from the halting problem. Pyotr Novikov (1955) and William Boone (1959) independently constructed finitely presented groups whose word problems are undecidable, embedding computations of Turing machines into group relations.[18]

Diagonalization, as employed by Turing to prove the halting problem undecidable, synergizes with Turing reductions to construct universal undecidable sets, such as the diagonal halting set K = \{e \mid \phi_e(e) \downarrow\}, which captures all recursively enumerable sets via appropriate reductions and serves as a complete set in the Turing degrees. This combination allows for the systematic derivation of undecidability for myriad problems by reducing K to them, highlighting the hierarchical structure of undecidable sets.[10]
Use in Complexity Theory
In computational complexity theory, polynomial-time Turing reductions, denoted as \leq_T^p, extend the notion of many-one reductions by permitting a polynomial-time Turing machine to make multiple adaptive queries to an oracle for the target problem. These reductions are particularly useful for establishing completeness in resource-bounded classes like PSPACE, where a language L is PSPACE-complete if L \in PSPACE and for every A \in PSPACE, A \leq_T^p L. Unlike many-one reductions, which map an instance to a single instance of the target problem, Turing reductions enable the solving machine to adapt its strategy based on oracle responses, facilitating the handling of problems with inherent branching or alternating structures.[19] A representative example of their application appears in the analysis of the true quantified Boolean formulas problem (TQBF), which is PSPACE-complete.[20]

Turing reductions are preferred over many-one reductions in certain PSPACE contexts because adaptive queries capture dependencies that a single mapping cannot, such as iterative refinements in sparse set reductions or under assumptions separating deterministic and randomized completeness. For instance, under the assumption that PSPACE contains dense hard sets, there exist languages that are complete under \leq_T^p but not under polynomial-time many-one reductions (\leq_m^p).[19][21]

However, polynomial-time Turing reductions are not typically used to define NP-completeness, where Karp reductions (\leq_m^p) remain the standard. This is because Turing reductions are weaker (more permissive), potentially equating a language with its complement via answer flipping, which disrupts NP's asymmetry with coNP; many-one reductions provide finer granularity and align with Cook's original theorem for circuit satisfiability.[22]
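The power of adaptive queries is well illustrated by the classic search-to-decision reduction for satisfiability: with an oracle for the decision problem, one satisfying assignment can be recovered with one oracle query per variable, each depending on previous answers. The sketch below is illustrative: sat_oracle is a brute-force stand-in for a real oracle (so the whole program runs in exponential time), and the clause notation—lists of signed integers as literals—is an assumption of this example.

```python
from itertools import product

def sat_oracle(clauses, fixed):
    """Stand-in SAT oracle (brute force): is the CNF satisfiable with the
    variables in `fixed` pinned? Clauses are lists of signed-int literals."""
    variables = sorted({abs(l) for c in clauses for l in c})
    free = [v for v in variables if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        assign = {**fixed, **dict(zip(free, bits))}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def find_assignment(clauses):
    """Adaptive Turing reduction from search to decision: each oracle query
    pins one more variable and depends on the answers so far."""
    if not sat_oracle(clauses, {}):
        return None
    fixed = {}
    for v in sorted({abs(l) for c in clauses for l in c}):
        fixed[v] = True
        if not sat_oracle(clauses, fixed):  # adaptive query
            fixed[v] = False                # the other value must work
    return fixed
```

A single many-one mapping could not express this dependence of later queries on earlier answers, which is exactly the extra freedom \leq_T^p provides.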
Related Concepts
Stronger Reductions
A set A \subseteq \omega is many-one reducible to a set B \subseteq \omega, denoted A \leq_m B, if there exists a total computable function f: \omega \to \omega such that for all x \in \omega, x \in A if and only if f(x) \in B.[23] This form of reduction is stricter than a Turing reduction because it relies solely on a computable pre-processing step to produce a single query to B, with membership in A determined directly by the oracle response without additional computation.[23]

Truth-table reducibility provides an intermediate level of restriction. A set A is truth-table reducible to B, denoted A \leq_{tt} B, if there is a computable procedure that, for each input x, produces (without oracle access) a finite sequence of queries y_1, \dots, y_k \in \omega and a Boolean function h: \{0,1\}^k \to \{0,1\} (a truth table with 2^k rows) such that x \in A if and only if h(\chi_B(y_1), \dots, \chi_B(y_k)) = 1, where \chi_B is the characteristic function of B. Unlike Turing reductions, which permit adaptive queries where each question may depend on prior answers, truth-table reductions require all queries to be fixed in advance and the decision to follow a pre-specified truth table on the responses.

Every many-one reduction is a truth-table reduction, as a single-query reduction corresponds to the truth table on one argument where membership holds exactly when the oracle response is affirmative. Similarly, every truth-table reduction is a Turing reduction, since the fixed list of queries can be posed non-adaptively to the oracle for B, followed by evaluation of the truth table. Thus, \leq_m \subseteq \leq_{tt} \subseteq \leq_T as relations on sets.
These inclusions are strict. For \leq_m \subsetneq \leq_{tt}, consider the complement of the halting set \overline{K} = \{e \in \omega \mid \varphi_e(e) \uparrow \}, where K = \{e \in \omega \mid \varphi_e(e) \downarrow \}: \overline{K} \leq_{tt} K via the single query e with the negating truth table (accept on response 0), but \overline{K} \not\leq_m K, since a many-one reduction to the r.e. set K would imply that \overline{K} is recursively enumerable, contradicting the fact that \overline{K} is not r.e. For \leq_{tt} \subsetneq \leq_T, there exist sets A and B with A \leq_T B but A \not\leq_{tt} B, a separation going back to Post's study of reducibilities, showing that adaptive querying is genuinely more powerful.

Many-one reducibility preserves recursive enumerability: if A \leq_m B and B is r.e., then A is r.e., as the preimage under the computable f of an enumeration of B yields an enumeration of A.[23] Truth-table reducibility does not preserve r.e.-ness in general, as the example \overline{K} \leq_{tt} K demonstrates, with K r.e. but \overline{K} not. These preservation properties highlight why many-one reductions are considered stronger, enabling finer distinctions in the structure of unsolvability degrees compared to the broader Turing reductions.
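In a toy setting the non-adaptive shape of a truth-table reduction is easy to see. Below, the oracle set B is the primes and A (an invented example) is the set of n with both n and n+2 prime; the reduction computes its two queries from n alone and then applies a fixed AND truth table. The one-query pattern used for \overline{K} \leq_{tt} K has the same shape, with h = NOT.

```python
def is_prime(n):
    # Stand-in oracle for the set B of primes.
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def tt_reduce(n, b_oracle):
    """Truth-table reduction deciding A = {n : n and n+2 both prime}."""
    queries = [n, n + 2]                      # fixed in advance, non-adaptive
    answers = [b_oracle(q) for q in queries]  # consult the oracle
    return answers[0] and answers[1]          # truth table h = AND
```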
Weaker Reductions
Weaker reductions impose additional constraints on the oracle usage in a Turing reduction, limiting its expressive power compared to the full adaptive querying allowed in standard Turing reductions; as relations, each lies between many-one and full Turing reducibility. These variants are useful for studying finer structures in the Turing degrees, since sets of the same Turing degree may become inequivalent once the constraints are imposed.[24]

Bounded Turing reductions, also known as weak truth-table (wtt) reductions, restrict the computation so that the use—the largest oracle query made on a given input—is bounded by a computable function of the input; queries may still be adaptive within that bound. This ensures that the oracle answers influence the computation in a controlled manner, without unbounded search through the oracle. Relatedly, disjunctive truth-table reductions compute the output as the disjunction of oracle answers to a computable list of queries, forming a subclass where the decision accepts if at least one query is affirmative. These reductions are instrumental in analyzing degrees where computational resources are limited, as explored in foundational work on recursive function theory.

Positive reductions restrict how oracle answers may be used: the acceptance condition must be a monotone (negation-free) combination of the answers, so that enlarging the oracle set can never turn an acceptance into a rejection. Formally, A is positive reducible to B if A \leq_{tt} B via truth tables built only from conjunctions and disjunctions of the oracle answers, preserving "positive information flow" from B to A.
This notion arises in studies of immunity and hyperimmunity in recursion theory, where immune sets—those with no infinite computable subset—are analyzed in terms of such directional information flow. Stronger variants, like strong positive reducibilities, further restrict the allowed truth tables, aiding proofs about non-trivial degrees below the halting problem.[25]

These restrictions change nothing for trivial oracles: there is no bounded Turing reduction from the halting problem H to the empty set \emptyset, since every oracle query would be answered negatively and the reduction would collapse into an ordinary Turing machine deciding H, contradicting its undecidability—and the same argument rules out a full Turing reduction from H to \emptyset. The interest of the bounded and positive variants lies instead in the finer equivalence relations they induce, under which Turing-equivalent sets may become inequivalent, refining the classification of the degrees of unsolvability.