Turing reduction

A Turing reduction, also known as Turing reducibility, is a preorder relation in computability theory that compares the relative computational complexity of decision problems or sets of natural numbers by allowing one problem to be solved using an oracle for the other. Formally, a set A is Turing reducible to a set B, denoted A \leq_T B, if there exists a Turing machine equipped with an oracle for B—a hypothetical device that instantly answers membership queries about B—that can decide membership in A for any input. This concept enables the classification of problems based on how much "computational power" is required to solve them relative to each other, distinguishing it from stricter reductions like many-one reducibility by permitting multiple, adaptive oracle queries during computation.

Introduced by Alan Turing in his 1938 PhD thesis Systems of Logic Based on Ordinals, the notion arose from his exploration of oracle machines as a way to extend the limits of standard Turing machines beyond undecidable problems like the halting problem, aiming to analyze hierarchies of formal logical systems. Turing reducibility forms the basis for Turing degrees, equivalence classes of sets under the relation A \equiv_T B (where A \leq_T B and B \leq_T A), which structure the "degrees of unsolvability" in a partial order; the minimal degree is that of the recursive (computable) sets, denoted \mathbf{0}. A key operation is the jump A', the halting problem relativized to A, satisfying A <_T A' and enabling the construction of increasingly complex degrees, as developed further by Emil Post and Stephen Kleene in the 1940s.

The importance of Turing reductions lies in their role in proving undecidability: if A \leq_T B and A is undecidable, then so is B, providing a systematic method to establish the unsolvability of problems such as Post's correspondence problem or the Entscheidungsproblem by reducing the halting problem to them. This framework has profoundly influenced recursion theory, with results like the Friedberg-Muchnik theorem (1956) showing the existence of incomparable recursively enumerable degrees, and continues to underpin modern areas such as descriptive set theory. Properties of Turing reducibility include reflexivity (A \leq_T A) and transitivity, but, unlike many-one reducibility, it does not preserve recursive enumerability downward (if A \leq_T B and B is recursively enumerable, A need not be), highlighting its focus on total computability rather than on partial functions.

Definition and Formalism

Formal Definition

A Turing reduction from a set A to a set B is a procedure computable by a Turing machine that determines whether an input x \in A by making queries to an oracle for B, such that the procedure correctly decides membership of x in A given the oracle's answers about membership in B. This procedure effectively transforms the problem of deciding A into a series of decisions about B, allowing the computability of A relative to B to be established. An oracle Turing machine formalizes this reduction: it is a standard Turing machine augmented with an additional oracle tape and the ability to query it. Upon entering a designated query state q_Q, the machine writes a string w on the oracle tape; the oracle then instantaneously replaces w with 1 if w \in B or 0 if w \notin B, after which computation resumes from the next state. Formally, if M^B denotes the oracle machine M equipped with the oracle B, then A is Turing reducible to B, denoted A \leq_T B, if there exists such an M where M^B decides A. Turing reducibility is transitive: if A \leq_T B and B \leq_T C, then A \leq_T C. To see this, suppose M_1^B decides A and M_2^C decides B; construct M_3^C that simulates M_1 on input x but, whenever M_1 would query its oracle for B on some y, M_3 instead runs M_2^C on y to obtain the answer and feeds it back to the simulation of M_1. Since simulations of Turing machines are computable, M_3^C decides A, establishing the reduction.
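The transitivity argument can be pictured concretely by treating oracle machines as procedures parameterized by an oracle. The following Python sketch is illustrative only: the bodies of decide_A_with_B and decide_B_with_C are hypothetical stand-ins for M_1 and M_2, and the point is how decide_A_with_C composes them by answering every B-query with a run of M_2^C.

```python
from typing import Callable

Oracle = Callable[[str], bool]  # an oracle answers membership queries with True/False


def decide_B_with_C(y: str, oracle_C: Oracle) -> bool:
    """Stands in for M_2^C: decides membership in B using an oracle for C.
    The body is a placeholder; in general it may query oracle_C adaptively."""
    return oracle_C(y)  # illustrative only


def decide_A_with_B(x: str, oracle_B: Oracle) -> bool:
    """Stands in for M_1^B: decides membership in A using an oracle for B.
    Adaptive querying: a later query may depend on an earlier answer."""
    if oracle_B(x):
        return oracle_B(x + "0")
    return oracle_B(x + "1")


def decide_A_with_C(x: str, oracle_C: Oracle) -> bool:
    """The composed machine M_3^C: simulates M_1, answering each B-query
    by running M_2 with access to the C-oracle."""
    simulated_B_oracle: Oracle = lambda y: decide_B_with_C(y, oracle_C)
    return decide_A_with_B(x, simulated_B_oracle)
```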

Relation to Oracle Machines

An oracle Turing machine extends the standard model of a Turing machine by incorporating an additional oracle tape and a corresponding oracle head, along with designated query states that allow the machine to consult an external oracle for a fixed decision problem. This augmentation enables the machine to solve problems beyond its native computational power by treating the oracle as a black-box subroutine that decides membership in a fixed set. To make a query, the oracle Turing machine first writes the query instance—typically a string representing the element whose membership is in question—onto the oracle tape, positioning the oracle head at the beginning of this string. The machine then enters a query state, at which point the oracle instantaneously inspects the tape content and provides a yes/no answer by either overwriting a designated cell (such as replacing a query marker with '1' for yes or '0' for no) or transitioning the machine to one of two response states, allowing computation to continue based on the result. This process models the subroutine call in a Turing reduction, where the oracle acts as an idealized decider for the target problem without revealing its internal mechanism. The concept of Turing reducibility, denoted A \leq_T B, is precisely captured by oracle Turing machines: a language A is Turing reducible to B if there exists an oracle Turing machine that, with B as its oracle, decides membership in A. This equivalence holds because any Turing reduction procedure, which involves a finite number of adaptive queries to B followed by local computation, can be simulated by the oracle machine's tape operations and state transitions, ensuring the machine halts with the correct output for every input. Relativization refers to the practice of interpreting computability-theoretic definitions and results relative to an arbitrary oracle set, where classes like the recursively enumerable languages or recursive languages are defined with respect to the oracle's power. For instance, the relativized class \text{RE}_A consists of languages accepted by Turing machines equipped with oracle A, highlighting how oracle models preserve structural properties of computability independently of the specific oracle chosen.
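As an illustration of the query protocol described above, the following minimal Python sketch models the oracle tape and query state. The class and helper names are invented for this example, and the oracle is represented as an ordinary Boolean function rather than a genuinely uncomputable set.

```python
from typing import Callable

Oracle = Callable[[str], bool]


class OracleTapeMachine:
    """Minimal illustration of the query protocol: the machine writes a string on
    a separate oracle tape, enters a query state, and the oracle overwrites the
    tape with '1' (member) or '0' (non-member) before computation resumes."""

    def __init__(self, oracle: Oracle):
        self.oracle = oracle
        self.oracle_tape = ""

    def write_query(self, w: str) -> None:
        self.oracle_tape = w  # the machine prepares the query instance

    def enter_query_state(self) -> str:
        answer = self.oracle(self.oracle_tape)      # oracle inspects the tape instantly
        self.oracle_tape = "1" if answer else "0"   # the answer replaces the query string
        return self.oracle_tape                     # control returns to the machine


# Illustrative use: decide x ∈ A, where A = {w : both w and its reverse are in B}.
def decide_with_two_queries(x: str, oracle_B: Oracle) -> bool:
    m = OracleTapeMachine(oracle_B)
    m.write_query(x)
    first = m.enter_query_state() == "1"
    m.write_query(x[::-1])
    second = m.enter_query_state() == "1"
    return first and second
```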

Examples

Canonical Example with Halting Problem

The halting problem, denoted H, is the set of all pairs \langle P, x \rangle, where P is the description of a Turing machine and x is a finite input string, such that P halts on input x. A basic illustration of a Turing reduction involves reducing H to itself, which is trivial: an oracle Turing machine M^H decides membership in H by querying its oracle for H directly on the input \langle P, x \rangle and outputting the oracle's yes or no response. This single-query procedure demonstrates how access to an oracle for a problem enables its own solution via Turing reduction. Another canonical construction reduces the general halting problem H to the blank-tape halting problem H_\emptyset = \{ \langle P \rangle \mid P halts on the empty tape \}, showing H \leq_T H_\emptyset. On input \langle P, x \rangle, an oracle Turing machine M^{H_\emptyset} proceeds as follows:
  1. Compute the encoding \langle Q \rangle of a new Turing machine Q, constructed such that Q, when started on any input (including the blank tape), first erases the tape, then writes the fixed string x onto it (hardcoding x into Q's finite description, which is possible since x is given and finite), positions the tape head appropriately, and finally simulates P starting from that configuration.
  2. Query the oracle for H_\emptyset on \langle Q \rangle.
  3. If the oracle returns yes (Q halts on blank tape), accept (P halts on x); otherwise, reject.
This algorithm uses one oracle query and halts on all inputs, correctly deciding H because Q on blank tape replicates the computation of P on x and thus halts if and only if P does. Such reductions underscore that H cannot be Turing reducible to any decidable set D. If H \leq_T D for decidable D, the oracle machine deciding H relative to D could be transformed into a standard Turing machine by replacing each oracle query to D with a direct simulation using D's decider (which always halts); this would yield a decider for H, contradicting the undecidability of H.
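A minimal sketch of this reduction in Python, assuming hypothetical string "descriptions" of machines: build_Q stands in for the effective construction of Q from P and x described in step 1, and the oracle for H_\emptyset is modeled as a Boolean function on descriptions.

```python
from typing import Callable

# Oracle for H_empty: given a machine description, answers "does it halt on the blank tape?"
Oracle = Callable[[str], bool]


def build_Q(description_P: str, x: str) -> str:
    """Hypothetical helper: returns a description of the machine Q that ignores its
    input, erases the tape, writes the hardcoded string x, and then simulates P.
    'Descriptions' are stand-in strings here; a real construction would manipulate
    transition tables."""
    return f"erase tape; write {x!r}; simulate {description_P}"


def decide_halting(description_P: str, x: str, oracle_blank_halting: Oracle) -> bool:
    """One-query Turing reduction: P halts on x iff Q halts on the blank tape."""
    q = build_Q(description_P, x)
    return oracle_blank_halting(q)  # accept exactly when the oracle answers yes
```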

Example in Post's Correspondence Problem

Post's Correspondence Problem (PCP) is a decision problem introduced by Emil Post in 1946. Given two finite lists of strings U = (u_1, \dots, u_k) and V = (v_1, \dots, v_k) over an alphabet \Sigma with |\Sigma| \ge 2, the question is whether there exists a sequence of indices i_1, i_2, \dots, i_n (with n \ge 1 and repetitions allowed) such that the concatenation u_{i_1} u_{i_2} \dots u_{i_n} = v_{i_1} v_{i_2} \dots v_{i_n}. A Turing reduction from the halting problem to PCP provides a concrete example of how such reductions establish undecidability. The reduction constructs an oracle Turing machine R that, on input \langle M, w \rangle (where M is a Turing machine and w is a string), computes a PCP instance P encoding the possible computations of M on w, and then queries an oracle for PCP on P. If the oracle indicates that P has a solution, R outputs that M halts on w; otherwise, it outputs no. Since the halting problem is undecidable, this reduction shows that PCP is undecidable. The detailed construction encodes the PCP instance P so that a matching sequence of tiles corresponds exactly to a valid computation history of M on w that reaches the halting state. The tiles are designed as follows:
  1. The tile alphabet includes symbols for tape contents, states, and head position.
  2. There is an initial tile representing the starting configuration of M on w (top and bottom match the initial instantaneous description, or ID).
  3. For each transition rule of M (read symbol a, write b, move direction D, new state q), there are corresponding tiles that "advance" the computation: the top string encodes the current configuration, while the bottom encodes the subsequent configuration after applying the transition.
  4. Additional tiles handle the head movement and symbol writing to ensure alignments only occur for valid steps.
  5. A special set of tiles allows matching only upon reaching a halting state, with no further extendable match possible otherwise.
A solution to P thus exists if and only if there is a finite computation path from the initial configuration to a halting configuration, i.e., M halts on w. Post established the undecidability of PCP in his original 1946 paper.
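The overall shape of the reduction R can be sketched as follows; encode_computation_as_pcp is a hypothetical placeholder for the tile construction listed above, and the PCP oracle is modeled as a Boolean function on instances.

```python
from typing import Callable, List, Tuple

PCPInstance = Tuple[List[str], List[str]]   # (U, V): two equal-length lists of strings
PCPOracle = Callable[[PCPInstance], bool]   # answers: does the instance have a match?


def encode_computation_as_pcp(machine_description: str, w: str) -> PCPInstance:
    """Hypothetical encoder: builds the tile lists whose matches correspond to
    halting computation histories of the machine on w. The tile-by-tile
    construction (initial tile, transition tiles, halting tiles) is omitted."""
    raise NotImplementedError


def decide_halting_via_pcp(machine_description: str, w: str, pcp_oracle: PCPOracle) -> bool:
    """The reduction R: one oracle query decides whether the machine halts on w."""
    instance = encode_computation_as_pcp(machine_description, w)
    return pcp_oracle(instance)  # yes-instance of PCP  <=>  machine halts on w
```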

Properties

Closure Properties

The Turing degrees are the equivalence classes of sets of natural numbers under Turing equivalence, denoted A \equiv_T B, which holds if and only if A \leq_T B and B \leq_T A. This partitions the power set of the natural numbers into degrees that measure relative computational difficulty. The relation \leq_T on sets induces a partial order on the Turing degrees that is reflexive, as every set is Turing reducible to itself via the identity functional, and antisymmetric, since mutual Turing reducibility implies Turing equivalence and thus membership in the same degree. The poset of Turing degrees forms an upper semi-lattice under the join operation, where for degrees \mathbf{a} = \deg_T(A) and \mathbf{b} = \deg_T(B), the least upper bound is \mathbf{a} \vee \mathbf{b} = \deg_T(A \oplus B), with A \oplus B = \{2n \mid n \in A\} \cup \{2n+1 \mid n \in B\} the effective join (computable disjoint union) of A and B. To see that this is a least upper bound, note first that A \leq_T A \oplus B and B \leq_T A \oplus B, since membership of n in A (respectively B) is read off position 2n (respectively 2n+1) of the join; conversely, if A \leq_T D and B \leq_T D, then A \oplus B \leq_T D, as one can decide membership in the join by applying the Turing functional for A or for B according to the parity of the input. Coding A and B into disjoint parities is exactly what ensures each component is recoverable from the join; oracle machines facilitate this by simulating the computation for each component with the join as oracle. The Turing degrees do not form a lattice, as not every pair admits a greatest lower bound (meet). The absence of meets is also visible among the recursively enumerable (r.e.) degrees: a pair of r.e. degrees may lack a meet, and when meets exist they are often trivial (degree \mathbf{0}).
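The join and the recovery of its components can be sketched directly in Python, with oracles modeled as Boolean functions on natural numbers; the function names are illustrative.

```python
from typing import Callable

Oracle = Callable[[int], bool]


def join_oracle(oracle_A: Oracle, oracle_B: Oracle) -> Oracle:
    """Membership in the join A ⊕ B = {2n : n ∈ A} ∪ {2n+1 : n ∈ B},
    decided with one query to the appropriate component."""
    def member(n: int) -> bool:
        return oracle_A(n // 2) if n % 2 == 0 else oracle_B(n // 2)
    return member


def recover_A(join: Oracle) -> Oracle:
    """A ≤_T A ⊕ B: membership in A is read off the even positions of the join."""
    return lambda n: join(2 * n)


def recover_B(join: Oracle) -> Oracle:
    """B ≤_T A ⊕ B: membership in B is read off the odd positions of the join."""
    return lambda n: join(2 * n + 1)
```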

Degrees and Comparability

The Turing degrees, denoted \mathcal{D}, form an upper semi-lattice under the partial order induced by Turing reducibility, where the degree of a set X, written \mathbf{deg}(X), is the equivalence class of all sets Turing equivalent to X. This structure classifies the relative difficulty of unsolvable problems, with \mathbf{0} as the least degree, consisting of the recursive sets. The arithmetic hierarchy organizes sets into levels \Sigma_n^0 and \Pi_n^0 based on the number of alternations of quantifiers in arithmetic formulas defining them, corresponding directly to Turing degrees via iterated jumps of the empty set. Specifically, Post's theorem establishes that a set A belongs to \Sigma_{n+1}^0 if and only if A is recursively enumerable relative to \emptyset^{(n)}, the nth Turing jump of the empty set, and that A \in \Delta_{n+1}^0 (that is, A \in \Sigma_{n+1}^0 \cap \Pi_{n+1}^0) if and only if A \leq_T \emptyset^{(n)}. These levels form increasing chains in the Turing degrees, with \mathbf{0}^{(n)} = \mathbf{deg}(\emptyset^{(n)}) strictly above \mathbf{0}^{(n-1)}, capturing the progressive increase in unsolvability. The union of all finite levels yields the arithmetic sets, all of which are Turing reducible to some finite jump \emptyset^{(k)}. Extending beyond the arithmetic hierarchy, the hyperarithmetic hierarchy classifies sets through transfinite iterations of the Turing jump up to the Church-Kleene ordinal \omega_1^{CK}, the smallest ordinal not representable by a computable well-ordering. A set is hyperarithmetic if it is Turing reducible to \emptyset^{(\alpha)} for some computable ordinal \alpha < \omega_1^{CK}, with levels defined analogously to the arithmetic case but relativized to ordinal notations. Kleene introduced this hierarchy as an effective analogue of the Borel hierarchy, showing that the hyperarithmetic sets coincide with the \Delta_1^1 sets of descriptive set theory, all lying strictly below the hyperjump \mathbf{0}^{(\omega_1^{CK})}. This structure reveals a countable hierarchy of degrees denser than the arithmetic one but still properly contained in the full \mathcal{D}. The Turing degrees exhibit incomparability, meaning the partial order is not a total order; for any nonzero degree \mathbf{a}, there exists \mathbf{b} such that neither \mathbf{a} \leq_T \mathbf{b} nor \mathbf{b} \leq_T \mathbf{a}. The Friedberg-Muchnik theorem shows that incomparable degrees exist even among the recursively enumerable degrees: using a finite-injury priority argument, it constructs two recursively enumerable sets A and B such that \mathbf{deg}(A) and \mathbf{deg}(B) are incomparable, resolving Post's problem on the existence of r.e. degrees strictly between \mathbf{0} and \mathbf{0}'. Minimal degrees—nonzero degrees with no degree strictly between them and \mathbf{0}—further illustrate the branching structure. Because minimal degrees exist, the full poset \mathcal{D} is not dense; within the r.e. degrees, however, the Sacks density theorem shows that for any r.e. degrees \mathbf{a} < \mathbf{b} there exists an r.e. degree \mathbf{c} with \mathbf{a} < \mathbf{c} < \mathbf{b}. These results, established through priority constructions and forcing techniques, underscore the intricate, non-linear nature of the degrees, contrasting with linear hierarchies like the arithmetic one.
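For reference, one standard formulation of Post's theorem, in the notation used above, is:

```latex
% Post's theorem, for every n >= 0:
\begin{align*}
  B \in \Sigma^{0}_{n+1} &\iff B \text{ is recursively enumerable relative to } \emptyset^{(n)}, \\
  B \in \Delta^{0}_{n+1} &\iff B \leq_T \emptyset^{(n)}.
\end{align*}
```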

Applications

Role in Undecidability Proofs

Turing reductions are instrumental in undecidability proofs within computability theory, enabling the transfer of undecidability from a known undecidable problem to another. The core strategy is that if the halting problem H Turing-reduces to a problem A (denoted H \leq_T A), and H is undecidable, then A must also be undecidable; for if A were decidable, one could construct a decider for H by composing the decider for A with the reduction, yielding a contradiction. This leverages the constructive nature of Turing reductions, where queries to an oracle for A suffice to decide H. The halting problem H, defined as the set of pairs (M, w) such that M halts on input w, serves as the foundational undecidable set in this framework. A landmark application appears in Alan Turing's 1936 proof of the undecidability of the Entscheidungsproblem, Hilbert's question of whether there exists a general algorithm to determine the provability of statements in first-order logic. Turing reduced the behavior of machines to provability by encoding machine computations as logical formulas and showing that the provability of a formula U_n (asserting that machine n eventually prints a specific symbol) holds exactly when the machine does so. A decision procedure for provability would therefore decide this printing problem, which Turing had already shown impossible via diagonalization, thus establishing the result. Rice's theorem extends this reduction technique to a broad class of problems, stating that for any non-trivial property C of the languages recognized by Turing machines (i.e., C contains some but not all recursively enumerable sets), the index set \{e \mid L(\phi_e) \in C\} is undecidable. The proof constructs a Turing reduction from the halting problem to this index set by building, from a given machine and input, a new machine that first simulates the given computation and only then exhibits a language in C, so that decidability of the index set would decide halting. Another significant application is the undecidability of Hilbert's tenth problem, which asks whether there exists an algorithm to determine if a given Diophantine equation (polynomial equation with integer coefficients) has integer solutions. In 1970, Yuri Matiyasevich, building on work by Martin Davis, Hilary Putnam, and Julia Robinson, proved this by showing that every recursively enumerable set is Diophantine, allowing a Turing reduction from the halting problem to the solvability of such equations. The word problem for finitely presented groups—determining whether a given word in the generators of a finitely presented group represents the identity element—was also shown undecidable using Turing reductions from the halting problem. Pyotr Novikov (1955) and William Boone (1959) independently constructed finitely presented groups whose word problems are undecidable, embedding computations of Turing machines into group relations. Diagonalization, as employed by Turing to prove the halting problem undecidable, synergizes with Turing reductions to construct universal undecidable sets, such as the diagonal halting set K = \{e \mid \phi_e(e) \downarrow\}, which captures all recursively enumerable sets via appropriate reductions and serves as a Turing-complete recursively enumerable set. This combination allows for the systematic derivation of undecidability for myriad problems by reducing K to them, highlighting the hierarchical structure of undecidable sets.
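The standard proof idea behind Rice's theorem can be sketched as follows, assuming (without loss of generality) that the empty language is not in C and that some fixed machine M_0 recognizes a language in C; build_checker is a hypothetical placeholder for the effective construction of the combined machine.

```python
from typing import Callable

# Answers: does the language of the machine described by this string have property C?
PropertyOracle = Callable[[str], bool]


def build_checker(description_M: str, w: str, description_M0: str) -> str:
    """Hypothetical helper: returns a description of a machine M' that, on any input y,
    first simulates M on w and, only if that simulation halts, runs M0 on y.
    Thus L(M') = L(M0) if M halts on w, and L(M') = ∅ otherwise."""
    return f"on input y: simulate {description_M} on {w!r}; then run {description_M0} on y"


def decide_halting_via_rice(description_M: str, w: str,
                            description_M0: str,             # fixed machine with L(M0) ∈ C
                            property_oracle: PropertyOracle) -> bool:
    """Reduction sketch, assuming ∅ ∉ C and L(M0) ∈ C:
    M halts on w  <=>  L(M') ∈ C  <=>  the oracle accepts M'."""
    m_prime = build_checker(description_M, w, description_M0)
    return property_oracle(m_prime)
```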

Use in Complexity Theory

In computational complexity theory, polynomial-time Turing reductions (also called Cook reductions), denoted \leq_T^p, extend the notion of many-one reducibility by permitting a polynomial-time oracle machine to make multiple adaptive queries to an oracle for the target problem. These are useful for establishing hardness and completeness in resource-bounded settings: a language L is hard for a class \mathcal{C} under \leq_T^p if A \leq_T^p L for every A \in \mathcal{C}, and complete if in addition L \in \mathcal{C}. Unlike many-one reductions, which map an instance to a single instance of the target problem, Turing reductions enable the solving machine to adapt its strategy based on oracle responses, facilitating the handling of problems with inherent branching or alternating structures. A representative example of their application appears in the analysis of the true quantified Boolean formulas problem (TQBF), which is PSPACE-complete. Turing reductions are preferred over many-one reductions in certain contexts because adaptive queries capture dependencies that a single mapping cannot, such as iterative refinements in reductions to sparse sets or search-to-decision arguments. Under suitable assumptions, there exist languages that are complete for NP under \leq_T^p but not under polynomial-time many-one reductions (\leq_m^p). However, polynomial-time Turing reductions are not typically used to define NP-completeness, where Karp reductions (\leq_m^p) remain the standard. This is because Turing reductions are weaker (more permissive), potentially equating a language with its complement via answer flipping, which disrupts the asymmetry between NP and co-NP; many-one reductions provide the finer granularity adopted in Karp's formulation, refining Cook's original completeness theorem for satisfiability.
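A classic illustration of adaptive querying is the polynomial-time Turing reduction from finding a satisfying assignment to deciding satisfiability (the self-reducibility of SAT). The Python sketch below assumes a DIMACS-style clause representation and an abstract decision oracle; each query depends on the answers already received, which is precisely what \leq_T^p permits and \leq_m^p does not.

```python
from typing import Callable, Dict, List, Optional

# A CNF formula as a list of clauses; each clause is a list of non-zero ints,
# where k means variable k and -k means its negation (DIMACS-style).
CNF = List[List[int]]
SatOracle = Callable[[CNF], bool]   # decision oracle: "is this formula satisfiable?"


def simplify(formula: CNF, var: int, value: bool) -> Optional[CNF]:
    """Fix one variable: drop satisfied clauses, remove falsified literals.
    Returns None if some clause becomes empty under this assignment."""
    lit = var if value else -var
    result: CNF = []
    for clause in formula:
        if lit in clause:
            continue                            # clause already satisfied
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None
        result.append(reduced)
    return result


def find_assignment(formula: CNF, num_vars: int, sat: SatOracle) -> Optional[Dict[int, bool]]:
    """Polynomial-time Turing reduction from SAT-search to SAT-decision:
    each oracle query depends on the answers to the previous ones."""
    if not sat(formula):
        return None
    assignment: Dict[int, bool] = {}
    for v in range(1, num_vars + 1):
        trial = simplify(formula, v, True)
        if trial is not None and sat(trial):    # adaptive query
            assignment[v], formula = True, trial
        else:
            falsified = simplify(formula, v, False)
            assert falsified is not None        # formula was satisfiable and v=True failed
            assignment[v], formula = False, falsified
    return assignment
```

With roughly one decision query per variable, the procedure runs in polynomial time relative to the oracle, while no single non-adaptive mapping of the search instance to a decision instance would recover the assignment directly.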

Stronger Reductions

A set A \subseteq \omega is many-one reducible to a set B \subseteq \omega, denoted A \leq_m B, if there exists a total computable function f: \omega \to \omega such that for all x \in \omega, x \in A if and only if f(x) \in B. This form of reduction is stricter than a Turing reduction because it relies solely on a computable pre-processing step to produce a single query to B, with membership in A determined directly by the oracle response without additional computation. Truth-table reducibility provides an intermediate level of restriction. A set A is truth-table reducible to B, denoted A \leq_{tt} B, if there is a computable procedure that, given x and without consulting any oracle, produces a finite sequence of queries y_1, \dots, y_k \in \omega and a Boolean function h: \{0,1\}^k \to \{0,1\} (a truth table with 2^k rows) such that x \in A if and only if h(\chi_B(y_1), \dots, \chi_B(y_k)) = 1, where \chi_B is the characteristic function of B. Unlike Turing reductions, which permit adaptive queries where each question may depend on prior answers, truth-table reductions require all queries to be fixed in advance and the decision to follow a pre-specified truth table on the responses. Every many-one reduction is a truth-table reduction, as a single-query reduction corresponds to a truth table with two rows in which membership holds exactly when the oracle response is affirmative. Similarly, every truth-table reduction is a Turing reduction, since the fixed list of queries can be posed non-adaptively to the oracle for B, followed by evaluation of the truth table. Thus, \leq_m \subseteq \leq_{tt} \subseteq \leq_T as relations on sets. These inclusions are strict: for \leq_m \subsetneq \leq_{tt}, consider the complement of the halting set \overline{K} = \{e \in \omega \mid \varphi_e(e) \uparrow \}, where K = \{e \in \omega \mid \varphi_e(e) \downarrow \}; \overline{K} \leq_{tt} K via the single query to e with truth table accepting on response 0, but \overline{K} \not\leq_m K, since that would imply \overline{K} is recursively enumerable, contradicting that \overline{K} is not r.e. while K is. For \leq_{tt} \subsetneq \leq_T, Post exhibited sets A and B with A \leq_T B but A \not\leq_{tt} B, showing that Turing reducibility is strictly coarser than truth-table reducibility. Many-one reducibility preserves recursive enumerability: if A \leq_m B and B is r.e., then A is r.e., since A = f^{-1}(B) and an enumeration of B can be effectively converted into an enumeration of A. Truth-table reducibility does not preserve r.e.-ness in general, as the example \overline{K} \leq_{tt} K demonstrates, with K r.e. but \overline{K} not. These preservation properties highlight why many-one reductions are considered stronger, enabling finer distinctions in the structure of unsolvability degrees compared to the broader Turing reductions.
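The structural difference is easy to see in code: in a truth-table reduction, the query list and the Boolean table are fixed before any oracle answer arrives. The Python sketch below mirrors the single-query reduction of \overline{K} to K described above (accept exactly when the oracle answers no); the oracle is modeled as a Boolean function.

```python
from typing import Callable, Dict, List, Tuple

Oracle = Callable[[int], bool]


def tt_reduce(x: int, oracle: Oracle) -> bool:
    """Shape of a truth-table reduction: the queries and the table are computed
    from x alone (non-adaptively), then the answers are looked up in the table."""
    queries: List[int] = [x]                               # fixed in advance
    table: Dict[Tuple[bool, ...], bool] = {
        (False,): True,    # accept exactly when the single answer is "no"
        (True,): False,
    }
    answers = tuple(oracle(q) for q in queries)            # pose all queries up front
    return table[answers]
```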

Weaker Reductions

Weaker forms of Turing reducibility impose additional constraints on oracle usage, limiting expressive power compared to the full adaptive querying allowed in standard Turing reductions. These variants are useful for studying finer structure in the Turing degrees, such as separating sets based on query patterns or preservation properties; because of the restrictions, some reductions that are possible with full adaptive querying cannot be carried out in the weaker forms. Bounded Turing reductions, also known as weak truth-table (wtt) reductions, restrict the computation so that the oracle use—the largest position queried—is bounded by a computable function of the input, although queries may still be posed adaptively within that bound. This ensures that the oracle's answers influence the computation in a controlled manner, without unbounded search through the oracle. For instance, disjunctive truth-table reductions compute the output as the disjunction of the oracle's answers to a computable list of queries, forming a subclass of truth-table reductions in which the decision accepts if at least one query is affirmative. These restricted reducibilities are instrumental in analyzing degree structures where computational resources are limited, as explored in foundational work in recursion theory. Positive reductions restrict how the oracle's answers may be combined: membership in A must be a monotone Boolean combination (built from conjunctions and disjunctions, without negations) of the answers, so that enlarging the oracle set can only add elements to A; in particular, positive reductions preserve recursive enumerability. Such restricted reducibilities appear in Post's programme and in studies of immunity and hyperimmunity, where structural properties of a set (such as avoiding infinite computable subsets) constrain the kinds of reductions it can support. Further restricted variants, such as conjunctive and disjunctive reducibility, allow only a single conjunction or disjunction of the answers and are used in finer analyses of the degrees below the halting problem. A key limitation of the restricted reductions is visible in trivial cases. For instance, there is no bounded Turing reduction from the halting problem H to the empty set ∅: every query to an oracle for ∅ is answered negatively, so the oracle can be dispensed with entirely, and the procedure would amount to an ordinary algorithm deciding H, contradicting its undecidability. This failure underscores how query bounds cannot supply the information needed for non-trivial uses, even though full Turing reductions also cannot reduce H to ∅ for the same reason. Positive variants exhibit analogous limitations when the oracle lacks sufficient affirmative information to resolve the queries.
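A disjunctive truth-table reduction has an especially simple shape, sketched below with an illustrative query generator; the oracle is again modeled as a Boolean function on natural numbers.

```python
from typing import Callable, List

Oracle = Callable[[int], bool]


def dtt_reduce(x: int, oracle: Oracle) -> bool:
    """Shape of a disjunctive truth-table (dtt) reduction: a computable, non-adaptive
    list of queries whose answers are combined by a single disjunction."""
    queries: List[int] = [2 * x, 2 * x + 1, 3 * x]   # illustrative query generator
    return any(oracle(q) for q in queries)           # accept iff some queried number is a member
```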