In computational complexity theory, a polynomial-time reduction (often denoted as ≤_p) is a method for transforming an instance of one decision problem into an instance of another such that the answer to the original problem can be determined from the answer to the transformed problem, with the transformation computable in polynomial time. Formally, for languages L₁ and L₂ over some alphabet Σ, L₁ ≤_p L₂ if there exists a function f: Σ* → Σ* computable in polynomial time such that for every x ∈ Σ*, x ∈ L₁ if and only if f(x) ∈ L₂.[1] This equivalence ensures that solving the second problem allows solving the first without asymptotically increasing the time complexity beyond polynomial bounds.[2]

Polynomial-time reductions play a central role in analyzing the relative hardness of computational problems, particularly within the class NP. If L₁ ≤_p L₂ and L₂ has a polynomial-time algorithm, then L₁ also has one, implying that L₂ is at least as hard as L₁.[3] Reductions are transitive: if L₁ ≤_p L₂ and L₂ ≤_p L₃, then L₁ ≤_p L₃, enabling the construction of chains of hardness relations.[1] They form the basis for defining key complexity classes, such as NP-hardness, where a problem B is NP-hard if every language in NP reduces to B in polynomial time, meaning B is at least as difficult as the hardest problems in NP.[4]

A problem is NP-complete if it is both NP-hard and belongs to NP, making polynomial-time reductions essential for proving NP-completeness via the Cook-Levin theorem, which establishes satisfiability (SAT) as the first such problem; subsequent reductions establish NP-completeness for others such as 3-SAT and the traveling salesman problem.[4] These reductions not only classify problems but also guide algorithm design by showing that solving any one NP-complete problem efficiently would imply P = NP, a major open question in theoretical computer science.[3]
Background Concepts
Reductions in computational complexity
In computational complexity theory, a reduction provides a systematic way to transform instances of one decision problem into instances of another while preserving the solution: yes-instances map to yes-instances, and no-instances map to no-instances.[5] This mapping allows researchers to compare the inherent difficulty of problems by showing how solving one might imply the ability to solve the other.[5]

The notion of reduction traces its origins to computability theory, where Emil Post introduced it in the 1940s to explore degrees of unsolvability among recursively enumerable sets, formalizing how one unsolvable problem could be transformed into another of comparable "difficulty." In the 1970s, this idea was adapted to resource-bounded complexity by Stephen Cook, who used reductions to establish NP-completeness, and Richard Karp, who extended the technique to numerous combinatorial problems.[6][7]

Reductions serve to demonstrate that one problem is no harder than another with respect to computational resources: if problem B reduces to problem A, and A can be solved using a limited amount of time, space, or other resources, then B can likewise be solved by composing the reduction with A's solution procedure.[5] For example, the Boolean satisfiability problem (SAT) trivially reduces to itself through the identity function, which leaves every instance unchanged.[6] In contrast to the efficient reductions examined later, the reductions of computability theory need only be computable: a reduction between instances of the halting problem, for example, may require computation time exceeding any polynomial bound, highlighting how resource demands can vary.

A fundamental result is that reductions preserve membership in complexity classes under appropriate resource bounds: if a reduction from problem A to problem B is computable within resources compatible with a class C, and B belongs to C, then A also belongs to C.[5] This preservation property underpins much of complexity theory, enabling the classification of problems relative to known solvable or hard sets.
Polynomial time and computability
In computational complexity theory, polynomial time refers to the computational resources required by a deterministic Turing machine to decide a language, bounded by a polynomial in the length of the input string. Specifically, a language L \subseteq \Sigma^* is decided in polynomial time if there exists a deterministic Turing machine M and a constant k > 0 such that for every input x \in \Sigma^* of length n = |x|, M halts on x within O(n^k) steps and accepts x if and only if x \in L.[8] This bound captures algorithms whose running time grows reasonably with input size, avoiding the exponential growth that renders many problems practically intractable.

Formally, the time complexity class \mathsf{DTIME}(f(n)) consists of all languages decidable by a deterministic Turing machine in at most O(f(n)) steps on inputs of length n. The class \mathsf{P} of polynomial-time decidable languages is then defined as \mathsf{P} = \bigcup_{k \geq 1} \mathsf{DTIME}(n^k).[9][8] These classes provide a rigorous framework for analyzing the efficiency of decision procedures, with \mathsf{P} serving as the canonical model for tractable problems in complexity theory.

Polynomial-time computable functions extend this notion from decision problems to total computable functions. A function f: \Sigma^* \to \Sigma^* is polynomial-time computable if there exists a deterministic Turing machine M that, on every input x \in \Sigma^* of length n, halts in O(n^k) steps for some constant k > 0 and outputs f(x). The class \mathsf{FP} denotes the set of all such functions. For example, the addition and multiplication of two n-bit integers can be computed in O(n^2) time using standard schoolbook methods, placing these operations in \mathsf{FP}.[10]

Key properties of \mathsf{P} and \mathsf{FP} include closure under composition: if f \in \mathsf{FP} with time bound O(n^{k_1}) and g \in \mathsf{FP} with time bound O(n^{k_2}), then the composition g \circ f is computable in O(n^{k_1 k_2}) time and hence remains in \mathsf{FP}.[10] This closure ensures that polynomial-time procedures can be reliably combined without exceeding the efficiency bound.

Polynomial time models feasible computation because it aligns with practical resource limits in real-world systems, where exponential-time algorithms become infeasible for even moderate input sizes (e.g., n = 50). This perspective, known as Cobham's thesis, posits that problems solvable in polynomial time on standard models like Turing machines are efficiently computable across equivalent models, providing a machine-independent notion of tractability.[11]
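The schoolbook bound mentioned above can be made concrete. The following is a minimal sketch in Python, assuming integers are represented as little-endian bit lists (an illustrative encoding, not from any particular source): two nested loops over the bits give O(n^2) bit operations, placing the function in \mathsf{FP}.

```python
# A minimal sketch of schoolbook multiplication on little-endian bit lists;
# the representation and function name are illustrative assumptions.

def schoolbook_multiply(a: list[int], b: list[int]) -> list[int]:
    """Multiply two bit lists with O(len(a) * len(b)) bit operations."""
    result = [0] * (len(a) + len(b))
    for i, bit_a in enumerate(a):
        if bit_a == 0:
            continue                      # this shifted copy of b contributes nothing
        carry = 0
        for j, bit_b in enumerate(b):
            s = result[i + j] + bit_b + carry
            result[i + j] = s & 1
            carry = s >> 1
        k = i + len(b)
        while carry:                      # propagate any remaining carry
            s = result[k] + carry
            result[k] = s & 1
            carry = s >> 1
            k += 1
    return result

# 6 * 5 = 30: bits [0,1,1] * [1,0,1] -> [0,1,1,1,1,0] (little-endian)
assert schoolbook_multiply([0, 1, 1], [1, 0, 1]) == [0, 1, 1, 1, 1, 0]
```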
Formal Definition
Many-one polynomial-time reductions
A many-one polynomial-time reduction, also known as a Karp reduction, is a fundamental tool in computational complexity theory for comparing the hardness of decision problems. Formally, for languages A, B \subseteq \Sigma^*, a total function f: \Sigma^* \to \Sigma^* computed by a deterministic Turing machine in polynomial time constitutes a many-one polynomial-time reduction from A to B if it preserves membership exactly: for every string x \in \Sigma^*, x \in A \iff f(x) \in B. This relation is denoted by A \leq_p^m B, indicating that A is polynomial-time many-one reducible to B.[12][13]

Such reductions are inherently deterministic and non-adaptive, relying on a single, fixed computation to map an instance of the source problem to exactly one instance of the target problem without intermediate queries or adjustments based on oracle responses. This preserves the exact membership structure of the languages, ensuring that yes-instances map to yes-instances and no-instances to no-instances. The concept was popularized by Richard M. Karp in his seminal 1972 paper "Reducibility Among Combinatorial Problems," where he employed these reductions to establish the NP-completeness of 21 combinatorial problems by chaining them from 3-SAT.[7][14]

A classic example is the polynomial-time many-one reduction from Vertex Cover to Independent Set. The Vertex Cover problem asks whether a graph G = (V, E) has a subset S \subseteq V of size at most k such that every edge in E is incident to at least one vertex in S. To reduce to Independent Set—which asks whether G has a subset T \subseteq V of size at least m with no edges between vertices in T—map the instance (G, k) to (G, |V| - k). This works because a set S is a vertex cover of size at most k if and only if its complement V \setminus S is an independent set of size at least |V| - k; the mapping is computable in polynomial time, since it only requires subtracting k from the number of vertices.[15]

Another illustrative reduction, due to Karp, transforms 3-SAT to Clique. The 3-SAT problem determines whether a Boolean formula in conjunctive normal form, with each clause containing exactly three literals, is satisfiable. For a formula \phi with m clauses, construct a graph with 3m vertices, one for each literal occurrence in the clauses (e.g., for clause i with literals l_{i1}, l_{i2}, l_{i3}, create vertices v_{i1}, v_{i2}, v_{i3}). Add an edge between vertices from different clauses if their corresponding literals are consistent (i.e., neither is the negation of the other). The target instance is this graph with clique size k = m. Then \phi is satisfiable if and only if the graph contains a clique of size m, as such a clique selects one consistent literal per clause, yielding a satisfying truth assignment; the graph construction runs in time polynomial in the formula size.[7][16]
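Both reductions can be written out directly. The following is a minimal Python sketch under an illustrative encoding (a 3-CNF clause is a tuple of three nonzero integers, positive for a variable and negative for its negation; a graph is a vertex list plus an edge list); the constructions run in time polynomial in the input size.

```python
# A minimal sketch of the two many-one reductions described above;
# the input encodings are illustrative assumptions, not a standard interface.

def vertex_cover_to_independent_set(n_vertices, edges, k):
    """(G, k) for Vertex Cover maps to (G, |V| - k) for Independent Set."""
    return n_vertices, edges, n_vertices - k

def three_sat_to_clique(clauses):
    """Karp's reduction: one vertex per literal occurrence, edges between
    consistent literals of different clauses, target clique size m."""
    m = len(clauses)
    vertices = [(i, lit) for i, clause in enumerate(clauses) for lit in clause]
    edges = []
    for a in range(len(vertices)):
        for b in range(a + 1, len(vertices)):
            (ci, li), (cj, lj) = vertices[a], vertices[b]
            if ci != cj and li != -lj:      # different clauses, not contradictory
                edges.append((vertices[a], vertices[b]))
    return vertices, edges, m               # satisfiable iff a clique of size m exists

# Example: (x1 v x2 v x3) and (not x1 v not x2 v x3)
vertices, edges, k = three_sat_to_clique([(1, 2, 3), (-1, -2, 3)])
```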
Turing polynomial-time reductions
A Turing polynomial-time reduction, also known as a Cook reduction, allows one decision problem to be solved using a polynomial-time algorithm that has access to an oracle solving another decision problem. Formally, a language A is polynomial-time Turing reducible to a language B, denoted A \leq_p^T B, if there exists a deterministic oracle Turing machine M and a polynomial p such that for every input x, M with oracle B (denoted M^B) halts in at most p(|x|) steps and accepts x if and only if x \in A.[6][5]

In the oracle model, the underlying machine is a standard deterministic Turing machine augmented with a dedicated oracle tape and three special states: a query state to initiate a query, and yes and no states to receive the oracle's response. To query the oracle for B, the machine writes a string y on the oracle tape and enters the query state; the oracle answers in a single unit-time step, indicating whether y \in B, without altering the query string. The total computation, including all oracle queries and machine steps, must run in polynomial time relative to the input size. This model enables the machine to make polynomially many queries, which may be adaptive (each query potentially depending on prior oracle answers).[5][17]

A classic application arises in connection with NP-complete problems, where the self-reducibility of SAT allows a satisfying assignment to be recovered using only a decision oracle. For example, to decide whether a 3-CNF formula \phi is satisfiable—and to extract a satisfying assignment—using an oracle for general CNF-SAT, an algorithm can adaptively query modified formulas, fixing variables one by one (e.g., setting a variable to true and checking the resulting subformula), using O(n) queries where n is the number of variables, all within polynomial time. This leverages the structure of the problem to search for a satisfying assignment via oracle calls.[6][18]

Turing polynomial-time reductions generalize many-one reductions, as any many-one reduction can be simulated by a Turing reduction that makes a single query whose answer directly determines acceptance. The converse fails, however: there exist languages, such as certain sparse sets (with at most polynomially many strings up to each length n), to which some languages are Turing reducible but not many-one reducible in polynomial time. Non-deterministic variants exist, in which the oracle machine is non-deterministic but still runs in polynomial time, leading to relativized complexity classes like \mathrm{NP}^B. These reductions are transitive and form a preorder on languages, enabling comparisons of computational hardness beyond what many-one reductions capture.[6]
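The adaptive, self-reducibility-based procedure sketched above has a direct procedural reading. In the following minimal Python sketch, `sat_oracle` is a caller-supplied stand-in for the CNF-SAT oracle (assumed to accept the empty formula), and formulas use the clause-tuple encoding from the earlier example; each query depends on the answers to earlier queries, and one query is issued per variable.

```python
# A minimal sketch of search-via-decision for SAT using an oracle, under the
# assumptions stated in the lead-in; names and encodings are illustrative.

def fix_variable(clauses, var, value):
    """Simplify a CNF formula (list of literal tuples) under var := value."""
    lit_true = var if value else -var
    simplified = []
    for clause in clauses:
        if lit_true in clause:
            continue                              # clause already satisfied
        reduced = tuple(l for l in clause if l != -lit_true)
        if not reduced:
            return None                           # empty clause: branch unsatisfiable
        simplified.append(reduced)
    return simplified

def solve_with_sat_oracle(clauses, n_vars, sat_oracle):
    """Adaptive Turing reduction: O(n_vars) oracle queries overall."""
    if not sat_oracle(clauses):
        return None                               # formula is unsatisfiable
    assignment = {}
    for var in range(1, n_vars + 1):
        fixed_true = fix_variable(clauses, var, True)
        if fixed_true is not None and sat_oracle(fixed_true):
            assignment[var], clauses = True, fixed_true
        else:   # some satisfying assignment of the current formula sets var false
            assignment[var], clauses = False, fix_variable(clauses, var, False)
    return assignment
```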
Advanced Types
Truth-table polynomial-time reductions
Truth-table polynomial-time reductions represent a non-adaptive form of oracle access positioned between many-one and Turing reductions in computational complexity theory. In this reduction, a polynomial-time Turing machine first computes a fixed set of queries to the oracle set B and specifies a Boolean function—encoded as a truth table—that determines membership in the original set A solely from the oracle's responses to those queries, without any further interaction. This ensures that the entire query process is predetermined and parallelizable, distinguishing it from adaptive methods that depend on intermediate results.[19]

Formally, a language A is polynomial-time truth-table reducible to B, denoted A \leq_p^{tt} B, if there exist polynomial-time computable functions g and e such that for every input x, x \in A if and only if e(g(x), \chi_B) = 1, where \chi_B is the characteristic function of B. The function g(x) outputs a list of queries q_1, \dots, q_k (with k polynomial in |x|) and an encoding of a Boolean formula \alpha over k variables, while e evaluates \alpha on the vector of oracle answers (\chi_B(q_1), \dots, \chi_B(q_k)). Equivalently, for queries q_1, \dots, q_k generated by a polynomial-time function and a Boolean formula \alpha specified by g(x), x \in A \iff \alpha(q_1 \in B, \dots, q_k \in B) = \text{true}, where \alpha is evaluated in polynomial time. The oracle's responses thus form a binary answer vector—the membership bits of the query strings in B—and this vector is the only information about B that the evaluator uses. These reductions are computable by polynomial-time functions that output the queries and the truth table directly, ensuring the process remains efficient.[19]

A representative example is the reduction of languages in coNP to NP-complete problems like SAT. For the coNP-complete language UNSAT (unsatisfiability of CNF formulas), a truth-table reduction to SAT queries SAT on the input formula \phi as the single query q_1 = \phi, with the evaluator defined as the negation of the oracle answer, \neg (q_1 \in \text{SAT}). The evaluator thus inverts the oracle response, confirming that \phi is unsatisfiable if and only if the query returns false, and placing coNP among the languages polynomial-time truth-table reducible to NP. Such reductions preserve parallelism: because the queries are fixed in advance, they can be posed simultaneously in parallel models like NC, without sequential dependencies on answers, enabling efficient parallel computation of oracle-dependent decisions.[19][20]

Key properties include non-adaptivity, where all queries are computed upfront before any oracle consultation, and closure under composition, ensuring transitivity: if A \leq_p^{tt} B and B \leq_p^{tt} C, then A \leq_p^{tt} C. Truth-table reductions are also equivalent to polynomial-time reductions augmented with oracle circuit evaluations, where the Boolean formula is realized as a polynomial-size circuit taking the oracle answers as inputs to compute the final decision. This circuit-based view highlights their utility in circuit complexity and parallel query models.[19][21]
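The (g, e) formulation above has a straightforward procedural reading: all queries and the evaluator are produced before the oracle is consulted, and the answer vector is combined in one non-adaptive step. In the minimal Python sketch below, `g` and `oracle` are caller-supplied stand-ins (illustrative names, not a standard interface); `g_unsat` instantiates the UNSAT-to-SAT example with a single query and a negation as the evaluator.

```python
# A minimal sketch of a polynomial-time truth-table reduction, under the
# assumptions stated in the lead-in.

from typing import Callable

def truth_table_reduce(x, g, oracle: Callable[[object], bool]) -> bool:
    queries, evaluator = g(x)                 # all queries fixed before any oracle call
    answers = [oracle(q) for q in queries]    # non-adaptive; could be asked in parallel
    return evaluator(answers)                 # membership decided from the answer vector

# UNSAT <=_tt SAT: the single query is the formula itself; the evaluator negates.
def g_unsat(phi):
    return [phi], (lambda answers: not answers[0])

# usage: truth_table_reduce(phi, g_unsat, sat_oracle) decides unsatisfiability of phi
```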
Disjunctive truth-table reductions
Disjunctive truth-table reductions, also known as dtt-reductions, are a restricted form of truth-table reductions in computational complexity theory, in which the decision about membership in the source language depends on the disjunctive (OR) combination of the oracle query outcomes. Formally, a language A is polynomial-time disjunctive truth-table reducible to a language B (denoted A \leq_{\text{dtt}}^p B) if there exists a polynomial-time Turing machine M that, on input x, non-adaptively outputs a list of at most polynomially many strings y_1, y_2, \dots, y_k (with k = |x|^{O(1)}) such that x \in A if and only if there exists at least one i (for 1 \leq i \leq k) with y_i \in B. This reduction type was introduced as part of a broader comparison of polynomial-time reducibilities, emphasizing non-adaptive query models with specific acceptance conditions.[22]

Unlike general truth-table reductions, which allow arbitrary Boolean functions (truth tables) on the oracle responses, disjunctive truth-table reductions limit the computation to a simple OR over the yes/no answers, prohibiting negation or more complex combinations. This makes them strictly weaker than Turing reductions but stronger than many-one reductions, as multiple parallel queries are permitted without adaptation. Such reductions preserve polynomial-time decidability: if B is decidable in polynomial time (even relative to an oracle), then so is any A that dtt-reduces to B, relative to the same oracle.[22]

Disjunctive truth-table reductions play a key role in characterizing levels of the boolean hierarchy within \Delta_2^p, particularly in showing closures and separations under sparse oracles. For example, they help define classes like the second level of the boolean hierarchy, involving unions of NP sets and complements. They are also transitive, meaning if A \leq_{\text{dtt}}^p B and B \leq_{\text{dtt}}^p C, then A \leq_{\text{dtt}}^p C, which facilitates composing reductions in proofs of completeness for classes like UP or FewP. Seminal results demonstrate that sets reducible via dtt-reductions to sparse sets often imply collapses in the polynomial hierarchy, underscoring their structural importance.[22][23]
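Because the evaluator is fixed to a single OR, a dtt-reduction needs only a query generator. The minimal sketch below records the acceptance condition; `generate_queries` and `oracle` are hypothetical stand-ins for the polynomial-time query generator and for membership in B.

```python
# A minimal sketch of the dtt acceptance condition, with hypothetical
# caller-supplied components as noted in the lead-in.

def dtt_accepts(x, generate_queries, oracle) -> bool:
    queries = generate_queries(x)              # at most poly(|x|) strings, fixed up front
    return any(oracle(q) for q in queries)     # x in A  iff  some query lands in B
```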
Properties and Characterizations
Transitivity and composition
Polynomial-time reductions, whether many-one, truth-table, or Turing, exhibit the property of transitivity. Specifically, if A \leq_p^m B and B \leq_p^m C, then A \leq_p^m C; the same holds for truth-table reductions (A \leq_p^{tt} B and B \leq_p^{tt} C imply A \leq_p^{tt} C) and Turing reductions (A \leq_p^T B and B \leq_p^T C imply A \leq_p^T C).[5]

This transitivity arises from the composition of reductions, which preserves polynomial-time computability. For many-one reductions, suppose there exists a polynomial-time function f' such that x \in A if and only if f'(x) \in B, and a polynomial-time function g such that y \in B if and only if g(y) \in C. The composed function f = g \circ f' satisfies x \in A if and only if f(x) \in C, and f is computable in polynomial time: if f' runs in O(n^k) time and g in O(m^l) time, then |f'(x)| = O(n^k), so evaluating g on f'(x) takes O(n^{k l}) time, which remains polynomial.[5]

For truth-table reductions, composition involves non-adaptive queries: the truth table for A relative to C is obtained by substituting, into each query made by the A-to-B reduction, the queries and truth table of the B-to-C reduction, keeping the total number of queries and the computation time polynomial. Turing reductions compose similarly: a polynomial-time oracle machine M_A for A with oracle B is modified to answer each oracle call to B by running a polynomial-time oracle machine M_B for B with oracle C; although this may multiply the number of queries (each query to B expands into several queries to C), the overall time remains polynomial because both machines bound their queries and per-query work polynomially.[5]

A classic example of transitivity in action is the chain of many-one reductions establishing NP-completeness: 3-SAT \leq_p^m Hamiltonian Cycle \leq_p^m TSP. Karp showed that such reductions propagate hardness from SAT (proven complete by Cook) to graph and optimization problems, enabling proofs of completeness via composition without constructing direct reductions from the base problem.[5]

In non-uniform settings, such as reductions using polynomial-sized advice, transitivity may fail, as the advice for the composed reduction might not be derivable in polynomial time from the individual advice strings.
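For many-one reductions, the composition argument is literally function composition, and the same composed function turns a decider for the target problem into a decider for the source. The brief sketch below records this; all arguments are caller-supplied placeholders assumed to be polynomial-time procedures.

```python
# A minimal sketch of composing many-one reductions; the functions passed in
# are assumed to be polynomial-time computable.

def compose_reductions(f_prime, g):
    """If f_prime reduces A to B and g reduces B to C, this reduces A to C."""
    return lambda x: g(f_prime(x))

def decide_via_reduction(reduction, decide_target):
    """Combine A <=_p^m C with a decider for C to obtain a decider for A."""
    return lambda x: decide_target(reduction(x))
```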
Degrees and equivalence
Polynomial-time reductions induce a preorder on the collection of decision problems, where problem A is less than or equal to problem B (denoted A ≤_p B) if there exists a polynomial-time reduction from A to B, using either many-one or Turing reductions. This preorder yields a partial order on the quotient structure known as the polynomial-time degrees, capturing the relative computational hardness of problems up to polynomial-time equivalence.[24]

Two problems A and B are said to be p-equivalent (A ≡_p B) if A ≤_p B and B ≤_p A, partitioning the problems into equivalence classes, the degrees. A prominent example is the class of NP-complete problems, all of which are p-equivalent under many-one polynomial-time reductions, as each reduces to any other via a canonical complete problem like SAT.[7] The polynomial hierarchy, defined via alternating quantifiers over polynomial-time predicates, relativizes under Turing reductions, meaning its levels Σ_k^p and Π_k^p are preserved when computations are relativized to any oracle.[25] Degrees under polynomial-time reductions also separate P from EXP, as the relativized time hierarchy theorem ensures that for any oracle A, there exist languages in EXP^A not in P^A, establishing distinct degrees across these classes.

Ladner's theorem demonstrates that if P ≠ NP, then the structure of degrees within NP is rich: there exist infinitely many distinct NP degrees strictly between the degree of P and the NP-complete degree, yielding NP-intermediate problems, and the resulting degree structure is not totally ordered. Examples of such structural richness include sparse sets, which possess distinct degrees under various polynomial-time reductions, as equivalence to a sparse set implies reducibility to another sparse set for certain reducibilities like bounded-truth-table.[24] Similarly, tally languages (unary languages) exhibit a hierarchy of degrees under truth-table reductions, where Kolmogorov complexity measures distinguish classes like E_p^m(TALLY) from higher truth-table degrees.[26] In general, polynomial-time degrees characterize oracle separations, in the sense that the degree of a set A determines relativized relationships such as P^A versus NP^A, and different relativized worlds can collapse or separate these classes.

The study of these degrees and equivalences developed in the 1970s and 1980s, with foundational work by Ladner on the structure of NP degrees and extensions by Selman on enumeration reducibility, alongside Hemachandra's contributions to structural complexity and reducibility notions like positive and monotone reductions.[27][28]
Applications
Establishing completeness
In computational complexity theory, a language A is complete for a complexity class \mathcal{C} under polynomial-time reductions if A \in \mathcal{C} and every language B \in \mathcal{C} is polynomial-time reducible to A, denoted B \leq_p^r A, where r specifies the reduction type—typically many-one for classes like NP.[6]

The foundational result establishing completeness in NP is the Cook-Levin theorem, which proves that the satisfiability problem for Boolean formulas in conjunctive normal form (SAT) is NP-complete; Cook's original formulation used polynomial-time Turing reductions. Specifically, for any nondeterministic Turing machine M deciding a language in NP within polynomial time p(n), there is a polynomial-time construction that maps an input x to a CNF formula \phi of size polynomial in p(n) such that M accepts x if and only if \phi is satisfiable; the construction simulates M's computation by encoding configurations into Boolean variables and clauses that enforce valid transitions, so that the formula is satisfiable exactly when some accepting computation of M on x exists.[6]

Building on this, Richard Karp demonstrated that 21 combinatorial problems, including Clique, Vertex Cover, and Hamiltonian Cycle, are NP-complete under many-one polynomial-time reductions from SAT or 3-SAT (the restriction to clauses with at most three literals). These reductions transform instances of the known complete problem into equivalent instances of the target problem in polynomial time, preserving satisfiability and thereby transferring hardness.[29]

To establish that a language L is NP-complete, one must prove membership in NP—typically by exhibiting a polynomial-time verifier—and NP-hardness, typically by reducing a known complete problem such as 3-SAT to L. For example, in the many-one reduction from 3-SAT to Subset Sum, given a 3-CNF formula with n variables and m clauses, create for each variable x_i two numbers: a_i = 10^{m+i} + \sum_{j: c_j \text{ contains } x_i} 10^j (chosen when x_i is true) and b_i = 10^{m+i} + \sum_{j: c_j \text{ contains } \bar{x}_i} 10^j (chosen when x_i is false); add slack numbers c_j = d_j = 10^j for j = 1 to m; and set the target sum W = \sum_{i=1}^n 10^{m+i} + 3 \sum_{j=1}^m 10^j. A subset sums to W if and only if the formula is satisfiable: the high digits (positions m+1 to m+n) force exactly one choice per variable, and each low digit (positions 1 to m) must sum to 3, which the two slack numbers alone (contributing at most 2) cannot achieve, so at least one chosen literal must satisfy each clause. The construction is computable in polynomial time.[30]

Formally, for NP, a language L is NP-complete if L \in NP and for all languages B \in NP, B \leq_p^m L. A key implication is that if any NP-complete problem lies in P, then P = NP, as the reductions would imply all of NP is solvable in polynomial time.[6]
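The 3-SAT-to-Subset-Sum construction above is mechanical enough to write out. The minimal sketch below builds the numbers in base 10 exactly as described, with clauses encoded as tuples of nonzero integers (positive literal x_i as i, negated literal as -i); this encoding is an illustrative assumption.

```python
# A minimal sketch of the 3-SAT-to-Subset-Sum reduction described above.

def three_sat_to_subset_sum(clauses, n_vars):
    m = len(clauses)
    numbers = []
    for i in range(1, n_vars + 1):
        a_i = 10 ** (m + i) + sum(10 ** j for j, c in enumerate(clauses, 1) if i in c)
        b_i = 10 ** (m + i) + sum(10 ** j for j, c in enumerate(clauses, 1) if -i in c)
        numbers += [a_i, b_i]                 # pick a_i for x_i true, b_i for x_i false
    for j in range(1, m + 1):
        numbers += [10 ** j, 10 ** j]         # slack numbers c_j and d_j
    target = (sum(10 ** (m + i) for i in range(1, n_vars + 1))
              + 3 * sum(10 ** j for j in range(1, m + 1)))
    return numbers, target                    # some subset sums to target iff satisfiable
```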
Defining relativized complexity classes
Relativized complexity classes extend standard complexity classes by incorporating oracle access, allowing Turing machines to query an auxiliary oracle set A \subseteq \{0,1\}^* for membership at unit cost. Formally, for a complexity class \mathcal{C}, the relativized version \mathcal{C}^A consists of all languages L such that there exists a machine M of the kind defining \mathcal{C} that decides L when augmented with oracle access to A. In particular, \mathrm{P}^A = \{ L \mid \exists \text{ deterministic poly-time TM } M \text{ s.t. } M^A \text{ decides } L \}, and \mathrm{NP}^A = \{ L \mid \exists \text{ nondeterministic poly-time TM } N \text{ s.t. } N^A \text{ decides } L \}, where the machine may branch nondeterministically but each oracle query along a branch is answered deterministically. This framework captures how additional computational resources affect class inclusions and separations.[31]

Polynomial-time Turing reductions are central to the relativized setting, as they induce monotonicity in class inclusions. If A \leq_p^T B, meaning there is a deterministic poly-time oracle machine that decides membership in A using oracle access to B, then \mathrm{P}^A \subseteq \mathrm{P}^B, \mathrm{NP}^A \subseteq \mathrm{NP}^B, and more generally \mathrm{MA}^A \subseteq \mathrm{MA}^B for the Merlin-Arthur class \mathrm{MA}, where the verifier uses probabilistic polynomial-time computation with oracle queries after receiving a Merlin proof. These inclusions arise because any computation relative to A can simulate its oracle queries by invoking the reduction to B, preserving polynomial-time bounds. Such properties enable the construction of oracles that create barriers to proving unrelativized separations, demonstrating that standard proof techniques may fail outside the relativized world.[31]

The Baker-Gill-Solovay theorem illustrates the flexibility of relativization by constructing explicit oracles that yield contrasting behaviors for relativized classes. There exists an oracle A such that \mathrm{P}^A = \mathrm{NP}^A, constructed by making A powerful enough (for instance, PSPACE-complete) to resolve nondeterministic choices in polynomial time; an oracle B such that \mathrm{P}^B \neq \mathrm{NP}^B, achieved by encoding a hard language into B while limiting its utility for nondeterminism; and an oracle C such that \mathrm{P}^C = \mathrm{NP}^C = \mathrm{coNP}^C, where C allows efficient decision of both a language and its complement. These results, from 1975, show that both the equality \mathrm{P} = \mathrm{NP} and its negation relativize, implying that resolving \mathrm{P} vs. \mathrm{NP} requires non-relativizing techniques.[31]

The polynomial hierarchy (PH) relativizes in a structured manner, preserving its levels under oracle access. The k-th level is defined as \Sigma_k^A = \{ L \mid \exists \text{ poly-time relation } R \text{ s.t. } x \in L \iff \exists y_1 \forall y_2 \exists y_3 \cdots Q y_k \, R^A(x, y_1, \dots, y_k) = 1 \}, where the quantifiers alternate starting with an existential one, each y_i has length polynomial in |x|, and R is a polynomial-time relation evaluated with oracle A; then \mathrm{PH}^A = \bigcup_k \Sigma_k^A = \bigcup_k \Pi_k^A. Polynomial-time reductions maintain these levels in the relativized setting, ensuring that complete problems for \Sigma_k remain complete relative to A. This preservation allows constructions of oracles that separate PH from other classes, such as PSPACE. For example, Furst, Saxe, and Sipser constructed an oracle D such that \mathrm{PH}^D \subsetneq \mathrm{PSPACE}^D, relying on circuit lower bounds for parity functions to ensure the hierarchy does not collapse into PSPACE relative to D.
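The monotonicity argument above—that A \leq_p^T B yields \mathrm{P}^A \subseteq \mathrm{P}^B—amounts to answering every A-query through the reduction. The minimal sketch below records this simulation; `machine_with_oracle` and `reduce_A_to_B` are hypothetical stand-ins for a polynomial-time oracle machine and for the reduction's decision procedure.

```python
# A minimal sketch of simulating an A-oracle machine with a B-oracle, under
# the hypothetical interfaces named in the lead-in.

def relativize_through_reduction(machine_with_oracle, reduce_A_to_B, oracle_B):
    """Turn a machine expecting oracle A into one that only consults oracle B."""
    def oracle_A(query):
        # A <=_p^T B: membership of `query` in A is decided using oracle_B
        return reduce_A_to_B(query, oracle_B)
    return lambda x: machine_with_oracle(x, oracle_A)
```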
Cai extended this probabilistically, showing that for a random oracle E, \mathrm{PH}^E \subsetneq \mathrm{PSPACE}^E with probability 1 over the choice of E. These separations underscore relativization's role in revealing limitations of proof techniques.[32]

Relativized separations also have implications for non-relativizing results. For instance, the equality \mathrm{IP} = \mathrm{PSPACE}, proven by Shamir in 1992 using arithmetization over finite fields, does not relativize: there exist oracles F such that \mathrm{IP}^F \neq \mathrm{PSPACE}^F. This non-relativization arises because interactive proofs exploit global, algebraic properties of computation that oracle models cannot capture uniformly, highlighting the need for techniques beyond reductions and oracles to prove such inclusions. Despite these advances, relativization's limitations persist: it cannot settle core questions like \mathrm{P} vs. \mathrm{NP}, as both outcomes are consistent with relativized worlds.[31]