
Complexity class

In computational complexity theory, a complexity class is a set of computational problems that can be solved within specified bounds on resources such as time or memory, measured as functions of the input size. These classes categorize problems based on their intrinsic computational difficulty, distinguishing tractable ones solvable efficiently from intractable ones requiring infeasible amounts of resources. The theory originated in the 1960s and 1970s, with foundational work on resource-bounded Turing machines providing the formal framework for defining such classes.

The class P comprises all decision problems that can be solved by a deterministic Turing machine in polynomial time, meaning the running time is bounded by O(n^k) for some constant k, where n is the input length. Examples include integer addition, which runs in linear time O(n), and matrix multiplication, achievable in O(n^{2.373}) time using advanced algorithms. Problems in P are considered efficiently solvable on modern computers for practical input sizes.

Another central class is NP, consisting of decision problems for which a yes-instance has a polynomial-length certificate verifiable in polynomial time by a deterministic Turing machine. Introduced by Stephen Cook in his 1971 paper "The Complexity of Theorem-Proving Procedures," NP includes P as a subclass and captures problems like the Boolean satisfiability (SAT) problem, where verifying a satisfying assignment is straightforward but finding one may be hard. The P versus NP question—whether every problem in NP is also in P—remains unsolved and is one of the seven Millennium Prize Problems, with profound implications for fields like cryptography, where NP-hard problems underpin security.

Fundamentals

Computational Problems

Computational problems in the theory of computation are abstract tasks or questions concerning finite mathematical objects, where an input instance is encoded as a finite string over a finite alphabet, and the goal is to compute a specific output or determine a property of the input. These problems serve as the foundational objects studied in computational complexity theory, which seeks to classify them based on the resources required for their solution.

Decision problems constitute the primary focus of complexity theory, representing a subset of computational problems where the output is binary: yes or no, indicating whether the input satisfies a given property. Formally, each decision problem corresponds to a language over a finite alphabet, typically the binary alphabet \{0,1\}, where the language L \subseteq \{0,1\}^* consists of all input strings for which the answer is yes. Thus, solving a decision problem is equivalent to determining membership in such a language. A canonical example of a decision problem is the Boolean satisfiability problem (SAT), which asks whether there exists a truth assignment to the variables of a given Boolean formula that makes the formula true. SAT exemplifies the structure of decision problems and plays a central role in exploring nondeterministic polynomial-time computation.

Beyond decision problems, computational problems encompass other types, including search problems that require producing a witness or solution (if it exists), optimization problems that demand the best solution under a specified objective, and counting problems that seek the number of valid solutions. These distinctions highlight varying output complexities, with decision problems providing the simplest yes/no resolution, while the others build upon similar input structures but require more elaborate responses.
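To make the distinction among these problem types concrete, the brute-force sketch below implements the decision, search, and counting versions of SAT for a tiny formula. The clause encoding (lists of signed integers) and the exhaustive enumeration are illustrative assumptions, not an efficient algorithm.

```python
from itertools import product

# A CNF formula as a list of clauses; each clause is a list of nonzero ints,
# where k means variable k and -k means its negation (an illustrative encoding).
FORMULA = [[1, 2], [-1, 3], [-2, -3]]
NUM_VARS = 3

def satisfies(formula, assignment):
    """Check whether a truth assignment (dict var -> bool) satisfies every clause."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in formula
    )

def decide_sat(formula, num_vars):
    """Decision version: is there at least one satisfying assignment?"""
    return any(
        satisfies(formula, dict(zip(range(1, num_vars + 1), bits)))
        for bits in product([False, True], repeat=num_vars)
    )

def search_sat(formula, num_vars):
    """Search version: return some satisfying assignment, or None."""
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(zip(range(1, num_vars + 1), bits))
        if satisfies(formula, assignment):
            return assignment
    return None

def count_sat(formula, num_vars):
    """Counting version (#SAT): how many satisfying assignments exist?"""
    return sum(
        satisfies(formula, dict(zip(range(1, num_vars + 1), bits)))
        for bits in product([False, True], repeat=num_vars)
    )

print(decide_sat(FORMULA, NUM_VARS))   # True
print(search_sat(FORMULA, NUM_VARS))   # e.g. {1: True, 2: False, 3: True}
print(count_sat(FORMULA, NUM_VARS))    # number of satisfying assignments
```

The three functions share the same input structure but differ exactly in what they output, mirroring the decision/search/counting distinction above.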

Models of Computation

A Turing machine (TM) is a foundational abstract model of computation introduced by Alan Turing in 1936. It consists of an infinite, one-dimensional tape divided into cells that can hold symbols from a finite alphabet Γ (including a blank symbol), a read/write head that scans one cell at a time and can move left or right, a finite set of states Q, and a finite set of transition rules specified by a partial function \delta: Q \times \Gamma \to Q \times \Gamma \times \{L, R\}. The machine starts in an initial state with the input on the tape and the head positioned at the beginning; it halts when it enters a designated halting state, accepting or rejecting the input based on whether it reaches an accepting or rejecting state.

In a deterministic Turing machine (DTM), the transition function \delta is defined such that for every state-symbol pair, there is at most one possible next state, symbol to write, and head direction, ensuring a unique path from any starting configuration. This model captures the essence of sequential computation without branching choices. DTMs form the basis for defining deterministic complexity classes by bounding resources such as the number of steps taken.

A nondeterministic Turing machine (NTM) extends the DTM by allowing the transition function \delta to map each state-symbol pair to a set of possible actions (multiple next states, symbols, and directions) or to include ε-transitions that move the head without reading or writing. Computation proceeds along a tree of possible paths, and the input is accepted if at least one path reaches an accepting state, regardless of other paths. NTMs model computations with inherent nondeterminism, such as guessing solutions. DTMs and NTMs are equivalent in computational power: they recognize exactly the same class of recursively enumerable languages, as any NTM can be simulated by a DTM that systematically explores all possible paths, though this simulation may require exponential time in the worst case.

In complexity theory, other equivalent models include the random-access machine (RAM), which abstracts a computer with direct access to memory registers and is suitable for analyzing algorithms with logarithmic cost per operation, and Boolean circuits, which represent computations as acyclic graphs of gates for nonuniform complexity measures. These models align with Turing machines for defining standard classes, differing mainly in how resources like time are measured.
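The following is a minimal sketch of a single-tape deterministic Turing machine simulator; the transition-table encoding and the example machine (which accepts binary strings ending in 0) are illustrative assumptions rather than a canonical formulation.

```python
def run_dtm(transitions, input_string, start_state="q0",
            accept_state="qacc", reject_state="qrej", blank="_", max_steps=10_000):
    """Simulate a single-tape deterministic Turing machine.

    transitions: dict mapping (state, symbol) -> (new_state, write_symbol, direction),
    with direction "L" or "R". Missing entries send the machine to the reject state.
    """
    tape = dict(enumerate(input_string))   # sparse tape; unwritten cells are blank
    head, state = 0, start_state
    for _ in range(max_steps):
        if state in (accept_state, reject_state):
            return state == accept_state
        symbol = tape.get(head, blank)
        state, write, direction = transitions.get(
            (state, symbol), (reject_state, symbol, "R"))
        tape[head] = write
        head += 1 if direction == "R" else -1
    raise RuntimeError("step bound exceeded")

# Example machine (assumed, for illustration): scan right to the end of the
# input, then accept iff the last symbol was 0.
TRANSITIONS = {
    ("q0", "0"): ("q0", "0", "R"),
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("q1", "_", "L"),
    ("q1", "0"): ("qacc", "0", "R"),
    ("q1", "1"): ("qrej", "1", "R"),
}

print(run_dtm(TRANSITIONS, "100110"))  # True: ends in 0
print(run_dtm(TRANSITIONS, "1011"))    # False: ends in 1
```

Counting the loop iterations of such a simulator is one way to make the step-based resource measures of the next subsection concrete.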

Resource Measures

In computational complexity theory, time complexity measures the computational effort required by a Turing machine to solve a problem, quantified as the number of steps it takes to halt on inputs of a given size. For a Turing machine M that halts on all inputs, the time complexity is defined by the function T: \mathbb{N} \to \mathbb{N}, where T(n) is the maximum number of steps M executes over all inputs of length n. This worst-case measure captures the upper limit of the machine's runtime behavior as the input size grows.

Space complexity, analogously, assesses the memory usage of a Turing machine during computation, focusing on the extent of the work tape accessed rather than the input tape. For a deterministic Turing machine M, the space complexity is given by a function f: \mathbb{N} \to \mathbb{N}, where f(n) represents the maximum number of tape cells scanned by M on any input of length n. For nondeterministic machines, the definition extends to ensure no branch exceeds this bound. This metric is crucial for understanding storage limitations in resource-bounded models.

To bound these resource functions precisely, asymptotic notation provides a framework for describing growth rates without exact constants. The Big-O notation, O(g(n)), denotes an upper bound: a function f(n) is O(g(n)) if there exist constants c > 0 and n_0 > 0 such that f(n) \leq c \cdot g(n) for all n \geq n_0. The Theta notation, \Theta(g(n)), offers a tight bound by combining upper and lower estimates: f(n) = \Theta(g(n)) if f(n) = O(g(n)) and f(n) = \Omega(g(n)), where \Omega indicates a lower bound. These notations abstract away lower-order terms and constants, emphasizing dominant behavior in complexity analysis.

The intuition behind resource hierarchies arises from the principle that increased computational resources enable solving more challenging problems. If a Turing machine is allocated strictly more time or space, it can decide a broader class of languages than one with fewer resources, as the extra capacity allows exploration of more configurations or computations. This separation underscores why complexity classes are structured around varying resource bounds. Formally, a language L belongs to the time complexity class \mathsf{TIME}(T(n)) if there exists a Turing machine deciding L with running time O(T(n)), meaning the machine halts within c \cdot T(n) + c steps for some constant c \geq 1 on inputs of length n. This definition encapsulates problems solvable within the specified time bound, forming the basis for classifying computational difficulty.
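As a worked instance of the Big-O definition above (the particular polynomial and the constants c and n_0 are chosen only for illustration), one can verify that 3n^2 + 5n + 2 = O(n^2):

```latex
% Worked example: f(n) = 3n^2 + 5n + 2 is O(n^2), with c = 4 and n_0 = 6.
\begin{aligned}
f(n) &= 3n^2 + 5n + 2 \\
     &\le 3n^2 + 6n        && (\text{since } 2 \le n \text{ for } n \ge 2) \\
     &\le 3n^2 + n^2       && (\text{since } 6n \le n^2 \text{ for } n \ge 6) \\
     &= 4n^2 ,
\end{aligned}
\qquad \text{so } f(n) \le 4n^2 \text{ for all } n \ge 6 .
```

Since also f(n) \geq 3n^2 for all n, the same function satisfies f(n) = \Omega(n^2) and hence f(n) = \Theta(n^2).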

Deterministic Complexity Classes

Time-Based Classes

Deterministic time complexity classes are defined based on the running time of deterministic Turing machines (DTMs), which follow a single computational path. The class DTIME(f(n)) consists of all decision problems solvable by a DTM in at most f(n) steps on inputs of length n, where f(n) is a time bound function. These classes categorize problems by the computational resources required under deterministic computation. The class P, or deterministic polynomial time, is the union over all constants k of DTIME(n^k), encompassing problems solvable by a DTM in polynomial time. As noted in the introduction, P includes problems like integer addition and matrix multiplication that are efficiently solvable. For larger time bounds, EXPTIME is the union over all constants k of DTIME(2^{n^k}), comprising problems solvable by a DTM in exponential time; this class includes problems like determining the winner in generalized chess on an n x n board. The time hierarchy theorem states that for suitable time-constructible functions f and g with f(n) \log f(n) = o(g(n)), DTIME(f(n)) is strictly contained in DTIME(g(n)), establishing separations like P \subsetneq EXPTIME.

Space-Based Classes

Deterministic space complexity classes are defined based on the space used by deterministic Turing machines on the work tape, excluding the input tape. The class DSPACE(f(n)) consists of all decision problems solvable by a DTM using at most f(n) space for inputs of length n, with the machine required to halt. The class L, or deterministic log-space, denoted DSPACE(O(\log n)), comprises decision problems solvable using O(\log n) space. A simple example is recognizing palindromes: given a string, check if it reads the same forwards and backwards, which can be done by comparing characters from both ends using logarithmic space to track positions. The class PSPACE, or polynomial space, is the union over constants c of DSPACE(n^c), including problems solvable using polynomial space. An example is deciding the truth of quantified Boolean formulas (QBF), which is PSPACE-complete. For exponential space, EXPSPACE is the union over constants c of DSPACE(2^{n^c}). A fundamental result is the space hierarchy theorem, which provides separations like L \subsetneq PSPACE for appropriate bounds. Additionally, Savitch's theorem relates nondeterministic and deterministic space: for space-constructible s(n) \geq \log n, NSPACE(s(n)) \subseteq DSPACE(s(n)^2), implying PSPACE = NPSPACE.
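The palindrome example can be sketched as follows: the code tracks only two indices (O(\log n) bits of working memory) and never copies the input, mirroring the read-only-input, small-work-tape convention. The Python framing with a symbol-access callback is an illustrative assumption.

```python
def is_palindrome(read_symbol, length):
    """Decide whether a read-only input of the given length is a palindrome.

    read_symbol(i) returns the i-th input symbol; the only working storage
    is the pair of indices (left, right), i.e. O(log n) bits.
    """
    left, right = 0, length - 1
    while left < right:
        if read_symbol(left) != read_symbol(right):
            return False
        left += 1
        right -= 1
    return True

text = "racecar"
print(is_palindrome(lambda i: text[i], len(text)))   # True
print(is_palindrome(lambda i: "abca"[i], 4))         # False
```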

Nondeterministic Complexity Classes

Time-Based Classes

Nondeterministic time complexity classes are defined based on the running time of nondeterministic Turing machines (NTMs), which can explore multiple computational paths in parallel. The class NTIME(f(n)) consists of all decision problems solvable by an NTM in at most f(n) steps on inputs of length n, where f(n) is a time bound function. These classes capture problems where solutions can be verified efficiently, even if finding them is hard, by leveraging nondeterminism to guess potential solutions and then checking them deterministically.

The class NP, or nondeterministic polynomial time, is the union over all constants k of NTIME(n^k), encompassing problems solvable by an NTM in polynomial time. Equivalently, a language is in NP if there exists a deterministic verifier that runs in polynomial time and accepts an input paired with a polynomial-length certificate when the input is in the language, and rejects otherwise; this verifier model highlights NP's role in efficient verification. It is known that P ⊆ NP, as deterministic polynomial-time computations are a special case of nondeterministic ones. A prominent example of a problem in NP is the decision version of the Traveling Salesman Problem (TSP), which asks whether there exists a tour visiting each of n cities exactly once and returning to the start with total distance at most k; given a proposed tour as a certificate, its validity and length can be verified in O(n^2) time.

NP-completeness identifies the hardest problems in NP under polynomial-time reductions. The Cook-Levin theorem establishes that the Boolean satisfiability problem (SAT)—determining if there exists an assignment of truth values to variables making a given Boolean formula true—is NP-complete, as any problem in NP reduces to SAT in polynomial time; SAT was the first such problem identified, in 1971. For larger time bounds, NEXPTIME is the union over all constants k of NTIME(2^{n^k}), comprising problems solvable by an NTM in exponential time; this class extends NP analogously to how EXPTIME extends P in the deterministic setting.
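The verifier view of NP can be sketched for the TSP decision problem: given an instance (distance matrix and bound) and a proposed tour as the certificate, the check runs in polynomial time. The encoding of instances and certificates below is an illustrative assumption.

```python
def verify_tsp_tour(distances, bound, tour):
    """Polynomial-time verifier for the TSP decision problem.

    distances: n x n matrix of nonnegative distances (distances[i][j]).
    bound: maximum allowed total tour length.
    tour: proposed certificate, a permutation of 0..n-1 giving the visiting order.
    Accepts iff the certificate is a valid tour of total length <= bound.
    """
    n = len(distances)
    if sorted(tour) != list(range(n)):      # must visit every city exactly once
        return False
    total = sum(distances[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return total <= bound

DIST = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(verify_tsp_tour(DIST, 21, [0, 1, 3, 2]))  # True: length 2 + 4 + 3 + 9 = 18
print(verify_tsp_tour(DIST, 15, [0, 1, 3, 2]))  # False: 18 > 15
```

The verifier only checks a supplied certificate; nothing in it searches for a tour, which is exactly the asymmetry between verification and solution that NP formalizes.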

Space-Based Classes

Nondeterministic space complexity classes are defined analogously to their deterministic counterparts but allow nondeterministic choices during computation, measured by the space used on the work tape. The class NL, denoted NSPACE(O(\log n)), comprises decision problems solvable by a nondeterministic Turing machine that uses at most O(\log n) space for inputs of length n, with the machine required to halt on all branches. A canonical example in NL is the st-connectivity problem (STCON), which asks whether there is a directed path from vertex s to vertex t in a given directed graph; STCON is complete for NL under log-space reductions. The class NPSPACE, or NSPACE(n^{O(1)}), includes problems solvable nondeterministically using polynomial space, specifically the union over constants c of NSPACE(n^c). Similarly, NEXPSPACE, defined as NSPACE(2^{n^{O(1)}}), captures problems solvable nondeterministically in exponential space, forming the union over constants c of NSPACE(2^{n^c}). A key result relating nondeterministic and deterministic space classes is Savitch's theorem, proved in 1970, which states that if s(n) is a space-constructible function with s(n) \geq \log n, then NSPACE(s(n)) \subseteq DSPACE(s(n)^2). This simulation shows that nondeterminism provides at most a quadratic increase in space requirements. Consequently, for polynomial space bounds, NPSPACE \subseteq PSPACE; combined with the trivial inclusion PSPACE \subseteq NPSPACE, it follows that NPSPACE = PSPACE. An analogous argument yields NEXPSPACE = EXPSPACE.
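Savitch's simulation rests on a recursive midpoint search for reachability: a path of length at most k from u to v exists iff some midpoint w is reachable from u and reaches v in at most ⌈k/2⌉ steps each. The sketch below applies this idea to STCON; the recursion depth is O(\log n) and each level stores only a constant number of vertices, which is the source of the squared space bound. The Python framing and graph encoding are illustrative assumptions, and the running time (unlike the space) is not polynomial.

```python
import math

def savitch_reach(adj, s, t):
    """Decide st-connectivity with a Savitch-style recursion.

    adj: adjacency-list dict {vertex: set_of_neighbors}.
    Returns True iff t is reachable from s. The recursion has depth
    O(log n) and each frame holds O(1) vertices, mirroring the
    O(log^2 n) space bound of Savitch's theorem.
    """
    n = len(adj)
    steps = 1 << math.ceil(math.log2(max(n, 2)))   # path-length bound, power of two

    def reach(u, v, k):
        # Is there a path from u to v of length at most k?
        if u == v:
            return True
        if k == 1:
            return v in adj[u]
        return any(reach(u, w, k // 2) and reach(w, v, k - k // 2) for w in adj)

    return reach(s, t, steps)

GRAPH = {0: {1}, 1: {2}, 2: {3}, 3: set()}
print(savitch_reach(GRAPH, 0, 3))  # True: 0 -> 1 -> 2 -> 3
print(savitch_reach(GRAPH, 3, 0))  # False: no edges leave 3
```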

Properties of Complexity Classes

Closure Operations

Closure operations in complexity classes refer to the properties that determine whether applying certain language operations—such as union, intersection, concatenation, or complement—to languages within a class results in a language still belonging to that class. These properties provide insight into the structural robustness of the classes and are fundamental to understanding their behavior under basic set operations. For instance, deterministic time-bounded classes like \mathbf{DTIME}(T(n)) exhibit closure under union when the time bound T(n) is closed under addition, meaning that if L_1, L_2 \in \mathbf{DTIME}(T(n)), then L_1 \cup L_2 \in \mathbf{DTIME}(T(n)) by simulating the two machines sequentially, with the total time bounded by T(n) + T(n), which remains within the class if T satisfies the necessary monotonicity and addition conditions.

The class \mathbf{P}, consisting of languages decidable in polynomial time by deterministic Turing machines, is closed under complement, union, intersection, and concatenation. Closure under complement follows directly from the deterministic nature of the computations, where the machine can simply invert its acceptance decision without exceeding the polynomial time bound. For union and intersection, a single machine can run both deciders and combine their answers within polynomial time, while concatenation can be handled by sequentially simulating the two machines on the pieces of each of the polynomially many splits of the input, preserving the overall polynomial bound.

In contrast, the nondeterministic polynomial-time class \mathbf{NP} is closed under union and intersection but is not known to be closed under complement, with the complement class denoted \mathbf{coNP}. For union, a nondeterministic machine can guess a witness for either language and verify it in polynomial time; similarly for intersection, it verifies witnesses for both. However, if \mathbf{NP} were closed under complement, then \mathbf{NP} = \mathbf{coNP}, which remains an open question and would imply significant collapses in the polynomial hierarchy.

Logarithmic space classes also demonstrate notable closure behaviors. The deterministic logarithmic space class \mathbf{L} is closed under complement, as deterministic space-bounded machines can reverse their acceptance by flipping the output bit while reusing the same space. For the nondeterministic variant \mathbf{NL}, closure under complement was established by the Immerman–Szelepcsényi theorem, which shows \mathbf{NL} = \mathbf{coNL} through an inductive counting argument that counts reachable configurations without using more than logarithmic space.

Finally, the class \mathbf{PSPACE} is closed under complement. Since \mathbf{PSPACE} is a deterministic class, its machines can simply invert their acceptance decisions; by Savitch's theorem, which equates nondeterministic and deterministic polynomial space (\mathbf{NPSPACE} = \mathbf{PSPACE}), the same conclusion extends to nondeterministic polynomial space, since the deterministic simulation squares the space bound but remains polynomial.
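The closure constructions for \mathbf{P} can be sketched as compositions of deciders. The helper names below are hypothetical, and the deciders are represented simply as Python predicates assumed to run in polynomial time; the toy languages at the end are illustrative assumptions.

```python
def complement(decider):
    """If L is decided in polynomial time, so is its complement: flip the answer."""
    return lambda x: not decider(x)

def union(decider_a, decider_b):
    """Run both deciders and accept if either accepts; still polynomial time."""
    return lambda x: decider_a(x) or decider_b(x)

def intersection(decider_a, decider_b):
    """Run both deciders and accept if both accept; still polynomial time."""
    return lambda x: decider_a(x) and decider_b(x)

def concatenation(decider_a, decider_b):
    """Accept x iff some split x = yz has y in A and z in B.

    There are only |x| + 1 splits, so the combined decider makes
    polynomially many polynomial-time calls.
    """
    return lambda x: any(
        decider_a(x[:i]) and decider_b(x[i:]) for i in range(len(x) + 1)
    )

# Toy example over binary strings (assumed languages, for illustration):
# A = strings of even length, B = strings consisting only of '1's.
even_length = lambda s: len(s) % 2 == 0
all_ones = lambda s: set(s) <= {"1"}

print(union(even_length, all_ones)("111"))          # True (all ones)
print(intersection(even_length, all_ones)("111"))   # False (odd length)
print(concatenation(even_length, all_ones)("0011")) # True: "00" in A, "11" in B
print(complement(even_length)("10"))                # False
```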

Reductions

In computational complexity theory, reductions provide a formal framework for establishing relative computational difficulty between decision problems by transforming instances of one problem into instances of another while preserving membership status, typically under resource constraints such as polynomial time or logarithmic space. These transformations enable the classification of problems based on their hardness within specific complexity classes, facilitating proofs of equivalence or intractability.

A polynomial-time many-one reduction (also called a Karp reduction) from a language A to a language B is a function f computable in polynomial time such that for every input x, x \in A if and only if f(x) \in B. This type of reduction maps each instance of A directly to a single instance of B without intermediate queries, preserving the structure of the problem in a straightforward manner; it was used as a key tool for demonstrating NP-completeness in Richard Karp's 1972 paper on combinatorial problems. In contrast, a polynomial-time Turing reduction (or Cook reduction) from A to B allows a polynomial-time algorithm for A to make adaptive queries to an oracle that decides B, potentially solving A by accessing multiple instances of B; this more flexible notion, which permits oracle calls during computation, originates from Stephen Cook's 1971 work on theorem-proving complexity.

A language A is hard for a complexity class \mathcal{C} (under polynomial-time reductions) if every language in \mathcal{C} reduces to A, meaning A is at least as difficult as the hardest problems in \mathcal{C}. A language is complete for \mathcal{C}, or \mathcal{C}-complete, if it is both in \mathcal{C} and \mathcal{C}-hard; for instance, the 3SAT problem—determining satisfiability of a Boolean formula in conjunctive normal form with exactly three literals per clause—is NP-complete under polynomial-time many-one reductions, a result that follows from Cook's 1971 proof for satisfiability. In space-bounded settings, log-space reductions are employed for classes such as L (deterministic log-space) and NL (nondeterministic log-space); these are functions computable by a deterministic Turing machine using O(\log n) space on its work tapes (with read-only input and write-only output tapes), ensuring the reduction does not inflate space requirements significantly.
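The two reduction styles can be contrasted as interfaces: a many-one reduction transforms the instance once and asks a single membership question, while a Turing reduction may post-process or adaptively combine oracle answers. The sketch below is schematic; the toy languages (A = binary strings with an even number of 1s, B = binary strings of even length) and the function names are illustrative assumptions.

```python
# Toy language for illustration only: B = binary strings of even length.
in_B = lambda s: len(s) % 2 == 0

def karp_reduction(x):
    """Many-one (Karp) reduction from A to B, where A = strings with an
    even number of 1s: map x to '1' * (number of 1s in x), so that
    x in A  <=>  karp_reduction(x) in B."""
    return "1" * x.count("1")

def decide_A(x):
    # A single transformed instance and a single membership question.
    return in_B(karp_reduction(x))

def decide_complement_of_B(x, oracle_for_B):
    """Turing (Cook) reduction from the complement of B to B: the algorithm
    may post-process (here, negate) the oracle's answer, something a
    many-one reduction cannot do directly."""
    return not oracle_for_B(x)

print(decide_A("10110"))                        # False: three 1s
print(decide_A("1001"))                         # True:  two 1s
print(decide_complement_of_B("101", in_B))      # True:  odd length
```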

Relationships and Hierarchies

Inclusion Relations

The inclusion relations among key complexity classes establish a foundational hierarchy in computational complexity theory, reflecting the relative computational power of deterministic and nondeterministic models bounded by time and space resources. Specifically, the deterministic logarithmic space class L is contained in the nondeterministic logarithmic space class NL, NL is contained in the deterministic polynomial time class P, P is contained in the nondeterministic polynomial time class NP, NP is contained in the deterministic polynomial space class PSPACE, PSPACE equals the nondeterministic polynomial space class NPSPACE (by Savitch's theorem, which implies that nondeterministic polynomial space can be simulated deterministically within polynomial space), PSPACE is contained in the deterministic exponential time class EXPTIME, EXPTIME is contained in the nondeterministic exponential time class NEXPTIME, and NEXPTIME is contained in the deterministic exponential space class EXPSPACE. In summary, L \subseteq NL \subseteq P \subseteq NP \subseteq PSPACE = NPSPACE \subseteq EXPTIME \subseteq NEXPTIME \subseteq EXPSPACE.

Further inclusions link space-bounded classes to time-bounded ones: NL \subseteq P holds because any logarithmic-space computation has at most polynomially many configurations, so it can be simulated in polynomial time by searching the configuration graph, and the Immerman–Szelepcsényi theorem demonstrates that nondeterministic logarithmic space is closed under complementation (i.e., NL = coNL), so coNL \subseteq P as well.

Despite these established inclusions, several equalities remain open problems central to the field. The question of whether P = NP is a Millennium Prize Problem, as solving it would resolve whether nondeterminism provides any advantage for polynomial-time computation. Similarly, whether L = NL and whether P = PSPACE are unresolved, with the former probing the power of nondeterminism in logarithmic space and the latter examining the gap between time and space in polynomial bounds.

Separation Theorems

Separation theorems in computational complexity theory establish strict inclusions between complexity classes, demonstrating that additional computational resources enable the solution of strictly more problems. These results, primarily obtained through diagonalization arguments, show that hierarchies exist among time and space classes, ruling out collapses under certain resource bounds. They provide foundational evidence that complexity classes form infinite ascending chains rather than collapsing into finitely many levels.

The time hierarchy theorem asserts that for time-constructible functions f(n) and g(n) satisfying f(n) \log f(n) = o(g(n)), the deterministic time class \mathsf{DTIME}(f(n)) is a proper subset of \mathsf{DTIME}(g(n)). This separation arises from a diagonalization construction in which a machine simulates all machines within the lower time bound and differs from each of them on some input, requiring slightly more time to account for the simulation overhead. The logarithmic factor accounts for the cost of universal simulation on multitape Turing machines.

In contrast, the space hierarchy theorem achieves separations without such overhead for sufficiently large space bounds. Specifically, for space-constructible functions s(n) and t(n) with s(n) \geq \log n and s(n) = o(t(n)), \mathsf{DSPACE}(s(n)) is a proper subset of \mathsf{DSPACE}(t(n)). A concrete instance is \mathsf{DSPACE}(n) \subsetneq \mathsf{DSPACE}(n^2), highlighting that quadratic space solves problems beyond linear space capabilities. This stronger result stems from more efficient space-bounded simulation techniques that avoid the logarithmic penalty present in time hierarchies.

These theorems yield key implications for standard classes, such as \mathsf{P} \subsetneq \mathsf{EXPTIME}, since polynomial time with logarithmic factors is asymptotically smaller than exponential time, and \mathsf{L} \subsetneq \mathsf{PSPACE}, since logarithmic space is o(n^k) for any constant k > 0. Such separations underscore the robustness of resource-based hierarchies in classical computation models.

Alternative Computation Models

Randomized Models

Randomized Turing machines extend the standard deterministic model by incorporating access to a source of unbiased random bits, allowing the machine to flip fair coins during computation to make probabilistic decisions. This randomness enables algorithms that trade certainty for efficiency in certain cases, capturing practical computation where small error probabilities are tolerable. Formally, a randomized Turing machine runs in polynomial time if, for input size n, it halts after at most p(n) steps, where p is a polynomial, and uses at most p(n) random bits.

The class RP (randomized polynomial time) consists of decision problems where a randomized Turing machine accepts inputs in the language with probability at least 2/3 if the answer is yes, and rejects all no-instances with probability 1 (one-sided error). This model suits problems where false positives are avoided, but false negatives may occur with bounded probability, allowing error amplification by repetition to reduce the failure chance exponentially. RP was introduced by John Gill in 1977 to formalize algorithms that err only on one side.

BPP (bounded-error probabilistic polynomial time) generalizes RP to allow two-sided error, where the machine accepts yes-instances and rejects no-instances each with probability at least 2/3. The error bound of 1/3 is arbitrary and can be reduced to 2^{-\ell} for any \ell by running the machine O(\ell) times and taking the majority vote, at only a multiplicative O(\ell) cost in runtime. BPP captures a broad class of practical randomized algorithms, such as those for primality testing.

ZPP (zero-error probabilistic polynomial time) includes languages decided by randomized Turing machines that always output the correct answer when they answer, but may decline to decide with probability at most 1/2; the expected runtime is polynomial. These are known as Las Vegas algorithms, which never err but use randomness to potentially speed up computation on average. ZPP corresponds to the intersection of RP and coRP, emphasizing reliability without error tolerance.

Known inclusions establish ZPP \subseteq RP \subseteq BPP \subseteq PSPACE, reflecting increasing tolerance for error; the first follows from converting the zero-error algorithm to one with one-sided error by running it until it decides or a time limit is reached, while BPP \subseteq PSPACE arises from deterministically simulating the probabilistic machine by enumerating all 2^{p(n)} possible random strings in polynomial space and counting the accepting runs. Whether BPP \subseteq P (that is, whether BPP = P) remains an open question, central to understanding the power of randomness in efficient computation.

Derandomization seeks to show that randomized classes collapse to deterministic ones under certain assumptions. A seminal result of Impagliazzo and Wigderson proves that if there exists a language decidable in exponential time (the class E) that requires exponential-size circuits, then BPP = P, implying efficient deterministic simulations derived from non-uniform hardness; this connects circuit hardness to derandomization via pseudorandom generators that fool probabilistic tests. Such conditional results highlight the interplay between randomness and computational hardness, though unconditional derandomization of BPP remains unresolved.
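Error amplification by independent repetition and majority vote can be sketched as below; the biased decision procedure is a stand-in for a BPP machine, and the 2/3 correctness probability is simulated with a random coin (an assumed setup, for illustration only).

```python
import random
from collections import Counter

def noisy_decider(x, correct_answer=True, success_prob=2/3):
    """Stand-in for a BPP machine: returns the correct answer with
    probability success_prob and the wrong one otherwise."""
    return correct_answer if random.random() < success_prob else not correct_answer

def amplified_decider(x, repetitions=101):
    """Run the noisy decider independently and take the majority vote;
    by a Chernoff bound the error drops exponentially in repetitions."""
    votes = Counter(noisy_decider(x) for _ in range(repetitions))
    return votes[True] > votes[False]

random.seed(0)
trials = 1000
single_errors = sum(noisy_decider("x") is not True for _ in range(trials))
amplified_errors = sum(amplified_decider("x") is not True for _ in range(trials))
print(f"single-run error rate: {single_errors / trials:.3f}")    # about 1/3
print(f"amplified error rate:  {amplified_errors / trials:.3f}") # near 0
```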

Quantum Models

Quantum Turing machines (QTMs) extend the classical model by incorporating quantum superposition, allowing the machine's state, tape contents, and head position to exist in superpositions of classical configurations. In a QTM, the computation proceeds unitarily via a step operator that evolves the system while preserving quantum coherence, enabling phenomena such as interference, where quantum amplitudes can constructively or destructively combine to amplify correct solutions. Measurement at the end of the computation collapses the superposition into a classical outcome, with the probability of measuring a particular state determined by the squared magnitude of its amplitude, introducing inherent probabilistic elements to the model.

The complexity class BQP, or bounded-error quantum polynomial time, consists of decision problems solvable by a QTM in polynomial time with success probability at least \frac{2}{3} for yes instances and at most \frac{1}{3} for no instances. Introduced by Bernstein and Vazirani, BQP captures the power of quantum computation under this error-bounded framework, and classical probabilistic classes like BPP are contained within it due to the ability of QTMs to simulate classical randomness efficiently. A seminal example is integer factoring, which Shor's 1994 algorithm solves in BQP by using quantum Fourier transforms to find the period of a modular exponentiation function, exponentially faster than known classical algorithms.

Known inclusion relations place BQP between classical complexity classes: \mathbf{P} \subseteq \mathbf{BQP} \subseteq \mathbf{PSPACE}, reflecting that quantum polynomial-time computation can simulate deterministic polynomial time while being contained within polynomial-space bounded classical machines by summing amplitudes over computation paths. The relationship between BQP and NP remains an open question, with no proven inclusions or separations, though relativized separations suggest quantum computers may not efficiently solve NP-complete problems in general.

QMA, or Quantum Merlin-Arthur, is the quantum analogue of NP, comprising languages where a quantum prover (Merlin) sends a quantum witness to a quantum verifier (Arthur) that accepts yes instances with high probability in polynomial time while rejecting no instances with high probability. Formally defined by Kitaev in 1999, QMA involves a verifier QTM that, upon receiving a witness state, performs a polynomial-time quantum computation and decides based on measurement outcomes. The local Hamiltonian problem, where one determines whether the ground state energy of a k-local Hamiltonian lies below or above a given threshold, is QMA-complete, establishing its hardness analogous to SAT for NP.

As of 2025, no major collapses of quantum complexity classes relative to classical ones have occurred, maintaining believed separations such as BQP ≠ P, though ongoing research explores hybrid classical-quantum models like promise-BQP variants that integrate quantum oracles with classical verification to address noisy intermediate-scale quantum devices.
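The role of amplitudes and interference can be illustrated with a two-state (single-qubit) toy computation in plain Python; the specific unitary (a Hadamard-like transform) and the state vectors are assumptions chosen only to show constructive and destructive interference, not a model of any particular quantum algorithm.

```python
import math

# Amplitude vectors over the basis states |0> and |1>.
def hadamard(state):
    """Apply the 2x2 unitary H = 1/sqrt(2) * [[1, 1], [1, -1]] to the amplitudes."""
    a0, a1 = state
    s = 1 / math.sqrt(2)
    return (s * (a0 + a1), s * (a0 - a1))

def measure_probabilities(state):
    """Measurement probabilities are the squared magnitudes of the amplitudes."""
    return tuple(abs(a) ** 2 for a in state)

start = (1.0, 0.0)                 # the classical state |0>
superposed = hadamard(start)       # equal superposition: amplitudes (1/sqrt2, 1/sqrt2)
interfered = hadamard(superposed)  # second application: the |1> amplitude cancels

print(measure_probabilities(superposed))  # (0.5, 0.5): uniform outcome
print(measure_probabilities(interfered))  # (~1.0, ~0.0): destructive interference
```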

Problem Variants

Counting Problems

Counting problems in computational complexity theory focus on determining the number of solutions to instances of decision problems, rather than merely deciding existence. A canonical example is #SAT, which, given a Boolean formula in conjunctive normal form, asks for the number of variable assignments that satisfy it. This contrasts with the decision problem SAT, which only checks for at least one satisfying assignment, and highlights how counting introduces additional computational challenges beyond mere verification.

The complexity class #P, introduced by Valiant in 1979, formalizes these counting problems as the set of functions that count the accepting paths of nondeterministic polynomial-time Turing machines. Formally, a function f: \Sigma^* \to \mathbb{N} is in #P if there exists a nondeterministic polynomial-time Turing machine M such that for every input x, f(x) equals the number of accepting computation paths of M on x. This makes #P a functional analog of the decision class NP, where acceptance requires at least one accepting path rather than an exact count of them. Valiant demonstrated that #SAT is #P-complete under parsimonious reductions, meaning it is both in #P and every problem in #P reduces to it while preserving the exact number of solutions.

Associated with #P is the decision class PP (probabilistic polynomial time), defined by Gill in 1977 as the set of languages where a probabilistic polynomial-time Turing machine accepts an input if more than half of its computation paths accept. PP can be viewed as the decision counterpart to #P, accepting when the number of accepting paths exceeds 2^{p(n)-1} for some polynomial p, effectively deciding whether a #P function exceeds half the possible paths. PP contains NP and coNP, and its closure under complement (PP = coPP) distinguishes it from many other probabilistic classes.

#P-complete problems are considered harder than those in FP (the class of polynomial-time computable functions), as solving any #P-complete problem in polynomial time would imply P = NP, leading to a collapse of the polynomial hierarchy. A landmark result by Toda in 1991 establishes that the entire polynomial hierarchy PH is contained in P^{#P}, the class of problems solvable in polynomial time with access to a #P oracle; specifically, PH ⊆ P^{#P}. This inclusion underscores the power of counting oracles, showing that #P captures the hardness of alternating quantifiers in PH, and provides evidence that counting is a fundamental source of computational difficulty beyond decision problems.
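The relationship between #P counting and PP's threshold acceptance can be illustrated by a brute-force MAJ-SAT-style check: count the satisfying assignments and accept exactly when they exceed half of all 2^n assignments. The clause encoding and exhaustive enumeration are illustrative assumptions, not an efficient procedure.

```python
from itertools import product

def count_satisfying(formula, num_vars):
    """#SAT by brute force: count assignments satisfying a CNF formula,
    encoded as a list of clauses of signed variable indices."""
    count = 0
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(zip(range(1, num_vars + 1), bits))
        if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in formula):
            count += 1
    return count

def majority_sat(formula, num_vars):
    """PP-style threshold decision: accept iff more than half of the
    2^num_vars assignments satisfy the formula."""
    return count_satisfying(formula, num_vars) > 2 ** (num_vars - 1)

FORMULA = [[1, 2]]                   # a single clause: x1 OR x2
print(count_satisfying(FORMULA, 2))  # 3 of the 4 assignments satisfy it
print(majority_sat(FORMULA, 2))      # True: 3 > 2
```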

Function Problems

In computational complexity theory, function problems are tasks that require computing an output as a function of the input, where the output is typically a string of length polynomial in the input size. These differ from decision problems, which merely accept or reject inputs, by demanding the production of a witness or solution. A representative example is the search version of the Boolean satisfiability problem (SAT-search), which, given a Boolean formula φ, outputs a satisfying assignment if one exists.

The complexity class FP comprises all function problems solvable by a deterministic Turing machine in time polynomial in the input length. This class serves as the functional counterpart to the decision class P, capturing computations that are feasibly efficient on classical models. Formally, a total function f: {0,1}^* → {0,1}^* belongs to FP if there exists a polynomial p and a deterministic Turing machine M such that for every input x, M halts in at most p(|x|) steps and outputs f(x). The characterization of FP via bounded recursion on notation underscores its robustness across reasonable computational models.

FNP extends this framework to function problems whose solutions can be verified efficiently, mirroring the nondeterministic structure of NP for decisions. Specifically, a binary relation R ⊆ {0,1}^* × {0,1}^* defines a function problem in FNP if whenever (x, y) ∈ R, then |y| ≤ poly(|x|), and membership in R is decidable in polynomial time; the task is, given x, to output some y with (x, y) ∈ R or report that no such y exists. SAT-search exemplifies an FNP problem, as a purported satisfying assignment can be checked by substitution in polynomial time. Unlike FP, computing solutions in FNP is not known to be feasible unless P = NP.

A notable subclass of FNP is TFNP, which includes only total function problems—those where a solution is guaranteed to exist for every input, often by combinatorial existence theorems. TFNP thus captures search problems with inherent totality, such as those derived from the pigeonhole principle or fixed-point theorems. For instance, the problem of finding a collision in a compressing hash function (under certain assumptions) or computing a Nash equilibrium in a bimatrix game falls into TFNP, as solutions exist by mathematical guarantees but may not be efficiently computable. TFNP was formalized by Megiddo and Papadimitriou to highlight such problems lying between FP and FNP, with subclasses like PPP (based on the pigeonhole principle) partitioning its structure according to the existence principles used in the proofs.

Key relationships among these classes include the inclusion FP ⊆ FNP, since any polynomial-time computable function admits a trivial polynomial-time verifier that recomputes the output and checks equality. Additionally, for self-reducible problems like SAT, search-to-decision reductions exist: a polynomial-time oracle for the decision version (e.g., deciding satisfiability) can be queried to construct a solution by iteratively fixing variables and testing subformulas, using at most polynomially many oracle calls. This equivalence implies that separating P from NP would also separate FP from FNP for such problems.
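The search-to-decision reduction for SAT described above can be sketched directly: query a satisfiability oracle, fix one variable at a time, and keep the value under which the restricted formula stays satisfiable. The brute-force oracle and the clause encoding are illustrative assumptions; a real oracle would be an actual SAT decision procedure.

```python
from itertools import product

def brute_force_sat_oracle(formula, num_vars):
    """Stand-in for a SAT decision oracle (exponential time here)."""
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in formula)
        for bits in product([False, True], repeat=num_vars)
    )

def restrict(formula, var, value):
    """Fix variable `var` to `value`: drop satisfied clauses, remove falsified literals."""
    new_formula = []
    for clause in formula:
        if (var if value else -var) in clause:
            continue                                   # clause already satisfied
        new_clause = [l for l in clause if abs(l) != var]
        if not new_clause:
            return None                                # clause falsified: dead end
        new_formula.append(new_clause)
    return new_formula

def sat_search(formula, num_vars, oracle=brute_force_sat_oracle):
    """Construct a satisfying assignment using polynomially many oracle calls."""
    if not oracle(formula, num_vars):
        return None
    assignment = {}
    for var in range(1, num_vars + 1):
        for value in (True, False):
            candidate = restrict(formula, var, value)
            if candidate is not None and oracle(candidate, num_vars):
                assignment[var], formula = value, candidate
                break
    return assignment

print(sat_search([[1, 2], [-1, 3], [-2, -3]], 3))  # e.g. {1: True, 2: False, 3: True}
```

Each variable is fixed with at most two oracle calls, so the reduction itself makes only O(n) queries; the exponential cost in this sketch comes solely from the stand-in oracle.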

Promise Problems

In computational complexity theory, a promise problem is a generalization of a decision problem where the input is restricted to belong to a promised subset of all possible instances, specifically partitioned into disjoint sets of yes-instances and no-instances, with the algorithm required to output yes or no correctly only on these promised inputs and permitted arbitrary behavior otherwise. This formulation allows the problem to be undefined on inputs outside the promise, enabling the study of partial rather than total decision problems.

Unlike standard complexity classes, which are defined for total decision problems (languages) where every input must be classified correctly, promise problems permit partial definitions by restricting the input domain via the promise, thus avoiding the need to specify behavior on all strings. For instance, standard classes like P or NP require algorithms that decide total languages, whereas promise problems focus solely on distinguishing yes from no within the promised subset, which can represent special cases such as unique solutions or gap problems that languages cannot capture directly.

Promise problems extend to various complexity classes by adapting the promise framework to specific computational models. Promise-BPP consists of promise problems solvable by probabilistic polynomial-time algorithms with bounded error (at least 2/3 probability of correctness) on promised inputs, analogous to BPP but without requiring correctness on all inputs. Similarly, Promise-QMA is the class of promise problems verifiable by a polynomial-time quantum verifier using a quantum proof state, with completeness and soundness bounds of 2/3 and 1/3 respectively, serving as the quantum analogue of NP-style verification under the promise restriction.

Levin's theory of average-case complexity formalizes the distinction between worst-case and average-case hardness using distributional problems, which can be viewed as promise problems where the promise enforces a probability distribution over instances, assigning negligible probability to atypical (hard) cases. This approach highlights that problems hard in the worst case may be easy on average under certain distributions, enabling reductions that preserve average-case feasibility and avoiding the pitfalls of worst-case analysis alone.

In cryptography, promise problems underpin hardness assumptions by modeling average-case difficulty, such as distinguishing ciphertexts generated from the same key (the promised inputs) or factoring integers promised to be the product of two primes of similar size, where worst-case hardness is insufficient for security but average-case hardness under suitable distributions provides a robust foundation.