
Computational complexity theory

Computational complexity theory is a subfield of theoretical computer science that classifies computational problems according to the resources, such as time and space, required to solve them, aiming to determine what can be computed efficiently and what cannot. It emerged in the 1960s and 1970s, building on foundational work in computability by figures like Alan Turing, and was advanced by pioneers including Stephen Cook, Richard Karp, and Leonid Levin. Central to the field are complexity classes, which group problems based on the computational resources needed under specific models like Turing machines or boolean circuits. The class P consists of decision problems solvable in polynomial time by a deterministic Turing machine, representing efficiently solvable tasks such as sorting or shortest-path finding. In contrast, NP includes problems where a proposed solution can be verified in polynomial time, encompassing challenges like the traveling salesman problem or satisfiability (SAT). Other notable classes include coNP (complements of NP problems), PSPACE (polynomial space), and EXP (exponential time), which help delineate hierarchies of computational difficulty. A cornerstone open question is the P versus NP problem, which asks whether every problem in NP is also in P; resolving it would profoundly impact fields from cryptography to optimization, as many real-world problems are NP-complete—meaning they are as hard as the hardest problems in NP, reducible to one another via polynomial-time transformations. Formulated by Cook in 1971, P vs NP is one of the seven Millennium Prize Problems, with a $1 million reward for its solution. Techniques like diagonalization, reductions, and probabilistic methods are employed to prove separations between classes and establish completeness, while barriers such as relativization and natural proofs highlight the challenges in progress. Beyond classical computation, the theory extends to randomized algorithms (BPP), quantum computing (BQP), interactive proofs, and circuit complexity, influencing practical areas like algorithm design, secure systems, and even physics simulations. By formalizing the limits of efficient computation, it underscores inherent barriers, such as why some problems resist fast algorithms despite extensive efforts.

Computational Problems

Decision Problems

In computational complexity theory, a decision problem is formally modeled as a language L over a finite alphabet \Sigma, where L \subseteq \Sigma^* consists of all finite strings that encode the "yes" instances of the problem, and strings not in L correspond to "no" instances. This representation abstracts the problem as a question with a binary outcome: given an input string x \in \Sigma^*, determine whether x \in L. The finite alphabet \Sigma typically includes symbols like \{0,1\} for binary encodings, ensuring all inputs are finite and discrete. This formulation connects directly to formal language theory, where decision problems are identified with languages, and solving the problem equates to deciding membership in L. In this framework, the theory originated from efforts to classify problems based on the computational resources needed to resolve language membership, building on foundational work in computability. Prominent examples illustrate this concept. The Boolean satisfiability problem (SAT) is the set of all Boolean formulas in conjunctive normal form for which there exists a satisfying variable assignment. Graph k-coloring asks whether the vertices of a given graph can be assigned at most k colors such that no adjacent vertices share the same color. The halting problem comprises all pairs (M, w) where Turing machine M halts on input w. Decision problems focus solely on yes/no answers, distinguishing them from search problems that require producing a solution (e.g., a satisfying assignment for SAT) or optimization problems that seek extremal values (e.g., minimum colors for graph coloring); nevertheless, search and optimization variants often reduce to decision versions via techniques like binary search on the objective.
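The language view can be made concrete with a short, illustrative Python sketch: a brute-force decider for the language SAT simply searches all assignments, accepting exactly those strings (formulas) that belong to the language. The DIMACS-style clause encoding used here is an assumption chosen for readability, not a canonical representation.

```python
from itertools import product

# A CNF formula is a list of clauses; each clause is a list of nonzero ints,
# where k means variable k and -k means its negation (DIMACS-style encoding,
# chosen here only for illustration).
def satisfiable(clauses, num_vars):
    """Brute-force decider for the language SAT: return True iff some
    assignment satisfies every clause (exponential time in num_vars)."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True   # the encoded formula is in the language
    return False          # the encoded formula is not in the language

# (x1 or x2) and (not x1 or x2): satisfiable, e.g. x2 = True
print(satisfiable([[1, 2], [-1, 2]], num_vars=2))   # True
# x1 and not x1: unsatisfiable
print(satisfiable([[1], [-1]], num_vars=1))         # False
```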

Function Problems

In computational complexity theory, function problems extend the notion of decision problems by requiring the computation of an output value rather than a binary acceptance or rejection. Formally, a function problem is defined by a total function f: \Sigma^* \to \Gamma^*, where \Sigma and \Gamma are finite alphabets, and the task is to produce f(x) given an input string x \in \Sigma^*. This mapping from inputs to outputs captures a wide range of algorithmic tasks, such as optimization and search, where the solution must be explicitly constructed. Unlike decision problems, which only query properties of inputs, function problems emphasize the efficiency of generating results that may vary in length with the input size.

Prominent examples include the integer factorization problem, where the input is an integer n encoded in binary and the output is a list of its prime factors in non-decreasing order. This function problem is in the class FNP, as verifying a proposed factorization can be done in polynomial time, but no polynomial-time algorithm is known for computing it, placing it outside FP under standard assumptions. Another example is the single-source shortest path problem in a weighted graph with non-negative edge weights, which computes the minimum-distance path from a source vertex to all others; this is solvable in polynomial time using algorithms like Dijkstra's, thus residing in FP. Counting problems form another key category, exemplified by #SAT, which outputs the number of satisfying truth assignments for a given Boolean formula in conjunctive normal form. The class #P, introduced by Valiant in 1979, encompasses such functions computable as the number of accepting paths in a nondeterministic polynomial-time Turing machine, with #SAT being #P-complete.

Function problems often relate to decision problems through corresponding search or verification tasks. For many NP decision problems, there exists a function problem counterpart in FNP that outputs a witness verifiable in polynomial time, such as producing a satisfying assignment for a satisfiable formula in SAT. However, hardness can diverge: while the decision version of integer factorization (checking for a factor in a given range) is in NP, the function version requires outputting the factors, and its presumed difficulty stems from the lack of efficient constructive methods despite efficient verification. Decision problems can be seen as special cases of function problems with binary outputs, and an efficient algorithm for the function variant immediately decides the associated decision problem; the converse direction—recovering a solution from a decision oracle—holds for self-reducible problems such as SAT but is not automatic in general.

To model function computation, Turing machines are adapted with multiple tapes for clarity and efficiency in analysis. A standard formulation uses a multi-tape deterministic Turing machine with a read-only input tape holding x, a write-only output tape where f(x) is inscribed upon halting, and one or more read-write work tapes for intermediate computations. The machine begins with heads positioned at the start of their respective tapes and must halt with the output tape containing exactly f(x) and the work tapes blank. A single-tape machine can simulate a multi-tape machine with at most a quadratic time overhead, so the choice of variant does not affect polynomial-time computability, while the multi-tape formulation allows separation of input, output, and computation phases for precise resource measurement.
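The gap between computing and verifying a function value can be illustrated with a minimal Python sketch of the integer factorization example above; the helper names are hypothetical, and trial division stands in for any exponential-time (in the bit-length of n) factoring method.

```python
def prime_factors(n):
    """Function problem: map an integer n >= 2 to its prime factorization,
    listed in non-decreasing order. Trial division runs in roughly O(sqrt(n))
    steps, i.e. exponential in the bit-length of n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def verify_factorization(n, factors):
    """Verification is easy: multiply the claimed factors back together.
    (A full FNP-style verifier would also check primality of each factor.)"""
    prod = 1
    for p in factors:
        prod *= p
    return prod == n

fs = prime_factors(360)
print(fs)                              # [2, 2, 2, 3, 3, 5]
print(verify_factorization(360, fs))   # True
```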

Instance Representation and Measurement

In computational complexity theory, problem instances are represented as finite strings over a finite alphabet, typically binary strings consisting of 0s and 1s, to facilitate analysis on models like Turing machines. This encoding ensures that any discrete object—such as numbers, graphs, or formulas—can be uniformly processed by a computational device. For integers, the standard binary representation is used, where the value k is encoded in \lceil \log_2 (k+1) \rceil bits, providing a compact form that reflects the logarithmic space required. Graphs, on the other hand, are commonly encoded using an adjacency matrix, which for a graph with n vertices requires n^2 bits to specify all pairwise connections, or an adjacency list that uses O((n + m) \log n) bits, where m is the number of edges.

A key requirement for these encodings is that they be "reasonable," meaning the encoding and decoding procedures must be computable in polynomial time relative to the input size, ensuring that the representation does not artificially inflate or deflate complexity measures. For instance, encoding a pair of strings \langle x, y \rangle can be done by concatenating them with a separator symbol (e.g., x \# y) and then converting to binary, which is verifiable in linear time. Pathological encodings, such as unary representations for large integers where k is encoded as a string of k 1s (an encoding of length k), are avoided because they lead to exponential growth in input length compared to the binary encoding, potentially misrepresenting algorithmic efficiency.

The size of an instance, denoted n = |x|, is measured as the length of its binary string encoding, which serves as the fundamental parameter for scaling resource bounds. This bit-length measure is crucial because complexity functions, such as time or space, are expressed asymptotically in terms of n; for example, an algorithm running in time O(n^k) for some constant k is considered polynomial-time efficient only under this logarithmic-scale encoding for numerical inputs. In contrast, unary encoding would make the input size linear in the value (e.g., size k for value k), turning what appears as polynomial time in unary into exponential time in binary, highlighting why binary is the default for avoiding such distortions. For graphs, the choice between adjacency matrix (n^2 bits) and adjacency list (O((n + m) \log n) bits) affects the effective n, but both are reasonable as long as m = O(n^2), ensuring polynomial interconvertibility.

This representation framework underpins the analysis of decision problems, where yes/no instances are encoded similarly to enable uniform complexity comparisons. By standardizing on bit-length size, the theory ensures that polynomial-time solvability corresponds to practical feasibility, independent of minor encoding variations as long as they remain polynomially equivalent.
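A small Python sketch (the function names are illustrative) makes the gap between binary and unary encoding lengths explicit, which is the reason unary is considered an unreasonable encoding.

```python
def binary_length(k):
    """Bits in the standard binary encoding of k >= 0 (ceil(log2(k+1)))."""
    return max(1, k.bit_length())

def unary_length(k):
    """Length of the unary encoding 1^k."""
    return k

for k in [5, 1000, 10**9]:
    print(k, binary_length(k), unary_length(k))
# The binary size grows like log k while the unary size grows like k, so an
# algorithm that is polynomial in the unary size (e.g. trial-division
# primality testing) can be exponential in the binary size.
```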

Models of Computation

Turing Machines

A Turing machine (TM) is a mathematical model of computation introduced by Alan Turing to formalize the notion of algorithmic processes. It consists of an infinite tape divided into cells, each capable of holding a single symbol from a finite tape alphabet Γ, which includes a blank symbol. The machine has a finite set of states Q, a read-write head that moves left or right along the tape, and a transition function that dictates the next state, symbol to write, and head movement based on the current state and symbol read. Formally, a deterministic single-tape TM is defined as a 7-tuple M = (Q, \Sigma, \Gamma, \delta, q_0, q_{\text{accept}}, q_{\text{reject}}), where Σ ⊆ Γ is the input alphabet, q₀ ∈ Q is the initial state, q_accept and q_reject are accepting and rejecting halting states, and the transition function is \delta: Q \times \Gamma \to Q \times \Gamma \times \{L, R\}, specifying the next state, symbol to write, and head direction (left or right). The computation begins with the input string on the tape, the head at the leftmost symbol, and the machine in q₀; it halts when entering q_accept or q_reject, or if δ is undefined for the current configuration. Turing machines come in variants that extend the basic model while preserving computational power. The single-tape TM uses one infinite tape for both input and working storage. In contrast, a multi-tape TM employs k ≥ 2 tapes, each with its own independent read-write head, where the transition function is \delta: Q \times \Gamma^k \to Q \times \Gamma^k \times \{L, R, N\}^k, with N denoting no movement. Nondeterministic Turing machines (NTMs) generalize determinism by allowing the transition function \delta: Q \times \Gamma \to \mathcal{P}(Q \times \Gamma \times \{L, R\}), where \mathcal{P} denotes the power set, enabling multiple possible next configurations from any given one; an NTM accepts an input if at least one computation path reaches an accepting state. These variants were developed to model different computational paradigms, with nondeterminism first formalized for finite automata and extended to Turing machines to explore decision problems efficiently. Despite structural differences, these variants are equivalent in computational capability. Any multi-tape TM can be simulated by a single-tape TM with at most a quadratic overhead in time complexity: if the multi-tape machine runs in time t(n), the single-tape simulator operates in O(t(n)^2) steps by encoding multiple tape contents onto one tape using markers and sweeping back and forth to mimic head movements. This simulation preserves the halting behavior and output, ensuring that the single-tape model suffices as the canonical reference for sequential computation. Nondeterministic TMs, while more powerful in an existential sense, can also be simulated deterministically, though with exponential time overhead in general. Turing machines play a foundational role in establishing the limits of computation, particularly through the Church-Turing thesis, which posits that any function intuitively computable by a human clerk following an algorithm is computable by a Turing machine. This thesis, linking Turing's 1936 model to Alonzo Church's λ-calculus from the same year, underpins the undecidability results, such as the halting problem, by demonstrating that no TM can solve certain problems for all inputs. The thesis remains a cornerstone, justifying TMs as the standard for defining undecidable problems and the boundaries of effective computability.
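A minimal Python simulator for a deterministic single-tape machine, together with a toy two-state machine accepting binary strings with an even number of 1s, illustrates the formal definition above; the dictionary-based transition encoding and the toy machine itself are assumptions made for brevity.

```python
def run_tm(delta, input_str, q0, q_accept, q_reject, blank="_"):
    """Simulate a deterministic single-tape Turing machine.
    delta maps (state, symbol) -> (new_state, written_symbol, move in {L, R})."""
    tape = dict(enumerate(input_str))   # sparse tape; unwritten cells are blank
    head, state = 0, q0
    while state not in (q_accept, q_reject):
        symbol = tape.get(head, blank)
        if (state, symbol) not in delta:   # undefined transition: halt and reject
            return False
        state, written, move = delta[(state, symbol)]
        tape[head] = written
        head += 1 if move == "R" else -1
    return state == q_accept

# Toy machine (illustrative): accept binary strings with an even number of 1s.
delta = {
    ("even", "0"): ("even", "0", "R"),
    ("even", "1"): ("odd", "1", "R"),
    ("odd", "0"): ("odd", "0", "R"),
    ("odd", "1"): ("even", "1", "R"),
    ("even", "_"): ("accept", "_", "R"),
    ("odd", "_"): ("reject", "_", "R"),
}
print(run_tm(delta, "1011", "even", "accept", "reject"))  # False (three 1s)
print(run_tm(delta, "1001", "even", "accept", "reject"))  # True  (two 1s)
```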

Alternative Models

Alternative models of computation extend beyond the standard Turing machine framework, providing abstractions that facilitate the analysis of specific resource aspects like random access or parallelism while preserving computational equivalence. These models are particularly valuable for studying time and space complexities in scenarios where sequential tape access proves cumbersome. The Random Access Machine (RAM) serves as a foundational alternative, modeling a computer with an unlimited array of registers that can store integers of arbitrary size and support operations such as loading, storing, addition, subtraction, multiplication by constants, division by constants, and zero-testing. In this model, the input is typically encoded as a sequence of integers on an input tape, with the word size assumed to be \Theta(\log n) bits for inputs of length n, enabling efficient indexing into arrays of size polynomial in n. RAM variants differ in their cost measures for operations. The unit-cost RAM assigns a time cost of 1 to each instruction, irrespective of operand size, which aligns well with high-level algorithm design but may overestimate efficiency for bit-level operations on large numbers. In contrast, the logarithmic-cost (or log-cost) RAM charges time proportional to the bit length of the operands—specifically, O(\log |x|) for operations on integer x—yielding a model closer to the bit complexity of Turing machines and avoiding artificial accelerations in arithmetic.

Parallel models address concurrent computation. The Parallel Random Access Machine (PRAM) extends the RAM by incorporating p identical processors that operate synchronously on a shared memory, with each processor executing RAM-like instructions in lockstep. Access conflicts are resolved via variants like EREW (exclusive-read, exclusive-write), where no two processors read or write the same location simultaneously; CREW (concurrent-read, exclusive-write), allowing multiple reads; or CRCW (concurrent-read, concurrent-write), with rules for write conflicts such as priority or summation. Circuit families, another parallel abstraction, consist of a sequence of Boolean circuits \{C_n\}_{n \geq 1}, where C_n has n inputs and computes the function for inputs of length n, using AND and OR gates of fan-in 2 together with NOT gates, with no a priori bound on depth.

Key equivalence results link these models to Turing machines. A RAM simulates a multi-tape Turing machine of time complexity T(n) in O(T(n)) time under the unit-cost measure, and in O(T(n) \log T(n)) time under the logarithmic-cost measure, by representing tapes as arrays and using direct register access for head movements and symbol updates. Conversely, a multi-tape Turing machine can simulate a logarithmic-cost RAM running in time T(n) with polynomial overhead (a quadratic slowdown suffices). For circuits, a language belongs to P/poly if and only if it is accepted by a family of polynomial-size circuits (i.e., |C_n| = n^{O(1)}), as the circuits encode polynomial-time computation with non-uniform advice. Non-uniform models like circuit families must be handled with care when analyzing space-bounded classes such as L or NL: without a uniformity condition, a circuit family can "decide" even undecidable languages (for example, unary encodings of the halting problem), so circuit characterizations of small-space classes impose logspace or polynomial-time uniformity, requiring that a description of C_n itself be generated within the corresponding resource bound.
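Evaluating one member C_n of a circuit family on an n-bit input can be sketched in a few lines of Python; the gate-list representation below (a topologically ordered list of fan-in-2 gates indexing earlier wires) is an assumption chosen for simplicity.

```python
def eval_circuit(gates, inputs):
    """Evaluate one circuit C_n from a family on an n-bit input.
    `gates` is a topologically ordered list of (op, a, b) triples whose
    operands index earlier wires; wires 0..n-1 are the input bits."""
    wires = list(inputs)
    for op, a, b in gates:
        if op == "AND":
            wires.append(wires[a] and wires[b])
        elif op == "OR":
            wires.append(wires[a] or wires[b])
        elif op == "NOT":
            wires.append(not wires[a])
    return wires[-1]   # the output gate is the last wire

# C_2 computing XOR of two bits: (x0 OR x1) AND NOT(x0 AND x1)
c2 = [("OR", 0, 1), ("AND", 0, 1), ("NOT", 3, None), ("AND", 2, 4)]
print([eval_circuit(c2, [a, b])
       for a in (False, True) for b in (False, True)])
# [False, True, True, False] -- one fixed circuit per input length n
```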

Complexity Measures

Time and Space Resources

In computational complexity theory, the primary resources measured are time and space, which quantify the computational effort required by models like Turing machines to solve problems. The input size n refers to the length of the binary encoding of the input instance. The time complexity T(n) of a Turing machine M is defined as the maximum number of steps M takes to halt on any input of length n. For deterministic Turing machines, this measures the worst-case steps across all possible computations. For nondeterministic Turing machines, the time complexity is the maximum steps over all accepting computation paths, providing a brief introduction to nondeterministic resources (with fuller details in later sections on complexity classes). Similarly, the space complexity S(n) is the maximum number of tape cells visited by M during its computation on inputs of length n. In the deterministic case, this captures the memory usage across the entire computation; for nondeterministic machines, it is the maximum over accepting paths. These measures define complexity classes such as \mathsf{DTIME}(f(n)), the set of languages L for which there exists a deterministic Turing machine deciding L in O(f(n)) time. \mathsf{DTIME}(f(n)) = \{ L \mid \exists \text{ deterministic TM } M \text{ deciding } L \text{ in } O(f(n)) \text{ time} \} Analogously, \mathsf{NTIME}(f(n)) uses nondeterministic machines, \mathsf{DSPACE}(f(n)) for deterministic space, and \mathsf{NSPACE}(f(n)) for nondeterministic space. A key trade-off between time and space is given by Savitch's theorem, which states that nondeterministic space can be simulated deterministically with a quadratic overhead: \mathsf{NSPACE}(s(n)) \subseteq \mathsf{DSPACE}(s(n)^2) for space bounds s(n) \geq \log n. This result, proved using a recursive reachability algorithm on the configuration graph of the nondeterministic machine, highlights how determinism can emulate nondeterminism at the cost of increased space.
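The recursive reachability idea behind Savitch's theorem can be sketched in a few lines of Python; the explicit toy graph below is a hypothetical stand-in for the exponentially large configuration graph of a nondeterministic machine, on which the same recursion runs without ever storing the graph explicitly.

```python
def reach(edge, u, v, steps):
    """Savitch-style test: is there a path from u to v of length <= steps?
    The recursion depth is about log2(steps); each level stores only one
    midpoint, which is where the quadratic space bound comes from."""
    if steps == 0:
        return u == v
    if steps == 1:
        return u == v or edge(u, v)
    half = steps // 2
    # Try every possible midpoint w and check both halves recursively.
    return any(reach(edge, u, w, half) and reach(edge, w, v, steps - half)
               for w in nodes)

# Toy "configuration graph" (hypothetical): a directed path 0 -> 1 -> ... -> 5.
nodes = range(6)
edge = lambda a, b: b == a + 1
print(reach(edge, 0, 5, 8))   # True: a path of length 5 exists
print(reach(edge, 5, 0, 8))   # False: edges only go forward
```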

Case-Based Analysis

In computational complexity theory, case-based analysis evaluates algorithmic performance by considering different scenarios of input instances, providing a more nuanced understanding than uniform measures alone. The worst-case analysis determines the maximum resource usage, such as time or space, over all possible inputs of a given size n, offering a guarantee on the upper bound for the algorithm's behavior in the most adverse conditions. This approach is foundational for defining complexity classes like P, where an algorithm is deemed polynomial-time if its worst-case running time is bounded by a polynomial in n. The best-case analysis, in contrast, examines the minimum resource usage over all inputs of size n, highlighting the algorithm's efficiency under ideal conditions. For instance, in linear search, the best case occurs when the target element is at the first position, requiring only a constant number of operations, O(1). While informative for understanding potential optimizations, best-case analysis is less emphasized in complexity theory because it does not capture typical or guaranteed performance, often serving more as a theoretical lower bound rather than a practical predictor. Average-case analysis addresses the expected resource usage under a specified probability distribution over inputs, aiming to model real-world scenarios more realistically. Leonid Levin pioneered this framework in the 1980s by introducing distributional problems—pairs of a language and a sequence of probability ensembles—and defining average polynomial-time as the existence of a polynomial-time algorithm whose expected running time is polynomial with respect to the distribution. This theory enables the study of average-case completeness, analogous to worst-case NP-completeness, and has been extended to show reductions between distributional problems in NP. However, average-case analysis has notable limitations. It inherently depends on the choice of probability distribution, which may not accurately represent natural input instances, leading to debates over "reasonable" distributions. Additionally, issues with pseudorandomness arise, as derandomizing average-case assumptions often requires hardness against pseudorandom generators, complicating connections to worst-case complexity and potentially undermining the robustness of results if the distribution is indistinguishable from random but hard to analyze. These challenges highlight why worst-case remains the standard for formal complexity classes, while average-case informs practical algorithm design.
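A small Python experiment (illustrative only) shows the three measures side by side for linear search, with the average case taken under the uniform distribution over targets.

```python
import random

def linear_search_comparisons(arr, target):
    """Return the number of comparisons linear search makes."""
    for i, x in enumerate(arr):
        if x == target:
            return i + 1
    return len(arr)

n = 1000
arr = list(range(n))
print("best case :", linear_search_comparisons(arr, arr[0]))    # 1
print("worst case:", linear_search_comparisons(arr, arr[-1]))   # n
# Average case under the uniform distribution over targets: about (n + 1) / 2.
trials = [linear_search_comparisons(arr, random.choice(arr)) for _ in range(10000)]
print("average   :", sum(trials) / len(trials))
```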

Upper and Lower Bounds

In computational complexity theory, upper bounds on the resources required to solve a problem are established constructively by designing algorithms that achieve a specified performance guarantee, typically in terms of time or space complexity. For instance, the merge sort algorithm sorts n elements in O(n \log n) time in the comparison model, where each comparison reveals one bit of information about the input order. This bound is tight in the worst case for comparison-based sorting, as demonstrated by matching lower bounds derived from information theory.

Lower bounds, in contrast, provide proofs of impossibility, showing that no algorithm can perform better than a certain threshold on a given computational model. A fundamental technique for establishing such bounds is diagonalization, which constructs a language that differs, on at least one input, from every language decidable within a restricted resource class, ensuring separation between complexity levels. The time hierarchy theorem exemplifies this approach: for time-constructible functions t(n) and T(n) with t(n) \log t(n) = o(T(n)), there exists a language decidable in O(T(n)) time but not in O(t(n)) time on a deterministic Turing machine. This theorem implies strict separations in the time complexity hierarchy, such as DTIME(n) \subsetneq DTIME(n^2). Another powerful method for lower bounds is communication complexity, which analyzes the minimum information exchange needed between parties to compute a function, often yielding separations in models like circuits or streaming algorithms. For example, the parity function, which computes the XOR of n bits, requires \Omega(n) time in the deterministic Turing machine model because any algorithm must examine all input bits to determine the output correctly. In the constant-depth circuit model AC^0, parity cannot be computed by polynomial-size circuits—in fact it requires exponential size—as proven using Håstad's switching lemma, which shows that random restrictions collapse small constant-depth circuits to constant functions while parity restricted to the surviving variables remains non-constant. A classic information-theoretic lower bound applies to comparison-based sorting: any such algorithm requires at least \lceil \log_2(n!) \rceil = \Omega(n \log n) comparisons in the worst case, since there are n! possible permutations and each comparison branches the decision tree by at most a factor of 2.

Proving strong lower bounds, particularly for separating P from NP, faces significant barriers. The relativization barrier, introduced by Baker, Gill, and Solovay, demonstrates that there exist oracles A and B such that \mathrm{P}^A = \mathrm{NP}^A and \mathrm{P}^B \neq \mathrm{NP}^B, implying that any proof technique relativizing to oracles (including many diagonalization arguments) cannot resolve the P vs. NP question. Similarly, the natural proofs barrier by Razborov and Rudich shows that most known lower bound techniques for explicit Boolean functions are "natural" proofs—based on properties of functions that are constructive (efficiently decidable from the truth table) and large (shared by a significant fraction of all functions)—and that any such property useful against P/poly would yield an efficient distinguisher breaking pseudorandom function families, contradicting the widely believed existence of strong one-way functions. These barriers highlight why general lower bounds remain challenging, directing research toward non-relativizing or non-natural proof methods.
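The information-theoretic sorting bound discussed above can be checked numerically with a short Python sketch that counts the comparisons merge sort performs and sets them against \lceil \log_2(n!) \rceil; the instance size and the random data are arbitrary choices made for illustration.

```python
import math
import random

def merge_sort_comparisons(arr):
    """Merge sort that also counts element comparisons."""
    if len(arr) <= 1:
        return arr, 0
    mid = len(arr) // 2
    left, cl = merge_sort_comparisons(arr[:mid])
    right, cr = merge_sort_comparisons(arr[mid:])
    merged, i, j, c = [], 0, 0, cl + cr
    while i < len(left) and j < len(right):
        c += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, c

n = 1024
data = random.sample(range(10 * n), n)
_, comps = merge_sort_comparisons(data)
lower_bound = math.ceil(math.log2(math.factorial(n)))   # ceil(log2 n!)
print(comps, lower_bound, n * math.log2(n))
# The observed count is close to the information-theoretic lower bound
# ceil(log2 n!) = Omega(n log n) and stays below roughly n * log2(n).
```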

Complexity Classes

Definitions and Basic Classes

In computational complexity theory, a complexity class is a set of decision problems, formalized as languages over a finite alphabet, that can be decided by a computational model—typically a Turing machine—within specified bounds on resources such as time or space. These classes provide a framework for classifying problems according to their inherent computational difficulty, focusing on asymptotic resource requirements as input size grows. The class P (polynomial time) consists of all languages that can be decided by a deterministic Turing machine in time polynomial in the input length n. Formally, \mathbf{P} = \bigcup_{k \geq 0} \mathbf{DTIME}(n^k), where \mathbf{DTIME}(f(n)) denotes the set of languages decidable in O(f(n)) time on a deterministic Turing machine. Problems in P are considered "efficiently solvable" or tractable, as polynomial time aligns with feasible computation for practical input sizes. A prominent example is the primality testing problem: given an integer n, determine if it is prime. This was shown to be in P by the AKS algorithm, a deterministic procedure running in time \tilde{O}(\log^6 n). The class NP (nondeterministic polynomial time) includes all languages for which a "yes" instance has a polynomial-length certificate verifiable in polynomial time by a deterministic Turing machine, or equivalently, languages decidable in polynomial time by a nondeterministic Turing machine. This captures problems where solutions can be checked efficiently, even if finding them is hard. Formally, \mathbf{NP} = \bigcup_{k \geq 0} \mathbf{NTIME}(n^k), with \mathbf{NTIME}(f(n)) defined analogously for nondeterministic machines. A classic example is the circuit satisfiability problem: given a Boolean circuit, does there exist an input assignment that makes the output true? A proposed satisfying assignment serves as a certificate verifiable by evaluating the circuit in linear time relative to its size. The class co-NP comprises the complements of languages in NP; that is, for a language L \in \mathbf{NP}, its complement \overline{L} = \{ x \mid x \notin L \} is in co-NP. Thus, co-NP problems are those where "no" instances have polynomial-time verifiable certificates. If \mathbf{P} = \mathbf{NP}, then co-NP would equal NP, but this remains an open question. Examples include tautology verification for Boolean formulas, the complement of circuit satisfiability.
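The verifier-based view of NP can be illustrated with a minimal Python sketch for the subset-sum problem (chosen here as an example; the helper names are hypothetical): checking a certificate takes polynomial time, while the naive search for one is exponential in the number of items.

```python
from itertools import combinations

def verify_subset_sum(numbers, target, certificate):
    """NP-style verifier: given a certificate (a list of indices), check in
    polynomial time that the selected numbers sum to the target."""
    return (len(set(certificate)) == len(certificate)
            and all(0 <= i < len(numbers) for i in certificate)
            and sum(numbers[i] for i in certificate) == target)

def find_certificate(numbers, target):
    """Naive search for a certificate: exponential in len(numbers)."""
    for r in range(len(numbers) + 1):
        for subset in combinations(range(len(numbers)), r):
            if verify_subset_sum(numbers, target, list(subset)):
                return list(subset)
    return None

nums, t = [3, 34, 4, 12, 5, 2], 9
cert = find_certificate(nums, t)
print(cert, verify_subset_sum(nums, t, cert))  # e.g. [2, 4] (4 + 5 = 9), True
```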

Advanced Classes and Hierarchies

Beyond the basic polynomial-time classes such as P and NP, computational complexity theory defines higher-level classes that capture exponential resource bounds, forming layered structures with proven separations. The class EXP consists of decision problems solvable by a deterministic Turing machine in time bounded by 2^{n^{O(1)}}, where n is the input length. Similarly, NEXP includes problems solvable by a nondeterministic Turing machine in exponential time, defined as the union over constants c of NTIME(2^{n^c}). These classes extend the polynomial framework to superpolynomial resources, with NEXP properly containing NP, as proven by the nondeterministic time hierarchy theorem. The class PSPACE encompasses problems decidable using polynomial space on a deterministic Turing machine, equivalent to NPSPACE by Savitch's theorem, which shows that nondeterministic polynomial space can be simulated deterministically with quadratic space overhead.

Hierarchy theorems establish strict inclusions among these classes by demonstrating that increased computational resources enable solving strictly more problems. The deterministic time hierarchy theorem states that if f(n) and g(n) are time-constructible functions with f(n) \log f(n) = o(g(n)), then DTIME(f(n)) is properly contained in DTIME(g(n)). This implies, for example, that P is strictly contained in EXP, as the time functions n^k and 2^n satisfy the conditions. A nondeterministic version extends this to NTIME, separating NP from NEXP. The space hierarchy theorem provides an analogous result: for space-constructible f(n) and g(n) with f(n) = o(g(n)), SPACE(f(n)) \subsetneq SPACE(g(n)). Consequently, L \subsetneq PSPACE \subsetneq EXPSPACE, highlighting how space bounds create distinct computational powers without the logarithmic factor required in time hierarchies.

The polynomial hierarchy (PH) builds an infinite tower of classes above P and NP using alternating existential and universal quantifiers over polynomial-time predicates. It is defined inductively: \Sigma_0^P = \Pi_0^P = \Delta_0^P = \mathrm{P}, and for k \geq 1, \Sigma_k^P consists of languages where membership is witnessed by a polynomial-time verifier with k alternating quantifiers starting with existential, \Pi_k^P starts with universal, and \Delta_k^P = \mathrm{P}^{\Sigma_{k-1}^P}. NP corresponds to \Sigma_1^P and coNP to \Pi_1^P, with the full PH as the union over all k. If \Sigma_k^P = \Pi_k^P for some k, the hierarchy collapses to that level, but no such collapse is known, preserving the suspected strict inclusions \Sigma_k^P \subsetneq \Sigma_{k+1}^P.

Oracle separations reveal the limitations of relativizing techniques in proving inclusions like P versus NP. There exists an oracle A such that \mathrm{P}^A = \mathrm{NP}^A, where the superscript denotes oracle access, showing that some proof strategies fail relative to A. Conversely, an oracle B exists where \mathrm{P}^B \neq \mathrm{NP}^B, indicating that relativization alone cannot resolve the P versus NP question, as any proof must transcend oracle models to distinguish these cases. These results underscore the nuanced structure of complexity classes beyond absolute separations.
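The quantifier structure behind the lower levels of the polynomial hierarchy defined above can be made concrete with a brute-force Python sketch of a \Sigma_2 statement; the predicate used below is a made-up toy example, and both quantifiers range over bit strings of the same length for simplicity.

```python
from itertools import product

def all_bitstrings(length):
    return product([0, 1], repeat=length)

def sigma2_holds(phi, m):
    """Brute-force check of a Sigma_2 statement:
    exists y in {0,1}^m such that for all z in {0,1}^m, phi(y, z) holds.
    (A Pi_2 statement simply swaps the two quantifiers.)"""
    return any(all(phi(y, z) for z in all_bitstrings(m))
               for y in all_bitstrings(m))

# Toy predicate (hypothetical): phi(y, z) = "y dominates z bitwise".
phi = lambda y, z: all(yi >= zi for yi, zi in zip(y, z))
print(sigma2_holds(phi, 3))   # True: y = (1, 1, 1) dominates every z
```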

Reductions and Completeness

In computational complexity theory, reductions provide a formal method to compare the relative hardness of decision problems by transforming instances of one problem into instances of another while preserving the yes/no answer. A many-one reduction, also known as a Karp reduction, from problem A to problem B is a polynomial-time computable function f such that for any input x, x is a yes-instance of A if and only if f(x) is a yes-instance of B. This ensures that if B can be solved efficiently, so can A, as the transformation and solution of the reduced instance can be done in polynomial time. In contrast, a Turing reduction, or Cook reduction, allows a polynomial-time algorithm for A to query an oracle for B multiple times, potentially adaptively, to determine the answer for x. Turing reductions are more powerful than many-one reductions, as they permit interactive use of the oracle, but both are used to establish hardness relationships within polynomial-time bounded computations. The concept of completeness builds on reductions to identify the hardest problems in a complexity class. A problem C is complete for a class \mathcal{K} (e.g., NP) via polynomial-time reductions if (1) C \in \mathcal{K} and (2) every problem in \mathcal{K} reduces to C under the chosen reduction type. For NP, completeness is defined using polynomial-time many-one reductions. The foundational result is the Cook-Levin theorem, which proves that the Boolean satisfiability problem (SAT)—determining if there exists an assignment of truth values to variables that makes a given Boolean formula true—is NP-complete under polynomial-time many-one reductions. The proof reduces any language in NP to SAT via a polynomial-time many-one reduction by constructing a Boolean formula that is satisfiable if and only if the nondeterministic verifier accepts the input. The formula encodes possible computation paths of the verifier. Subsequent work established many-one reductions from SAT to other problems, yielding additional NP-complete problems. For instance, 3-SAT, the restriction of SAT to formulas in conjunctive normal form with exactly three literals per clause, is NP-complete via a polynomial-time many-one reduction from general SAT that introduces auxiliary variables to handle clauses of other sizes. Similarly, the Clique problem—deciding if a graph contains a clique of size at least k—and the Vertex Cover problem—deciding if a graph has a vertex cover of size at most k—are NP-complete via polynomial-time many-one reductions from 3-SAT, as detailed in Karp's classification of 21 NP-complete problems. These reductions transform logical clauses into graph structures, such as gadgets representing variable assignments or clause implications, preserving satisfiability. While complete problems capture the hardest cases in NP, not all problems in NP fall into P or the complete ones, assuming P ≠ NP. Ladner's theorem demonstrates the existence of NP-intermediate problems—those in NP but neither in P nor NP-complete under polynomial-time many-one reductions—by constructing a hierarchy of problems with varying densities of yes-instances, using diagonalization over the complete problems. This result highlights the rich structure within NP and motivates the study of problem hierarchies beyond completeness. 
Reductions also propagate membership in complexity classes: if A reduces to B via a polynomial-time reduction and B \in \mathrm{P}, then A \in \mathrm{P}, since the reduction combined with a polynomial-time algorithm for B yields one for A. Conversely, if A is NP-hard (every NP problem reduces to A) and A \in \mathrm{P}, then \mathrm{P} = \mathrm{NP}. This transitivity underpins the power of reductions in classifying problem difficulty.
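The flavor of a Karp reduction can be conveyed by a small Python sketch of the classical 3-SAT-to-Clique construction described above (one vertex per clause-literal pair, edges between non-contradictory literals in different clauses); the brute-force clique check is included only to test the tiny instance and is not part of the reduction itself.

```python
from itertools import combinations

def three_sat_to_clique(clauses):
    """Karp-style many-one reduction: 3-CNF clauses (lists of signed ints)
    -> (vertices, edges, k). The formula is satisfiable iff the graph has a
    clique of size k = number of clauses."""
    vertices = [(ci, lit) for ci, clause in enumerate(clauses) for lit in clause]
    edges = {(u, v) for u, v in combinations(vertices, 2)
             if u[0] != v[0]          # literals come from different clauses
             and u[1] != -v[1]}       # and are not contradictory
    return vertices, edges, len(clauses)

def has_clique(vertices, edges, k):
    """Brute-force clique check (exponential; only for tiny instances)."""
    adj = lambda u, v: (u, v) in edges or (v, u) in edges
    return any(all(adj(u, v) for u, v in combinations(subset, 2))
               for subset in combinations(vertices, k))

# (x1 or x2 or x3) and (not x1 or not x2 or x3)
clauses = [[1, 2, 3], [-1, -2, 3]]
V, E, k = three_sat_to_clique(clauses)
print(has_clique(V, E, k))   # True, matching the satisfiability of the formula
```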

Open Questions

P versus NP

The P versus NP problem is the fundamental question in computational complexity theory of whether the complexity class P equals the class NP, where P consists of decision problems solvable by a deterministic Turing machine in polynomial time, and NP consists of decision problems verifiable by a deterministic Turing machine in polynomial time given a suitable certificate. Equivalently, it asks whether every problem for which a proposed solution can be checked efficiently can also be solved efficiently. This formulation captures the distinction between search (finding solutions) and verification (checking solutions), as problems in NP have short proofs of membership that can be verified quickly, but finding such proofs may be hard. If P = NP, the implications would be profound across computer science and beyond. In cryptography, the collapse would render many public-key systems insecure, as they depend on the computational hardness of NP problems like integer factorization or the discrete logarithm problem, which would become solvable in polynomial time. Similarly, optimization would undergo a revolution, enabling efficient algorithms for NP-hard problems such as the traveling salesman problem or graph coloring, transforming fields like logistics, scheduling, and operations research by making intractable global optima computable. Conversely, proving P ≠ NP would confirm the inherent difficulty of certain problems, justifying approximations and heuristics in practice. Partial results provide indirect evidence and conditional separations but no unconditional resolution. It is known that P = NP implies the non-existence of one-way functions, which are efficiently computable functions that are hard to invert on a significant fraction of inputs; since one-way functions underpin modern cryptography and are widely believed to exist, their presumed existence supports P ≠ NP, though unconditionally proving their existence remains open. Another open question is whether NP ⊆ BPP, where BPP is the class of problems solvable by a probabilistic Turing machine in polynomial time with bounded error; resolving this affirmatively would place NP problems in randomized polynomial time but is considered unlikely without collapsing other complexity hierarchies. The problem's status as one of the seven Millennium Prize Problems, established by the Clay Mathematics Institute in 2000, underscores its importance, offering a $1,000,000 prize for a correct solution. Significant barriers hinder progress, notably the Baker–Gill–Solovay theorem, which demonstrates that relativization techniques—proofs that hold relative to any oracle—are insufficient to separate P and NP, as there exist oracles A where P^A = NP^A and oracles B where P^B \neq NP^B. This relativization barrier implies that any resolution must exploit non-relativizing properties of computation, such as arithmetization or interactive proofs.

Intermediate Problems and Class Separations

In computational complexity theory, intermediate problems are those in NP that are neither in P nor NP-complete, assuming P ≠ NP. Ladner's theorem establishes the existence of such problems by constructing a language in NP that is reducible to an NP-complete problem like SAT but avoids both P and NP-completeness through a diagonalization argument that "punches holes" in the complete problem. A prominent candidate for an NP-intermediate problem was the graph isomorphism problem, which asks whether two given graphs are isomorphic; it was long suspected to be intermediate until László Babai developed a quasi-polynomial time algorithm in 2015, running in time 2^{(\log n)^{O(1)}}, though this remains slower than polynomial time and does not resolve its status relative to P. Other natural candidates include integer factorization, which decides whether a given integer has a factor in a specified range, and the discrete logarithm problem, which determines if a logarithm exists in a finite field; both are in NP ∩ co-NP due to efficient verifiability of solutions and their complements, making NP-completeness unlikely, and they are widely believed to be intermediate as no polynomial-time algorithm is known despite extensive study in cryptography.

Regarding class separations, a key result equates interactive proofs with polynomial space: the class IP, consisting of languages with polynomial-time probabilistic verifiers interacting with unbounded provers, equals PSPACE, as shown by Shamir building on earlier work by Lund, Fortnow, Karloff, and Nisan that placed the polynomial hierarchy PH inside IP. Whether PH is a proper subset of PSPACE remains open, with relativized separations existing relative to random oracles but no unconditional proof. Hierarchy theorems provide separations between time and space classes; the time hierarchy theorem shows that DTIME(n) \subsetneq DTIME(n^2), while the space hierarchy theorem yields DSPACE(n) \subsetneq DSPACE(n^2), demonstrating that more space solves strictly more problems. As of 2025, no major collapses have occurred among these classes, though derandomization efforts continue to make progress toward placing BPP, the class of problems solvable in polynomial time with bounded error probability, inside P; recent advances include improved pseudorandom generators under hardness assumptions, but unconditional derandomization of BPP remains elusive.

Intractability and Practical Implications

NP-Completeness and Hardness

NP-completeness identifies the hardest problems within the class NP, meaning that if any NP-complete problem admits a polynomial-time algorithm, then every problem in NP can be solved in polynomial time, collapsing the distinction between verification and search in nondeterministic polynomial time. This equivalence arises because NP-complete problems are interreducible via polynomial-time reductions, so solving one efficiently allows transforming and solving all others in NP. The concept was introduced by showing that the Boolean satisfiability problem (SAT) is NP-complete, establishing a benchmark for hardness in combinatorial optimization.

Beyond NP, analogous notions of completeness exist for higher complexity classes, capturing problems believed to require substantially more resources. In PSPACE, the class of problems solvable in polynomial space, the quantified Boolean formulas (QBF) problem is PSPACE-complete; QBF extends SAT by allowing alternating existential and universal quantifiers over Boolean variables, modeling more expressive logical queries that demand exponential exploration in the worst case. Similarly, in EXP (deterministic exponential time), the problem of determining the winner from a given position in generalized chess on an n × n board, played under the standard rules scaled to the larger board, is EXP-complete, highlighting the vast search spaces in strategic games with unbounded board sizes.

To address intractability when certain parameters remain small, parameterized complexity theory distinguishes fixed-parameter tractable (FPT) problems, solvable in time f(k) · n^c where k is the parameter, n the input size, f arbitrary, and c constant, from those that are hard for levels of the W hierarchy and thus presumed not FPT under standard conjectures. This framework reveals tractable cases for NP-hard problems; for instance, vertex cover is FPT when parameterized by solution size k, admitting a 2^k · n algorithm, while the clique problem is W[1]-complete, resisting such efficiency even for small clique sizes.

NP-hard problems pervade practical applications due to their modeling power for combinatorial decisions under constraints, often leading to exponential blowups in real scenarios. In artificial intelligence, classical propositional (STRIPS) planning—finding a sequence of actions that transforms an initial state into a goal state—is PSPACE-complete in general, and deciding whether a plan of polynomially bounded length exists is NP-complete, which is why planners routinely encode bounded plan existence as SAT; this captures the challenge of goal-directed reasoning in automated systems. In logistics, the traveling salesman problem (TSP), which seeks the shortest tour visiting a set of cities, is NP-hard, modeling route optimization where the number of possible paths grows factorially with cities, complicating supply chain and delivery scheduling.
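The recursive evaluation of quantified Boolean formulas mentioned above illustrates why true QBF sits in PSPACE: the Python sketch below (with an assumed prenex-CNF encoding) uses recursion depth equal to the number of variables, hence polynomial space, although its running time is exponential.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Evaluate a QBF with prenex prefix [('E', v) or ('A', v), ...] over a
    CNF matrix (clauses of signed ints). Recursion depth = number of
    variables, so space stays polynomial even though time is exponential."""
    assignment = assignment or {}
    if not prefix:
        return all(any(assignment[abs(l)] == (l > 0) for l in clause)
                   for clause in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assignment, v: val})
                for val in (False, True))
    return any(branches) if q == 'E' else all(branches)

# "forall x1 exists x2 : (x1 or x2) and (not x1 or not x2)" -- true (x2 = not x1)
print(eval_qbf([('A', 1), ('E', 2)], [[1, 2], [-1, -2]]))   # True
# "exists x1 forall x2 : (x1 or x2) and (not x1 or not x2)" -- false
print(eval_qbf([('E', 1), ('A', 2)], [[1, 2], [-1, -2]]))   # False
```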

Coping with Intractable Problems

When problems are known to be intractable in the worst case, such as those that are NP-hard, researchers and practitioners turn to approximation algorithms to find solutions that are close to optimal within polynomial time. These algorithms guarantee a solution whose cost is within a specified factor of the optimum, trading exactness for efficiency. A polynomial-time approximation scheme (PTAS) is a family of algorithms that, for any fixed ε > 0, computes a (1 + ε)-approximation in time polynomial in the input size, though the degree of the polynomial may depend on 1/ε. For example, in the Euclidean traveling salesman problem (TSP), where cities are points in the plane and distances satisfy the triangle inequality, Sanjeev Arora developed a PTAS that achieves arbitrarily good approximations for fixed dimensions. However, not all intractable problems admit PTAS; some are APX-hard, meaning there is no PTAS unless P = NP, but constant-factor approximations may still exist. The metric TSP, a classic NP-hard problem, is APX-hard, yet the Christofides algorithm provides a 3/2-approximation by combining a minimum spanning tree with a minimum matching on odd-degree vertices. For the more restricted (1,2)-TSP with edge weights of 1 or 2, Papadimitriou and Yannakakis proved APX-hardness and provided a 7/6-approximation algorithm by successively merging cycles of a triangle-free matching. These techniques highlight how geometric or metric constraints enable practical approximations despite underlying hardness. For problems where even constant-factor guarantees are elusive or insufficient, heuristics and metaheuristics offer problem-specific or general strategies to find good solutions quickly, without optimality assurances. Heuristics, like the nearest-neighbor method for TSP, greedily build solutions but can perform poorly in the worst case. Metaheuristics extend these by incorporating mechanisms to escape local optima, such as genetic algorithms, which mimic natural evolution: solutions (individuals) are encoded as chromosomes, populations evolve through selection, crossover, and mutation to optimize a fitness function. John Holland's foundational work established genetic algorithms as robust for NP-hard combinatorial optimization, with empirical success in scheduling and routing. Simulated annealing, another prominent metaheuristic, draws from statistical mechanics to probabilistically accept worse solutions at high "temperatures" to explore the search space, gradually cooling to converge. Kirkpatrick, Gelatt, and Vecchi introduced this method in 1983, demonstrating its effectiveness on NP-hard problems like circuit partitioning and TSP, where it often yields near-optimal tours faster than exhaustive search. These approaches are widely used in operations research, balancing computational cost with solution quality on real-world instances. Fixed-parameter tractability (FPT) addresses intractability by exploiting small parameters in the problem instance, yielding algorithms with runtime f(k) · n^c, where k is the parameter, n the input size, f arbitrary, and c constant. This makes problems solvable efficiently when k is small, even if NP-hard overall. Kernelization, a key FPT technique, preprocesses the input to produce an equivalent instance of size bounded by a function of k, solvable via faster methods. 
For the vertex cover problem—finding a minimum set of vertices incident to all edges—Nemhauser and Trotter's 1975 theorem provides a 2k-sized kernel by solving a linear program and partitioning vertices into three sets based on fractional values, enabling exact FPT algorithms like O(1.2738^k + kn) time branching. Downey and Fellows formalized FPT theory in their 1999 monograph, showing vertex cover's tractability when parameterized by solution size. Empirical advances in solvers for intractable problems often surpass theoretical predictions through engineered heuristics. Modern SAT solvers, tackling the NP-complete satisfiability problem, employ conflict-driven clause learning (CDCL), which learns new clauses from conflicts during backtracking to prune the search space. Introduced by Marques-Silva and Sakallah in their 1999 GRASP solver, CDCL has enabled scaling to industrial instances with millions of variables, such as in hardware verification and software testing, where solvers like MiniSat or Glucose resolve formulas in seconds that theory deems exponential-time. This success stems from variable ordering (e.g., VSIDS) and clause activity tracking, allowing practical resolution of problems far beyond brute-force limits.
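The bounded-search-tree idea behind the 2^k · n vertex cover algorithm mentioned above can be sketched in a few lines of Python; the edge-list graph encoding is an assumption made for brevity.

```python
def vertex_cover_at_most_k(edges, k):
    """Bounded search tree for Vertex Cover parameterized by k:
    pick an uncovered edge (u, v); any cover must contain u or v,
    so branch on both choices. Runs in O(2^k * |E|) time."""
    if not edges:
        return True            # every edge is covered
    if k == 0:
        return False           # edges remain but no budget left
    u, v = next(iter(edges))
    without_u = [e for e in edges if u not in e]
    without_v = [e for e in edges if v not in e]
    return (vertex_cover_at_most_k(without_u, k - 1)
            or vertex_cover_at_most_k(without_v, k - 1))

# A 5-cycle needs 3 vertices to cover all of its edges.
cycle5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(vertex_cover_at_most_k(cycle5, 2))   # False
print(vertex_cover_at_most_k(cycle5, 3))   # True
```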

Specialized Areas

Continuous and Real Computation

Computational complexity theory traditionally focuses on discrete models like Turing machines operating on finite strings, but continuous and real computation extends these ideas to models handling real numbers, motivated by applications in numerical analysis and scientific computing. The Blum-Shub-Smale (BSS) model provides a foundational framework for this extension, defining machines that perform exact arithmetic operations over the real numbers \mathbb{R}. A BSS machine consists of a finite control unit and an unbounded sequence of registers, each storing a single real number. The machine supports constant-time operations: addition, subtraction, multiplication, division (away from zero), and comparisons (equality or ordering) between register contents, with branching based on these comparisons. This model assumes exact real arithmetic, contrasting with practical floating-point implementations that introduce approximation errors. In the BSS model, complexity classes are defined analogously to discrete ones, but over \mathbb{R}. The class \mathbf{P}_\mathbb{R} comprises decision problems—subsets of \mathbb{R}^n for some n—solvable by a BSS machine in polynomial time, where time is measured by the number of arithmetic operations and the input size n determines polynomial bounds. The class \mathbf{NP}_\mathbb{R} includes problems where "yes" instances have a certificate (a polynomial-length sequence of reals) verifiable in polynomial time on a BSS machine; nondeterminism here allows existential quantification over reals in polynomial dimensions. These classes capture algebraic decision problems, such as determining whether a system of polynomial equations has a real solution. Seminal results establish \mathbf{NP}_\mathbb{R}-completeness for problems like testing the existence of real roots for univariate polynomials or deciding strict feasibility of semi-algebraic sets, highlighting hardness in continuous settings. The open question of whether \mathbf{P}_\mathbb{R} = \mathbf{NP}_\mathbb{R} remains unresolved, with relativization barriers showing separations in some oracles but no unconditional proof. Despite its theoretical elegance, real computation faces fundamental issues of undecidability and instability. Certain problems over the reals are undecidable even with unlimited computation, extending beyond discrete halting problems. For instance, Richardson's theorem proves that deciding whether the integral over [0,1] of an expression built from elementary functions (polynomials, exponentials, logarithms, sine, etc.) equals zero is undecidable; this follows from reducing the halting problem to such integral expressions via Diophantine approximation. In the BSS model, while algebraic problems are often decidable, embedding Turing undecidables via real encodings yields undecidable subsets, such as the real halting set. Additionally, stability under perturbations poses practical challenges: BSS assumes exact arithmetic, but real-world computations use finite precision, where small input perturbations (e.g., rounding errors) can drastically alter outputs for ill-conditioned problems, quantified by condition numbers measuring sensitivity. Smale's work on numerical condition numbers shows that problems like root-finding for polynomials can have exponentially large condition numbers in input size, implying instability in approximate models. Applications of real computation theory abound in numerical analysis and algebraic complexity. 
In numerical analysis, the BSS model bounds the complexity of algorithms for tasks like solving linear systems or eigenvalue computation, revealing that Gaussian elimination runs in \mathbf{P}_\mathbb{R} time, while more sensitive problems like matrix inversion require condition number considerations for stability. Algebraic complexity studies, such as evaluating the permanent of a real matrix—a sum over permutations analogous to the determinant but without signs—fall within this framework; while computable in exponential time naively, deeper analysis in BSS explores its hardness for decision variants, like positivity, connecting to optimization over semi-algebraic sets. These insights guide robust algorithm design, emphasizing average-case complexity and smoothed analysis to mitigate worst-case instability.
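The sensitivity that condition numbers quantify can be seen in a tiny pure-Python example; the nearly singular 2 × 2 system below is a made-up illustration, not drawn from the literature discussed above.

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

# Ill-conditioned system: the two rows are almost parallel.
a, b, c, d = 1.0, 1.0, 1.0, 1.0001
x1 = solve_2x2(a, b, c, d, 2.0, 2.0001)   # exact data
x2 = solve_2x2(a, b, c, d, 2.0, 2.0002)   # right-hand side perturbed by 1e-4
print(x1)   # approximately (1.0, 1.0)
print(x2)   # approximately (0.0, 2.0) -- a tiny input change moves the answer a lot
```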

Nondeterminism and Randomness

Nondeterministic computation extends the deterministic model by allowing a machine to branch into multiple computational paths at each step, accepting an input if at least one path accepts it. The nondeterministic time complexity class NTIME(t(n)) consists of languages decidable by a nondeterministic Turing machine using at most t(n) steps on inputs of length n. In particular, the class NP corresponds to the union over k of NTIME(n^k), capturing problems where solutions can be verified in polynomial time if provided with a witness. For space complexity, the class NPSPACE includes languages decidable by nondeterministic Turing machines using polynomial space. A landmark result, Savitch's theorem, establishes that nondeterminism does not increase the power of polynomial space: for any space function s(n) ≥ log n, NSPACE(s(n)) ⊆ DSPACE(s(n)^2), implying NPSPACE = PSPACE. This simulation uses a recursive reachability procedure on the configuration graph of the nondeterministic machine, checking pairwise configurations to determine if an accepting path exists within the space bound. Probabilistic computation introduces randomness to the model, where a probabilistic Turing machine flips unbiased coins to guide its decisions. The class RP (randomized polynomial time) contains languages where, for inputs in the language, the machine accepts with probability at least 2/3, and for inputs not in the language, it always rejects; co-RP is the class of complements of RP languages. The broader class BPP (bounded-error probabilistic polynomial time) includes languages where the machine outputs the correct answer with probability at least 2/3 for all inputs, allowing two-sided error. Error probabilities in BPP can be amplified arbitrarily close to 0 or 1 through independent repetitions of the algorithm and majority voting, preserving polynomial time via Chernoff bounds. Relative to deterministic classes, BPP is contained in P/poly, the class of languages decidable by polynomial-size non-uniform circuit families, as shown by constructing circuits that enumerate over all possible random strings up to the error bound. Quantum computation further generalizes these models using quantum mechanics, where the state is a superposition of basis states evolving unitarily. The class BQP consists of languages decidable by a quantum Turing machine in polynomial time with bounded error probability (at most 1/3). A prominent example is Shor's algorithm, which factors large integers in polynomial time on a quantum computer by efficiently finding the period of a function using quantum Fourier transform, placing integer factorization in BQP. Regarding inclusions and separations, while P ⊆ BPP ⊆ BQP relativizes to all oracles, there exist oracles separating BQP from the polynomial hierarchy PH: specifically, relative to certain oracles, BQP is not contained in PH, demonstrating that quantum power cannot be captured by classical nondeterminism in the relativized world.
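Error amplification for BPP-style algorithms can be simulated with a short Python sketch; the noisy decider below is a hypothetical stand-in for a randomized algorithm with success probability 2/3, and the experiment shows the majority-vote error dropping rapidly with the number of repetitions, as the Chernoff bound predicts.

```python
import random

def noisy_decider(true_answer, p_correct=2/3):
    """Stand-in for a BPP algorithm: returns the right answer with
    probability p_correct, independently on each invocation."""
    return true_answer if random.random() < p_correct else not true_answer

def amplified(true_answer, repetitions):
    """Run the noisy decider several times and take a majority vote."""
    votes = sum(noisy_decider(true_answer) for _ in range(repetitions))
    return votes * 2 > repetitions

def error_rate(repetitions, trials=20000):
    wrong = sum(amplified(True, repetitions) is False for _ in range(trials))
    return wrong / trials

for r in (1, 5, 25, 101):
    print(r, error_rate(r))
# Empirical error: about 1/3 for r = 1, then decreasing exponentially in r.
```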

Historical Overview

Early Foundations

The foundations of computational complexity theory trace back to early 20th-century efforts in mathematical logic to formalize the limits of provability and decidability. In the 1920s, David Hilbert proposed his program for the foundations of mathematics, aiming to establish a complete and consistent axiomatic system for arithmetic from which all true statements could be mechanically derived. A key component was the Entscheidungsproblem, posed by Hilbert and Wilhelm Ackermann in 1928, which asked for an algorithm to determine whether any given mathematical statement is provable within a formal system. Kurt Gödel's incompleteness theorems of 1931 delivered a profound challenge to this vision, proving that any sufficiently powerful consistent formal system cannot prove all true arithmetic statements and is inherently incomplete. These results highlighted fundamental barriers to full mechanization of mathematical reasoning, shifting focus toward what could be effectively computed rather than universally proven. The pivotal year of 1936 marked the formal definition of computability through independent but equivalent models proposed by Alan Turing and Alonzo Church. Turing introduced the concept of a "computable number" via an abstract device now known as the Turing machine, which simulates algorithmic processes on an infinite tape, and applied it to show that the Entscheidungsproblem is undecidable. In the same year, Church developed lambda-definability as a notion of "effective calculability," proving it equivalent to Turing's model and using it to demonstrate the undecidability of the Entscheidungsproblem. Central to these advancements was the halting problem, where Turing proved no general algorithm exists to determine whether a given program on arbitrary input will eventually stop or run forever, establishing the first limits on mechanical computation. These works converged on the Church-Turing thesis, positing that their models capture the intuitive notion of effective computability, laying the groundwork for distinguishing computable from non-computable functions. Parallel developments in the 1930s involved recursive functions, formalized by logicians such as Emil Post and Gödel. Post's canonical systems and investigations into recursively enumerable sets provided an early framework for classes of computable predicates, emphasizing problems whose positive instances could be mechanically listed but whose full membership might not be decidable. These efforts defined primitive recursive functions—built from basic operations like successor and projection via composition and primitive recursion—as a robust subclass of computable functions, though incomplete without minimization to capture all computability. By the 1950s and 1960s, attention turned to resource-bounded computation, with Michael Rabin and Dana Scott's 1959 analysis of finite automata introducing nondeterministic models that enabled precise measures of space complexity for recognizing regular languages. Building on this, Juris Hartmanis and Richard E. Stearns formalized time complexity in 1965, defining hierarchies based on computational steps and tape usage to classify algorithms by efficiency, thus birthing complexity theory as a study of feasible computation.

Major Milestones and Figures

In 1971, Stephen Cook introduced the concept of NP-completeness by demonstrating that the Boolean satisfiability problem (SAT) is NP-complete, establishing a foundational framework for classifying computational problems by their relative hardness. Independently, Leonid Levin developed the closely related notion of universal search problems, proving that certain search and decision problems are complete for nondeterministic polynomial time and laying parallel groundwork for the theory. The following year, Richard Karp extended these ideas by showing that 21 natural combinatorial problems, including the traveling salesman problem and graph coloring, are NP-complete via polynomial-time reductions from SAT, dramatically illustrating the breadth of intractability across diverse domains.

During the 1980s, Michael Sipser, together with Merrick Furst and James Saxe, advanced the understanding of the polynomial hierarchy (PH) by connecting constant-depth circuit lower bounds for the parity function to relativized separations, showing that exponential-size lower bounds (subsequently proved by Andrew Yao and Johan Håstad) yield oracles relative to which the hierarchy does not collapse. In a landmark result of the early 1990s, Adi Shamir established that the class of languages with interactive proof systems using probabilistic polynomial-time verifiers (IP) equals PSPACE, demonstrating the power of interaction and randomness in capturing space-bounded computation.

The 1990s and early 2000s saw significant progress in specialized areas of complexity. Peter Shor's 1994 quantum algorithms for integer factorization and discrete logarithms showed that these problems are solvable in polynomial time on a quantum computer, highlighting the potential computational advantages of quantum models over classical ones. In 1997, Russell Impagliazzo and Avi Wigderson proved that if some problem in deterministic exponential time requires circuits of exponential size, then BPP (bounded-error probabilistic polynomial time) equals P, advancing derandomization by constructing pseudorandom generators from hard functions. The 2002 AKS primality test of Manindra Agrawal, Neeraj Kayal, and Nitin Saxena placed primality testing in the class P with a deterministic polynomial-time algorithm, resolving a long-standing question in number-theoretic complexity. As of 2025, the central P versus NP question remains unresolved, with no proof separating or equating the classes despite extensive effort.

Key figures have shaped these developments. Stephen Cook's introduction of NP-completeness earned him the 1982 Turing Award, recognizing its profound impact on theoretical computer science. Richard Karp, recipient of the 1985 Turing Award, popularized NP-completeness through his reductions, influencing algorithm design and optimization. In 1992, Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy proved, as part of the PCP theorem, that NP = PCP(log n, 1), enabling inapproximability results for optimization problems; this and related work earned its authors the 2001 Gödel Prize. Madhu Sudan further advanced the field with his 1997 list-decoding algorithm for Reed-Solomon codes, which corrects errors beyond the traditional error-correction bound and underpins hardness results in probabilistically checkable proofs.
