
Computational complexity

Computational complexity theory is a subfield of theoretical computer science that classifies computational problems according to the resources, such as time and space, required to solve them, and examines the relationships between these classifications to understand the intrinsic difficulty of computation. The primary goals of the field include defining what constitutes feasible or tractable computation—typically polynomial-time solvability—and comparing the practical difficulties of different problems about finite objects, such as determining primality or satisfiability of logical formulas. Central to this study are complexity classes like P, which encompasses decision problems solvable in polynomial time on a deterministic Turing machine (for example, linear programming or primality testing via the AKS algorithm), and NP, which includes problems where a proposed solution can be verified in polynomial time, such as the traveling salesman problem or Boolean satisfiability (SAT). A cornerstone result is the theory of NP-completeness, established by Stephen Cook in 1971, which identifies problems like SAT as the hardest in NP in the sense that if any NP-complete problem is in P, then P = NP; thousands of such problems have since been cataloged. The field's most famous open problem, P vs. NP, conjectures that P ≠ NP, meaning some problems verifiable quickly cannot be solved quickly, with implications for fields like cryptography and optimization; it remains unresolved as one of the Millennium Prize Problems. Historically, complexity theory emerged in the 1960s from computability theory, with foundational work by Alan Turing on Turing machines, followed by key developments like the time hierarchy theorem of Hartmanis and Stearns (1965), which separates complexity classes, and the Cobham–Edmonds thesis linking polynomial time to practical feasibility. Other important classes include PSPACE for polynomial space and EXPTIME for exponential time, with results like Savitch's theorem (1970) showing that nondeterministic space is no more powerful than deterministic space up to quadratic factors.

Basic Concepts

Definition and Scope

Computational complexity theory is the study of the resources, primarily time and space, required to solve computational problems as a function of the input size. It seeks to classify problems based on their inherent difficulty and to understand the fundamental limits of efficient computation. The primary goals include distinguishing between tractable problems, which can be solved using a reasonable amount of resources, and intractable ones, which appear to require exponentially more resources as the input grows, thereby providing insights into the boundaries of what can be feasibly computed. The field traces its origins to the 1930s, when Alan Turing introduced the Turing machine as a formal model to define what numbers are computable, addressing foundational questions in logic and mathematics. Independently, Alonzo Church developed the lambda calculus around the same time, providing an equivalent formal system for expressing computable functions and proving the undecidability of certain problems, such as the Entscheidungsproblem. These works established the theoretical underpinnings of computability but did not yet focus on resource efficiency. Computational complexity emerged as a distinct subfield in the 1960s and 1970s, with a pivotal milestone being Stephen Cook's 1971 introduction of NP-completeness, which demonstrated that many practical problems are at least as hard as Boolean satisfiability, highlighting the challenges in finding efficient algorithms. In this framework, computational problems are broadly distinguished into three types: decision problems, which require determining whether a given input satisfies a property (yielding a yes/no answer); search problems, which involve constructing or finding an object that meets specified criteria; and optimization problems, which aim to identify the solution that maximizes or minimizes an objective function among feasible options. These categories allow theorists to analyze different aspects of problem-solving, with decision problems often serving as the foundation for studying the others due to their binary nature. Asymptotic notations, such as big-O, are used to abstractly describe how resource requirements scale with input size.

Input Representation and Size

In computational complexity, decision problems are formally defined as languages over a finite alphabet, typically the alphabet \Sigma = \{0, 1\}, where the language L \subseteq \Sigma^* consists of all input strings for which the answer is "yes." This representation allows problems to be encoded as subsets of strings, enabling uniform treatment across diverse instances such as graphs or numbers, which are first translated into bit strings before processing. For function problems, which require computing an output rather than a yes/no decision, the input is similarly a string x \in \Sigma^*, and the output is another string f(x) computed by the machine, often with the output length bounded polynomially in the input size. Encoding schemes standardize inputs as binary strings to facilitate analysis, with the primary size metric n = |x| denoting the length of the input string in bits. For Turing machines, inputs are placed on the read-only input tape as sequences over the tape alphabet (which includes 0 and 1), and the machine's description itself can be encoded as a binary string to allow simulation by a universal Turing machine. This bit-length measure ensures that complexity is evaluated relative to the information content of the input, avoiding ambiguities in representation; for instance, integers are encoded in binary, and graphs via adjacency matrices flattened into strings. Computational models vary in how they handle input uniformity and succinctness, affecting complexity bounds. Uniform models, such as multitape Turing machines, use a single program applicable to all input sizes without additional advice, ensuring the computation description does not grow with n. In contrast, nonuniform models, like families of circuits, permit "advice" strings of length polynomial in n that depend only on the input size, potentially allowing more efficient computations for specific instance sizes but complicating direct comparisons to uniform classes. Succinct representations, where large objects like graphs are encoded using circuits or expressions of size logarithmic in the object size, dramatically increase complexity; for example, basic graph reachability becomes PSPACE-complete under succinct circuit inputs, while problems like 3-SAT escalate to NEXP-complete. A representative example is sorting n distinct numbers in the comparison model, where each comparison yields a yes/no outcome on the relative order of two elements. Any such algorithm requires at least \Omega(n \log n) comparisons in the worst case, as demonstrated by modeling the process as a binary decision tree with n! leaves (one per permutation), necessitating a tree height of at least \log_2(n!) \approx n \log n by Stirling's approximation. This lower bound highlights how input size n directly influences resource needs even for simple problems, independent of the underlying machine model.
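As a quick numerical illustration of this bound, the following Python sketch compares \log_2(n!) with n \log_2 n for a few input sizes; the helper name min_comparisons is ours, not from any library.

```python
import math

def min_comparisons(n: int) -> float:
    """Information-theoretic lower bound: any comparison sort needs about log2(n!) comparisons."""
    return sum(math.log2(k) for k in range(2, n + 1))

for n in (8, 64, 1024):
    print(f"n={n:5d}  log2(n!)={min_comparisons(n):10.1f}  n*log2(n)={n * math.log2(n):10.1f}")
```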

Resource Measures

Time Resources

In computational complexity theory, time resources quantify computational effort by measuring the number of steps a deterministic Turing machine executes before halting on an input of length n. This measure captures the sequential nature of computation, where each step corresponds to a single transition in the machine's configuration, including reading, writing, and head movements. The time complexity function t_M(n) for a Turing machine M is defined as the maximum number of such steps over all inputs of length n, providing a worst-case bound on running time. The class \mathrm{DTIME}(f(n)) consists of all languages that can be decided by a deterministic Turing machine running in at most f(n) steps on inputs of length n, assuming f(n) is a time-constructible function that the machine can accurately measure during execution. These classes form the foundation for analyzing time-bounded computations, enabling comparisons of algorithmic efficiency across different problems. For instance, scanning an input string of length n requires at least linear time, \Theta(n), as the machine's head must visit each symbol at least once to process it fully. Similarly, the naive algorithm for multiplying two n \times n matrices, which computes each entry as a sum of n products via three nested loops, runs in cubic time, \Theta(n^3), highlighting how resource demands scale with problem size in basic arithmetic operations. A key result establishing the richness of these classes is the time hierarchy theorem, which demonstrates that more time allows for strictly more powerful computations. Specifically, for any time-constructible function f(n) \geq n satisfying appropriate regularity conditions, \mathrm{DTIME}(f(n)) \subsetneq \mathrm{DTIME}(f(n) \log f(n)), meaning there exist languages decidable in the latter time bound but not the former. This strict separation, proven using diagonalization over a universal Turing machine simulator, underscores the intuitive notion that additional computational steps enable solving harder problems. The proof relies on constructing a language that encodes the running times of machines within the lower bound while staying within the higher bound, ensuring the inclusion is proper. Time measurements in complexity theory typically employ the multi-tape model, where each step incurs unit cost regardless of the number of tapes or simultaneous head movements, as this provides tighter and more natural bounds compared to single-tape machines. A multi-tape machine running in time t(n) can be simulated by a single-tape machine in O(t(n)^2) time, introducing a quadratic overhead due to the need to manage multiple tapes on one, but the converse simulation incurs only a constant factor slowdown. This unit-cost convention per transition aligns closely with intuitive algorithmic steps, distinguishing it from space resources, which track tape usage as a complementary but independent measure.
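The cubic-time naive matrix multiplication mentioned above can be written directly as three nested loops; the sketch below uses plain Python lists (no external libraries) so that the \Theta(n^3) count of scalar multiplications is explicit.

```python
def naive_matmul(A, B):
    """Multiply two n x n matrices with three nested loops: Theta(n^3) scalar multiplications."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

n = 4
A = [[i + j for j in range(n)] for i in range(n)]
identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
assert naive_matmul(A, identity) == A  # multiplying by the identity returns A
```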

Space and Bit Resources

In computational complexity theory, space complexity quantifies the memory resources required by an algorithm, defined as the maximum number of cells visited on the work tapes of a Turing machine during its computation on inputs of length n. This measure focuses on storage limitations, distinct from time complexity, which counts computational steps. Seminal work on memory-bounded computation established hierarchies for such machines and introduced the framework for analyzing how increasing space allowances enable solving more problems. The class \mathsf{DSPACE}(f(n)) comprises all languages that can be decided by a deterministic Turing machine using at most O(f(n)) space on its work tapes, where f(n) is a function from natural numbers to natural numbers with f(n) \geq \log n. Space hierarchy theorems demonstrate that these classes form a strict hierarchy: if f and g are space-constructible functions (meaning a machine can mark off f(n) cells in O(f(n)) space) and f(n) \log f(n) = o(g(n)), then \mathsf{DSPACE}(f(n)) \subsetneq \mathsf{DSPACE}(g(n)). This separation underscores that more space allows for strictly greater computational power, with proofs relying on diagonalization over machines using lesser space. For nondeterministic space, the class \mathsf{NSPACE}(f(n)) includes languages accepted by nondeterministic Turing machines bounded by O(f(n)) space. A key result, Savitch's theorem, shows that nondeterminism provides at most a quadratic blowup in space: \mathsf{NSPACE}(f(n)) \subseteq \mathsf{DSPACE}(f(n)^2) for f(n) \geq \log n, achieved by systematically exploring the configuration graph of the nondeterministic machine. Bit complexity addresses the cost of basic arithmetic operations on multi-bit integers, measuring the number of elementary bit manipulations (such as bitwise logical operations or shifts) required. In models where numbers are represented in binary with n bits, naive multiplication takes O(n^2) bit operations, but advanced algorithms reduce this substantially. For instance, the Schönhage–Strassen algorithm multiplies two n-bit integers in O(n \log n \log \log n) bit operations using fast Fourier transforms over rings of integers modulo numbers of the form 2^k + 1. More recently, in 2019, David Harvey and Joris van der Hoeven improved this to O(n \log n) bit operations, achieving the long-conjectured bound that is widely believed to be asymptotically optimal. These results highlight how bit-level analysis reveals efficiencies in algebraic computations that influence overall algorithm design. A prominent example of logarithmic space usage is the undirected s-t connectivity problem, which determines whether there is a path between vertices s and t in an undirected graph; Reingold showed this is solvable in \mathsf{L} = \mathsf{DSPACE}(\log n) via a deterministic algorithm that transforms the graph into an expander through a sequence of zig-zag products and deterministically enumerates short paths in the resulting expander in logarithmic space. Such log-space algorithms often rely on reversible computations to avoid excessive memory for intermediate results. Trade-offs between time and space are evident in problems like sorting n elements, where any algorithm in general sequential models satisfies a product bound T(n) \cdot S(n) = \Omega(n^2 / \log n), implying, for example, that a linear-time algorithm must use nearly linear space. These trade-offs illustrate inherent resource constraints, guiding the choice of algorithms in memory-limited environments.
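To make the recursive midpoint idea behind Savitch's theorem concrete, here is a small Python sketch (function and variable names are illustrative): it checks reachability within a step budget by halving the budget recursively, so the recursion depth, and hence the bookkeeping held at any one time, grows only logarithmically in the path length, at the price of exponential running time.

```python
def reach(adj, u, v, steps):
    """Is there a path from u to v of length at most `steps`?
    Recursive doubling as in Savitch's theorem: try a midpoint w and solve
    both halves with roughly half the budget.  Recursion depth is O(log steps),
    and each frame stores only a few vertex names, mirroring the O(log^2 n)
    space bound; the running time of this brute-force sketch is exponential.
    """
    if steps == 0:
        return u == v
    if steps == 1:
        return u == v or v in adj.get(u, ())
    half = (steps + 1) // 2
    return any(reach(adj, u, w, half) and reach(adj, w, v, steps - half)
               for w in adj)  # try every vertex as the midpoint

graph = {0: [1], 1: [2], 2: [3], 3: []}
print(reach(graph, 0, 3, len(graph)))  # True: 0 -> 1 -> 2 -> 3
print(reach(graph, 3, 0, len(graph)))  # False
```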

Communication and Other Resources

In computational complexity, communication complexity quantifies the minimum number of bits that parties must exchange to compute a function of their private inputs, serving as a fundamental measure in distributed computing settings. Introduced by Andrew Yao in his seminal 1979 work on distributive computing, this framework models scenarios where inputs are partitioned among multiple processors that interact solely through messages, without shared memory. The deterministic communication complexity of a function f is the size of the smallest protocol tree where each leaf outputs f(x, y), with internal nodes representing message exchanges based on input bits. A canonical example is the equality function, where two parties, Alice with input x \in \{0,1\}^n and Bob with y \in \{0,1\}^n, determine whether x = y. In the deterministic two-party model, any protocol requires \Omega(n) bits of communication in the worst case, as established through fooling set arguments that highlight the need to distinguish exponentially many input pairs. This lower bound underscores the inefficiency of deterministic protocols for certain functions, motivating extensions to multi-party settings. Multi-party communication complexity generalizes the two-party case to k \geq 3 parties, each holding a private input, aiming to compute a function of all inputs collectively. Key models include the number-in-hand (NIH) variant, where each party sees only its own part of the input, and the number-on-forehead (NOF) model, introduced by Chandra, Furst, and Lipton, where each party sees all inputs except its own. In the NOF model, communication requirements can be exponentially higher than in two-party settings for functions like set disjointness, reflecting challenges in coordinating information across more participants. Randomized protocols incorporate shared or private randomness to reduce communication, trading correctness for bounded error probability. For the equality function, a public-coin protocol uses a random hash function from a universal family to map inputs to O(\log(1/\epsilon)) bits, allowing Alice to send the hash of her input to Bob, who checks equality with error at most \epsilon; this achieves O(1) communication for constant \epsilon. Such techniques extend to multi-party randomized models, where hashing lowers the bit exchange for approximate computations, though lower bounds like \Omega(\sqrt{n}) persist for disjointness even with randomness. Beyond bits exchanged, other resources like randomness play a role in complexity measures. In the BPP class, algorithms use polynomial-time probabilistic Turing machines with access to random bits, accepting with probability at least 2/3 on yes-instances and at most 1/3 on no-instances; the number of random bits consumed quantifies this resource, with derandomization efforts seeking deterministic simulations. In alternative models, such as circuits, depth measures the longest path from input to output, capturing parallelism constraints; low-depth circuits (e.g., the AC^0 class) have limited computational power, as proven by the non-computability of parity in constant depth. Communication complexity yields applications in deriving lower bounds for data stream algorithms, where processing sequential data with limited memory mimics multi-party protocols by partitioning the stream across rounds. For instance, \Omega(n) communication lower bounds for equality imply similar space lower bounds for streaming equality testing, as shown in reductions that embed communication protocols into stream models.
Additionally, it connects to privacy in secure multiparty computation (SMC), where protocols ensure output computation without revealing inputs; Beaver, Micali, and Rogaway studied the round and communication costs of secure protocols for general functions, linking protocol efficiency to communication complexity. These insights also extend to parallel models, where communication notions inform distributed lower bounds.
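The hashing-based equality protocol described earlier fits in a few lines of Python; here the shared randomness is a list of random n-bit strings, Alice's message is the corresponding inner-product bits, and every name (equality_protocol and so on) is illustrative rather than taken from a library.

```python
import random

def equality_protocol(x: int, y: int, n_bits: int, t: int = 20) -> bool:
    """Public-coin randomized protocol for EQUALITY (a sketch).
    Shared randomness: t random n-bit strings.  Alice sends the t parity bits
    of r & x; Bob compares them with his own.  Equal inputs always agree, and
    unequal inputs agree on all t bits with probability at most 2**-t, so the
    communication is t = O(log(1/eps)) bits, independent of n.
    """
    shared = [random.getrandbits(n_bits) for _ in range(t)]    # public coins
    alice_msg = [bin(r & x).count("1") % 2 for r in shared]    # t bits sent by Alice
    bob_bits = [bin(r & y).count("1") % 2 for r in shared]
    return alice_msg == bob_bits

random.seed(0)
n = 64
x = random.getrandbits(n)
print(equality_protocol(x, x, n))      # True: equal inputs are always accepted
print(equality_protocol(x, x ^ 1, n))  # almost surely False: inputs differ in one bit
```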

Asymptotic Analysis

Growth Rate Notations

In computational complexity, asymptotic notations describe the bounding behavior of functions representing resource usage, such as time or space, as a function of input size n. These notations abstract away constant factors and lower-order terms to focus on dominant growth rates, enabling comparisons between algorithms independent of implementation details. They originated in mathematical analysis but were popularized for computer science by Donald Knuth, who advocated their use to bound algorithmic performance precisely. The Big O notation provides an upper bound on the growth of a function. Formally, a function f(n) is O(g(n)) if there exist positive constants c and n_0 such that for all n \geq n_0, 0 \leq f(n) \leq c \cdot g(n). This captures the worst-case scenario where f(n) grows no faster than a constant multiple of g(n) for sufficiently large n. The Big Omega notation, conversely, establishes a lower bound. A function f(n) is \Omega(g(n)) if there exist positive constants c and n_0 such that for all n \geq n_0, f(n) \geq c \cdot g(n). This indicates that f(n) grows at least as fast as a constant multiple of g(n) asymptotically. The Big Theta notation combines both, defining f(n) = \Theta(g(n)) if f(n) = O(g(n)) and f(n) = \Omega(g(n)), thus providing a tight bound where f(n) and g(n) share the same growth rate up to constants. For stricter inequalities, little-o and little-omega notations are used. A function f(n) is o(g(n)) if for every constant c > 0, there exists n_0 such that for all n \geq n_0, 0 \leq f(n) < c \cdot g(n), meaning f(n) grows strictly slower than g(n). Similarly, f(n) = \omega(g(n)) if for every c > 0, there exists n_0 such that f(n) > c \cdot g(n) for n \geq n_0, indicating strictly faster growth. These can also be expressed using limits: f(n) = o(g(n)) if \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0, and f(n) = \omega(g(n)) if the limit is \infty. These notations satisfy several useful properties. Transitivity holds: if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)); similar relations apply to \Omega, \Theta, o, and \omega. For sums, if f(n) = \Theta(g(n)) and h(n) = \Theta(k(n)), then f(n) + h(n) = \Theta(g(n) + k(n)), which simplifies analysis of combined terms. For example, n^2 + n = \Theta(n^2), as the linear term is dominated by the quadratic term, satisfying both bounds with g(n) = n^2. The notations are also closed under multiplication by constants: if f(n) = O(g(n)), then c \cdot f(n) = O(g(n)) for any constant c > 0. These properties facilitate modular reasoning in complexity proofs.
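The definitions can be explored numerically; this small Python sketch checks candidate witnesses c and n_0 for the example n^2 + n = O(n^2) over a finite range (only an illustration, since the real claim quantifies over all n \geq n_0).

```python
def witnesses_big_o(f, g, c, n0, n_max=10_000):
    """Empirically check the Big-O witnesses: f(n) <= c * g(n) for n0 <= n < n_max.
    A finite check can only illustrate the definition, not prove it."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max))

f = lambda n: n**2 + n
g = lambda n: n**2
print(witnesses_big_o(f, g, c=2, n0=1))  # True: n^2 + n <= 2*n^2 for all n >= 1
print(witnesses_big_o(f, g, c=1, n0=1))  # False: the constant c = 1 is too small
```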

Common Complexity Functions

In computational complexity analysis, the logarithmic function \log n (usually base 2) captures the growth rate of algorithms that repeatedly reduce the problem size by a constant factor, such as binary search on a sorted array of n elements, which requires at most \lceil \log_2 (n+1) \rceil comparisons by halving the search interval repeatedly. This sublinear growth makes it highly efficient for search problems, far outperforming linear scans. Polynomial functions of the form n^k, where k is a fixed positive constant, describe the resource requirements for problems solvable in polynomial time, which are deemed tractable because their running times remain feasible even for moderately large inputs on practical machines. For instance, sorting n elements can be achieved in O(n \log n) time using algorithms like mergesort, fitting within this class and aligning with the class P of decision problems decidable by a deterministic Turing machine in polynomial time. Exponential functions like 2^n characterize the growth of brute-force algorithms that exhaustively explore all possibilities, such as generating all subsets of an n-element set, rendering them intractable for all but tiny inputs due to the rapid explosion in computation time. For example, solving the traveling salesman problem by naive enumeration of all tours takes \Theta(n!) time, which is super-exponential and grows faster than any fixed-base exponential. These functions form a strict hierarchy under asymptotic comparison: \log n = o(n), n = o(n \log n), and n \log n = o(2^n), meaning each subsequent function grows asymptotically faster than the previous, establishing clear separations in efficiency for large n. This hierarchy underscores why logarithmic and polynomial bounds are preferred for scalable algorithms, while exponential ones signal fundamental intractability. However, determining the exact growth rate of an arbitrary program—such as whether its running time is precisely n^k or 2^n—is undecidable. In fine-grained complexity, super-polynomial functions like n^{\log \log n} arise as hypothesized lower bounds for problems believed to require more than polynomial time. These functions grow slower than exponentials but faster than any n^k for fixed k, highlighting subtle distinctions within seemingly tractable regimes.
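For a quick sense of scale, the following Python sketch tabulates these growth rates at a few input sizes (purely illustrative).

```python
import math

functions = [
    ("log2 n", lambda n: math.log2(n)),
    ("n", lambda n: n),
    ("n log2 n", lambda n: n * math.log2(n)),
    ("n**2", lambda n: n**2),
    ("2**n", lambda n: 2**n),
]

print(f"{'n':>6}" + "".join(f"{name:>14}" for name, _ in functions))
for n in (10, 20, 40, 80):
    print(f"{n:>6}" + "".join(f"{f(n):>14.3g}" for _, f in functions))
```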

Models of Computation

Deterministic Models

Deterministic Turing machines provide the canonical model for sequential computation in complexity theory, capturing the intuitive notion of an algorithm as a step-by-step procedure operating on encoded input. Formally, a deterministic Turing machine consists of a finite control with states, an infinite tape divided into cells, a read-write head that moves left or right, and a transition function that, given the current state and tape symbol, specifies the next state, the symbol to write, and the head movement. This model was introduced by Alan Turing in 1936 to define computable functions precisely. Single-tape Turing machines use one tape for both input and working storage, while multi-tape variants employ separate tapes for input and auxiliary computation, allowing the head on each tape to move independently according to the transition function. Although multi-tape machines appear more expressive, they are equivalent to single-tape machines: any k-tape machine running in time t can be simulated by a single-tape machine in time O(t^2), by encoding the contents of all tapes onto one tape using special markers to delimit sections and simulating cross-tape movements through repeated sweeps. Conversely, single-tape machines are special cases of multi-tape ones, so no overhead arises in that direction. This quadratic slowdown establishes that time complexity classes like DTIME(t(n)) are robust across these variants up to polynomial factors. The random access machine (RAM) model offers a register-based alternative to Turing machines, featuring an unbounded number of registers holding integers, along with instructions for loading, storing, arithmetic operations, and conditional jumps, enabling direct access to any register in constant time. Introduced to study computability, the RAM model precisely captures the partial recursive functions, matching the power of Turing machines. In complexity analysis, word RAMs with O(\log n)-bit words and unit-cost operations simulate Turing machines, and are simulated by them, with polynomial overhead, ensuring that polynomial-time RAM computations correspond to the class P. Boolean circuit families model non-uniform computation via directed acyclic graphs where nodes are input variables, constants, or gates computing Boolean functions like AND, OR, and NOT, with size measured by the number of gates for inputs of length n. Circuit complexity originated with Claude Shannon's analysis of switching circuits, establishing that most Boolean functions require circuits of exponential size via a counting argument over the 2^{2^n} possible functions and the far smaller number, roughly 2^{O(s \log s)}, of circuits of size s. The class P/poly comprises languages recognized by polynomial-size circuit families, allowing "advice" strings of length poly(n) that depend only on input length. Uniformity conditions, such as logarithmic-space uniformity, link P/poly to P by requiring circuits to be generatable by polynomial-time Turing machines. Turing machines and circuits interrelate closely: a t-time Turing machine induces a circuit family of size O(t^2) and polynomial depth by unrolling the computation into a static tableau of configurations, while evaluating a size-s circuit requires only time linear in s on a random access machine via a traversal in topological order. These simulations preserve polynomial-time classes, with P corresponding to logarithmic-space uniform polynomial-size circuit families.
A fundamental limitation of deterministic models arises from the undecidability of the halting problem: no algorithm can determine, for arbitrary machines and inputs, whether the machine halts, precluding any universal procedure that would establish lower bounds on time or space simply by checking whether a given resource budget suffices for termination. This undecidability underscores that complexity separations must rely on diagonalization or other relativizing techniques rather than exhaustive simulation.
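Returning to the basic model, here is a minimal Python sketch of a deterministic single-tape Turing machine simulator; the dictionary encoding of transitions and the example machine, which accepts binary strings containing an even number of 1s, are our own illustrative choices.

```python
def run_tm(transitions, accept_states, tape, state, blank="_", max_steps=10_000):
    """Simulate a deterministic single-tape Turing machine (a toy sketch).
    `transitions` maps (state, symbol) -> (new_state, written_symbol, move in {-1, +1}).
    Returns True iff the machine halts in an accepting state; each loop
    iteration is one machine step, the unit charged by DTIME.
    """
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:   # no applicable transition: halt
            return state in accept_states
        state, written, move = transitions[(state, symbol)]
        cells[head] = written
        head += move
    raise RuntimeError("step budget exceeded")

# Example machine: accept binary strings with an even number of 1s (linear time).
delta = {
    ("q_even", "0"): ("q_even", "0", +1),
    ("q_even", "1"): ("q_odd", "1", +1),
    ("q_odd", "0"): ("q_odd", "0", +1),
    ("q_odd", "1"): ("q_even", "1", +1),
}
print(run_tm(delta, {"q_even"}, "1011", state="q_even"))  # False (three 1s)
print(run_tm(delta, {"q_even"}, "1001", state="q_even"))  # True  (two 1s)
```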

Nondeterministic and Alternating Models

A nondeterministic Turing machine (NTM) is a computational model that extends the deterministic Turing machine by permitting multiple possible successor configurations from a given state and tape contents, effectively allowing the machine to "branch" or make choices at certain points during computation. This nondeterminism models the ability to explore multiple possibilities in parallel, though in practice it is simulated sequentially. A string is accepted by an NTM if there exists at least one finite computation path that reaches an accepting state, and rejected otherwise; the machine accepts the same class of recursively enumerable languages as its deterministic counterpart. The resource-bounded complexity classes defined using NTMs capture the power of this branching. The class \mathsf{NTIME}(f(n)) consists of all languages accepted by an NTM that halts within O(f(n)) steps on inputs of length n, where f(n) is a time-constructible function. Similarly, \mathsf{NSPACE}(f(n)) includes languages accepted by an NTM using O(f(n)) space. A key result in nondeterministic space complexity is the Immerman–Szelepcsényi theorem, which proves that nondeterministic space classes are closed under complementation: specifically, \mathsf{NL} = \mathsf{coNL}, where \mathsf{NL} = \mathsf{NSPACE}(\log n). This theorem provides a nondeterministic log-space algorithm to decide the complement of any language in NL, resolving a long-standing question about the symmetry of nondeterministic space. Alternating Turing machines (ATMs) further generalize NTMs by incorporating both existential and universal quantification in their state transitions. In an ATM, states are partitioned into existential states (like NTM states, accepting if some successor accepts) and universal states (accepting only if all successors accept), with quantification alternating along paths. Introduced to model logical quantifiers in computation, ATMs accept precisely the recursively enumerable languages, but their resource classes exhibit surprising collapses: for instance, alternating polynomial space \mathsf{APSPACE} equals deterministic exponential time \mathsf{EXPTIME}. This equivalence highlights the exponential power gained from alternation in space-bounded settings. To investigate limitations of proof techniques in complexity theory, nondeterministic models are often relativized using oracle Turing machines, where the machine can query membership in an external set A in one step. NTMs with oracles define relativized classes like \mathsf{NP}^A, and the Baker–Gill–Solovay theorem constructs oracles A and B such that \mathsf{P}^A = \mathsf{NP}^A but \mathsf{P}^B \neq \mathsf{NP}^B, demonstrating that relativizing proofs cannot resolve the \mathsf{P} vs. \mathsf{NP} question. Such relativization barriers underscore the challenges in separating nondeterministic from deterministic polynomial-time computation. Nondeterministic machines can be simulated deterministically, but the best known general simulations incur an exponential increase in time.
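The exponential cost of that deterministic simulation can be illustrated with a guess-and-verify sketch in Python, using subset sum as the example problem (our choice of example): the verifier runs in polynomial time on one fixed choice sequence, while trying all 2^n choice sequences is exponential.

```python
from itertools import product

def verify(numbers, target, choices):
    """Deterministic polynomial-time check: do the chosen numbers sum to target?"""
    return sum(x for x, pick in zip(numbers, choices) if pick) == target

def nondeterministic_accept(numbers, target):
    """Simulate the nondeterministic guesses deterministically by enumerating
    every sequence of binary choices: a 2^n blowup in time, as noted in the text."""
    return any(verify(numbers, target, choices)
               for choices in product((0, 1), repeat=len(numbers)))

print(nondeterministic_accept([3, 7, 12, 5], 17))  # True: 12 + 5 = 17
print(nondeterministic_accept([3, 7, 12, 5], 2))   # False
```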

Parallel, Distributed, and Quantum Models

Parallel models of computation extend the sequential framework by incorporating multiple processors that operate concurrently to solve problems more efficiently. The Parallel Random Access Machine (PRAM) is a foundational model consisting of p synchronized processors sharing a common memory, where each processor executes instructions in lockstep and can access any memory location in constant time, assuming no conflicts or resolving them via variants like EREW (exclusive read, exclusive write), CREW (concurrent read, exclusive write), or CRCW (concurrent read, concurrent write). This model abstracts away communication details to focus on algorithmic parallelism, enabling analysis of speedup in terms of the number of processors and parallel steps. Seminal algorithms in the PRAM model, such as parallel prefix sums, demonstrate how problems solvable in linear sequential time can achieve logarithmic parallel time with sufficient processors. The complexity class NC captures problems amenable to efficient parallelization in models like PRAM or uniform circuit families, defined as those solvable by a family of circuits of polynomial size and polylogarithmic depth O(\log^k n) for some k, with bounded fan-in gates. NC represents "nicely parallelizable" computations, and it is known that NC \subseteq P, meaning every problem in NC can be solved sequentially in polynomial time, though the converse (P \subseteq NC) remains open, highlighting the potential but unproven power of parallelism. Examples include integer multiplication and matrix multiplication, both in NC (specifically NC^1), underscoring the class's role in identifying inherently parallel tasks. Distributed models address computation across networked processors without shared memory, emphasizing message-passing in graph-structured topologies where nodes represent processors and edges denote communication links. In the LOCAL model, nodes can exchange messages of unbounded size with neighbors in each synchronous round, allowing algorithms to focus on locality—the radius of information propagation needed for decisions. The CONGEST model refines this by restricting messages to O(\log n) bits per edge per round, better reflecting bandwidth constraints in real networks and leading to tighter bounds for problems like minimum spanning tree construction, which requires \tilde{O}(\sqrt{n} + D) rounds in CONGEST compared to faster local approximations in LOCAL, where D is the graph diameter. These models link to communication complexity by quantifying the total bits exchanged for coordination, though distributed algorithms prioritize per-round limits over global totals. Quantum models leverage superposition and entanglement for computation beyond classical limits, with the quantum Turing machine (QTM) extending the classical Turing machine by operating on qubits—quantum bits that can exist in superpositions of states—via unitary transformations on the tape and state register, preserving probability norms. The QTM formalizes universal quantum computation and is equivalent in power to the quantum circuit model. The class BQP consists of decision problems solvable by a QTM (or equivalent quantum polynomial-time machine) in polynomial time with bounded error probability (at most 1/3). A landmark algorithm is Shor's, which factors integers N in O((\log N)^3) quantum time by using the quantum Fourier transform to find the period of modular exponentiation, exponentially faster than the best known classical subexponential algorithms.
Recent advancements include experimental demonstrations of quantum advantage, such as Google's 2019 Sycamore processor—a 53-qubit superconducting quantum computer—which sampled random quantum circuits in about 200 seconds, a task then estimated to require 10,000 years on the world's fastest classical supercomputer, marking an initial milestone despite ongoing debates about classical simulability. Subsequent announcements include Google's 2025 claim that its Willow processor ran a physics simulation roughly 13,000 times faster than the world's fastest classical supercomputer, presented as a step toward verifiable quantum advantage.
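Returning to the PRAM model, the parallel prefix sums mentioned above illustrate logarithmic parallel time; the Python sketch below runs the O(\log n) Hillis-Steele rounds sequentially, but within each round the updates are independent and could be executed simultaneously on a PRAM (a sketch of the schedule, not a real parallel implementation).

```python
def prefix_sums(values):
    """Hillis-Steele style inclusive prefix sums in ceil(log2 n) rounds.
    On a PRAM each round's updates run in parallel, giving O(log n) parallel
    time with n processors; here the rounds are executed one after another."""
    result = list(values)
    n = len(result)
    offset = 1
    while offset < n:                  # O(log n) synchronous rounds
        previous = list(result)        # all processors read, then all write
        for i in range(offset, n):     # these updates are mutually independent
            result[i] = previous[i] + previous[i - offset]
        offset *= 2
    return result

print(prefix_sums([3, 1, 4, 1, 5, 9, 2, 6]))  # [3, 4, 8, 9, 14, 23, 25, 31]
```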

Complexity Classes

Polynomial-Time Classes

The class P encompasses all decision problems that can be solved by a deterministic Turing machine in polynomial time, formally defined as \mathrm{P} = \bigcup_{k \geq 1} \mathrm{DTIME}(n^k). This class captures computationally tractable problems, where the running time grows as a polynomial function of the input size n, enabling efficient algorithms for tasks like sorting or shortest-path computation in graphs. Seminal work by Edmonds emphasized the importance of polynomial-time solvability for practical algorithms in combinatorial optimization. In contrast, NP includes decision problems where a proposed solution can be verified in polynomial time by a deterministic Turing machine, equivalently \mathrm{NP} = \bigcup_{k \geq 1} \mathrm{NTIME}(n^k), relying on nondeterministic Turing machines for acceptance. This verification property underpins NP's role in modeling optimization and search problems believed to be intractable, though unproven. Classic examples include the Boolean satisfiability problem (SAT), where checking a satisfying assignment for a formula takes polynomial time, and the traveling salesman problem (TSP), where verifying a tour's length against a threshold is efficient. The class coNP comprises the complements of languages in NP, meaning problems where "no" instances have short verifiable proofs, but finding such proofs for "yes" instances may be hard. Problems in NP \cap coNP, such as primality testing (now known to be in P), exhibit both short proofs for acceptance and rejection. Variants extend to promise problems, where the input is promised to belong to one of two disjoint sets (e.g., yes or no instances), allowing relaxed definitions for classes like MA and AM. The class MA (Merlin-Arthur) involves a probabilistic verifier (Arthur) that accepts valid proofs from an unbounded prover (Merlin) with high probability, using one message from Merlin followed by Arthur's randomized verification, all in polynomial time. Similarly, AM (Arthur-Merlin) features public-coin interaction where Arthur sends random bits first, and Merlin responds, capturing interactive proofs with constant rounds. These classes generalize NP (MA contains NP) and relate to probabilistic proof systems for promise problems. The question of whether \mathrm{P} = \mathrm{NP} remains one of the most profound open problems in theoretical computer science, designated as a Millennium Prize Problem by the Clay Mathematics Institute with a $1,000,000 award for a solution. A proof that \mathrm{P} = \mathrm{NP} would imply efficient algorithms for all NP problems, revolutionizing fields like optimization and artificial intelligence, but it would also undermine modern cryptography, as one-way functions—essential for secure systems like the Diffie-Hellman key exchange—could not exist. Conversely, separating the classes would confirm inherent computational limits, supporting the widespread belief in the intractability of NP-complete problems.
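The verification property that defines NP can be made concrete with a small polynomial-time certificate checker for CNF formulas; the DIMACS-style signed-integer encoding of literals is a common convention, and the function name is our own.

```python
def verify_cnf(clauses, assignment):
    """Polynomial-time verifier for SAT certificates.
    `clauses` is a CNF formula: a list of clauses, each a list of nonzero ints,
    where literal k means variable k and -k its negation (DIMACS style).
    `assignment` maps variable -> bool.  Runs in time linear in the formula size.
    """
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]
print(verify_cnf(formula, {1: True, 2: True, 3: False}))   # True: a satisfying assignment
print(verify_cnf(formula, {1: True, 2: False, 3: True}))   # False: the last clause fails
```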

Space-Bounded Classes

Space-bounded complexity classes focus on the amount of memory, or space, used by Turing machines to decide languages, providing a measure of computational resources distinct from time complexity. These classes are defined using the notation \mathsf{DSPACE}(f(n)) for deterministic machines whose work-tape usage is bounded by a function f(n), and \mathsf{NSPACE}(f(n)) for the nondeterministic variant, where the input tape is read-only and separate from the work tapes. Seminal work established that bounding space yields hierarchies of problems based on memory limits, often allowing superpolynomial time as long as space remains controlled. The class \mathsf{L}, also denoted \mathsf{LOGSPACE} or \mathsf{DSPACE}(\log n), consists of all languages decidable by a deterministic Turing machine using O(\log n) space on its work tapes, where n is the length of the input. This restriction models computations with minimal memory, essentially those that can be carried out by maintaining a constant number of counters or pointers into the input, such as simulating finite automata. \mathsf{L} forms the base of the logarithmic space hierarchy and includes problems feasible with pointer-based navigation in data structures of size polynomial in n. The nondeterministic counterpart, \mathsf{NL} or \mathsf{NSPACE}(\log n), extends \mathsf{L} by allowing nondeterministic branching in the computation, still bounded by O(\log n) space. A representative problem in \mathsf{NL} is the directed reachability problem (STCON), which determines whether there exists a path from a source vertex s to a target vertex t in a directed graph with n vertices. STCON is complete for \mathsf{NL} under log-space reductions, meaning every problem in \mathsf{NL} can be reduced to it via a log-space computable mapping, highlighting nondeterminism's power for path-system verifications. Polynomial space classes culminate in \mathsf{PSPACE} = \mathsf{DSPACE}(n^{O(1)}), the set of languages decidable deterministically using space polynomial in n. This class encompasses problems requiring extensive memory for exhaustive searches, such as evaluating game trees in generalized chess variants. A canonical \mathsf{PSPACE}-complete problem is the truth evaluation of quantified Boolean formulas (QBF or TQBF), where a formula like \forall x_1 \exists x_2 \dots \phi(x_1, x_2, \dots) (with \phi a 3-CNF) is true if it is satisfied under all specified quantifier alternations. TQBF's \mathsf{PSPACE}-completeness, established via polynomial-time reductions from arbitrary \mathsf{PSPACE} languages, underscores the class's breadth for alternating quantification problems. The space hierarchy theorem asserts that more space provides strictly increasing computational power: for space-constructible functions f and g with f(n) = o(g(n)) and f(n) \geq \log n, there exists a language in \mathsf{DSPACE}(g(n)) not in \mathsf{DSPACE}(f(n)). This diagonalization technique, using self-referential machines to simulate and outdo all machines of lower space bound, yields separations such as \mathsf{L} \subsetneq \mathsf{PSPACE}. Known inclusions form a chain \mathsf{L} \subseteq \mathsf{NL} \subseteq \mathsf{P} \subseteq \mathsf{PSPACE}, where \mathsf{P} is the class of polynomial-time decidable languages; the first inclusion holds because deterministic machines are special cases of nondeterministic ones, the second because reachability in the polynomial-size configuration graph of an \mathsf{NL} machine can be decided in polynomial time, and \mathsf{P} \subseteq \mathsf{PSPACE} holds because a polynomial-time machine can visit at most polynomially many tape cells. For context, \mathsf{PSPACE} \subseteq \mathsf{EXPTIME}, as space-bounded machines run in at most exponential time.
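For TQBF, a direct recursive evaluator makes the "polynomial space, exponential time" character visible; the sketch below (illustrative names, DIMACS-style literals) recurses once per variable, so the memory in use at any moment stays small even though the full search tree is exponential.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Evaluate Q1 x1 Q2 x2 ... phi with phi in CNF (DIMACS-style literals).
    `prefix` is a list of ('A', var) or ('E', var) pairs.  Recursion depth equals
    the number of variables and each frame stores O(1) data, so the space used
    is polynomial while the time is exponential: the textbook reason TQBF is in PSPACE.
    """
    assignment = assignment or {}
    if not prefix:
        return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
    quantifier, var = prefix[0]
    branches = (eval_qbf(prefix[1:], clauses, {**assignment, var: value})
                for value in (False, True))
    return all(branches) if quantifier == "A" else any(branches)

# forall x1 exists x2 : (x1 or x2) and (not x1 or not x2)  -- true (choose x2 = not x1)
print(eval_qbf([("A", 1), ("E", 2)], [[1, 2], [-1, -2]]))  # True
# exists x1 forall x2 : (x1 or x2) and (not x1 or not x2)  -- false
print(eval_qbf([("E", 1), ("A", 2)], [[1, 2], [-1, -2]]))  # False
```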

Nondeterministic and Beyond Classes

The class EXPTIME consists of all decision problems solvable by a deterministic Turing machine in exponential time, formally defined as EXPTIME = ∪_{c≥1} DTIME(2^{n^c}). This class captures problems requiring time exponential in the input size, such as determining the winner in generalized two-player games like chess or checkers played on n × n boards. A key result equates EXPTIME with the class APSPACE, which comprises languages accepted by alternating Turing machines using polynomial space; this equivalence arises from the power of alternation in simulating exponential-time computations within bounded space. NEXPTIME extends nondeterminism to exponential time, defined as NEXPTIME = ∪_{c≥1} NTIME(2^{n^c}), encompassing problems where a nondeterministic machine can verify exponentially long certificates in exponential time. The polynomial hierarchy (PH) builds on nondeterministic polynomial-time classes through levels Σ_k^p and Π_k^p, where Σ_0^p = Π_0^p = P, and for k ≥ 0, Σ_{k+1}^p consists of languages decidable in NP with an oracle for Σ_k^p, with PH defined as their union. These levels, such as Σ_2^p for ∃∀ quantified formulas, generalize NP (which is Σ_1^p) and capture increasingly complex alternations of existential and universal choices within polynomial time. The elementary complexity class ELEMENTARY includes all problems solvable in time bounded by a fixed-height tower of exponentials, formally ELEMENTARY = ∪_{k≥0} k-EXPTIME, where 0-EXPTIME = P and (k+1)-EXPTIME allows running times exponential in a k-EXPTIME bound, yielding running times bounded by towers of 2s of fixed height. This class corresponds to the elementary functions of the Grzegorczyk hierarchy, a proper subclass of the primitive recursive functions. Oracle separations demonstrate relativized inequalities among these classes; notably, there exists an oracle A such that PH^A ≠ PSPACE^A, constructed via lower bounds for constant-depth circuits computing parity, which place relativized PH strictly inside relativized PSPACE. Such relativized results highlight barriers to unconditional proofs, as they hold relative to specific oracles but do not settle whether the unrelativized inclusion PH ⊆ PSPACE is strict.

Lower Bounds and Hardness

Techniques for Lower Bounds

Techniques for establishing lower bounds in computational complexity are essential for understanding the inherent limitations of computational models. These methods aim to prove that certain problems require substantial resources, such as time, space, or circuit size, to solve. Direct approaches, like counting arguments, provide concrete bounds by comparing the number of possible computations to the number of functions or problems. More advanced techniques, such as diagonalization, exploit self-referential constructions to separate complexity classes. Inductive methods analyze restricted models like branching programs to derive lower bounds. However, significant barriers, including relativization and natural proofs, explain why progress on major open questions, like P versus NP, remains elusive. Direct methods, particularly counting arguments, have been pivotal in establishing fundamental lower bounds for circuit complexity. In 1949, Claude Shannon demonstrated that almost all Boolean functions on n variables require circuit size \Omega(2^n / n) over the basis of AND, OR, and NOT gates with fan-in 2. This result arises from a counting argument: the total number of distinct Boolean functions is 2^{2^n}, while the number of circuits of size s is at most (s+2)^{2s} (accounting for choices of gate types, inputs, and topology), implying that for s = o(2^n / n), most functions cannot be represented. For explicit functions, weaker but still significant bounds exist; for instance, the parity function requires size 2^{\Omega(n^{1/(d-1)})} in constant-depth circuits (AC^0) of depth d, as proven using the switching lemma. This highlights how restrictions on circuit depth amplify the impact of counting techniques. Diagonalization, inspired by Cantor's work on uncountability, constructs languages or functions that differ from every machine in a lower complexity class, thereby separating classes. The time hierarchy theorem, established by Hartmanis and Stearns in 1965, states that if f(n) and g(n) are time-constructible functions with f(n) \log f(n) = o(g(n)), then \mathsf{DTIME}(f(n)) \subsetneq \mathsf{DTIME}(g(n)). The proof uses diagonalization: a machine M running in time g(n) simulates machines running in time f(n) on inputs of length n, flipping the output on its own description, ensuring it decides a language not solvable in \mathsf{DTIME}(f(n)). The same style of argument yields other separations, such as \mathsf{P} \subsetneq \mathsf{EXPTIME} from the time hierarchy and \mathsf{L} \subsetneq \mathsf{PSPACE} from the space hierarchy, because a resource-bounded simulator cannot keep up with arbitrary computations at the higher bound. Diagonalization thus provides a robust tool for separations based on resource hierarchies. Inductive proofs offer a structured way to derive lower bounds for space-bounded computations by analyzing models like branching programs, which capture logarithmic-space machines. Branching programs consist of a directed acyclic graph with nodes labeled by input variables, where paths from source to sink determine the output. In 1966, Nečiporuk proved that certain explicit functions, such as element distinctness, require branching program size \Omega(n^2 / \log^2 n). The proof proceeds by partitioning the variables into k = \Theta(n / \log n) disjoint blocks and counting the number of distinct subfunctions induced on each block; each block must account for many distinct subfunctions, leading to a superlinear total size lower bound. Since the space used by a machine corresponds to the logarithm of its branching program size, this yields \Omega(\log n)-type space lower bounds for such problems in deterministic models, underscoring the limitations of space-efficient computation.
Despite these successes, theoretical barriers limit the applicability of existing proof techniques to unresolved questions. The relativization barrier, introduced by Baker, Gill, and Solovay in 1975, shows that any proof technique that relativizes—meaning it holds in the presence of an oracle—cannot separate \mathsf{P} from \mathsf{NP}, as there exist oracles A where \mathsf{P}^A = \mathsf{NP}^A and oracles B where \mathsf{P}^B \neq \mathsf{NP}^B. Their construction uses diagonalization over oracle machines to build such oracles, demonstrating that techniques like time hierarchies, which relativize, are insufficient alone. Complementing this, the natural proofs barrier of Razborov and Rudich (1997) argues that "natural" lower-bound proofs—those whose hardness criterion applies to most functions and is efficiently checkable—would yield efficient distinguishers breaking pseudorandom function generators, contradicting widely believed cryptographic assumptions. These barriers explain why non-relativizing, non-natural proof methods are needed for breakthroughs, guiding research toward algebraic or geometric approaches.
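Returning to the counting argument above, it can be checked concretely for tiny parameters; the Python sketch below plugs numbers into the (s+2)^{2s} circuit-count bound quoted earlier (function names are ours) to show that circuits of size 3 cannot cover all Boolean functions of 4 variables.

```python
def count_functions(n: int) -> int:
    """Number of distinct Boolean functions on n variables: 2^(2^n)."""
    return 2 ** (2 ** n)

def count_circuits_upper(s: int) -> int:
    """The (s+2)^(2s) upper bound on the number of circuits of size s quoted in the text."""
    return (s + 2) ** (2 * s)

n, s = 4, 3
print(count_circuits_upper(s), count_functions(n))   # 15625 vs 65536
print(count_circuits_upper(s) < count_functions(n))  # True
# Fewer than 2^16 circuits of size 3 exist, so some Boolean function of 4 variables
# needs size larger than 3; carried out asymptotically, the same comparison gives
# Shannon's Omega(2^n / n) bound for almost all functions.
```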

Reductions and NP-Completeness

In computational complexity, reductions provide a formal framework for comparing the relative difficulty of decision problems by transforming instances of one problem into instances of another while preserving the answer. A polynomial-time many-one reduction, also known as a Karp reduction, maps an instance of problem A to an instance of problem B such that the answer to A is yes if and only if the answer to B is yes, and the mapping is computable in polynomial time. These reductions are particularly useful for establishing NP-completeness because they preserve membership in NP. In contrast, a polynomial-time Turing reduction, or Cook reduction, allows an algorithm for A to query an oracle for B on multiple subproblems, still within polynomial time, offering greater flexibility but complicating proofs of completeness for some classes. Log-space reductions, a stricter variant, require the transformation to use only logarithmic space on a Turing machine, ensuring the reduction itself is highly efficient and preserving finer-grained complexity distinctions, such as within P. The concept of NP-completeness identifies problems in NP that are as hard as any other in the class, meaning every problem in NP reduces to them via polynomial-time many-one reductions. The foundational result is the Cook-Levin theorem, which proves that the Boolean satisfiability problem (SAT)—determining whether there exists a truth assignment satisfying a given Boolean formula in conjunctive normal form—is NP-complete. The proof constructs a polynomial-time reduction from any NP problem, as verified by a nondeterministic Turing machine, to an equivalent SAT instance by encoding the machine's computation as clauses that capture accepting paths. Building on this, Richard Karp extended the result to numerous combinatorial problems, demonstrating their NP-completeness through chains of many-one reductions from SAT. Prominent examples of NP-complete problems include 3-SAT, the restriction of SAT to formulas where each clause has exactly three literals, which Karp showed is NP-complete via a straightforward reduction from general SAT by splitting longer clauses using auxiliary variables. The clique problem—deciding whether a graph contains a clique of size at least k—is also NP-complete, reduced from 3-SAT by constructing a graph whose vertices represent literal occurrences in clauses, with edges joining compatible literals from different clauses so that a clique of size k corresponds to a satisfying assignment. Similarly, the vertex cover problem—determining whether a graph has a vertex set of size at most k covering all edges—is NP-complete, proven by a reduction from clique via the complement graph, where a clique of size k in the original graph corresponds to a vertex cover of size n - k in the complement. The implications of NP-completeness are profound: if any NP-complete problem is solvable in polynomial time, then P = NP, collapsing the polynomial hierarchy and resolving numerous open questions in algorithm design and cryptography. Ladner's theorem further refines the structure within NP, showing that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete, and indeed a dense hierarchy of such intermediate degrees under many-one reductions. This implies that NP-completeness does not capture all of the structure in NP, allowing for problems of intermediate difficulty. Beyond NP, reductions extend to higher classes like PSPACE. The quantified Boolean formulas (QBF) problem, evaluating formulas with alternating existential and universal quantifiers over Boolean variables, is PSPACE-complete, as shown by Stockmeyer and Meyer through reductions from space-bounded Turing machine computations, capturing the full power of polynomial space. Recent developments in fine-grained complexity employ more precise reductions to probe exponential-time barriers, such as those assuming the Strong Exponential Time Hypothesis (SETH), which posits that for every ε > 0 there is a clause width k such that k-SAT cannot be solved in time 2^{(1-ε)n}.
Impagliazzo and Paturi formulated these exponential-time hypotheses, and subsequent reductions link k-SAT to graph and sequence problems, yielding tight conditional lower bounds for polynomial-time-solvable problems under SETH without relying on coarser NP-hardness.
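Stepping back to the clause-splitting reduction from SAT to 3-SAT mentioned earlier, it is short enough to sketch directly; the Python function below (names are ours) uses DIMACS-style signed-integer literals and fresh auxiliary variables.

```python
def sat_to_3sat(clauses, num_vars):
    """Karp-style many-one reduction from CNF-SAT to 3-SAT by clause splitting.
    A long clause (l1 or l2 or ... or lk) becomes
    (l1 or l2 or y1), (-y1 or l3 or y2), ..., (-y_{k-3} or l_{k-1} or lk)
    with fresh auxiliary variables y_i; clauses shorter than 3 are padded by
    repeating a literal.  The map runs in polynomial time and preserves
    satisfiability, as described in the text.
    """
    out, next_var = [], num_vars + 1
    for clause in clauses:
        clause = list(clause)
        while len(clause) > 3:              # split off two literals at a time
            aux = next_var
            next_var += 1
            out.append(clause[:2] + [aux])  # (l1 or l2 or y)
            clause = [-aux] + clause[2:]    # (-y or l3 or ... or lk)
        while len(clause) < 3:              # pad short clauses
            clause.append(clause[0])
        out.append(clause)
    return out, next_var - 1

new_clauses, total_vars = sat_to_3sat([[1, 2, 3, 4, 5], [-1, 2]], num_vars=5)
print(new_clauses)  # [[1, 2, 6], [-6, 3, 7], [-7, 4, 5], [-1, 2, -1]]
print(total_vars)   # 7
```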

Applications and Extensions

Algorithm Selection and Design

In practice, algorithm selection and design are profoundly influenced by the classification of problems into complexity classes, enabling practitioners to choose methods that balance efficiency with feasibility for given resource constraints. For problems in P, polynomial-time algorithms are preferred, as they scale predictably with input size, whereas for NP-hard problems, exact solutions may be intractable, prompting the use of heuristics or approximations that trade optimality for efficiency. This guidance ensures that designs prioritize worst-case guarantees where reliability is critical, such as in safety-critical systems, while leveraging average-case performance for practical deployment. Empirical analysis plays a key role in algorithm design by evaluating performance beyond theoretical worst-case bounds, often revealing that average-case complexity is more representative of real-world inputs. Unlike worst-case analysis, which assumes adversarial inputs to derive upper bounds like O(n^2) for quicksort, average-case studies assume probabilistic input distributions to compute expected running times, such as O(n log n) for quicksort under uniform randomness. This approach is particularly useful for refining designs, as empirical measurements on benchmark datasets can validate or adjust theoretical predictions, helping select algorithms that perform well on typical instances despite occasional spikes. For instance, tools like runtime profilers quantify these behaviors across varied inputs, informing decisions on whether to favor algorithms with strong average-case profiles over those optimized solely for worst-case scenarios. For NP-hard problems, where exact polynomial-time solutions are unlikely, heuristics provide practical designs by yielding good approximations quickly, often guided by problem structure rather than exhaustive search. Seminal work on heuristics emphasizes their role in navigating vast solution spaces, such as local search methods that iteratively improve a solution until no improving move remains, achieving near-optimal results on many instances without theoretical optimality guarantees. These are essential in applications like scheduling or routing, where computational budgets limit exact methods, and empirical validation on real data ensures reliability. Design paradigms in complexity-aware algorithm development include dynamic programming for problems admitting pseudo-polynomial solutions, where the runtime is polynomial in the input's numeric values but exponential in their bit length. For the 0-1 knapsack problem, which is NP-complete but only weakly so, dynamic programming constructs an optimal solution table of size O(nW), where n is the number of items and W the capacity, enabling exact solutions for moderate W despite the problem's hardness. This paradigm exploits overlapping subproblems and optimal substructure, transforming intractable instances into tractable ones when the numeric values are small. Greedy algorithms form another paradigm for approximation in complexity-constrained design, selecting locally optimal choices to build global solutions efficiently. In the set cover problem, Johnson's greedy algorithm repeatedly chooses the set covering the most uncovered elements, achieving an approximation ratio of O(log n) relative to the optimal cover, which is essentially tight unless P = NP. This approach is widely adopted for its simplicity and polynomial-time execution, providing bounded error for NP-hard problems while avoiding the exponential cost of exact methods. Amortized analysis serves as a tool for designing and evaluating algorithms with variable operation costs, averaging expenses over a sequence of operations to reveal overall efficiency.
Introduced formally in Tarjan's framework, it applies methods like the potential and accounting methods to bound average per-operation time, such as O(1) for dynamic array insertions despite occasional resizes. This technique is crucial for data structure design, ensuring that sporadic high costs do not undermine practical performance. Data structures are designed to optimize space-time tradeoffs, tailoring choices to problem constraints. For predecessor search, structures like van Emde Boas trees achieve O(log log n) query time with O(n) space (for polynomial-size key universes), while hash tables offer expected O(1) lookup time at the cost of O(n) space and potential worst-case degradation. These tradeoffs are analyzed to minimize total resources, such as selecting balanced binary search trees for O(log n) time and O(n) space over sorted data, balancing query speed against memory limits. A representative case study is Dijkstra's algorithm for single-source shortest paths in graphs with non-negative weights, a problem in P that exemplifies complexity-guided design. Originally formulated with an array-based implementation yielding O(V^2) time for V vertices, modern variants with binary heaps achieve O((V + E) log V) time, where E is the number of edges, by efficiently extracting minima and updating distances. This design leverages the graph's structure for polynomial scalability, making it suitable for network routing applications.
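Among these paradigms, the pseudo-polynomial knapsack table is the easiest to sketch; the Python code below fills the O(nW) table bottom-up (names are ours).

```python
def knapsack(values, weights, capacity):
    """0-1 knapsack by dynamic programming over an (n+1) x (W+1) table.
    Runs in O(n * W) time and space: pseudo-polynomial, since W can be
    exponential in the number of bits used to write it down."""
    n = len(values)
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            best[i][w] = best[i - 1][w]                      # skip item i
            if weights[i - 1] <= w:                          # or take item i
                best[i][w] = max(best[i][w],
                                 best[i - 1][w - weights[i - 1]] + values[i - 1])
    return best[n][capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```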

Parameterized and Approximation Complexity

Parameterized complexity refines classical complexity analysis by considering an additional parameter k alongside the input size n, allowing algorithms with running times of the form f(k) \cdot n^{O(1)}, where f is a computable function, to be deemed efficient for fixed k. Problems admitting such fixed-parameter tractable (FPT) algorithms form the class FPT, introduced by Downey and Fellows as a framework to distinguish tractable parameterized problems from intractable ones. In contrast, problems that are W[1]-hard resist FPT algorithms, assuming FPT ≠ W[1], even when parameterized appropriately, establishing a hardness theory analogous to P versus NP. A canonical example of an FPT problem is vertex cover, where given a graph and parameter k, one seeks a set of at most k vertices incident to all edges; it admits a bounded search tree algorithm running in roughly O(2^k \cdot kn) time. Kernelization complements FPT algorithms by preprocessing instances in polynomial time to produce an equivalent kernel whose size is bounded by a function of k alone, enabling further efficient solving; for vertex cover, a kernel with at most 2k vertices exists via linear-programming-based preprocessing and crown decompositions. These techniques have broad applications in graph problems, where parameters like solution size or treewidth yield tractable cases of problems that are otherwise NP-hard. Approximation complexity addresses optimization problems by seeking solutions within a guaranteed factor of the optimum in polynomial time, with the class APX comprising those admitting constant-factor approximation algorithms. A stronger notion, PTAS, covers problems with (1 + ε)-approximation algorithms for any ε > 0, though the running time's dependence on 1/ε may be superpolynomial. The PCP theorem, establishing that NP has probabilistically checkable proofs verifiable with a constant number of queries and bounded error, underpins inapproximability results; for instance, it implies that no PTAS exists for the metric Traveling Salesman Problem unless P = NP. Advancements leverage Gap-ETH, a strengthened exponential-time hypothesis positing that 3-SAT cannot be distinguished from formulas that are far from satisfiable in subexponential time, to derive fine-grained inapproximability for parameterized problems like k-clique and k-dominating set. Learning-theoretic connections further link approximation hardness to statistical-computational gaps in machine learning, where inapproximability barriers for problems like set cover inform bounds for learning conjunctions. Recent developments as of 2025 explore interfaces between parameterized complexity and learning theory, such as parameterized algorithms for learning tasks.
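Returning to the vertex cover example, the bounded search tree can be sketched in a few lines of Python (an illustrative sketch, names ours): branch on the two endpoints of an arbitrary uncovered edge with budget k, giving a tree of depth at most k and roughly O(2^k |E|) total work.

```python
def vertex_cover_at_most_k(edges, k):
    """Bounded search tree for Vertex Cover, the FPT example in the text.
    Some endpoint of any remaining edge must join the cover, so branch on both
    endpoints with budget k - 1: depth <= k and branching factor 2 give
    O(2^k * |E|) time, i.e. fixed-parameter tractable in k."""
    if not edges:
        return True                 # every edge is covered
    if k == 0:
        return False                # budget exhausted but edges remain
    u, v = edges[0]
    for chosen in (u, v):
        remaining = [(a, b) for a, b in edges if chosen not in (a, b)]
        if vertex_cover_at_most_k(remaining, k - 1):
            return True
    return False

triangle_plus_pendant = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(vertex_cover_at_most_k(triangle_plus_pendant, 2))  # True, e.g. cover {2, 3}
print(vertex_cover_at_most_k(triangle_plus_pendant, 1))  # False
```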

References

  1. [1]
    Computational Complexity Theory
    Jul 27, 2015 · Computational complexity theory is a subfield of theoretical computer science one of whose primary goals is to classify and compare the practical difficulty of ...
  2. [2]
    [PDF] AN OVERVIEW OF COMPUTATIONAL COMPLEXITY
    Computational complexity is about the relative difficulty of computing functions, and defining the intrinsic complexity of a problem.
  3. [3]
    [PDF] a gentle introduction to computational complexity theory, and a little ...
    Sep 10, 2011 · Computational complexity theory studies how much resources (like time) a problem takes to solve, focusing on problems solvable by a computer.
  4. [4]
    [PDF] Computational Complexity: A Modern Approach
    Computational complexity theory has developed rapidly in the past three decades. The list of surprising and fundamental results proved since 1990.
  5. [5]
    [PDF] ON COMPUTABLE NUMBERS, WITH AN APPLICATION TO THE ...
    By A. M. TURING. [Received 28 May, 1936.—Read 12 November, 1936.] The "computable" numbers may be described briefly ...
  6. [6]
    [PDF] An Unsolvable Problem of Elementary Number Theory Alonzo ...
    Mar 3, 2008 · The purpose of the present paper is to propose a definition of effective calculability which is thought to correspond satisfactorily to the ...
  7. [7]
    [PDF] The Complexity of Theorem-Proving Procedures
    It is shown that any recognition problem solved by a polynomial time-bounded nondeterministic Turing machine can be “reduced” to the problem of determining ...
  8. [8]
    [PDF] 1 Computational Problems - Stanford CS Theory
    Apr 29, 2010 · In this course we will deal with four types of computational problems: decision problems, search problems, optimization problems, and counting ...
  9. [9]
    [PDF] Computational Complexity: A Modern Approach - Princeton University
    Jan 8, 2007 · Computational Complexity: A Modern. Approach. Draft of a book: Dated January 2007. Comments welcome! Sanjeev Arora and Boaz Barak. Princeton ...
  10. [10]
    [PDF] A Survey of the Complexity of Succinctly Encoded Problems
    May 5, 2016 · Abstract. Succinctness is one of the few ways that we have to get a handle on the complexity of large classes such as NEXP and EXPSPACE.
  11. [11]
    [PDF] Comparison-based Lower Bounds for Sorting
    Theorem 5.1 Any deterministic comparison-based sorting algorithm must perform Ω(n log n) comparisons to sort n elements in the worst case. Specifically, for ...
  12. [12]
    [PDF] Lecture 13: Time Complexity - People | MIT CSAIL
    Measuring Time Complexity of a TM. We measure time complexity by counting the steps taken for a Turing machine to halt on an input. Example: Let A = { 0^k 1^k | ...
  13. [13]
    [PDF] ECS 120 Lesson 22 – Time Complexity
    May 21, 2001 · The running-time or time complexity of M is the function t_M : N → N, where t_M(n) is the maximum number of steps that M performs on any input ...
  14. [14]
    [PDF] Introduction to Complexity Theory
    We define the following basic complexity classes. • A language A ∈ DTIME(T(n)) iff there is a deterministic Turing machine that runs in time T(n) such.
  15. [15]
    ON THE COMPUTATIONAL COMPLEXITY OF ALGORITHMS
    I. Introduction. In his celebrated paper [1], A. M. Turing investigated the computability of sequences (functions) by mechanical procedures and showed.
  16. [16]
    [PDF] Complexity Theory - Lecture 12: Hierarchy Theorems - TU Dresden
    Nov 29, 2017 · This improvement was discovered soon after the first Time Hierarchy Theorem was found by Hartmanis and Stearns (1965). Markus Krötzsch, 29th ...
  17. [17]
  18. [18]
    Relationships between nondeterministic and deterministic tape ...
    The amount of storage needed to simulate a nondeterministic tape-bounded Turing machine on a deterministic Turing machine is investigated.
  19. [19]
  20. [20]
    [PDF] Undirected Connectivity in Log-Space∗ - Omer Reingold
    We present a deterministic, log-space algorithm that solves st-connectivity in undirected graphs. The previous bound on the space complexity of undirected ...
  21. [21]
    [PDF] a time-space tradeoff for sorting on a general
    model and the time and space measures are sufficiently general that any lower bounds do reflect an intrinsic property of the function (sorting) being computed. ...
  22. [22]
    Some complexity questions related to distributive computing ...
    Some complexity questions related to distributive computing (Preliminary Report). Author: Andrew Chi-Chih Yao.
  23. [23]
    [PDF] The Multiparty Communication Complexity of Exact-T
    Multiparty communication complexity was first defined by Chandra, Furst, and Lipton [8] and used to obtain lower bounds on branching programs. Since then it has ...
  24. [24]
    [PDF] the multiparty communication complexity of set disjointness
    In this section, we study the communication complexity of set disjointness in the nondeterministic and Merlin-Arthur multiparty models. We will see that the ...
  25. [25]
    [PDF] Lecture 18-19: Communication complexity lower bounds
    Apr 4, 2013 · The gap between the deterministic and private coin complexity of a function cannot be more than exponential. The proof of the following ...
  26. [26]
    [PDF] Computational Complexity: A Modern Approach - cs.Princeton
    Methods to lower bound the communication complexity of specific functions include the fooling set, tiling, rank, and discrepancy methods. Using these.
  27. [27]
    [PDF] Robust Lower Bounds for Communication and Stream Computation
    In this paper, we use robust lower bounds on communication complexity in order to deduce robust data stream lower bounds. Once the communication bounds have ...
  28. [28]
    Communication complexity of secure computation (extended abstract)
    In this paper, we initiate the investigation of the communication complexity of unconditionally secure multi-party computation, and its relation with various ...
  29. [29]
    Big Omicron and big Omega and big Theta | ACM SIGACT News
    Big Omicron and big Omega and big Theta. Author: Donald E. Knuth.
  30. [30]
    Introduction to Algorithms - MIT Press
    A comprehensive update of the leading algorithms text, with new material on matchings in bipartite graphs, online algorithms, machine learning, and other ...
  31. [31]
    [PDF] UNIT 5B Binary Search
    We say Binary Search has complexity O(log n), i.e., “logarithmic time”. [Slide figure: number of operations (log₂ n) plotted against n, the amount of data.]
  32. [32]
    [PDF] 22C:21 Lecture Notes Running time of Binary Search
    Sep 3, 2008 · Therefore, the running time of the binarySearch function is at most 8 log₂ n + 5.
  33. [33]
    [PDF] The Complexity Classes P and NP
    Why Polynomial Time? It is convenient to define decision problems to be tractable if they belong to the class P, since the class P is closed under ...
  34. [34]
    Intractability - Algorithms, 4th Edition
    In contrast, an exponential time algorithm has a different qualitative behavior. For example, a brute force algorithm for the TSP might take N! steps.
  35. [35]
    Intractable Problems - UMSL
    Most intractable problems have an algorithm – the same algorithm – that provides a solution, and that algorithm is the brute-force search. ... exponential time ...
  36. [36]
    Review of Asymptotic Complexity - CS@Cornell
    Order of Growth and Big-O Notation: O(log n), logarithmic; O(n), linear; O(n log n), "n log n"; O(n²), quadratic; O(n³), cubic ...
  37. [37]
    [PDF] Fine-Grained Complexity Theory - DROPS
    In other words, is there an analogue of NP-hardness for polynomial-time problems? In this tutorial, we will give an introduction to fine-grained complexity ...
  38. [38]
    [PDF] Computational Complexity: A Modern Approach - Princeton University
    Jan 8, 2007 · In the 1980s, researchers felt Boolean circuits are mathematically simpler than the Turing machine, and thus proving circuit lower bounds may ...
  39. [39]
    [PDF] J. C. Shepherdson, University of Bristol, England, and H. E. ...
    COMPUTABILITY OF RECURSIVE FUNCTIONS. We now introduce a convenient abbreviation. Suppose we have subroutines (i = 1, ..., s) for performing ...
  40. [40]
    [PDF] The Synthesis of Two-Terminal Switching Circuits
    The Synthesis of Two-Terminal Switching Circuits. By Claude E. Shannon. PART I: GENERAL THEORY. 1. INTRODUCTION. The theory of switching circuits may be divided ...
  41. [41]
  42. [42]
    [PDF] 9 Nondeterministic Turing Machines - Jeff Erickson
    It is fairly easy to reduce the running time to O(trt) by using either two tapes (a “read tape” containing N-configurations at time t and a “write tape”.
  43. [43]
    [PDF] The Complexity of Theorem-Proving Procedures - cs.Toronto
    A method of measuring the complexity of proof procedures for the predicate calculus is introduced and discussed. Throughout this paper, a set of strings means a ...
  44. [44]
    [PDF] Nondeterministic space is closed under complementation
    In this paper we show that nondeterministic space s(n) is closed under complementation, for s(n) greater than or equal to log n. It immediately follows ...
  45. [45]
    Alternation | Journal of the ACM
    We define alternating Turing Machines which are like nondeterministic Turing Machines, except that existential and universal quantifiers alternate. Alternation ...
  46. [46]
    [PDF] Nondeterministic Turing Machines - Cornell: Computer Science
    Just like finite automata and pushdown automata, Turing machines exist in nondeterministic as well as deterministic flavors. However, nondeterminism does ...
  47. [47]
    Parallelism in random access machines - ACM Digital Library
    In this paper we introduce a formal model for random access computers and argue that the model is a good one to use in the theory of computational complexity.
  48. [48]
    Locality in Distributed Graph Algorithms | SIAM Journal on Computing
    This paper concerns a number of algorithmic problems on graphs and how they may be solved in a distributed fashion.
  49. [49]
    Quantum theory, the Church–Turing principle and the universal ...
    Quantum theory, the Church–Turing principle and the universal quantum computer. David Deutsch.
  50. [50]
    [quant-ph/9508027] Polynomial-Time Algorithms for Prime ... - arXiv
    Aug 30, 1995 · Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. Authors:Peter W. Shor (AT&T Research).
  51. [51]
    Quantum supremacy using a programmable superconducting ...
    Oct 23, 2019 · ... Quantum supremacy is demonstrated using a programmable superconducting processor known as Sycamore, taking approximately 200 seconds to ...
  52. [52]
    [PDF] P, NP and mathematics – a computational complexity perspective
    Dec 21, 2006 · Definition 2.2 (The class P). A function f : I → I is in the class P if there is an algorithm computing f and positive constants A, c ...
  53. [53]
    [PDF] REDUCIBILITY AMONG COMBINATORIAL PROBLEMS
    REDUCIBILITY AMONG COMBINATORIAL PROBLEMS. Richard M. Karp. University of California at Berkeley. Abstract: A large class of computational problems involve ...
  54. [54]
    Arthur-Merlin games: A randomized proof system, and a hierarchy of ...
    Arthur-Merlin games: A randomized proof system, and a hierarchy of complexity classes ... Babai, Interactive proofs in finite groups, in “Randomness and ...
  55. [55]
    Relativized Arthur-Merlin versus Merlin-Arthur games - ScienceDirect
    Arthur-Merlin games were introduced recently by Babai in order to capture the intuitive notion of efficient, probabilistic proof systems.
  56. [56]
    P vs NP - Clay Mathematics Institute
    P vs NP. If it is easy to check that a solution to a problem is correct, is it also easy to solve the problem? This is the essence of the P vs NP question.
  57. [57]
    [PDF] Lecture Notes on Computational Complexity - Luca Trevisan
    From now on, we will talk about decision problems and languages interchangeably. In a search problem, given an input x ∈ {0,1}∗ we want to compute some answer.
  58. [58]
    On the Structure of Polynomial Time Reducibility
    Define P (NP) to be the class of problems recognized by deterministic (nondeterministic) Turing machines which run in polynomial time.
  59. [59]
    [PDF] Beyond NP: The Work and Legacy of Larry Stockmeyer
    May 24, 2005 · Stockmeyer and Meyer [61, 58] gave a PSPACE-complete problem Bω (now TQBF) by generalizing the complete sets developed by Meyer and ...
  60. [60]
    [PDF] Measuring Empirical Computational Complexity - Stanford CS Theory
    ABSTRACT. The standard language for describing the asymptotic behavior of algorithms is theoretical computational complexity. We propose.
  61. [61]
    [PDF] Rigorous Analysis of Heuristics for NP-hard Problems
    Jan 21, 2005 · A heuristic for NP-hard problems is a polynomial time algorithm that produces optimal or near optimal solutions on some inputs, but may fail on ...
  62. [62]
    [PDF] COMPUTERS AND INTRACTABILITY A Guide to the Theory of NP ...
    However, even without a proof that NP-completeness implies intractability, the knowledge that a problem is NP-complete suggests, at the very least, that a ...
  63. [63]
    [PDF] Amortized Computational Complexity - cs.Princeton
    The banker's view of amortization was used implicitly by Brown and Tarjan [8] in analyzing the amortized complexity of 2, 3 trees and was developed more fully ...
  64. [64]
    [PDF] Time-Space Trade-Offs for Predecessor Search - People | MIT CSAIL
    Mar 10, 2006 · In this paper we provide tight trade-offs between query time and space of representation for static predecessor search. This is one of the most ...
  65. [65]
    [PDF] dijkstra-routing-1959.pdf
    E. W. DIJKSTRA: the data for at most a branches, viz. the branches in sets I and II and the branch under consideration ...
  66. [66]
    [PDF] Parameterized Algorithms - mimuw
    Jan 4, 2023 · Parameterized algorithmics analyzes running time in finer detail than classical complexity theory: instead of expressing the running time as a ...
  67. [67]
    [PDF] Kernelization – Preprocessing with A Guarantee - CS@UCSB
    Historically, the study of kernelization is rooted in parameterized complexity but it appeared soon that the challenges of kernelization tractability are deeply.
  68. [68]
    Theoretical Applications for Approximation Algorithms
    Dec 8, 2011 · For example, the O(log n) approximation to set cover can be used to show a better bound on the sample complexity of learning conjunctions.