Quantum complexity theory

Quantum complexity theory is a branch of computational complexity theory that studies the resources—such as time, space, and query complexity—required to solve computational problems using models of quantum computation, which harness principles of quantum mechanics like superposition, entanglement, and interference. Unlike classical complexity theory, which relies on deterministic or probabilistic Turing machines, quantum complexity theory explores how quantum phenomena enable potential speedups over classical algorithms, defining new complexity classes and investigating their relationships to classical ones like P, NP, and BPP. The field emerged in the late 20th century, building on foundational work that demonstrated quantum computers could simulate classical ones efficiently while potentially outperforming them on specific tasks.

Central to quantum complexity theory are formal models of computation, including quantum Turing machines (QTMs) and quantum circuits. A QTM extends the classical Turing machine with a transition function that ensures unitary evolution of the system's wavefunction, allowing the machine to process superpositions of states. Quantum circuits, on the other hand, represent computations as sequences of unitary gates applied to qubits—the basic units of quantum information, whose states are unit vectors in a two-dimensional complex Hilbert space spanned by the basis states |0⟩ and |1⟩. These models are polynomially equivalent in power, meaning problems solvable in polynomial time on one can be solved in polynomial time on the other. Additional frameworks, like the quantum query model, analyze the number of queries to an oracle needed to solve problems, revealing quadratic speedups in unstructured search scenarios.

Key complexity classes in this theory include BQP (Bounded-Error Quantum Polynomial Time), which consists of decision problems solvable by a quantum computer in polynomial time with success probability at least 2/3. BQP contains the classical classes P and BPP, is contained within PP and PSPACE, and is believed to include problems outside BPP, such as integer factoring. Another important class is QMA (Quantum Merlin-Arthur), the quantum analog of NP, where a quantum proof (a quantum state on polynomially many qubits) can be verified efficiently by a quantum verifier with high probability. QMA is known to contain NP and QCMA, and problems like the k-local Hamiltonian problem are QMA-complete for k ≥ 2, highlighting the class's hardness. Recent surveys emphasize ongoing efforts to separate these classes from classical ones and to explore quantum advantages in areas like optimization and machine learning.

Notable results underscore quantum complexity theory's impact, including Shor's algorithm, which factors large integers in polynomial time on a quantum computer—exponentially faster than the best known classical algorithms—and Grover's algorithm, which provides a quadratic speedup for unstructured search problems. These algorithms demonstrate that BQP contains problems not known to be in BPP, fueling research into quantum advantage and fault-tolerant quantum computing. Challenges persist, such as decoherence and error correction, but advances in quantum error-correcting codes (e.g., Shor's 9-qubit code) and hybrid quantum-classical approaches continue to bridge theoretical insights with practical implementations.

Foundations

Historical Development

The foundations of quantum complexity theory were laid in the 1970s and 1980s through advances in classical complexity theory, which provided essential concepts for analyzing computational efficiency. In 1971, Stephen Cook introduced the framework relating the classes P (problems solvable in polynomial time by a deterministic Turing machine) and NP (problems verifiable in polynomial time), framing the central question of whether P equals NP. Building on this, Richard Karp in 1972 demonstrated NP-completeness for 21 combinatorial problems, establishing a framework for classifying computational hardness that would later influence quantum analogs. These developments highlighted the limitations of classical computation, motivating explorations into alternative models.

Quantum ideas emerged in the late 1970s and early 1980s as physicists sought to address the inefficiencies of classical simulations of quantum systems. In 1980, Paul Benioff proposed a quantum mechanical model of computation based on a Turing machine driven by unitary evolution, demonstrating that quantum systems could perform universal computation equivalently to classical ones but potentially more efficiently for certain physical simulations. This was expanded in 1982 by Richard Feynman, who argued in his lecture "Simulating Physics with Computers" that classical computers struggle to simulate quantum physics due to exponential state growth, advocating for quantum computers to naturally model such phenomena. These works shifted focus toward quantum mechanical principles as a basis for enhanced computational power, bridging physics and computer science.

The formalization of quantum complexity theory began in the early 1990s, with Ethan Bernstein and Umesh Vazirani's 1993 paper defining the class BQP (bounded-error quantum polynomial time), which captures problems solvable efficiently by a quantum Turing machine with access to quantum gates and measurement. This definition provided a rigorous framework analogous to classical BPP, enabling comparisons between quantum and classical resources. The field gained momentum in 1994 through Peter Shor's algorithm for integer factoring and discrete logarithms, which demonstrated that BQP contains problems believed to lie outside BPP, such as factoring large numbers exponentially faster than known classical methods. That same year, Daniel Simon's problem showed that quantum algorithms could achieve an exponential advantage over classical query complexity, further illustrating quantum advantages. The late 1990s marked a boom in quantum algorithms, exemplified by Lov Grover's 1996 search algorithm, which achieves a quadratic speedup for unstructured database searches, influencing query complexity models.

In the 2000s, attention turned to interactive proof systems and verification, with Alexei Kitaev's 1999 introduction of QMA (the quantum analog of NP), which formalizes quantum Merlin-Arthur proofs for verifying quantum computations. This class captured problems like local Hamiltonian ground-energy estimation, central to quantum hardness results.

Key experimental milestones validated theoretical predictions through demonstrations of quantum advantage. In 2019, Google's Sycamore processor achieved quantum supremacy by sampling random quantum circuits in 200 seconds—a task estimated at the time to take classical supercomputers 10,000 years—providing evidence for practical separations in sampling tasks. By 2025, advances in optical networks enabled distributed quantum computing, as shown in experiments linking trapped-ion modules via photonic interconnects to perform entangled computations intractable on isolated classical or single quantum systems, demonstrating progress toward scalability for complex quantum simulations.

Classical Complexity Prerequisites

A deterministic Turing machine (DTM) is a theoretical model of computation consisting of an infinite tape divided into cells, a read/write head that moves left or right, a finite set of states, and a transition function that dictates the next state, the symbol to write, and the head movement based on the current state and the scanned symbol. The running time of a DTM on inputs of length n is the maximum number of steps it takes to halt across all inputs of that length, leading to classes like \mathsf{DTIME}(t(n)), the set of languages decided by DTMs running in at most O(t(n)) time. Similarly, space complexity measures the maximum number of tape cells visited, defining \mathsf{DSPACE}(s(n)) as the languages decided using O(s(n)) space.

The class \mathsf{P} comprises languages decidable by DTMs in polynomial time, i.e., \mathsf{P} = \bigcup_{k \geq 1} \mathsf{DTIME}(n^k). \mathsf{NP} is the class of languages whose membership can be verified in polynomial time by a DTM given a nondeterministic certificate, formally \mathsf{NP} = \bigcup_{k \geq 1} \mathsf{NTIME}(n^k), with \mathsf{P} \subseteq \mathsf{NP}. \mathsf{BPP} includes languages decided by probabilistic Turing machines (PTMs) in polynomial time with error probability at most 1/3 on every input. \mathsf{PSPACE} consists of languages decidable by DTMs using polynomial space, i.e., \mathsf{PSPACE} = \bigcup_{k \geq 1} \mathsf{DSPACE}(n^k).

The time hierarchy theorem establishes strict separations between complexity classes based on time bounds: for time-constructible functions t(n) and t'(n) with t(n) \log t(n) = o(t'(n)), \mathsf{DTIME}(t(n)) \subsetneq \mathsf{DTIME}(t'(n)). This diagonalization argument constructs a language requiring more time than any machine running in O(t(n)) steps can afford, proving that more computational time enables solving strictly harder problems.

Oracle machines extend Turing machines by allowing queries to an oracle set A \subseteq \{0,1\}^*, answered in one step, relativizing computations to A (denoted M^A). The Baker-Gill-Solovay theorem (1975) demonstrates that relativization cannot resolve \mathsf{P} vs. \mathsf{NP}: there exists an oracle A such that \mathsf{P}^A = \mathsf{NP}^A, and an oracle B such that \mathsf{P}^B \neq \mathsf{NP}^B. These oracles are constructed via diagonalization, showing that any proof of \mathsf{P} = \mathsf{NP} or \mathsf{P} \neq \mathsf{NP} must use non-relativizing techniques.

Probabilistic classes like \mathsf{RP} and \mathsf{BPP} differ in error structure: \mathsf{RP} requires PTMs to accept yes-instances with probability at least 2/3 and to reject no-instances with probability 1 (one-sided error), while \mathsf{BPP} allows error at most 1/3 on both yes- and no-instances (two-sided error). Thus \mathsf{RP} \subseteq \mathsf{BPP}, but the one-sided error of \mathsf{RP} makes it suitable for problems where false negatives are tolerable but false positives are not. These classical classes provide the baselines against which analogous quantum complexity measures are defined.
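To make the error-reduction behavior of bounded-error classes concrete, the following sketch (an illustration, not taken from the cited sources) computes the exact probability that a majority vote over k independent runs of a two-sided-error algorithm with per-run success probability 2/3 gives the wrong answer, exhibiting the exponential decay guaranteed by Chernoff bounds; the same argument underlies the amplification of BQP discussed later.

```python
from math import comb

def majority_error(k: int, p_success: float = 2 / 3) -> float:
    """Exact probability that the majority vote over k independent
    runs is wrong, given a per-run success probability p_success."""
    p_fail = 1 - p_success
    # The majority is wrong exactly when at most floor(k/2) runs succeed.
    return sum(comb(k, i) * p_success**i * p_fail**(k - i)
               for i in range(k // 2 + 1))

# Error drops exponentially in the number of repetitions.
for k in (1, 11, 51, 101):
    print(f"k={k:4d}  error={majority_error(k):.3e}")
```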

Quantum Computing Basics

Quantum computing relies on quantum bits, or qubits, as the fundamental units of information; they differ from classical bits by allowing superposition and entanglement. A single qubit is represented mathematically as a unit vector in a two-dimensional complex Hilbert space: |\psi\rangle = \alpha |0\rangle + \beta |1\rangle, where \alpha and \beta are complex amplitudes satisfying the normalization condition |\alpha|^2 + |\beta|^2 = 1. This superposition enables a qubit to encode both 0 and 1 simultaneously, with the probabilities of measuring 0 or 1 given by |\alpha|^2 and |\beta|^2, respectively. Multiple qubits extend this to higher-dimensional spaces; for n qubits, the state resides in a 2^n-dimensional Hilbert space, allowing exponential growth in representable states.

Quantum gates manipulate qubit states through unitary operations, preserving the norm of the state vector and enabling reversible computations. Basic single-qubit gates include the Pauli-X gate, which acts as a quantum NOT operation by swapping |0\rangle and |1\rangle, represented by the matrix \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. The Hadamard gate creates superposition, transforming |0\rangle to \frac{|0\rangle + |1\rangle}{\sqrt{2}} and |1\rangle to \frac{|0\rangle - |1\rangle}{\sqrt{2}}, with matrix \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. Two-qubit gates like the controlled-NOT (CNOT) introduce entanglement, flipping the target qubit if the control is |1\rangle, as in |00\rangle \to |00\rangle, |01\rangle \to |01\rangle, |10\rangle \to |11\rangle, |11\rangle \to |10\rangle. Combined appropriately, these gates form a universal gate set for quantum computation.

The quantum Turing machine (QTM) extends the classical model to incorporate quantum effects, serving as a foundational theoretical model for quantum computation. In a QTM, the tape and internal states can exist in superposition, with transitions governed by unitary evolution rather than deterministic rules. Computation proceeds via a sequence of unitary transformations on the joint state of the machine and input, followed by measurement; a polynomial-time QTM is defined as one completing its evolution in a number of steps polynomial in the input size. This model captures the full power of quantum parallelism while tying back to classical computation through measurement outcomes.

An equivalent and widely used model is the quantum circuit, consisting of a sequence of quantum gates applied to n qubits, starting from an initial state like |0\rangle^{\otimes n}. Each gate is a unitary operator acting on one or more qubits, and the circuit ends with measurement in the computational basis, yielding a classical bit string with probability determined by the squared modulus of the corresponding final amplitude. Polynomial-size quantum circuits, with depth polynomial in n, simulate any polynomial-time QTM, establishing their computational equivalence.

Measurement in quantum computing follows the Born rule: the probability of collapsing to a basis state |x\rangle from |\psi\rangle = \sum_x \alpha_x |x\rangle is |\alpha_x|^2. This projective measurement irreversibly collapses the superposition, extracting classical information but destroying quantum coherence. The no-cloning theorem prohibits perfect copying of arbitrary unknown quantum states, as any attempt to clone a superposition leads to inconsistent outcomes across basis states, limiting error correction and information broadcasting in quantum systems.
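The following minimal numpy sketch (illustrative, not from the cited sources) assembles the Hadamard and CNOT matrices given above, prepares the entangled Bell state (|00⟩ + |11⟩)/√2 from |00⟩, and applies the Born rule to recover the measurement probabilities.

```python
import numpy as np

# Computational basis state |0> for one qubit.
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate, as defined in the text.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT on two qubits (control = qubit 0, target = qubit 1).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Start in |00>, apply H to the first qubit, then CNOT.
state = np.kron(ket0, ket0)
state = np.kron(H, np.eye(2)) @ state
state = CNOT @ state          # Bell state (|00> + |11>)/sqrt(2)

# Born rule: measurement probabilities are squared amplitude moduli.
probs = np.abs(state) ** 2
for bits, p in zip(["00", "01", "10", "11"], probs):
    print(f"P({bits}) = {p:.3f}")   # 0.5, 0.0, 0.0, 0.5
```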

Core Quantum Complexity Classes

BQP and Its Definition

Bounded-error quantum polynomial time (BQP) is a fundamental complexity class in quantum computational theory, capturing the decision problems solvable by a quantum computer in polynomial time with bounded error probability. Formally, BQP consists of all languages L \subseteq \{0,1\}^* for which there exists a polynomial-time quantum Turing machine M such that for every input x \in L, the probability that M(x) accepts (outputs 1) is at least 2/3, and for every x \notin L, the probability that M(x) accepts is at most 1/3. This mirrors the bounded-error requirement of classical probabilistic classes but leverages quantum superposition and interference for potentially greater computational power. In the quantum circuit model, a BQP computation is represented by a uniformly generated family of polynomial-size quantum circuits acting on n qubits, initialized in the state |0\rangle^{\otimes n}, followed by measurement of the first qubit; the acceptance probability is then \| (|1\rangle\langle 1| \otimes I)\, U |0\rangle^{\otimes n} \|^2, where U is the unitary implemented by the circuit.

A key property of BQP is the ability to amplify the success probability through repetition. By running the quantum algorithm multiple times independently and taking the majority vote on the outcomes, the error probability can be reduced exponentially to negligibly small values, analogous to amplification in the classical BPP class; this holds because measurement outcomes are independent across runs, and the bounded initial error ensures the amplification succeeds in polynomial time.

Regarding inclusions with classical complexity classes, it is known that \mathrm{P} \subseteq \mathrm{BQP} \subseteq \mathrm{PSPACE} and \mathrm{BQP} \subseteq \mathrm{PP}, where PP denotes probabilistic polynomial time with unbounded error. The inclusion \mathrm{P} \subseteq \mathrm{BQP} follows from the ability of quantum circuits to simulate classical deterministic computation without error. The containment \mathrm{BQP} \subseteq \mathrm{PSPACE} arises from a polynomial-space algorithm that approximates the acceptance probability by recursively computing the sum of amplitudes over the exponentially many computational paths, reusing space across the recursion. Similarly, \mathrm{BQP} \subseteq \mathrm{PP} because the acceptance probability can be expressed as an exponential sum over computation paths whose majority sign an unbounded-error probabilistic machine can decide.

To demonstrate that BQP plausibly extends classical probabilistic power, oracle separations exist. Bernstein and Vazirani introduced the recursive Fourier sampling problem, which is in BQP but not in BPP relative to an oracle, providing early evidence of quantum advantage. Simon's problem further illustrates this separation: given oracle access to a function promised to be either one-to-one or two-to-one with a hidden XOR period, a quantum algorithm solves it with polynomially many queries, while any classical probabilistic algorithm requires exponentially many queries relative to the same oracle, thus showing \mathrm{BQP} \not\subseteq \mathrm{BPP} in the relativized world.

The quantum complexity class QMA, or Quantum Merlin-Arthur, serves as the quantum analogue of the classical complexity class NP. In QMA, a "yes" instance of a decision problem can be verified in polynomial time by a quantum verifier (Arthur) that receives a polynomial-size quantum proof state |ψ⟩ from a prover (Merlin), with the verifier operating as a polynomial-time quantum machine.
The class is defined with respect to completeness and soundness parameters, typically 2/3 and 1/3 respectively, ensuring that for "yes" instances there exists a proof accepted with probability at least 2/3, while for "no" instances no proof is accepted with probability exceeding 1/3. A canonical QMA-complete problem is the Local Hamiltonian problem, introduced by Kitaev in 1999, which involves determining the ground-state energy of a quantum system described by a sum of local terms. Given an n-qubit Hamiltonian H = ∑_{i=1}^m H_i, where each H_i acts non-trivially on at most k qubits (with k constant, such as 5 for the original result), and thresholds a, b ∈ ℝ with b - a ≥ 1/poly(n), the promise problem asks whether the minimum eigenvalue satisfies λ_min(H) ≤ a (yes case) or λ_min(H) ≥ b (no case). This problem captures the difficulty of finding low-energy states in many-body quantum systems, and its QMA-completeness holds even for 3-local Hamiltonians. Related classes include QCMA (Quantum Classical Merlin-Arthur), where the proof is classical rather than quantum, with the verifier still quantum-powered; co-QMA, the class of complements of QMA languages; and promise variants like PromiseQMA, which refine the promise structure for decision problems with specified gaps. Inclusion relations position QMA hierarchically as NP ⊆ QMA ⊆ PP, reflecting that classical nondeterministic verification embeds into quantum proofs, while QMA decisions can be simulated by unbounded-error probabilistic polynomial-time machines. Additionally, QMA ⊆ QMA(2), where the proof consists of two unentangled polynomial-sized quantum states, allowing verification with restricted proof structures. In the 2020s, research on QMA with multiple provers (QMA(k) for k ≥ 2 non-communicating provers) has advanced amplification and connections to quantum PCPs, showing improved protocols for multi-prover settings that enhance robustness without collapsing to larger classes like NEXP.
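As a toy illustration of the Local Hamiltonian promise problem (a sketch with hypothetical thresholds a and b, not from the cited sources), the following snippet builds a 2-local transverse-field Ising Hamiltonian on three qubits and checks its minimum eigenvalue by brute-force diagonalization; this direct approach costs time exponential in n, which is exactly what an efficient QMA verifier cannot afford.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def embed(op, qubit, n):
    """Tensor a single-qubit operator into an n-qubit system."""
    out = np.array([[1.0]])
    for q in range(n):
        out = np.kron(out, op if q == qubit else I2)
    return out

n = 3
# 2-local transverse-field Ising terms: sum Z_i Z_{i+1} - sum X_i.
H = sum(embed(Z, i, n) @ embed(Z, i + 1, n) for i in range(n - 1))
H = H - sum(embed(X, i, n) for i in range(n))

lam_min = np.linalg.eigvalsh(H).min()   # exponential-size diagonalization
a, b = -4.0, -3.0   # hypothetical promise thresholds, b - a >= 1/poly(n)
verdict = "yes" if lam_min <= a else "no" if lam_min >= b else "outside promise"
print(f"lambda_min = {lam_min:.4f} -> {verdict}")
```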

Simulating Quantum Systems

Algorithms for Classical Simulation

Classical simulation of quantum computations involves developing algorithms to replicate the behavior of quantum circuits or systems on classical hardware, which is essential for verifying quantum devices and algorithms and for understanding computational limits. These methods range from exact representations that capture full quantum states to approximate techniques exploiting structure in specific quantum circuits. While exact simulation is feasible only for small systems due to exponential resource demands, specialized approaches enable efficient simulation for restricted classes of quantum operations, such as those with low entanglement or particular gate sets.

Exact simulation maintains the full state vector, a complex vector of dimension 2^n for n qubits, updated via matrix-vector multiplications for each gate in the circuit. This requires O(2^n) space to store the state and O(2^n) time per two-qubit gate, as the dense 4 \times 4 unitary is applied to pairs of amplitudes across the 2^n basis states, leading to overall O(2^n \cdot \mathrm{poly}(n)) time for depth-\mathrm{poly}(n) circuits. Such methods, implemented in simulators like QuEST, provide precise results but scale poorly beyond 40-50 qubits on high-performance classical systems.

Tensor network methods represent quantum states and operators as networks of lower-dimensional tensors, enabling efficient simulation for circuits with limited entanglement. Matrix Product States (MPS), a one-dimensional tensor network, approximate states as a chain of tensors with bond dimension \chi controlling accuracy, suitable for gapped one-dimensional systems or shallow circuits where entanglement grows slowly. For circuit evaluation, algorithms sequentially apply gates by reshaping and contracting tensors, with O(\chi^3 d^2) cost per gate for local dimension d, often scaling linearly in n for low-\chi regimes. These techniques, rooted in the density-matrix renormalization group (DMRG), have simulated circuits of hundreds of qubits when entanglement is bounded.

Path integral approaches simulate quantum evolution by summing over all possible paths in a discretized imaginary-time or real-time formalism, particularly efficient for sparse Hamiltonians where interactions are local. In path integral Monte Carlo (PIMC), the partition function or time-evolution operator is sampled stochastically via updates on worldlines, avoiding direct storage of the state vector and scaling favorably for stoquastic or sign-problem-free Hamiltonians, with O(2^{n/2}) or better effective cost in low dimensions. This method excels for thermal properties or ground-state projection of sparse many-body systems, such as lattice models, by leveraging the Hamiltonian's sparsity to reduce path correlations.

The stabilizer formalism enables polynomial-time simulation of Clifford circuits—those composed of Hadamard, phase, and CNOT gates together with single-qubit measurements—using Gottesman's stabilizer representation of states. Stabilizer states are tracked via a tableau of 2n Pauli operators, updated through symplectic transformations on the tableau as each gate is applied. This approach, formalized in the Gottesman-Knill theorem, simulates arbitrary-depth Clifford circuits in polynomial time, as measurements project onto stabilizer subspaces without exponential cost. Aaronson and Gottesman later optimized the tableau algorithm, tracking destabilizers to avoid Gaussian elimination and reducing the cost of updates, facilitating simulations of thousands of qubits.

Recent advances incorporate variational methods for approximate simulation, adapting hybrid quantum-classical techniques like the variational quantum eigensolver (VQE) to fully classical frameworks.
In classically optimized VQE (CO-VQE), parameterized circuits are optimized using classical gradients to approximate ground states or expectation values, reducing circuit depth and enabling simulation of noisy intermediate-scale quantum (NISQ)-like systems with up to 100 qubits on GPUs. In 2025, new simulators such as qblaze have pushed these limits further, enabling efficient testing of quantum algorithms on circuits with up to 60 qubits through optimized GPU-based state-vector methods. These methods combine compact state representations with variational optimization, achieving near-exact results for low-entanglement problems while scaling better than exact methods for larger instances.
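The O(2^n) cost of exact state-vector simulation described above can be seen in a few lines of numpy (an illustrative sketch, not the implementation used by QuEST or qblaze): each gate application touches all 2^n amplitudes by contracting the gate against one axis of the state tensor.

```python
import numpy as np

def apply_1q(state, gate, qubit, n):
    """Apply a single-qubit gate to `qubit` of an n-qubit state vector.

    Reshapes the 2^n amplitudes into a rank-n tensor and contracts the
    2x2 gate against one axis: O(2^n) work per gate, the cost that makes
    exact simulation infeasible beyond roughly 40-50 qubits.
    """
    t = state.reshape([2] * n)
    t = np.tensordot(gate, t, axes=([1], [qubit]))  # contract gate input
    t = np.moveaxis(t, 0, qubit)                    # restore axis order
    return t.reshape(-1)

n = 3
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                      # |000>
for q in range(n):                  # uniform superposition over 8 states
    state = apply_1q(state, H, q, n)
print(np.round(np.abs(state) ** 2, 3))   # each probability = 1/8
```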
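For contrast, here is a minimal sketch of the stabilizer-tableau updates behind the Gottesman-Knill theorem (global phases are omitted for brevity, so this is a simplified illustration rather than a full simulator): each Clifford gate acts by a cheap binary update on the generators, with no exponential state vector anywhere.

```python
import numpy as np

# Stabilizer tableau for n qubits: each row is a generator, stored as
# binary vectors (x | z); X on qubit q -> x[q] = 1, Z on qubit q -> z[q] = 1.
n = 2
x = np.zeros((n, n), dtype=int)
z = np.eye(n, dtype=int)          # |00> is stabilized by Z1 and Z2

def hadamard(q):
    """H swaps the roles of X and Z on qubit q (phase bits ignored here)."""
    x[:, q], z[:, q] = z[:, q].copy(), x[:, q].copy()

def cnot(c, t):
    """CNOT propagates X from control to target and Z from target to control."""
    x[:, t] ^= x[:, c]
    z[:, c] ^= z[:, t]

def show():
    for i in range(n):
        print("".join("XZY"[x[i, q] + 2 * z[i, q] - 1]
                      if x[i, q] or z[i, q] else "I" for q in range(n)))

hadamard(0)
cnot(0, 1)
show()   # prints XX and ZZ: the stabilizer generators of the Bell state
```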

Hardness Results for Simulation

In quantum complexity theory, the hardness of classically simulating quantum circuits is demonstrated through lower bounds that establish the computational intractability of replicating quantum behavior on classical hardware. Strong simulation, defined as the exact computation of measurement outcome probabilities for a quantum circuit, is #P-hard for general quantum circuits. This hardness arises because such probabilities can encode #P-complete problems, such as counting the number of accepting paths of a nondeterministic Turing machine. Even for restricted classes like matchgate circuits, which are efficiently simulable in some settings, strong simulation becomes #P-hard when generalized inputs or measurements are allowed, as shown by connections to Valiant's results on polynomial-time simulable quantum circuits.

Weak simulation, which requires generating samples from the quantum circuit's output such that the sampled probabilities approximate the true ones, also faces significant barriers. Approximate weak simulation is PP-hard under certain oracle assumptions, where PP denotes the class of decision problems solvable by a probabilistic polynomial-time machine accepting by majority vote. This relativized hardness underscores that quantum sampling tasks can exceed the power of classical probabilistic computation in black-box settings, implying that efficient classical samplers would collapse complexity classes in those worlds.

A key threshold result due to Aaronson establishes that simulating random quantum circuits of sufficient depth is #P-hard, even approximately. Specifically, for circuits drawn from ensembles with constant-depth layers of random two-qubit gates, computing output probabilities to within small additive error requires exponential time classically, with direct implications for quantum supremacy demonstrations. This hardness provides a rigorous basis for arguing that certain quantum experiments, like random circuit sampling, cannot be efficiently faked by classical means, thereby certifying quantum advantage.

Relativized hardness results further illuminate simulation challenges, as exemplified by the BBBV theorem, which shows that quantum query advantages—such as those in unstructured search—imply classical simulation hardness in relativized worlds. In these settings, if a quantum algorithm achieves a superpolynomial speedup in the query model, no classical algorithm can simulate it efficiently relative to the oracle, preventing a broad collapse of the corresponding complexity classes.

Recent developments in the 2020s have strengthened these bounds for non-Clifford circuits, showing that the cost of strong classical simulation grows exponentially with the number of non-Clifford gates added to Clifford circuits (which are themselves efficiently simulable). Connections to boson sampling hardness reinforce this, where non-Clifford operations in photonic systems yield #P-hard sampling tasks that remain intractable even under noise models.

Quantum Query Complexity

Models of Query Access

In quantum complexity theory, the standard query model treats the input as a black box O_f that encodes a function f, allowing quantum algorithms to access it through unitary operations on superposed states. Specifically, for a function f: \{0,1\}^n \to \{0,1\}, the oracle acts as O_f |x\rangle |b\rangle = |x\rangle |b \oplus f(x)\rangle, where |x\rangle ranges over input strings (possibly in superposition) and |b\rangle is an ancilla qubit; each application of O_f counts as one query, enabling parallel evaluation across the superposition. This model contrasts with classical query access, which is limited to individual inputs, and forms the basis for analyzing quantum speedups in black-box settings.

For graph problems, the adjacency matrix model provides query access to the graph's structure via an oracle O_A for the adjacency matrix A \in \{0,1\}^{n \times n}, where O_A |u\rangle |v\rangle |b\rangle = |u\rangle |v\rangle |b \oplus A_{u,v}\rangle and A_{u,v} = 1 if an edge exists between vertices u and v. Quantum queries in this model allow superposition over pairs (u,v), facilitating efficient extraction of global properties, and can incorporate oracles that apply a phase shift: O_A^\phi |u\rangle |v\rangle = (-1)^{A_{u,v}} |u\rangle |v\rangle, which preserves amplitudes while encoding information in phases. This approach is particularly suited to dense graphs, where the matrix representation enables broad superposition-based queries.

The adjacency list (or array) model, an alternative for sparse graphs, grants access via an oracle that, given a vertex u and an index i, returns the i-th entry of the ordered list of neighbors of u, often modeled as querying an array whose positions correspond to potential neighbors. In the quantum setting, this allows superposition over vertices and indices, with ordered access enabling algorithms to extract neighbor information in superposition, though it requires additional structure to handle variable degrees without revealing the full neighbor list at once. Quantum versions typically count each access to a neighbor entry as one query, supporting algorithms that navigate graphs by amplifying relevant neighbor information.

More generally, black-box query models in quantum complexity extend the classical framework, where algorithms adaptively or non-adaptively query the oracle to compute a function, but quantum versions leverage superposition and interference for amplitude-based computation. Techniques like amplitude amplification, which iteratively boosts the probability of measuring correct outcomes, are integral to achieving bounded-error success in these models. The quantum query complexity Q(f) of a function f is defined as the minimum number of queries required by any quantum algorithm A to compute f(x) with success probability at least 2/3 (error at most 1/3) on every input x. This measure quantifies the inherent query cost, independent of gate complexity, and underpins separations between quantum and classical resources.
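The following numpy sketch (illustrative; the function f and the register sizes are arbitrary choices) constructs both the bit-flip oracle O_f|x⟩|b⟩ = |x⟩|b ⊕ f(x)⟩ and the phase oracle as explicit matrices, and verifies the standard phase-kickback identity: with the ancilla prepared in |−⟩, the bit-flip oracle acts on the input register exactly as the phase oracle.

```python
import numpy as np

def bitflip_oracle(f, n):
    """U_f |x>|b> = |x>|b XOR f(x)> as a 2^(n+1) x 2^(n+1) permutation."""
    N = 2 ** n
    U = np.zeros((2 * N, 2 * N))
    for x in range(N):
        for b in range(2):
            U[2 * x + (b ^ f(x)), 2 * x + b] = 1   # index order: |x>|b>
    return U

def phase_oracle(f, n):
    """O_f |x> = (-1)^f(x) |x>, the phase version of the same function."""
    return np.diag([(-1.0) ** f(x) for x in range(2 ** n)])

f = lambda x: x & 1          # example function: last bit of x
n = 2
U, O = bitflip_oracle(f, n), phase_oracle(f, n)

# Phase kickback: with the ancilla in |->, the bit-flip oracle acts as
# the phase oracle on the input register.
minus = np.array([1, -1]) / np.sqrt(2)
psi = np.kron(np.full(2 ** n, 0.5), minus)   # uniform input, ancilla |->
print(np.allclose(U @ psi, np.kron(O @ np.full(2 ** n, 0.5), minus)))  # True
```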

Applications to Search and Graph Problems

Quantum query complexity finds significant applications in unstructured search problems, where the goal is to identify a marked item in a database without any inherent structure. Grover's algorithm provides a quadratic speedup in this setting, requiring O(\sqrt{N}) quantum queries to find the marked item with high probability in a database of size N, whereas any classical deterministic or randomized algorithm requires \Omega(N) queries in the worst case. This speedup arises from the ability of quantum queries to amplify the amplitude of the target state through phase inversion and diffusion operations.

In graph problems, the adjacency matrix query model—where queries reveal entries of the graph's adjacency matrix—highlights quantum advantages for connectivity tasks. For s-t connectivity, which determines whether there is a path between specified vertices s and t in an n-vertex graph, the quantum query complexity is \Theta(n^{3/2}), offering a speedup over the classical lower bound of \Omega(n^2). This quantum bound is tight, with the upper bound achieved by combining Grover search with spanning-structure construction techniques, and the lower bound derived via the polynomial method for monotone graph properties. In alternative models like the adjacency list query model, quantum algorithms achieve \Theta(n) queries for s-t connectivity, matching classical bounds up to logarithmic factors while enabling more efficient implementations on sparse graphs.

The element distinctness problem, a generalization of the collision problem, asks whether all elements in a list of n items (each from a large domain) are unique or at least two are identical. Ambainis' quantum walk-based algorithm solves this with O(n^{2/3}) quantum queries, improving upon earlier O(n^{3/4}) bounds and providing a polynomial speedup over the classical \Omega(n) query lower bound. The result relies on a quantum walk on a Johnson graph to detect collisions efficiently, and the bound is tight, matching the \Omega(n^{2/3}) quantum lower bound established by the polynomial method.

The forrelation problem further illustrates profound quantum advantages, serving as a property-testing task to estimate the correlation between one Boolean function and the Fourier transform of another. It admits a quantum algorithm using a single query to approximate the forrelation value to constant precision, while any classical algorithm requires roughly \Omega(\sqrt{n}/\log n) queries. This yields an essentially maximal separation in query complexity (one quantum query versus about \sqrt{n} classical queries) and has implications for understanding the power of quantum superposition in linear-algebraic problems.

Recent advancements include parameterized quantum query algorithms for graph problems like k-vertex cover and k-matching in the adjacency matrix model. As of 2024, these achieve query complexities of O(\sqrt{kn} + k^{3/2}\sqrt{n}) for k-vertex cover and O(\sqrt{kn} + k^2) for k-matching, providing optimal bounds for k = O(\sqrt{n}) and k = O(n^{2/3}), respectively, with matching lower bounds of \Omega(\sqrt{kn}) via the adversary method. These results demonstrate quantum speedups for fixed-parameter tractable problems, leveraging Grover search and kernelization techniques.

Example Quantum Query Algorithms

One of the earliest demonstrations of quantum speedup in the query model is the Deutsch-Jozsa algorithm, which determines whether a function f: \{0,1\}^n \to \{0,1\} is constant or balanced (i.e., takes value 0 on exactly half of its inputs) using a single query to a phase oracle that applies (-1)^{f(x)} to the input state |x\rangle. The algorithm begins by preparing an equal superposition of all 2^n inputs using Hadamard gates on n+1 qubits (with the ancillary qubit initialized to |1\rangle so that the bit-flip oracle acts as a phase oracle), applies the oracle to imprint the function values as phases, and then applies Hadamard gates again to the first n qubits; measuring these yields the all-zero string if f is constant and a nonzero string otherwise, succeeding with certainty. In contrast, any deterministic classical algorithm requires 2^{n-1} + 1 queries in the worst case to distinguish constant from balanced functions, although bounded-error randomized classical algorithms can solve the problem with O(1) queries, so the exponential advantage holds only in the exact setting.

The Bernstein-Vazirani algorithm solves the problem of identifying a hidden string s \in \{0,1\}^n such that f(x) = s \cdot x \mod 2 for all x, using exactly one query to the function oracle. It initializes n qubits in |0\rangle and applies Hadamard gates to create a uniform superposition, queries the oracle (which flips the phase according to the inner product), and applies Hadamard gates again before measurement; the resulting bit string is precisely s. This yields a linear separation in query complexity, since any classical algorithm, even randomized, requires \Omega(n) queries to determine s.

Simon's algorithm addresses the problem of finding a hidden string s \in \{0,1\}^n \setminus \{0^n\} such that f(x) = f(y) if and only if x = y \oplus s (or x = y), using O(n) queries to the oracle for f. The algorithm repeatedly prepares a uniform superposition over the n input qubits (with an output register initialized to |0\rangle), applies the oracle to compute f(x) in superposition, applies Hadamard gates to the first n qubits (effectively performing a Fourier transform over \mathbb{Z}_2^n), and measures to obtain a string orthogonal to s; O(n) such measurements suffice to solve for s via linear algebra over \mathbb{F}_2. Classically, distinguishing such a two-to-one function from a one-to-one function (where no such s exists) requires \Omega(2^{n/2}) queries even with randomness.

Grover's algorithm provides a quadratic speedup for unstructured search, finding a marked item in an unsorted database of N elements using O(\sqrt{N}) queries to an oracle that recognizes the solution. It operates via amplitude amplification: starting from the uniform superposition |\psi\rangle = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} |x\rangle, the algorithm alternates applications of the oracle O_f (which flips the sign of the marked state's amplitude) and the diffusion operator D = 2|\psi\rangle\langle\psi| - I (which inverts amplitudes about the mean). After k iterations, the amplitude of the marked state is \sin((2k+1)\theta) where \sin \theta = 1/\sqrt{N}, so the success probability approaches 1 after roughly \frac{\pi}{4} \sqrt{N} iterations; the full circuit on \lceil \log_2 N \rceil qubits thus requires this many oracle calls, interleaved with diffusion operators. Classically, \Theta(N) queries are necessary in the worst case for search.

To establish lower bounds on quantum query complexity, the polynomial method represents the acceptance probability of a T-query quantum algorithm as a real polynomial of degree at most 2T in the input bits, implying that the number of queries is at least half the approximate degree of any polynomial approximating the target function.
For example, this yields \Omega(\sqrt{N}) lower bounds for Grover-style search and \Omega(n) for parity on n bits, matching known upper bounds up to constants. The method adapts classical approximation-theoretic techniques to the quantum setting, accounting for the complex amplitudes and interference that arise in quantum algorithms.
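A direct simulation of the Deutsch-Jozsa procedure makes the single-query claim tangible; this is an illustrative numpy sketch (the example functions are arbitrary), using the phase-oracle formulation described above.

```python
import numpy as np
from functools import reduce

def deutsch_jozsa(f, n):
    """Simulate Deutsch-Jozsa with a phase oracle: a single query
    decides constant vs. balanced with certainty."""
    N = 2 ** n
    H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = reduce(np.kron, [H1] * n)               # n-fold Hadamard
    state = np.zeros(N)
    state[0] = 1.0                               # |0...0>
    state = Hn @ state                           # uniform superposition
    state = np.array([(-1) ** f(x) for x in range(N)]) * state  # one query
    state = Hn @ state
    p_all_zero = abs(state[0]) ** 2              # 1 if constant, 0 if balanced
    return "constant" if p_all_zero > 0.5 else "balanced"

n = 3
print(deutsch_jozsa(lambda x: 0, n))                      # constant
print(deutsch_jozsa(lambda x: bin(x).count("1") % 2, n))  # balanced (parity)
```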
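Similarly, Grover's iteration can be simulated with plain vectors, since the oracle is a sign flip and the diffusion operator is an inversion about the mean; the sketch below (illustrative parameters n and marked) shows the success probability approaching 1 after about (π/4)√N rounds, matching the O(√N) query count.

```python
import numpy as np

def grover(n, marked, iters):
    """Simulate Grover's algorithm on N = 2^n items with one marked item;
    returns the probability of measuring the marked index afterwards."""
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))          # uniform superposition
    for _ in range(iters):
        state[marked] *= -1                     # oracle: phase-flip target
        state = 2 * state.mean() - state        # diffusion: invert about mean
    return abs(state[marked]) ** 2

n, marked = 10, 123                             # N = 1024, arbitrary target
k_opt = int(np.floor(np.pi / 4 * np.sqrt(2 ** n)))
print(f"optimal iterations ~ {k_opt}")          # ~25, i.e. O(sqrt(N)) queries
print(f"success probability: {grover(n, marked, k_opt):.4f}")  # close to 1
```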

Advanced Concepts and Open Problems

Quantum Proof Systems

Quantum proof systems extend classical interactive proof systems to the quantum domain, allowing provers to send quantum states and verifiers to perform quantum measurements. These systems capture the power of quantum computation in verification tasks, where the prover aims to convince a computationally bounded verifier of a statement's validity through multi-round interactions. Unlike classical interactive proofs, quantum variants leverage superposition and entanglement, leading to surprising equalities with classical classes.

Quantum interactive proofs (QIP) consist of multi-round protocols in which the prover sends quantum messages to a polynomial-time quantum verifier. For a language in QIP, there exists a prover strategy such that, for yes-instances, the verifier accepts with probability at least 1 - \epsilon, while for no-instances, the acceptance probability is at most \epsilon over polynomially many rounds, where \epsilon is negligible. Formally, the completeness condition requires that for true statements x, the verifier V accepts with \Pr[V \text{ accepts} \mid x \in L] \geq 1 - \epsilon after poly(|x|) rounds, with the prover optimizing over quantum strategies. A landmark result shows that QIP equals PSPACE, demonstrating that quantum interaction does not increase power beyond classical space-bounded computation. As a single-round special case, quantum Merlin-Arthur (QMA) protocols involve a prover sending one quantum witness to the verifier, whereas QIP encompasses more general multi-round interactions.

Quantum zero-knowledge proofs adapt the zero-knowledge property to quantum settings, ensuring the verifier learns nothing beyond the statement's validity, even against quantum adversaries. A key example is the extension of the classical Goldreich-Micali-Wigderson protocol for graph non-isomorphism, where the prover convinces the verifier that two graphs are non-isomorphic without revealing an isomorphism witness or other information. This protocol remains statistically zero-knowledge against quantum verifiers, preserving security via an efficient simulator whose output is indistinguishable from the real interaction.

Multi-prover interactive proofs with entanglement, denoted MIP*, allow multiple provers sharing entanglement but no communication. The class MIP* equals RE, the set of all recursively enumerable languages, resolving long-standing questions about the power of entangled provers. This equality also provides a negative solution to Tsirelson's problem, showing that certain quantum correlation sets cannot be approximated by finite-dimensional systems, with implications for quantum information and operator algebras.

The quantum PCP conjecture posits an analog of the classical PCP theorem for quantum classes like QMA, conjecturing that yes-instances have quantum proofs verifiable by querying only a constant number of qubits with high probability. While unresolved, partial results establish low-error quantum Merlin-Arthur protocols with logarithmic query gaps, but constant-factor separations remain open, highlighting challenges in quantum proof verification.

Separations and Relativization

In quantum complexity theory, relativization refers to the practice of studying complexity classes relative to an oracle, a hypothetical black box that answers membership queries in a single step. This technique, pioneered by Baker, Gill, and Solovay in 1975, reveals that many proof methods in complexity theory hold or fail uniformly across relativized worlds, limiting their ability to resolve fundamental questions like the relationship between BQP and the polynomial hierarchy (PH). Specifically, most known separation techniques for quantum versus classical classes relativize, meaning they succeed or fail relative to every oracle; for instance, there exist oracles under which BQP is not contained in PH, as shown by Raz and Tal in 2019 (journal version 2022), who constructed an oracle distribution that a quantum algorithm distinguishes with few queries while any PH machine requires superpolynomial resources. Conversely, relativization also yields collapses: Fortnow and Rogers showed that in some relativized worlds P equals BQP, and the algebraic oracles constructed by Aaronson and Wigderson in 2008 exhibit similar behavior, highlighting that oracle-based methods cannot fully disentangle quantum advantages.

Non-relativizing techniques have proven essential for certain classical separations but face significant barriers when applied to quantum settings. A prominent example is the proof that IP equals PSPACE, established by Shamir in 1992 using arithmetization, which converts Boolean formulas into polynomial equations over finite fields to enable low-degree tests; this method does not relativize and thus evades the Baker-Gill-Solovay barrier. However, Aaronson and Wigderson's 2008 algebrization barrier extends relativization to algebraic oracles—those allowing low-degree extension queries—showing that arithmetization-based techniques "algebrize" and still cannot resolve major separations involving quantum classes such as BQP. As a result, quantum analogs of interactive proofs, such as QIP, have seen limited progress toward non-relativizing separations, underscoring the challenges in adapting these methods to capture quantum-specific phenomena like superposition and entanglement.

Algebraic methods provide another avenue for separations, building on approximation theory to bound circuit power. The Razborov-Smolensky technique, developed in the mid-1980s, approximates constant-depth circuits (AC^0, possibly with modular gates) by low-degree polynomials over finite fields, proving that functions like parity require superpolynomial size in such circuits; Razborov handled circuits with modulo-2 gates, while Smolensky extended the argument to modulo-p gates for prime p in 1987. Quantum extensions leverage similar approximate-degree arguments: Aaronson in 2010 used this style of reasoning to argue that certain quantum computations—such as distinguishing superpositions arising from a quantum walk on a Johnson graph—cannot be reproduced by subexponential-size AC^0 circuits, with the Razborov-Smolensky approximation bounding the degree needed for such distinctions. This non-oracle result suggests a concrete structural separation, as the problem involves distinguishing quantum superpositions that classical constant-depth circuits cannot approximate efficiently.

Recent advancements up to 2025 have strengthened query complexity separations, particularly for learning and counting problems. In query models, where algorithms access the input via queries, quantum protocols achieve provable speedups for specific tasks; for example, the k-forrelation problem, introduced by Aaronson and Ambainis, requires Ω(N^{1-1/k}) classical queries but only ⌈k/2⌉ quantum queries, an essentially optimal polynomial separation proved by Bansal and Sinha in 2021.
Similarly, in approximate counting—estimating the number of solutions to a search problem—the 1998 quantum counting algorithm of Brassard, Høyer, Tapp, and Mosca uses O(√N) queries via amplitude estimation, compared to Θ(N) for classical deterministic or randomized methods, a gap widened by recent functional separations, such as a 2024 result by Aaronson, Dunjko, and collaborators showing quantum advantages in sampling-based tasks beyond classical polynomial-time simulation. Despite these advances, limitations persist in the power of non-relativizing techniques for quantum separations. Raz and Tal's work not only provides the oracle separation of BQP from PH but also introduces novel lower bounds on constant-depth circuits using Fourier analysis over Gaussian distributions, demonstrating that even PH-level computations struggle to simulate quantum queries efficiently; this technique also reveals the restricted scope of current non-relativizing methods, as it relies on probabilistic tools that do not capture quantum circuit hardness without oracles. These barriers suggest that resolving unconditional separations, such as BQP versus P, will likely require entirely new proof paradigms beyond current algebraic or interactive frameworks.

Connections to Quantum Information Theory

Entanglement plays a central role in quantum complexity theory, particularly in the class QMA, where the verifier must assess quantum proofs that may involve entangled states provided by the prover. The local Hamiltonian problem, a QMA-complete problem, requires determining the ground-state energy of a quantum system described by local interactions, where the ground states are typically highly entangled, making verification computationally intensive. Furthermore, problems involving the estimation of entanglement in ground states, such as detecting whether low-energy states are close to product states or highly entangled, are QMA-complete, highlighting how entanglement certifies the hardness of quantum proofs.

Entanglement also serves as a key measure for assessing the classical simulability of quantum states and circuits. High entanglement correlates with exponential hardness of simulation on classical computers, as it quantifies the non-local correlations that tensor network methods and other approximation techniques struggle to capture efficiently. Recent work introduces non-stabilizerness entanglement entropy, which combines entanglement with magic (non-stabilizer) resource measures to better predict simulation difficulty, showing that states with elevated values resist efficient classical simulation even under area-law constraints.

In quantum communication complexity, protocols leveraging entanglement demonstrate how quantum resources can dramatically reduce the communication required compared to classical settings, with direct ties to quantum complexity theory. Superdense coding, introduced by Bennett and Wiesner, allows the transmission of two classical bits using a single qubit when shared entanglement is available, achieving a factor-of-two advantage over classical channels. Quantum teleportation, developed by Bennett et al., enables the transfer of an arbitrary qubit state using two classical bits and a shared entangled pair, implementable with simple quantum circuits, thus illustrating how entanglement can be harnessed for efficient distributed computation.

Holevo's theorem establishes a fundamental limit on the classical information extractable from quantum measurements, stating that n qubits can convey at most n classical bits reliably, with the Holevo quantity \chi(\{p_i, \rho_i\}) = S(\sum_i p_i \rho_i) - \sum_i p_i S(\rho_i) bounding the accessible information, where S denotes the von Neumann entropy. For quantum computing, this implies that while quantum computations can process superpositions internally, the final classical output—typically a decision bit obtained with high probability—is constrained by Holevo's bound, preventing quantum algorithms from efficiently solving problems that require extracting more than polynomially many classical bits without repeated measurements or quantum channels.

Quantum learning theory extends classical PAC learning to quantum settings, where learners access quantum examples—such as density operators or unitaries sampled from a distribution—to approximate target concepts with low error. In this framework, efficient quantum PAC learners operate within complexity classes like BQP/qpoly, which allow non-uniform quantum advice, enabling the learning of quantum states or functions believed intractable for classical learners under standard complexity assumptions. For instance, learning parities in the presence of noise can be achieved efficiently with quantum examples, whereas classical learners are conjectured to require superpolynomial resources. As of 2025, perspectives on quantum machine learning complexity emphasize proven advantages in specific tasks like kernel estimation, where quantum protocols can yield speedups over classical methods such as ridge regression by efficiently computing high-dimensional feature maps via quantum circuits, but no broad quantum advantage is known for general machine learning without cryptographic assumptions.
Benchmarks confirm these gains in noisy intermediate-scale quantum settings for classification tasks, though scalability remains limited by error rates.
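To make Holevo's bound concrete, the following sketch (an illustration with an arbitrarily chosen two-state ensemble, not from the cited sources) computes the Holevo quantity χ for an equal mixture of |0⟩ and |+⟩; χ comes out below one bit even though a full qubit is transmitted.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # discard numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# Ensemble: |0> and |+> sent with equal probability.
ket0 = np.array([1, 0], dtype=complex)
ketp = np.array([1, 1], dtype=complex) / np.sqrt(2)
rhos = [np.outer(k, k.conj()) for k in (ket0, ketp)]
ps = [0.5, 0.5]

# chi = S(average state) - average S(signal states); pure states have S = 0.
rho_avg = sum(p * r for p, r in zip(ps, rhos))
chi = von_neumann_entropy(rho_avg) - sum(
    p * von_neumann_entropy(r) for p, r in zip(ps, rhos))
print(f"Holevo quantity chi = {chi:.4f} bits (< 1 despite sending a qubit)")
```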

References

1. A Survey of Quantum Complexity Theory.
2. Quantum Computation and Complexity. Stanford CS Theory.
3. Quantum Complexity Theory. SIAM Journal on Computing.
5. Distributed quantum computing across an optical network link. Nature, Feb 5, 2025.
6. On the Computational Complexity of Algorithms.
7. Cook (1971). Department of Computer Science, University of Toronto.
8. Relativizations of the P =? NP Question. SIAM Journal on Computing.
9. The Church–Turing principle and the universal quantum computer.
10. Simulating Physics with Computers.
11. Quantum circuit complexity.
12. On the quantum mechanics of collision processes ("Zur Quantenmechanik der Stoßvorgänge," Zeit. Phys. 37 (1926), 863–867).
13. A single quantum cannot be cloned. Nature 299, 802 (1982).
14. Complexity limitations on quantum computation. arXiv:cs/9811023 (1998).
15. On the Power of Quantum Computation. SIAM Journal on Computing.
16. 3-Local Hamiltonian is QMA-complete. arXiv:quant-ph/0302079 (2003).
17. Complexity Zoo: Q.
18. Quantum generalizations of the polynomial hierarchy with applications to QMA(2). arXiv:1805.11139 (2018).
19. A classical proof of quantum knowledge for multi-prover interactive proof systems (2025).
20. QuEST and High Performance Simulation of Quantum Computers (2019).
21. The density-matrix renormalization group in the age of matrix product states. arXiv:1008.3477 (2010).
22. Classically optimized variational quantum eigensolver (2023).
23. Quantum Circuits That Can Be Simulated Classically in Polynomial Time.
24. A Linear-Optical Proof that the Permanent is #P-Hard. arXiv:1109.1674 (2011).
25. From estimation of quantum probabilities to simulation of quantum circuits (2020).
26. Quantum Lower Bounds by Polynomials. arXiv:quant-ph/9802049 (1998).
27. Ph/CS 219A Quantum Computation.
28. Quantum query complexity of some graph problems. arXiv (2004).
29. Quantum Query Complexity of Some Graph Problems.
30. Quantum Algorithms for Graph Traversals and Related Problems.
31. Average-Case Quantum Query Complexity. CWI.
32. A fast quantum mechanical algorithm for database search. arXiv (1996).
33. Quantum walk algorithm for element distinctness. arXiv:quant-ph/0311001 (2003).
34. Forrelation: A Problem that Optimally Separates Quantum from Classical Computing. arXiv (2014).
35. Parameterized Quantum Query Algorithms for Graph Problems. arXiv.
36. Rapid solution of problems by quantum computation.
37. QIP = PSPACE. arXiv:0907.4737 (2009).
38. Post-Quantum Zero Knowledge, Revisited. arXiv:2111.12257 (2021).
39. MIP* = RE. arXiv:2001.04383 (2020).
40. The status of the quantum PCP conjecture (games version). arXiv (2024).
41. Oracle Separation of BQP and PH. Journal of the ACM.
42. Algebrization: A New Barrier in Complexity Theory.
43. BQP and the Polynomial Hierarchy.
44. k-Forrelation optimally separates quantum and classical query complexity.
45. Improved separation between quantum and classical computers for … (2024).
46. QMA-complete problems. arXiv (2013).
48. Non-stabilizerness Entanglement Entropy. arXiv (2024).
49. Quantum Computing and Communication Complexity. CWI.
50. Quantum Computational Complexity. arXiv (2008).
51. A Survey of Quantum Learning Theory. CWI.
52. The Learnability of Quantum States.
53. Quantum advantage for learning with kernels and noise. arXiv.
54. Benchmarking quantum machine learning kernel training for … arXiv (2024).