
BQP

BQP, or Bounded-error Quantum Polynomial time, is a complexity class in computational complexity theory that consists of decision problems solvable by a quantum Turing machine in polynomial time, with the probability of error bounded away from 1/2—specifically, at least 2/3 probability of acceptance for "yes" instances and at most 1/3 for "no" instances. Introduced by Ethan Bernstein and Umesh Vazirani in their seminal 1993 paper, BQP formalizes the power of quantum computation for decision problems under bounded error, extending classical notions like BPP (Bounded-error Probabilistic Polynomial time) to incorporate quantum superposition and interference. The class is defined using quantum Turing machines or equivalent uniform families of polynomial-size quantum circuits, where the computation evolves unitarily and measurement yields the output with the specified success probability. Key inclusions establish BQP's position relative to classical classes: P ⊆ BPP ⊆ BQP ⊆ PSPACE, reflecting that deterministic and probabilistic polynomial-time problems are feasible quantumly, while quantum computations remain within polynomial space. Additionally, BQP ⊆ P^#P, linking it to counting classes. The relationship to NP remains unresolved, with no known proof that NP ⊆ BQP or vice versa, though quantum algorithms like Shor's for factoring (in BQP but believed to lie outside BPP) suggest potential separations from classically intractable problems. Notable relativized separations demonstrate BQP's power beyond BPP: Bernstein and Vazirani exhibited an oracle relative to which BQP ≠ BPP via a recursive Fourier sampling problem solvable quantumly in polynomial time but requiring superpolynomial classical time. BQP also contains exact quantum classes like EQP (Exact Quantum Polynomial time) and supports subroutines and error reduction via repetition, making it robust for algorithm design. Relative to suitable oracles, problems such as Simon's periodicity finding further highlight separations from classes like BPP and SZK.

Fundamentals

Definition

BQP, or Bounded-error Quantum Polynomial time, is the complexity class consisting of all decision problems solvable by a quantum Turing machine in polynomial time, where the machine accepts yes-instances with probability at least \frac{2}{3} and rejects no-instances with probability at least \frac{2}{3}, allowing for a bounded error probability of at most \frac{1}{3}. Formally, a language L \subseteq \{0,1\}^* is in BQP if there exists a polynomial-time quantum Turing machine M such that for every input x \in \{0,1\}^n, if x \in L, then \Pr[M(x) = 1] \geq \frac{2}{3}, and if x \notin L, then \Pr[M(x) = 1] \leq \frac{1}{3}. This bounded-error model ensures reliable computation despite the probabilistic outcomes inherent to quantum measurement, and the specific thresholds of \frac{2}{3} and \frac{1}{3} are conventional choices analogous to those in the classical BPP class. The error probability can be amplified to exponentially small values, such as 2^{-q(n)} for any polynomial q(n), by repeating the computation a polynomial number of times and taking the majority vote over the outcomes, without increasing the overall running time beyond polynomial. The standard definition of BQP employs uniform quantum Turing machines or uniform families of polynomial-size quantum circuits, where the description of the machine or circuit for input size n can be generated in polynomial time by a classical deterministic Turing machine. In contrast, non-uniform BQP (often denoted BQP/poly) allows circuit families augmented with polynomial-length classical advice strings that depend on the input length but not the specific input, enabling computations that may not be describable uniformly. BQP was introduced in 1993 by Ethan Bernstein and Umesh Vazirani as part of their foundational work on quantum complexity theory, where they established the equivalence between quantum Turing machines and quantum circuits as computational models for the class.
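The majority-vote amplification described above can be checked numerically. The sketch below is illustrative (the helper name `majority_error` is an invention for this example): it computes the exact probability that the majority of an odd number of independent runs is wrong when each run errs with probability at most 1/3.

```python
from math import comb

def majority_error(per_run_error: float, runs: int) -> float:
    """Probability that the majority vote over an odd number of
    independent runs is wrong, given each run errs with per_run_error."""
    assert runs % 2 == 1
    need = runs // 2 + 1  # a wrong majority needs this many wrong runs
    return sum(comb(runs, k) * per_run_error**k * (1 - per_run_error)**(runs - k)
               for k in range(need, runs + 1))

# A single run errs with probability 1/3 (the BQP threshold); repetition
# drives the overall error down exponentially in the number of runs.
print(majority_error(1/3, 1))    # exactly 1/3
print(majority_error(1/3, 101))  # below 0.01
print(majority_error(1/3, 401))  # below 1e-6
```

Since the repetition count is polynomial in n, error levels of 2^{-q(n)} are reachable without leaving polynomial time, exactly as the amplification argument states.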

Quantum Computational Model

The quantum computational model underlying BQP is primarily formalized using the quantum Turing machine (QTM), which extends the classical Turing machine to operate on quantum states via unitary evolution. A QTM is defined by a finite set of states, an input alphabet, and a quantum transition function that maps the current state and tape symbol to a superposition of possible next states, symbols, and head movements, ensuring the evolution preserves the norm of the quantum state. The computation proceeds in discrete steps, with the machine's configuration represented as a superposition over classical configurations, evolving unitarily according to the transition rules. For inputs of length n, the running time is bounded by a polynomial p(n), meaning the number of steps is at most O(p(n)), and the tape length used is also polynomial in n. Upon halting, measurement in the computational basis yields a classical output string from the tape contents. Equivalent in expressive power to the QTM is the quantum circuit model, which represents computations as families of unitary transformations applied to qubit registers. In this model, a circuit on an n-bit input is specified by a sequence of elementary unitary gates drawn from a universal gate set, such as the Hadamard gate H, controlled-NOT (CNOT), and single-qubit phase gates, acting on a polynomial number of qubits (typically O(n^k) for some constant k). The running time corresponds to the size and depth of the circuit—the depth being the longest path through the gates—which must be polynomial in n. Yao's theorem establishes that any polynomial-time QTM can be simulated by a uniform family of polynomial-size quantum circuits, and vice versa, making the models interchangeable for defining complexity classes like BQP. The power of these models for BQP stems from core quantum principles: superposition, allowing the system to evolve as a linear combination of basis states; entanglement, correlating qubits such that their joint state cannot be factored into independent single-qubit states; and interference, where amplitudes from different computational paths add constructively or destructively to amplify desired outcomes.
The initial state is prepared as |0\rangle^{\otimes m} for m qubits, and the final state before measurement is |\psi\rangle = U |0\rangle^{\otimes m}, where U is the overall unitary composed from the gates (or QTM transitions), satisfying unitarity U^\dagger U = I. This evolution allows exponentially many computational paths to be explored in superposition, in contrast to classical models. Classical inputs of length n are encoded by initializing a register of n qubits in the computational basis state |x\rangle, where x is the binary string representation, often padded with ancillary qubits initialized to |0\rangle for workspace. Computation proceeds on this register, with additional ancillas allowing reversible operations to avoid information loss. The output is obtained by measuring the qubits in the computational basis, yielding a probability distribution over 2^m basis states; for decision problems in BQP, acceptance is typically defined by the measurement outcome of the first qubit being 1 with probability at least 2/3 (rejecting otherwise), with bounded error over the randomness of measurement.
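The circuit model above can be made concrete with a minimal state-vector simulator. This is an illustrative sketch in pure Python (all function names are inventions for this example): it prepares a Bell state from |00⟩ with a Hadamard and a CNOT, then reads off the probability that the first qubit measures 1.

```python
from math import sqrt

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` (qubit 0 = most significant
    bit) of an n-qubit state vector of 2**n complex amplitudes."""
    new = [0j] * len(state)
    bit = 1 << (n - 1 - target)
    for i, amp in enumerate(state):
        if amp == 0:
            continue
        b = 1 if i & bit else 0           # current value of the target bit
        new[i & ~bit] += gate[0][b] * amp  # contribution to target = 0
        new[i | bit]  += gate[1][b] * amp  # contribution to target = 1
    return new

def apply_cnot(state, control, target, n):
    new = [0j] * len(state)
    cbit = 1 << (n - 1 - control)
    tbit = 1 << (n - 1 - target)
    for i, amp in enumerate(state):
        new[(i ^ tbit) if (i & cbit) else i] += amp
    return new

def prob_first_qubit_one(state, n):
    bit = 1 << (n - 1)
    return sum(abs(a) ** 2 for i, a in enumerate(state) if i & bit)

H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]

n = 2
state = [0j] * (1 << n)
state[0] = 1 + 0j                      # |00>
state = apply_1q(state, H, 0, n)       # (|00> + |10>)/sqrt(2)
state = apply_cnot(state, 0, 1, n)     # Bell state (|00> + |11>)/sqrt(2)
print(prob_first_qubit_one(state, n))  # 0.5 up to float rounding
```

The exponential length of `state` (2^n amplitudes) is exactly why this classical simulation does not scale, while the quantum device manipulates the same state natively.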

Complete Problems

Promise-BQP

Promise problems provide a framework for computational tasks where the input is guaranteed to belong to one of two disjoint subsets of binary strings, denoted Y (the "yes" instances) and N (the "no" instances), with Y \cap N = \emptyset. The promise restricts the algorithm to correctly distinguishing between Y and N only on inputs in Y \cup N, while behavior on inputs outside this set is irrelevant. This structure is particularly suited to quantum computation, where extending BQP to such restricted inputs allows for modeling realistic quantum algorithms that rely on input guarantees. The class Promise-BQP consists of all promise problems (Y, N) for which there exists a polynomial-time uniform family of quantum circuits \{Q_x\} such that, for every x \in Y, the probability that Q_x accepts (outputs 1 upon measurement of the first qubit) is at least 2/3, and for every x \in N, the acceptance probability is at most 1/3. These probabilities can be amplified arbitrarily close to 1 and 0, respectively, without increasing the runtime beyond polynomial. This definition extends the core BQP model to promise settings while preserving the bounded-error requirement. The motivation for Promise-BQP arises because many natural quantum problems are inherently promise-based rather than total decision problems over all inputs; for instance, evaluating quantum circuits often involves promises about input validity to avoid ill-posed total extensions. A representative example is the problem of approximating quantum circuit acceptance probabilities, which is Promise-BQP-complete. This formulation avoids pathological cases where total problems might be undecidable or trivialize complexity separations. The standard class BQP, defined over languages of binary strings with bounded-error quantum polynomial-time decidability (i.e., the trivial promise Y \cup N = \{0,1\}^*), is contained in Promise-BQP. BQP itself was originally introduced by Bernstein and Vazirani in 1993.
The formalization of promise problems in quantum complexity traces back to the 1990s, notably through Alexei Kitaev's work defining quantum NP (now QMA) as a promise class to capture verifiable quantum proofs.

APPROX-QCIRCUIT-PROB

APPROX-QCIRCUIT-PROB is a promise problem that is complete for the complexity class Promise-BQP under polynomial-time reductions. It captures the essential computational power of quantum circuits by requiring the estimation of a specific output probability, making it a natural representative for the hardness of quantum computation. Other complete problems include estimating diagonal entries of powers of sparse matrices. This problem refines earlier formulations by focusing on efficient estimation within a promise gap, ensuring it aligns closely with the bounded-error model of BQP. The formal definition of APPROX-QCIRCUIT-PROB is as follows: the input consists of a quantum circuit C of size polynomial in n (the number of input qubits), specified by a string of length poly(n) and acting on n + k qubits, where the k additional qubits serve as workspace, together with an accuracy parameter 1/\text{poly}(n). The task is to approximate the probability p that, upon initializing all qubits in the |0\rangle state and running C, measuring the first output qubit yields |1\rangle. The algorithm must output an estimate \hat{a} such that |\hat{a} - p| \leq 1/\text{poly}(n). The promise in APPROX-QCIRCUIT-PROB restricts instances to those where the true probability p is either at most 1/3 or at least 2/3; the algorithm must output "0" if p \leq 1/3 and "1" if p \geq 2/3, with arbitrary behavior permitted only if p falls in the gap between 1/3 and 2/3. This gap ensures the problem is well-posed for bounded-error computation, mirroring the error tolerance in BQP definitions. Mathematically, the acceptance probability is p = \left\| (|1\rangle\langle 1| \otimes I) \, C \, |0\rangle^{\otimes (n+k)} \right\|^2, the squared norm of the projection of the final state onto outcomes where the first output qubit is |1\rangle, and the approximation requirement is |\hat{a} - p| \leq 1/\text{poly}(n). To establish completeness, any problem in Promise-BQP reduces to APPROX-QCIRCUIT-PROB in polynomial time.
Given a Promise-BQP machine M deciding a promise problem L with success probability at least 2/3 for yes-instances and at most 1/3 for no-instances, construct a circuit C that simulates M on input x of length n, using the standard encoding of quantum Turing machines into circuits of polynomial size. The first output qubit of C is set to represent the acceptance bit of M, so the measurement probability p matches the acceptance probability of M. An algorithm solving APPROX-QCIRCUIT-PROB thus decides L by checking whether the estimate \hat{a} indicates p \geq 2/3 or p \leq 1/3, preserving the promise gap. This reduction holds because quantum Turing machines are equivalent to quantum circuits up to polynomial overhead. For hardness, APPROX-QCIRCUIT-PROB is Promise-BQP-hard, meaning no efficient classical probabilistic algorithm can solve it unless BQP = BPP; details of classical hardness arguments, such as anti-concentration bounds or average-case hardness assumptions, are deferred to broader discussions of quantum versus classical separations.
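The decision step of this reduction can be sketched classically: given the ability to sample the circuit's output bit, Hoeffding's inequality bounds how many repetitions suffice to estimate p within the promise gap. In this illustrative sketch (all names are inventions for the example), a biased coin stands in for running the quantum circuit.

```python
import random
from math import ceil, log

def estimate_acceptance(sample_circuit, eps, delta):
    """Estimate p = Pr[first output qubit is 1] to additive error eps
    with failure probability delta, using Hoeffding's bound for the
    number of shots."""
    shots = ceil(log(2 / delta) / (2 * eps ** 2))
    hits = sum(sample_circuit() for _ in range(shots))
    return hits / shots

def decide(p_hat):
    # Under the promise (true p <= 1/3 or p >= 2/3) and eps < 1/6,
    # comparing the estimate against 1/2 decides correctly.
    return 1 if p_hat >= 0.5 else 0

random.seed(0)
# Stand-in for the quantum circuit: a yes-instance with p = 0.75.
yes_instance = lambda: 1 if random.random() < 0.75 else 0
p_hat = estimate_acceptance(yes_instance, eps=0.05, delta=0.01)
print(decide(p_hat))  # 1
```

Because eps and delta are inverse polynomials, the shot count stays polynomial, which is why estimation within the gap fits the bounded-error model.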

Relationships to Other Classes

BQP and P

The class P consists of all decision problems that can be solved by a deterministic Turing machine in polynomial time. It is a fundamental class in classical computation, capturing efficient deterministic algorithms. The relationship between P and BQP (bounded-error quantum polynomial time) begins with the inclusion P \subseteq \text{BQP}, established by the ability of quantum computers to simulate classical deterministic computations efficiently. This relies on reversible quantum gates (such as the Toffoli gate) that implement classical operations without loss of information, incurring only polynomial overhead in time and space compared to the original classical algorithm. Although P is contained in BQP, the classes are suspected to be properly separated, with \text{BQP} \not\subseteq P, indicating that quantum computers can achieve speedups over classical deterministic ones for certain problems. A seminal example is integer factoring, the problem of finding the prime factors of a composite integer N. In 1994, Peter Shor developed a quantum algorithm that solves this in polynomial time, using the quantum Fourier transform to find the period of a modular exponentiation function, placing the decision version of FACTORING (determining whether N has a factor in a given range) in BQP. This algorithm runs in O((\log N)^3) quantum steps, exponentially faster than the best known classical algorithms, which require subexponential time; factoring is not believed to be in P, a belief underlying the security of systems like RSA cryptography. Shor's algorithm, formally published in 1997, provided the first concrete evidence suggesting P \subsetneq \text{BQP}, as factoring was long considered intractable for polynomial-time classical algorithms. Despite this, no proof exists that the inclusion is proper, leaving the question of whether P = \text{BQP} a major open problem in complexity theory.
If P = \text{BQP} were shown, it would mean quantum computers provide no advantage for decision problems solvable deterministically in polynomial time, potentially diminishing the theoretical motivation for quantum hardware in classical simulation tasks.
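The classical skeleton of Shor's algorithm—reducing factoring to order finding—can be illustrated directly. In this sketch (function names are inventions for the example), the order-finding subroutine is done by brute force in exponential time; it is precisely this step that the quantum Fourier transform performs in polynomial time.

```python
from math import gcd

def multiplicative_order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N). Shor's algorithm finds r in
    quantum polynomial time; this brute-force loop is exponential in
    the bit length of N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_order(a, N):
    """Classical post-processing in Shor's algorithm: an even order r
    with a^(r/2) != -1 (mod N) yields a nontrivial factor via gcd."""
    r = multiplicative_order(a, N)
    if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
        return None  # unlucky base; retry with a different a
    return gcd(pow(a, r // 2) - 1, N)

N, a = 15, 7
print(multiplicative_order(a, N))  # 4, since 7^4 = 2401 = 1 (mod 15)
print(factor_from_order(a, N))     # 3 = gcd(7^2 - 1, 15)
```

A random base a succeeds with constant probability for odd composite N that is not a prime power, so expected polynomially many retries suffice once order finding itself is fast.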

BQP and BPP

The class BPP consists of decision problems that can be solved by a classical probabilistic Turing machine in polynomial time, where the machine has access to a source of random bits, accepts inputs in the language with probability at least 2/3, and rejects inputs outside the language with probability at least 2/3 (equivalently, error probability at most 1/3). This bounded-error model captures randomized algorithms that tolerate some uncertainty while running efficiently. It is known that BPP ⊆ BQP, as a quantum computer can simulate classical probabilistic computation by generating random bits—for example, by applying a Hadamard gate and measuring—without increasing the asymptotic running time. In this inclusion, the quantum model subsumes the randomness of BPP by leveraging superposition and measurement to mimic or surpass classical random sampling. The broader chain P ⊆ BPP ⊆ BQP reflects the progression from deterministic to bounded-error classical to quantum computation. Although BPP ⊆ BQP holds unconditionally, there is strong evidence suggesting a proper inclusion BPP ⊊ BQP, with quantum algorithms demonstrating speedups believed unattainable by classical probabilistic methods. For instance, Grover's algorithm provides a quadratic speedup for unstructured search, solving it in O(\sqrt{N}) queries compared to the \Theta(N) query requirement for classical probabilistic algorithms. More decisively, oracle separations exist where BQP ⊄ BPP; Simon's problem, for example, can be solved with polynomially many quantum queries but requires exponentially many queries for any classical probabilistic algorithm relative to the corresponding oracle. These separations highlight quantum advantages in exploiting structured oracles beyond classical limits. Every BPP problem is simulable quantumly, consistent with the inclusion; the oracle separations, however, suggest that BQP properly contains BPP even though no unrelativized separation is known.
While related classes like QMA (quantum analog of NP) intersect with these considerations through interactive proofs, their full implications for BQP versus BPP remain deferred to broader complexity analyses.
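Grover's quadratic speedup mentioned above follows from the rotation picture of amplitude amplification: after k iterations the success probability is sin²((2k+1)θ) with θ = arcsin(1/√N). A short illustrative sketch (names are inventions for the example):

```python
from math import asin, floor, pi, sin, sqrt

def grover_success(N, k):
    """Success probability after k Grover iterations on a search space
    of size N with a single marked item: sin^2((2k+1)*theta), where
    theta = arcsin(1/sqrt(N))."""
    theta = asin(1 / sqrt(N))
    return sin((2 * k + 1) * theta) ** 2

N = 1024
k_opt = floor(pi / 4 * sqrt(N))   # ~25 iterations, vs ~N/2 classical probes
print(k_opt, grover_success(N, k_opt))
```

Running more iterations past k_opt rotates the state away from the marked item again, which is why the iteration count must be tuned to roughly (π/4)√N.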

BQP and NP

The relationship between BQP and NP remains unresolved in quantum complexity theory, with neither BQP ⊆ NP nor NP ⊆ BQP established. It is conjectured that the classes are incomparable, each containing problems outside the other, as supported by evidence from circuit complexity and quantum proof systems suggesting that quantum computation transcends nondeterministic verification in certain ways. This uncertainty underscores the distinct paradigms: BQP relies on quantum superposition and interference for bounded-error decisions in polynomial time, while NP involves nondeterministic polynomial-time verification of certificates. A key example illustrating the interplay is Grover's algorithm, which solves the unstructured search problem—whose decision version (determining whether a solution exists in an unsorted database of size N) lies in NP—in O(\sqrt{N}) time with high probability, placing it in BQP. Classically, ruling out a solution requires \Omega(N) queries in the worst case, highlighting BQP's quadratic speedup for this NP-style search task, though the speedup does not extend to polynomial-time solutions of NP-complete problems. This algorithm demonstrates how quantum methods can accelerate search tasks amenable to NP-style verification without resolving the broader inclusion question. If NP ⊆ BQP were true, quantum computers could efficiently solve NP-complete problems, enabling polynomial-time solutions to optimization challenges like SAT, but this is deemed improbable given known barriers such as the BBBV \Omega(\sqrt{N}) query lower bound for quantum search. Relative to the polynomial hierarchy (PH), of which NP is the first level \Sigma_1^P, BQP is likewise believed to be incomparable: BQP is not known to be contained in PH, and evidence from approximate counting and the Raz–Tal Forrelation problem yields an oracle relative to which BQP ⊄ PH. No NP-complete problem is known to lie in BQP, reflecting the challenge of bridging quantum and nondeterministic hardness via certificate-based reductions.

BQP and PP

The complexity class PP consists of all decision problems for which there exists a probabilistic Turing machine running in polynomial time such that the probability of outputting 1 (acceptance) is greater than 1/2 if the input is a yes instance and less than 1/2 if it is a no instance; this allows unbounded error probability as long as the majority of the probability mass favors the correct outcome. A fundamental result in quantum complexity theory is the inclusion BQP ⊆ PP, established by showing that any bounded-error quantum computation can be simulated by a PP machine. The proof relies on restricting the machine to algebraic transition amplitudes (standard in the BQP model) and expressing the final measurement probability as a quantity computable via counting problems. Specifically, viewing the computation as a Feynman path sum, the acceptance amplitude is a signed sum over 2^k computational paths (with k polynomial in the input size), so the acceptance probability corresponds to a difference of two #P functions (a GapP value); deciding whether this probability exceeds 1/2 then amounts to a majority test over the paths, which is exactly a PP computation. This containment places BQP within a classical complexity class known for its expressive power, as PP admits complete problems under parsimonious reductions that capture the counting tasks central to many hard problems. Consequently, quantum computation with bounded error does not exceed the decision capabilities of classical probabilistic machines allowing unbounded error, indicating that BQP offers no dramatic separation from probabilistic classical computing in this framework. Nonetheless, BQP = PP is widely regarded as improbable, since canonical PP-complete problems like MAJSAT—deciding whether a majority of assignments satisfy a Boolean formula—appear intractable for quantum polynomial-time algorithms.
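The path-sum structure behind BQP ⊆ PP can be made concrete for a toy circuit of two Hadamard layers, where the amplitude is exactly a normalized difference of path counts—a GapP-style quantity. This is an illustrative sketch of the idea, not the general construction:

```python
def dot(u, v):
    """Parity-relevant inner product of two bit strings given as ints."""
    return bin(u & v).count("1")

def amplitude_two_hadamard_layers(x, y, n):
    """<y| H^(x)n H^(x)n |x> computed as a Feynman path sum: each of the
    2^n intermediate basis states z contributes a path of sign
    (-1)^(x.z + z.y); the amplitude is (positive - negative paths)/2^n,
    a normalized difference of two #P-style counts (a GapP value)."""
    pos = neg = 0
    for z in range(1 << n):
        if (dot(x, z) + dot(z, y)) % 2 == 0:
            pos += 1
        else:
            neg += 1
    return (pos - neg) / (1 << n)

n = 4
print(amplitude_two_hadamard_layers(0b1010, 0b1010, n))  # 1.0: H*H = I
print(amplitude_two_hadamard_layers(0b1010, 0b0110, n))  # 0.0: paths cancel
```

Deciding whether such a signed path count exceeds a threshold is a majority test over exponentially many paths, which is the shape of computation PP captures.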

BQP and PSPACE

The class PSPACE consists of all decision problems that can be solved by a classical deterministic Turing machine using an amount of memory polynomial in the input size. This class captures computations where the working memory is limited to polynomial size, but the running time can be exponential. A key result in quantum complexity theory is that BQP is contained in PSPACE, meaning every problem solvable by a bounded-error quantum polynomial-time machine can also be solved by a classical algorithm using only polynomial space (albeit possibly exponential time). This containment was established in the early 1990s by Bernstein and Vazirani, who showed that a quantum Turing machine running in polynomial time can be simulated classically within polynomial space. The proof relies on representing the quantum computation as a product of unitary matrices applied to an initial state, and computing the acceptance probability recursively. Specifically, the final amplitude for an accepting configuration is expressed as a sum over exponentially many computational paths through the computation, but this sum can be evaluated by a divide-and-conquer approach analogous to Savitch's theorem, which converts nondeterministic space-bounded computations to deterministic ones at the cost of squaring the space bound. This recursive evaluation avoids storing the entire exponential-size state vector at once, instead computing intermediate contributions on the fly and reusing space. The simulation is space-efficient: polynomial space suffices for a polynomial-size circuit on n qubits, since the recursion depth is polynomial and each level stores only a gate description, a few basis-state indices, and accumulated amplitudes. However, the running time remains exponential, reflecting the inherent cost of classically simulating quantum computation. No proof of separation between BQP and PSPACE exists as of 2025, leaving open whether quantum computers provide any advantage relative to space-bounded classical computation; if BQP = PSPACE, then polynomial-time quantum computation would be exactly as powerful as polynomial-space classical computation, which is considered unlikely.
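The divide-and-conquer simulation can be sketched for circuits of single-qubit gates: amplitudes are computed by recursively splitting the gate sequence in time and summing over intermediate basis states, trading exponential time for small memory. This is an illustrative sketch under that restriction (all names are inventions for the example), not the full Bernstein–Vazirani construction:

```python
from math import sqrt

def gate_entry(gate, x, y, n):
    """Matrix element <y|G|x> of a one-qubit gate applied to one qubit
    of an n-qubit register, identity elsewhere. gate = (matrix, target)."""
    mat, target = gate
    bit = 1 << (n - 1 - target)
    if (x & ~bit) != (y & ~bit):   # untouched qubits must agree
        return 0.0
    return mat[1 if y & bit else 0][1 if x & bit else 0]

def amplitude(gates, x, y, n):
    """<y| G_k ... G_1 |x> by divide and conquer over time: split the
    gate sequence in half and sum over the 2^n intermediate basis
    states. Memory grows only with recursion depth and n, instead of
    the 2^n amplitudes a state-vector simulation stores; the running
    time is exponential, mirroring the BQP-in-PSPACE simulation."""
    if len(gates) == 1:
        return gate_entry(gates[0], x, y, n)
    mid = len(gates) // 2
    return sum(amplitude(gates[:mid], x, z, n) * amplitude(gates[mid:], z, y, n)
               for z in range(1 << n))

H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]
n = 2
circuit = [(H, 0), (H, 1), (H, 0), (H, 1)]  # H twice on each qubit = identity
print(round(amplitude(circuit, 0b00, 0b00, n), 6))  # 1.0
print(round(amplitude(circuit, 0b00, 0b01, n), 6))  # 0.0
```

Squaring the returned amplitudes and summing over accepting outcomes recovers the acceptance probability without ever materializing the full state vector.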

BQP and EXP

The class EXP, or deterministic exponential time, encompasses all decision problems solvable by a deterministic Turing machine in time O(2^{n^k}) for some constant k \geq 1, formally defined as \mathrm{EXP} = \bigcup_{k \in \mathbb{N}} \mathrm{DTIME}(2^{n^k}). This class captures computations requiring exponential resources in the worst case. It is straightforward that \mathrm{BQP} \subseteq \mathrm{EXP}, since any quantum polynomial-time machine with n qubits and poly(n) gates can be simulated by a classical deterministic machine in time exponential in n, by tracking all 2^{\mathrm{poly}(n)} amplitudes of the state vector (or, equivalently, enumerating all computation paths). Whether this inclusion is strict, however, is open. The chain \mathrm{P} \subseteq \mathrm{BQP} \subseteq \mathrm{PSPACE} \subseteq \mathrm{EXP} combines the containment \mathrm{BQP} \subseteq \mathrm{PSPACE}—proved by recursively computing the acceptance probability via path summation in polynomial space—with the standard simulation of polynomial-space machines in exponential time: a polynomial-space machine has at most exponentially many configurations, so it either halts within exponential time or loops forever. The deterministic time hierarchy theorem yields \mathrm{P} \subsetneq \mathrm{EXP}, since it constructs languages decidable in exponential time but not in polynomial time. Consequently, at least one inclusion in the chain \mathrm{P} \subseteq \mathrm{BQP} \subseteq \mathrm{PSPACE} \subseteq \mathrm{EXP} must be proper, though it is not known which; in particular, proving \mathrm{BQP} \neq \mathrm{EXP} would require separating either BQP from PSPACE or PSPACE from EXP, both of which remain open.
These results underscore the limits of known quantum speedups: while BQP enables efficient solutions to select problems with no known polynomial-time classical algorithm (e.g., factoring large integers), it is widely believed not to encompass all of EXP, confining quantum advantages to structured tasks rather than arbitrary exponential computations.

Theoretical Implications

Relativization and Oracles

Relativization refers to the technique of studying complexity classes relative to an oracle, an abstract black box that answers membership queries for a fixed language. The Baker–Gill–Solovay theorem demonstrates that there exist oracles A and B such that \mathrm{P}^A = \mathrm{NP}^A while \mathrm{P}^B \neq \mathrm{NP}^B, implying that any proof technique resolving \mathrm{P} versus \mathrm{NP} must be non-relativizing, as it fails to hold uniformly across all relativized worlds. This barrier extends to quantum complexity classes like BQP: many questions about BQP's relation to classical classes such as P, NP, or the polynomial hierarchy (PH) have answers that vary with the oracle, so relativizing techniques cannot settle them. Specific oracle constructions highlight this variability. There exists an oracle A relative to which \mathrm{P}^A = \mathrm{BQP}^A—for instance, a PSPACE-complete oracle, relative to which quantum computation offers no advantage over classical deterministic computation. In contrast, relative to another oracle B based on a problem requiring superpolynomially many classical queries but solvable efficiently by quantum algorithms—such as recursive Fourier sampling—\mathrm{P}^B \neq \mathrm{BQP}^B. These relativized results imply that resolving key questions like \mathrm{P} versus BQP requires non-relativizing proof techniques, which access the internal structure of computations rather than treating them as black boxes. Examples include interactive proof systems, as in the proof that IP = PSPACE, which relies on arithmetization to sum over low-degree polynomials and does not relativize. Another approach is algebraic geometry, pursued in the Geometric Complexity Theory program to separate algebraic varieties corresponding to the permanent and determinant polynomials, potentially yielding non-relativizing separations for related complexity questions.
In the quantum setting, separations involving oracles for classes like QMA (a quantum analog of NP) further illustrate these barriers; for instance, quantum oracles can separate QMA from its classical-witness variant QCMA by exploiting superposition in query access. Recent work has strengthened black-box separations, showing that relative to certain oracles BQP is not contained in PH, using problems like Forrelation whose quantum advantage is undetectable by the classical polynomial hierarchy.

Open Questions

One of the central open questions in quantum complexity theory is whether P equals BQP, which would imply that quantum computers provide no computational advantage over classical deterministic polynomial-time machines for decision problems. If P = BQP holds, then problems solvable efficiently by quantum algorithms, such as integer factoring via Shor's algorithm, could in principle be addressed classically in polynomial time, undermining claims of quantum speedup. Conversely, establishing P ≠ BQP would confirm a strict separation, solidifying the practical value of quantum computation for certain tasks, though relativization barriers suggest this separation may require non-relativizing techniques to prove. The relationship between BQP and NP remains unresolved, particularly whether BQP contains NP-hard problems or whether NP ⊆ BQP, which would dramatically expand the scope of quantum solvability. It is unknown whether BQP intersects NP in a non-trivial way beyond problems like factoring, which lie in NP ∩ coNP; a resolution here could either place quantum computation within the polynomial hierarchy (PH) or reveal quantum power beyond it. Recent query complexity results from 2023–2025, including lower bounds on quantum algorithms for search and decision problems, provide evidence for separations between BQP and NP under certain oracles, but unconditional separations elude proof. For instance, advancements in adversary methods for models with non-collapsing measurements have strengthened lower bounds relative to classical classes, hinting at inherent quantum advantages without resolving the inclusion. Dequantization efforts seek to determine whether problems in BQP can be efficiently simulated or derandomized using classical algorithms, potentially leveraging pseudorandom generators to mimic quantum randomness.
While some quantum machine learning subroutines have been dequantized using classical methods like random Fourier features and sampling-based linear algebra, the general question of whether BQP ⊆ BPP (or a derandomized variant) persists, tied to the construction of quantum-secure pseudorandom generators that fool BQP computations. Connections to cryptographic assumptions, such as the hardness of learning parities with noise, underscore that full dequantization would collapse quantum advantages but remains unachieved, with partial results showing limitations for specific quantum speedups. Experimental demonstrations of quantum supremacy, such as Google's 2019 Sycamore experiment achieving a sampling task beyond feasible classical simulation and IBM's 2023 utility-scale experiments based on error mitigation, highlight practical quantum edges, yet theoretical gaps endure regarding BQP's exact position relative to BPP. In October 2025, Google announced a further quantum-advantage result on its Willow processor, performing computations claimed intractable for classical supercomputers in under five minutes. These experiments solve promise problems in approximate sampling, but lack formal proofs that the underlying distributions are hard for BPP, leaving open whether scalable quantum advantage implies BQP ⊈ BPP. Ongoing challenges include verifying supremacy claims against improved classical simulations, with no theoretical closure on whether such tasks generalize to useful BQP computations. Whether BQP lies within the polynomial hierarchy (PH) is another longstanding open question, with oracle separations showing BQP ⊈ PH in relativized worlds, but no unconditional result. Multi-prover interactive proof systems offer a pathway to characterize BQP, where quantum provers sharing entanglement can verify BQP computations for classical verifiers; recent work has compressed such proofs to logarithmic communication while preserving completeness, but soundness gaps and scalability for general BQP computations remain unresolved.
Developments in 2024–2025 have refined complete problems and separations for BQP, including structural results for quantum Merlin–Arthur classes and query models with non-collapsing measurements. For example, new lower bounds in such query models separate quantum from classical analogs without implying class collapses, while Promise-BQP-complete problems like APPROX-QCIRCUIT-PROB continue to probe quantum hardness. These results sharpen known separations while affirming no major collapses, leaving BQP's precise placement open amid ongoing quantum cryptographic applications.