BQP, or Bounded-error Quantum Polynomial time, is a complexity class in computational complexity theory that consists of decision problems solvable by a quantum Turing machine in polynomial time, with the probability of error bounded away from 1/2: specifically, at least 2/3 probability of acceptance for "yes" instances and at most 1/3 for "no" instances.[1][2] Introduced by Ethan Bernstein and Umesh Vazirani in their seminal 1993 paper, BQP formalizes the power of quantum computation for decision problems under bounded error, extending classical notions like BPP (Bounded-error Probabilistic Polynomial time) to incorporate quantum superposition and interference.[1] The class is defined using quantum Turing machines or equivalent uniform families of polynomial-size quantum circuits, where the computation evolves unitarily and measurement yields the output with the specified success probability.[1][2]

Key inclusions establish BQP's position relative to classical complexity classes: P ⊆ BPP ⊆ BQP ⊆ PSPACE, reflecting that deterministic and probabilistic polynomial-time problems are feasible quantumly, while quantum computations remain within polynomial space.[1][2] Additionally, BQP ⊆ P^#P, linking it to counting complexity.[1] The relationship to NP remains unresolved, with no known proof that NP ⊆ BQP or vice versa, though quantum algorithms like Shor's for integer factorization (in BQP but believed outside P) suggest potential separations from classically intractable problems.[2]

Notable relativized separations demonstrate BQP's strict power beyond BPP: Bernstein and Vazirani exhibited an oracle relative to which BQP ≠ BPP via a recursive Fourier sampling problem solvable quantumly in polynomial time but requiring superpolynomial classical time.[1] BQP also contains the exact class EQP (Exact Quantum Polynomial time) and supports subroutines and error reduction via repetition, making it robust for algorithm design.[1][2] Problems such as Simon's period finding further highlight oracle separations from classes like MA and SZK.[2]
Fundamentals
Definition
BQP, or Bounded-error Quantum Polynomial time, is the complexity class consisting of all decision problems solvable by a quantum Turing machine in polynomial time, where the machine accepts yes-instances with probability at least \frac{2}{3} and rejects no-instances with probability at least \frac{2}{3}, allowing for a bounded error probability of at most \frac{1}{3}.[3] Formally, a language L \subseteq \{0,1\}^* is in BQP if there exists a polynomial-time quantum Turing machine M such that for every input x \in \{0,1\}^n, if x \in L, then \Pr[M(x) = 1] \geq \frac{2}{3}, and if x \notin L, then \Pr[M(x) = 1] \leq \frac{1}{3}.[3][4]

This bounded-error model ensures reliable computation despite the probabilistic outcomes inherent to quantum measurement, and the specific thresholds of \frac{2}{3} and \frac{1}{3} are conventional choices analogous to those in the classical BPP class.[4] The error probability can be amplified to exponentially small values, such as 2^{-q(n)} for any polynomial q(n), by repeating the computation a polynomial number of times and taking the majority vote over the outcomes, without increasing the overall time beyond polynomial.[4]

The standard definition of BQP employs uniform quantum Turing machines or uniform families of polynomial-size quantum circuits, where the description of the machine or circuit for input size n can be generated in polynomial time by a classical deterministic Turing machine.[4] In contrast, non-uniform BQP (often denoted BQP/poly) allows non-uniform circuit families augmented with polynomial-length classical advice strings that depend on the input length but not the specific input, enabling computations that may not be describable uniformly.[5]

BQP was introduced in 1993 by Ethan Bernstein and Umesh Vazirani as part of their foundational work on quantum complexity theory, where they established the equivalence between quantum Turing machines and quantum circuits as computational models for the class.[3]
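The amplification step can be checked numerically. The sketch below (plain Python; the function name is illustrative) computes the exact probability that a majority vote over k independent runs, each correct with probability 2/3, returns the wrong answer; the error decays exponentially in k, matching the 2^{-q(n)} bound achievable with polynomially many repetitions.

```python
from math import comb

def majority_error(p_correct, k):
    """Exact probability that the majority vote over k independent runs
    (each correct with probability p_correct) is wrong.
    k is assumed odd so the vote is never tied."""
    return sum(comb(k, i) * p_correct**i * (1 - p_correct)**(k - i)
               for i in range(0, k // 2 + 1))

# With the BQP threshold p = 2/3, the error shrinks rapidly as k grows:
for k in (1, 11, 51, 101):
    print(k, majority_error(2/3, k))
```

By a Chernoff-bound argument, the error is at most e^{-k/18} for this threshold, so a linear number of repetitions already gives an exponentially small failure probability.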
Quantum Computational Model
The quantum computational model underlying BQP is primarily formalized using the quantum Turing machine (QTM), which extends the classical Turing machine to operate on quantum states via unitary evolution. A QTM is defined by a finite set of states, an input alphabet, and a quantum transition function that maps the current state and tape symbol to a superposition of possible next states, symbols, and head movements, ensuring the evolution preserves the norm of the quantum state. The computation proceeds in discrete steps, with the machine's configuration represented as a superposition over classical configurations, evolving unitarily according to the transition rules. For inputs of length n, the running time is bounded by a polynomial p(n), meaning the number of steps is at most O(p(n)), and the tape length used is also polynomial in n. Upon halting, measurement in the computational basis yields a classical output string from the tape contents.[1]

Equivalent in expressive power to the QTM is the quantum circuit model, which represents computations as families of unitary transformations applied to qubit registers. In this model, a computation on an n-qubit input is specified by a sequence of elementary unitary gates from a universal set, such as the Hadamard gate H, the controlled-NOT (CNOT), and single-qubit phase gates, acting on a polynomial number of qubits (typically O(n^k) for some constant k). The time complexity corresponds to the size of the circuit, i.e., the total number of gates, which must be polynomial in n. Yao's theorem establishes that any polynomial-time QTM computation can be simulated by a polynomial-size quantum circuit, and vice versa, making the models interchangeable for defining complexity classes like BQP.[6][1]

The power of these models for BQP stems from core quantum principles: superposition, allowing the system to evolve as a linear combination of basis states; entanglement, correlating qubits such that their joint state cannot be factored; and interference, where amplitudes from different computational paths add constructively or destructively to amplify desired outcomes. The initial state is prepared as |0\rangle^{\otimes m} for m qubits, and the final state before measurement is |\psi\rangle = U |0\rangle^{\otimes m}, where U is the overall unitary operator composed from the circuit gates (or QTM transitions), satisfying unitarity U^\dagger U = I. This evolution allows exponentially large solution spaces to be explored in superposition, though extracting an answer still requires careful use of interference.[1]

Classical inputs of length n are encoded by initializing a register of n qubits in the computational basis state |x\rangle, where x is the binary string representation, often padded with ancillary qubits initialized to |0\rangle for workspace. Computation proceeds on this register, with additional ancillas allowing reversible operations to avoid information loss. The output is obtained by measuring the qubits in the computational basis, yielding a probability distribution over 2^m basis states; for decision problems in BQP, acceptance is typically defined as the measurement outcome of the first qubit being 1 with probability at least 2/3 (rejection otherwise), with the error bounded over the randomness of measurement.[1]
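The circuit model can be made concrete with a small state-vector simulation. The following sketch (plain Python; the function and variable names are illustrative, not from any standard library) prepares |00⟩, applies a Hadamard to the first qubit and then a CNOT, and reads off the acceptance probability of the first qubit, exactly as in the measurement rule described above.

```python
import math

def apply_gate(state, gate, targets, n):
    """Apply a k-qubit gate (2^k x 2^k matrix as nested lists) to the
    qubits listed in `targets` of an n-qubit state vector."""
    k = len(targets)
    new = [0j] * (1 << n)
    for idx, amp in enumerate(state):
        if amp == 0:
            continue
        # Extract the bits of the target qubits from this basis state.
        sub = 0
        for j, t in enumerate(targets):
            sub |= ((idx >> t) & 1) << j
        # Distribute the amplitude over the gate's column for `sub`.
        for sub2 in range(1 << k):
            coeff = gate[sub2][sub]
            if coeff == 0:
                continue
            idx2 = idx
            for j, t in enumerate(targets):
                bit = (sub2 >> j) & 1
                idx2 = (idx2 & ~(1 << t)) | (bit << t)
            new[idx2] += coeff * amp
    return new

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
# CNOT with control = qubit 0, target = qubit 1 (basis index q0 + 2*q1):
CNOT = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]

state = [0j] * 4
state[0] = 1.0                              # |00>
state = apply_gate(state, H, [0], 2)        # Hadamard on qubit 0
state = apply_gate(state, CNOT, [0, 1], 2)  # entangle: (|00> + |11>)/sqrt(2)
p_accept = sum(abs(a) ** 2 for i, a in enumerate(state) if i & 1)
print(p_accept)  # P(first qubit measures 1) for the Bell state: 1/2
```

The final state is the Bell state, so measuring the first qubit yields 1 with probability exactly 1/2; a BQP decider would instead be a circuit engineered so this probability lands at ≥ 2/3 or ≤ 1/3.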
Complete Problems
Promise-BQP
Promise problems provide a framework for computational tasks where the input is guaranteed to belong to one of two disjoint subsets of binary strings, denoted Y (the "yes" instances) and N (the "no" instances), with Y \cap N = \emptyset. The promise restricts the algorithm to correctly distinguishing between Y and N only on inputs in Y \cup N, while behavior on inputs outside this set is irrelevant. This structure is particularly suited to quantum computation, where extending BQP to such restricted inputs allows for modeling realistic quantum algorithms that rely on input guarantees.[7]

The class Promise-BQP consists of all promise problems (Y, N) for which there exists a polynomial-time uniform family of quantum circuits \{Q_x\} such that, for every x \in Y, the probability that Q_x accepts (outputs 1 upon measurement of the first qubit) is at least 2/3, and for every x \in N, the acceptance probability is at most 1/3. These probabilities can be amplified arbitrarily close to 1 and 0, respectively, without increasing the polynomial runtime. This definition extends the core BQP model to promise settings while preserving the bounded-error requirement.[8]

The motivation for Promise-BQP arises because many natural quantum problems are inherently promise-based rather than total decision problems over all inputs; for instance, evaluating quantum circuits often involves promises about input validity to avoid undecidability in total extensions. A representative example is the problem of approximating quantum circuit acceptance probabilities, which is Promise-BQP-complete. This formulation avoids pathological cases where total problems might be undecidable or trivialize complexity separations.[7][9]

The standard BQP, defined as languages over all binary strings with bounded-error quantum polynomial-time decidability (i.e., the trivial promise Y \cup N = \{0,1\}^*), is contained in Promise-BQP.
BQP itself was originally introduced by Bernstein and Vazirani in 1993.[1] The formalization of promise problems in quantum complexity traces back to the 1990s, notably through Alexei Kitaev's work defining quantum NP (now QMA) as a promise class to capture verifiable quantum proofs.[10]
APPROX-QCIRCUIT-PROB
APPROX-QCIRCUIT-PROB is a canonical promise problem that is complete for the complexity class Promise-BQP under polynomial-time reductions. It captures the essential computational power of quantum circuits by requiring the approximation of a specific output probability, making it a natural benchmark for the hardness of quantum computation. Other complete problems include the diagonal entry estimation for powers of sparse matrices.[9] This problem refines earlier formulations by focusing on efficient approximation within a promise gap, ensuring it aligns closely with the bounded-error model of BQP.

The formal definition of APPROX-QCIRCUIT-PROB is as follows: the input consists of a quantum circuit C of size polynomial in n (the number of qubits), specified by a string of length poly(n), acting on n + k qubits where k is the number of output qubits, together with an accuracy parameter 1/\text{poly}(n). The task is to approximate the probability p that, upon initializing all qubits in the |0\rangle state and running C, measuring the first output qubit yields |1\rangle. The algorithm must output an estimate \hat{a} such that |\hat{a} - p| \leq 1/\text{poly}(n).[11]

The promise in APPROX-QCIRCUIT-PROB restricts instances to those where the true probability p is either at most 1/3 or at least 2/3; the algorithm must output "0" if p \leq 1/3 and "1" if p \geq 2/3, with arbitrary behavior permitted only if p falls in the gap between 1/3 and 2/3. This gap ensures the problem is well-posed for bounded-error computation, mirroring the error tolerance in BQP definitions. The probability in question is

p = \left\| \left( |1\rangle\langle 1| \otimes I \right) C \, |0\rangle^{\otimes (n+k)} \right\|^2,

where the projector |1\rangle\langle 1| acts on the first output qubit after C is applied to the all-zero input state, and the approximation requirement is |\hat{a} - p| \leq 1/\text{poly}(n).[11]

To establish completeness, any language in Promise-BQP reduces to APPROX-QCIRCUIT-PROB in polynomial time.
Given a Promise-BQP machine M deciding a language L with success probability at least 2/3 for yes-instances and at most 1/3 for no-instances, construct a quantum circuit C that simulates M on input x of length n, using the standard encoding of quantum Turing machines into circuits of polynomial size. The first output qubit of C is set to represent the acceptance bit of M, so the measurement probability p matches the acceptance probability of M. An oracle solving APPROX-QCIRCUIT-PROB thus decides L by checking whether the estimate \hat{a} indicates p \geq 2/3 or p \leq 1/3, preserving the promise gap. This reduction holds because quantum Turing machines are equivalent to quantum circuits up to polynomial overhead.

For hardness, APPROX-QCIRCUIT-PROB is Promise-BQP-hard, meaning every problem in Promise-BQP reduces to it; in particular, an efficient classical probabilistic algorithm for it would imply BQP = BPP. Details of classical hardness arguments, such as anti-concentration bounds or pseudorandomness assumptions, are deferred to broader discussions of quantum versus classical separations.
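Operationally, the promise gap is what makes the problem decidable from sampling access to the circuit's output bit: an empirical estimate within 1/6 of p already distinguishes p ≤ 1/3 from p ≥ 2/3, so comparing the estimate to 1/2 suffices. A minimal sketch (plain Python; `run_circuit` is a hypothetical stand-in for executing C and measuring the first output qubit, modeled here by biased coins that honour the promise):

```python
import random

def decide_from_samples(run_circuit, trials=2000):
    """Decide an APPROX-QCIRCUIT-PROB instance from repeated runs.
    Under the promise (true p <= 1/3 or p >= 2/3), an empirical
    estimate within 1/6 of p suffices, so compare it to 1/2."""
    hits = sum(run_circuit() for _ in range(trials))
    return 1 if hits / trials > 1 / 2 else 0

# Toy "circuits" honouring the promise: biased coins with p = 0.8 and p = 0.2.
rng = random.Random(1)
yes_instance = lambda: 1 if rng.random() < 0.8 else 0
no_instance = lambda: 1 if rng.random() < 0.2 else 0

print(decide_from_samples(yes_instance), decide_from_samples(no_instance))
```

By a Chernoff bound, a constant number of samples per desired error bit suffices, which is why the gap version sits in Promise-BQP whenever the circuit itself is efficiently executable.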
BQP and P
The class P consists of all decision problems that can be solved by a deterministic Turing machine in polynomial time. It is a fundamental complexity class in classical computation, capturing efficient deterministic algorithms. The relationship between P and BQP (bounded-error quantum polynomial time) begins with the inclusion P \subseteq \mathrm{BQP}, established by the ability of quantum computers to simulate classical deterministic computations efficiently. This simulation relies on reversible quantum gates that implement classical logic operations without loss of information, incurring only a polynomial overhead in time and space compared to the original classical algorithm.[3][12]

Although P is contained in BQP, the classes are suspected to be properly separated, with \mathrm{BQP} \not\subseteq \mathrm{P}, indicating that quantum computers can achieve speedups over classical deterministic ones for certain problems. A seminal example is integer factorization, the problem of finding the prime factors of a composite integer N.
In 1994, Peter Shor developed a quantum algorithm that solves this in polynomial time, using the quantum Fourier transform to find the period of a modular exponential function; this places the decision version of FACTORING (determining whether N has a factor in a given range) in BQP.[13] The algorithm runs in O((\log N)^3) quantum steps, far faster than the best known classical algorithms, which require subexponential (but superpolynomial) time and are not believed to be in P given the problem's role in the security of systems like RSA cryptography.[13]

Shor's algorithm, formally published in 1997, provided the first concrete evidence suggesting \mathrm{P} \subsetneq \mathrm{BQP}, as factoring was long considered intractable for polynomial-time classical algorithms.[13] Despite this, no proof exists that the inclusion is proper, leaving the question of whether P = \mathrm{BQP} as a major open problem in complexity theory.[14][4] If P = \mathrm{BQP} were shown, it would mean quantum computers provide no advantage for decision problems solvable deterministically in polynomial time, diminishing much of the theoretical motivation for quantum hardware.[4]
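The number-theoretic skeleton of Shor's algorithm is classical; only the order-finding step needs a quantum computer. The sketch below (plain Python; names illustrative) finds the order r of a modulo N by brute force, standing in for the quantum period-finding step, then extracts a factor via gcd when r is even and a^{r/2} ≢ −1 (mod N):

```python
from math import gcd

def order(a, N):
    """Multiplicative order of a mod N, found here by brute force.
    This is the step Shor's algorithm performs in quantum polynomial
    time via period finding with the quantum Fourier transform."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_step(a, N):
    """Given the order r of a mod N, try to extract a nontrivial factor:
    if r is even and a^(r/2) is not -1 mod N, then gcd(a^(r/2) +/- 1, N)
    splits N. Returns None when this choice of a fails (retry with new a)."""
    r = order(a, N)
    if r % 2:
        return None
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None
    f = gcd(y - 1, N)
    return f if 1 < f < N else gcd(y + 1, N)

print(shor_classical_step(7, 15))  # order of 7 mod 15 is 4; yields a factor of 15
```

A random choice of a succeeds with constant probability for odd composite N that is not a prime power, so only a few repetitions of the (quantum) order-finding step are expected.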
BQP and BPP
The class BPP consists of decision problems that can be solved by a classical probabilistic Turing machine in polynomial time, where the machine has access to a source of random bits, accepts inputs in the language with probability at least 2/3, and rejects inputs outside the language with probability at least 2/3 (equivalently, the error probability is at most 1/3). This bounded-error model captures randomized algorithms that tolerate some uncertainty while running efficiently.

It is known that \mathrm{BPP} \subseteq \mathrm{BQP}, as a quantum computer can simulate classical probabilistic computation: preparing a qubit in uniform superposition (for instance with a Hadamard gate) and measuring it yields an unbiased random bit, so quantum circuits can reproduce any polynomial-time randomized algorithm without increasing the asymptotic time complexity. The broader chain \mathrm{P} \subseteq \mathrm{BPP} \subseteq \mathrm{BQP} reflects the hierarchy from deterministic to bounded-error classical to quantum computation.

Although \mathrm{BPP} \subseteq \mathrm{BQP} holds unconditionally, there is strong evidence suggesting the inclusion is proper, i.e., \mathrm{BPP} \subsetneq \mathrm{BQP}, with quantum algorithms demonstrating speedups unattainable by classical probabilistic methods. For instance, Grover's search algorithm provides a quadratic speedup for unstructured search, solving it in O(\sqrt{N}) queries compared to the \Theta(N) queries required classically. More decisively, oracle separations exist where \mathrm{BQP} \not\subseteq \mathrm{BPP}: Simon's problem, for example, can be solved with polynomially many quantum queries but requires exponentially many queries for any classical probabilistic algorithm relative to the corresponding oracle. These separations highlight quantum advantages in exploiting problem structure beyond classical randomized limits.

No BPP-complete problems are known to lie outside BQP, consistent with the inclusion, as all BPP problems are simulable quantumly; the oracle separations suggest, though do not prove, that BQP contains problems outside BPP in the unrelativized setting. While related classes like QMA (the quantum analog of NP) intersect with these considerations through interactive proofs, their full implications for BQP versus BPP are deferred to broader complexity analyses.
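Grover's quadratic speedup can be reproduced with a classical simulation of the amplitudes (feasible only for small N, and illustrating the query count rather than providing any speedup). A sketch in plain Python with illustrative names; the oracle is modeled as a sign flip on the marked index and the diffusion step as inversion about the mean:

```python
import math

def grover_success_prob(N, marked, iters):
    """Simulate Grover amplitudes classically: start uniform, then
    alternate a sign flip on the marked entry (one oracle query)
    with inversion about the mean (the diffusion operator)."""
    amp = [1 / math.sqrt(N)] * N
    for _ in range(iters):
        amp[marked] = -amp[marked]          # oracle query
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]   # diffusion: inversion about mean
    return amp[marked] ** 2

N = 1024
best = round(math.pi / 4 * math.sqrt(N))    # about 25 oracle queries
print(best, grover_success_prob(N, marked=3, iters=best))
```

After roughly (π/4)√N iterations the success probability is near 1, versus the Θ(N) expected queries of any classical randomized search; running past that point causes the amplitude to rotate away again, which is why the iteration count is fixed in advance.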
BQP and NP
The relationship between BQP and NP remains unresolved in quantum complexity theory, with neither BQP ⊆ NP nor NP ⊆ BQP established. It is conjectured that the classes are incomparable, each containing problems outside the other, as supported by evidence from circuit complexity and quantum proof systems suggesting that quantum computation transcends nondeterministic verification in certain ways.[15] This uncertainty underscores the distinct paradigms: BQP relies on quantum superposition and interference for bounded-error decisions in polynomial time, while NP involves nondeterministic polynomial-time verification of certificates.[3]

A key example illustrating potential overlap is Grover's algorithm, which solves the unstructured search problem, whose decision version (determining whether a solution exists in an unsorted database of size N) lies in NP, in O(\sqrt{N}) time with high probability, placing it firmly in BQP. Classically, ruling out a solution requires \Omega(N) queries in the worst case, highlighting BQP's quadratic speedup for this NP problem, though the speedup does not suffice to solve NP-complete problems in polynomial time.[16] This algorithm demonstrates how quantum methods can accelerate search tasks amenable to NP-style verification without resolving the broader inclusion question.

If NP ⊆ BQP were true, quantum computers could efficiently solve NP-complete problems, enabling polynomial-time solutions to optimization challenges like SAT, but this is deemed improbable based on known barriers to quantum speedups for classically hard problems.
Relative to the polynomial hierarchy (PH), whose first level \Sigma_1^P is NP, BQP contains P but is believed neither to contain PH nor to be contained in it; evidence from approximate counting and pseudorandomness suggests separations in both directions.[17] No problems are known to be complete for BQP under the reductions used to establish NP-hardness, reflecting the challenge of bridging quantum and nondeterministic hardness via certificate-based mappings.[18]
BQP and PP
The complexity class PP consists of all decision problems for which there exists a probabilistic Turing machine running in polynomial time such that the probability of acceptance is greater than 1/2 if the input is a yes instance and less than 1/2 if it is a no instance; this allows unbounded error probability as long as the majority favors the correct outcome.

A fundamental result in quantum complexity theory is the inclusion BQP ⊆ PP, established by showing that any bounded-error quantum polynomial-time computation can be simulated by a PP machine.[19] The proof relies on restricting the quantum Turing machine to algebraic transition amplitudes (standard in the BQP model) and expressing the final measurement probability as a rational function computable via counting problems. Specifically, using Hadamard and Toffoli gates as a universal set, the quantum circuit unfolds into a directed acyclic graph where the acceptance amplitude is a signed sum over 2^k paths (with k polynomial in the input size), corresponding to a difference of two #P functions (a GapP value); deciding whether this yields a probability exceeding 1/2 then reduces to a majority test over the paths, which defines a PP predicate.[19]

This containment places BQP within a classical complexity class known for its expressive power, as PP admits complete problems under parsimonious reductions that capture approximate counting tasks central to many hard problems.[14] Consequently, quantum computation with bounded error does not exceed the decision capabilities of classical probabilistic machines allowing unbounded error, indicating that BQP offers no dramatic separation from probabilistic classical computing in this framework.[14] Nonetheless, BQP = PP is widely regarded as improbable, since canonical PP-complete problems like MAJSAT, which asks whether a majority of assignments satisfy a Boolean formula, appear intractable for quantum polynomial-time algorithms.[14]
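The path-sum argument can be made concrete for a toy circuit of t Hadamard gates on one qubit. Each assignment of intermediate basis states is a Feynman path contributing ±2^{-t/2}, so the amplitude is a scaled difference of two counts — exactly the GapP structure the proof exploits. A sketch in plain Python (illustrative, not the general construction):

```python
from itertools import product

def hadamard_amp_pathsum(t, x, y):
    """Amplitude <y|H^t|x> as a Feynman path sum. Each choice of t-1
    intermediate bits is one path; a Hadamard step from bit b_in to b_out
    contributes sign (-1)^(b_in*b_out) times 1/sqrt(2). The amplitude is
    (number of +1 paths - number of -1 paths) / 2^(t/2): a scaled
    difference of two #P-style counts, i.e. a GapP value."""
    plus = minus = 0
    for mid in product((0, 1), repeat=t - 1):
        bits = (x,) + mid + (y,)
        sign = sum(bits[i] * bits[i + 1] for i in range(t)) % 2
        if sign == 0:
            plus += 1
        else:
            minus += 1
    return plus, minus, (plus - minus) / 2 ** (t / 2)

# H^2 = I, so <0|H^2|0> = 1 while <1|H^2|0> = 0 (all paths cancel):
print(hadamard_amp_pathsum(2, 0, 0))
print(hadamard_amp_pathsum(2, 0, 1))
```

Deciding whether the resulting acceptance probability exceeds 1/2 is a majority test over these signed paths, which is precisely a PP predicate.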
BQP and PSPACE
The class PSPACE consists of all decision problems that can be solved by a classical deterministic Turing machine using an amount of space polynomial in the input size. This class captures computations where the working memory is limited to polynomial size, but the running time can be exponential.

A key result in quantum complexity theory is that BQP is contained in PSPACE, meaning every problem solvable by a bounded-error quantum polynomial-time algorithm can also be solved by a classical algorithm using only polynomial space (albeit exponential time). This containment was established in the early 1990s by Bernstein and Vazirani, who showed that a quantum Turing machine running in polynomial time can be simulated classically within polynomial space.[1] The proof relies on representing the quantum computation as a product of unitary matrices applied to an initial state vector and computing the acceptance probability recursively. Specifically, the final amplitude of the accepting state is a sum over exponentially many computational paths through the circuit, but this sum can be evaluated using a divide-and-conquer approach in the spirit of Savitch's theorem, which converts nondeterministic polynomial-space computations to deterministic ones at the cost of only squaring the space bound. This recursive evaluation avoids storing the entire exponential-size state vector at once, instead computing intermediate contributions on the fly and reusing space.[1]

The simulation is space-efficient: O(n^2) space suffices for an n-qubit quantum circuit of polynomial size, as the recursion depth is polynomial and each level stores only the current gate description, basis indices, and accumulated amplitudes. However, the time complexity remains exponential, reflecting the inherent difficulty of classically simulating quantum superposition.
No proof separating BQP from PSPACE exists as of 2025, leaving open whether the containment BQP ⊆ PSPACE is strict; if BQP = PSPACE, quantum polynomial-time algorithms would capture all of classical polynomial-space computation, an outcome generally considered unlikely.[1]
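The divide-and-conquer simulation can be sketched directly: to evaluate a single transition amplitude ⟨y|G_t⋯G_1|x⟩, split the gate sequence in half and sum over intermediate basis states, recursing on each half, so memory is dominated by the recursion stack rather than a 2^n-entry state vector. A plain-Python sketch under simplifying assumptions (gates given as explicit small matrices with target-qubit lists; all names illustrative):

```python
import math

def gate_entry(gate, x, y, n):
    """Matrix entry <y|G|x> for a gate acting on a few target qubits of
    an n-qubit register: zero unless x and y agree off the targets."""
    mat, targets = gate
    mask = sum(1 << t for t in targets)
    if (x & ~mask) != (y & ~mask):
        return 0.0
    sx = sum(((x >> t) & 1) << j for j, t in enumerate(targets))
    sy = sum(((y >> t) & 1) << j for j, t in enumerate(targets))
    return mat[sy][sx]

def amp(gates, x, y, n):
    """<y|G_t...G_1|x> by divide and conquer: split the circuit in half
    and sum over the 2^n intermediate basis states z. Time is exponential,
    but space is only the recursion stack, never the full state vector."""
    if len(gates) == 1:
        return gate_entry(gates[0], x, y, n)
    mid = len(gates) // 2
    return sum(amp(gates[:mid], x, z, n) * amp(gates[mid:], z, y, n)
               for z in range(1 << n))

s = 1 / math.sqrt(2)
H0 = ([[s, s], [s, -s]], [0])                    # Hadamard on qubit 0
CNOT01 = ([[1, 0, 0, 0], [0, 0, 0, 1],           # CNOT: control qubit 0,
           [0, 0, 1, 0], [0, 1, 0, 0]], [0, 1])  # target qubit 1

# Bell circuit: <00|CNOT.H|00> = <11|CNOT.H|00> = 1/sqrt(2), <01|...|00> = 0
print(amp([H0, CNOT01], 0, 0, 2), amp([H0, CNOT01], 0, 3, 2))
```

For a circuit of t gates on n qubits the recursion depth is O(log t) with O(n) state per level (plus the loop index), mirroring how the Bernstein–Vazirani simulation keeps the space polynomial while the running time stays exponential.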
BQP and EXP
The class EXP, or exponential time, encompasses all decision problems solvable by a deterministic Turing machine in time O(2^{n^k}) for some constant k \geq 1, formally defined as

\mathrm{EXP} = \bigcup_{k \in \mathbb{N}} \mathrm{DTIME}(2^{n^k}).

This class captures computations requiring exponential resources in the worst case.

It is straightforward that \mathrm{BQP} \subseteq \mathrm{EXP}, since any quantum polynomial-time machine with n qubits and \mathrm{poly}(n) gates can be simulated by a classical deterministic machine in time exponential in n, by enumerating all 2^{\mathrm{poly}(n)} computation paths and summing their amplitudes.[1]

The reverse inclusion \mathrm{EXP} \subseteq \mathrm{BQP} is widely believed to fail, though it has not been disproved. Since \mathrm{BQP} \subseteq \mathrm{PSPACE}, established by simulating the quantum computation in polynomial space via recursive path summation without storing the full state vector,[1] and \mathrm{PSPACE} \subseteq \mathrm{EXP} (a halting polynomial-space machine cannot revisit a configuration, so it runs within the exponential size of its configuration graph), \mathrm{EXP} \subseteq \mathrm{BQP} would force \mathrm{EXP} = \mathrm{PSPACE} = \mathrm{BQP}. Whether \mathrm{PSPACE} = \mathrm{EXP} is itself open, so no unconditional contradiction follows; however, the deterministic time hierarchy theorem, which constructs languages decidable in time 2^{n^{k+1}} but not in time o(2^{n^{k+1}}/n), does guarantee \mathrm{P} \subsetneq \mathrm{EXP}, and under the widely held conjecture \mathrm{PSPACE} \neq \mathrm{EXP}, the class EXP contains diagonalization languages lying outside BQP as well.[1]

These results underscore the limits of quantum speedup: while BQP enables efficient solutions to select problems with no known polynomial-time classical algorithm (e.g., factoring large integers), it is not expected to capture arbitrary exponential-time computations, confining quantum advantages to structured tasks.[1]
Theoretical Implications
Relativization and Oracles
Relativization refers to the practice of studying complexity classes relative to an oracle, an abstract black box that answers membership queries for a fixed language. The Baker–Gill–Solovay theorem demonstrates that there exist oracles A and B such that \mathrm{P}^A = \mathrm{NP}^A while \mathrm{P}^B \neq \mathrm{NP}^B, implying that any proof technique resolving \mathrm{P} versus \mathrm{NP} must be non-relativizing, as it would otherwise have to hold uniformly across all relativized worlds.[20] This barrier extends to quantum complexity classes like BQP, where many proposed separations or inclusions between BQP and classical classes such as P, NP, or the polynomial hierarchy (PH) relativize, meaning they hold or fail depending on the oracle chosen.[14]

Specific oracle constructions highlight the variability of relativized BQP. There exists an oracle A (for example, any PSPACE-complete language) relative to which \mathrm{P}^A = \mathrm{BQP}^A, as quantum computation offers no advantage when both classes collapse to the power of the oracle itself.[21] In contrast, relative to another oracle B based on a problem requiring exponentially many classical queries but solvable efficiently by quantum algorithms, such as distinguishing certain periodic functions, \mathrm{P}^B \neq \mathrm{BQP}^B.

These relativized results imply that resolving key questions like \mathrm{P} versus BQP requires non-relativizing proof techniques, which access the internal structure of computations rather than treating them as black boxes. Examples include interactive proof systems, as in the proof that \mathrm{IP} = \mathrm{PSPACE}, which relies on arithmetization over low-degree polynomials and does not relativize.
Another approach is algebraic geometry, pursued in the Geometric Complexity Theory program to separate algebraic varieties corresponding to the permanent and determinant polynomials, potentially yielding non-relativizing separations for related complexity questions.

In the quantum setting, separations involving oracles for classes like QMA (the quantum analog of NP) further illustrate these barriers; for instance, quantum oracles can separate QMA from its classical-witness variant QCMA by exploiting superposition in query access.[22] Recent work has strengthened black-box separations, showing that relative to certain oracles BQP is not contained in PH, using problems like Forrelation whose quantum advantage is undetectable by classical polynomial-time hierarchies.[23]
Open Questions
One of the central open questions in quantum complexity theory is whether P equals BQP, which would imply that quantum computers provide no computational advantage over classical deterministic polynomial-time machines for decision problems. If P = BQP holds, then problems solvable efficiently by quantum algorithms, such as integer factorization via Shor's algorithm, could in principle be addressed classically in polynomial time, undermining claims of quantum speedup. Conversely, establishing P ≠ BQP would confirm a strict separation, solidifying the practical value of quantum hardware for certain tasks, though relativization barriers suggest this separation may require non-relativizing techniques to prove.[24][25]

The relationship between BQP and NP remains unresolved, particularly whether BQP contains NP-hard problems or whether NP ⊆ BQP, which would dramatically expand the scope of quantum solvability. It is unknown whether BQP intersects NP non-trivially beyond problems like factoring, which lie in NP ∩ coNP; a resolution here could either place quantum computation within the polynomial hierarchy (PH) or reveal quantum power beyond it. Recent query complexity results from 2023–2025, including lower bounds on quantum algorithms for search and decision problems, provide evidence suggesting separations between BQP and NP under certain oracles, but unconditional separations elude proof. For instance, advances in adversary methods for non-collapsing measurements have strengthened lower bounds for BQP relative to classical classes, hinting at inherent quantum advantages without resolving the inclusion.[26][27][28]

Dequantization efforts seek to determine whether problems in BQP can be efficiently simulated or derandomized using classical algorithms, potentially leveraging pseudorandom generators to mimic quantum randomness.
While some quantum machine learning subroutines have been dequantized using classical methods like random Fourier features, the general question of whether BQP ⊆ BPP (or a derandomized variant) persists, tied to the construction of quantum-proof pseudorandom generators that fool BQP computations. Connections to cryptographic assumptions, such as the hardness of learning parities with noise, underscore that full dequantization would collapse quantum advantages but remains unachieved, with partial results showing limitations for specific quantum speedups.[29][24]

Experimental demonstrations of quantum supremacy, such as Google's 2019 Sycamore processor achieving sampling tasks beyond classical simulation and IBM's 2023 advancements in error-corrected qubits, highlight practical quantum edges, yet theoretical gaps endure regarding BQP's exact position relative to BPP. In October 2025, Google announced quantum advantage with its Willow processor, performing computations intractable for classical supercomputers in under five minutes.[30] These experiments solve promise problems in approximate sampling, but lack formal proofs that the underlying distributions are hard for BPP, leaving open whether scalable quantum advantage implies BQP ⊈ BPP. Ongoing challenges include verifying supremacy claims against improved classical simulations, with no theoretical closure on whether such tasks generalize to useful BQP computations.[31]

Whether BQP lies within the polynomial hierarchy (PH) is another longstanding open problem, with oracle separations showing BQP ⊈ PH in relativized worlds, but no unconditional result.
Multi-prover interactive proof systems offer a pathway to characterize BQP, where quantum provers sharing entanglement can verify BQP computations for classical verifiers; recent work has compressed such proofs to logarithmic communication while preserving completeness, but soundness gaps and scalability for general BQP remain unresolved.[17][32]

Developments in 2024–2025 have advanced promise problem complexities for BQP, including structural separations for quantum Merlin–Arthur classes and non-collapsing oracle models that avoid hierarchy collapses. For example, new lower bounds in query models with non-collapsing measurements separate BQP from classical analogs without implying PH collapse, while Promise-BQP-complete problems like APPROX-QCIRCUIT-PROB continue to probe quantum hardness. These results refine oracle separations but affirm no major collapses, leaving BQP's precise placement open amid ongoing quantum cryptographic applications.[33][28][34]