
Noisy intermediate-scale quantum era

The Noisy Intermediate-Scale Quantum (NISQ) era represents the current developmental stage of quantum computing, characterized by quantum processors containing 50 to several thousand qubits that operate without comprehensive fault-tolerant error correction, resulting in computations plagued by noise from imperfect quantum gates, decoherence, and environmental interactions. This phase bridges early proof-of-concept demonstrations and the anticipated arrival of scalable, error-corrected quantum systems, enabling limited demonstrations of quantum advantage in specialized tasks while highlighting the engineering challenges of scaling up. The term "NISQ" was coined by theoretical physicist John Preskill in 2018 to encapsulate this intermediate regime, where quantum devices are large enough to explore complex quantum phenomena but too noisy for reliable, long-depth computations without hybrid classical-quantum approaches. As of November 2025, superconducting platforms like IBM's have exceeded 1,000 qubits (1,121 in the Condor processor), while trapped-ion systems from companies like IonQ (100 qubits in Tempo, achieving 99.99% two-qubit gate fidelity) and Quantinuum (98 qubits in Helios) have reached around 100 qubits, yet error rates remain in the range of 0.1% to 1% per gate for many operations, constraining circuit depths to tens or at most a few hundred operations. These advances have facilitated milestones such as quantum supremacy claims in random circuit sampling and partial simulations of molecular systems, underscoring NISQ's role in pushing the boundaries of computational physics. Key challenges in the NISQ era include mitigating noise through techniques like error mitigation, dynamical decoupling, and variational quantum algorithms, which trade depth for approximate solutions, since full quantum error correction demands many more physical qubits per logical qubit than current devices provide. Despite these limitations, NISQ devices hold promise for near-term applications in quantum chemistry for drug discovery, materials science simulations, and optimization problems in finance and logistics, where they can outperform classical methods in niche scenarios. Ongoing research focuses on improving gate fidelity, qubit connectivity, and coherence times to extend the utility of this era and pave the way toward fault-tolerant quantum computing by the early 2030s.

Definition and Historical Context

Defining the NISQ Era

The term "Noisy Intermediate-Scale Quantum" (NISQ) era was coined by physicist John Preskill in 2018 to describe a phase of quantum computing characterized by devices comprising 50 to several thousand qubits, as of 2025, that operate without implementing full quantum error correction. These systems exhibit significant noise primarily arising from decoherence, which causes qubits to lose their quantum state over time, and imperfect gate operations that introduce errors during computation. Key features of NISQ devices include their intermediate scale, far short of the millions of physical s required for fault-tolerant , yet sufficient to explore quantum phenomena beyond classical in select tasks. Noisy operations are typical, with error rates typically below 0.1% per two-qubit (fidelity >99.9%), with leading systems achieving 99.99% as of 2025, limiting the depth and complexity of executable quantum circuits. Despite these imperfections, NISQ hardware holds potential for achieving quantum utility—demonstrating advantages over classical computers in areas like simulations—through hybrid quantum-classical approaches that tolerate noise. Basic performance metrics include two-qubit F > 99.9\% and coherence times T_1 (relaxation) and T_2 (dephasing) typically on the order of 100 to 300 microseconds, with leading demonstrations exceeding 1 millisecond as of 2025. The NISQ era is distinguished from earlier pre-NISQ developments, which focused on small-scale (fewer than 50 qubits) proofs-of-principle demonstrating basic quantum operations without practical utility, and from the anticipated post-NISQ future, where fault-tolerant quantum computers will employ error-corrected logical qubits to enable scalable, reliable for broad applications. In the pre-NISQ overwhelmed most attempts at meaningful , whereas post-NISQ systems aim to suppress errors below thresholds that allow arbitrary depths, marking a transition to transformative quantum advantage.

Key Milestones and Evolution

The noisy intermediate-scale quantum (NISQ) era emerged from early demonstrations of quantum hardware in the 2010s, which highlighted the potential and limitations of noisy quantum devices despite their small scale. In 2011, D-Wave Systems released the D-Wave One, a 128-qubit quantum annealer designed for optimization problems, marking the first commercially available quantum computer and sparking debates on quantum speedup for specific tasks. This was followed by incremental advancements, such as Google's partnership with NASA in 2013 to explore applications using a 512-qubit D-Wave Two system. A pivotal claim came in 2019 when Google announced quantum supremacy using its 53-qubit Sycamore processor, which performed a random circuit sampling task in 200 seconds that would take a supercomputer approximately 10,000 years, though this was contested by classical simulations achieving similar results in days. The formal conceptualization of the NISQ era was introduced by physicist John Preskill in 2018, who described it as a regime of quantum processors with 50–1,000 qubits operating under significant noise, shifting research focus from fault-tolerant ideals to practical applications on imperfect hardware. This terminology encapsulated the transition from theoretical models toward exploiting noisy devices for near-term utility, influencing funding and development priorities worldwide. From 2020 to 2025, scaling accelerated across major platforms, underscoring the NISQ era's maturation. IBM unveiled its 127-qubit Eagle processor in 2021, followed by the 433-qubit Osprey in 2022, with a public roadmap targeting over 1,000 qubits by 2023 and modular systems exceeding 100,000 qubits by 2033. Google advanced superconducting technology with the 105-qubit Willow chip in December 2024, achieving error rates below the surface-code threshold in certain benchmarks and enabling computations that outperformed classical simulations by orders of magnitude. In trapped-ion systems, IonQ reported a world-record 99.99% two-qubit gate fidelity in 2025, supporting applications in quantum networking and simulations. Rigetti, focusing on superconducting qubits, launched a 36-qubit multi-chip system in 2025 and planned delivery of over 100-qubit processors by year-end, emphasizing hybrid integration with classical AI. Also in 2025, IBM announced new quantum processors and error-correction breakthroughs supporting a target of quantum advantage by 2026. The NISQ era's evolution emphasized hybrid quantum-classical algorithms, starting with early variational methods in the mid-2010s that optimized noisy circuits through classical feedback loops, enabling practical computations beyond pure quantum approaches. Cloud platforms further propelled remote access to quantum resources; IBM's Quantum Experience, launched in 2016, saw increased usage for collaborative research, while D-Wave offered free hybrid quantum cloud access in 2020 to support pandemic-related modeling, democratizing NISQ experimentation globally.

Quantum Hardware in the NISQ Era

Major Platforms and Technologies

The noisy intermediate-scale quantum (NISQ) era is characterized by several leading hardware platforms, each leveraging distinct physical implementations to realize qubits and quantum operations, with ongoing efforts to balance qubit count, gate fidelity, and coherence. These platforms operate under noisy conditions inherent to current technology, where error rates remain a fundamental challenge across architectures. Superconducting systems, pioneered by organizations like IBM and Google, form one of the most mature platforms in the NISQ era. These devices typically employ transmons, which are superconducting circuits designed to minimize charge noise through a large shunt capacitance, enabling longer coherence times compared to earlier charge qubits. Control is achieved via microwave pulses for single-qubit rotations and flux or voltage tuning for two-qubit gates, with the entire setup requiring cryogenic cooling to approximately 10 millikelvin to suppress thermal noise and maintain superconductivity. Scaling is pursued through 2D planar integration on silicon chips for dense qubit arrays, with emerging packaging techniques allowing vertical stacking to increase qubit counts and connectivity without excessive wiring complexity. Trapped-ion platforms, utilized by companies such as IonQ and Honeywell (now Quantinuum), offer another prominent approach, relying on electromagnetic traps to confine ions like ytterbium or calcium in a vacuum. Qubits are encoded in the ions' internal electronic states, manipulated using visible or near-infrared lasers for precise state preparation, read out via fluorescence detection, and entangled through shared motional modes in ion chains. This method achieves two-qubit gate fidelities exceeding 99.9% in many demonstrations, attributed to the isolation of trapped ions from surrounding materials that could introduce decoherence. However, gate operations are relatively slow, spanning microseconds to milliseconds due to the need for laser addressing and sympathetic cooling, which limits circuit depth in NISQ applications. Scaling involves linear or 2D ion trap arrays with microfabricated electrodes, though shuttling ions for arbitrary connectivity introduces additional overhead. Photonic and neutral-atom platforms represent alternative paradigms with potential for room-temperature operation and distributed quantum computing. In photonic systems, as developed by companies such as Xanadu and PsiQuantum, qubits are encoded in polarization, path, or time-bin degrees of freedom, with operations performed using linear optical elements like beam splitters and phase shifters, supplemented by single-photon sources and detectors. This approach benefits from existing fiber-optic infrastructure for scalability, but faces challenges in generating high-fidelity entangled states without measurement-based feedback, and the probabilistic nature of Bell-state measurements limits deterministic two-qubit operations. Neutral-atom arrays, exemplified by QuEra's work with Rydberg atoms in optical tweezers, trap atoms like rubidium in 2D lattices formed by laser beams. Excitations to Rydberg states enable strong dipole-dipole interactions for fast entangling gates (~μs), with reconfigurable arrays allowing programmable connectivity; however, achieving uniform trapping and low crosstalk in large-scale lattices remains a key hurdle. Hybrid approaches have also contributed to early NISQ prototypes, integrating multiple qubit types or modalities for enhanced functionality. Nuclear magnetic resonance (NMR) systems, used in initial quantum simulations, employ ensembles of nuclear spins in liquid solutions controlled by radiofrequency pulses and magnetic fields, offering room-temperature operation but limited to small-scale circuits due to ensemble averaging.
Topological qubits, explored in Microsoft-backed research, aim to encode information in non-local quasiparticles like Majorana fermions in semiconductor-superconductor hybrids, promising inherent protection against local noise; while prototypes have demonstrated braiding operations, full-scale NISQ devices remain developmental. A notable achievement in superconducting platforms is IBM's Heron processor, first released in 2023 with 133 qubits and expanded to 156 qubits in its 2024 revision, which incorporates tunable couplers to dynamically adjust qubit interactions, improving connectivity and reducing unwanted crosstalk in multi-qubit operations. This design advances toward fault-tolerant quantum computing by enabling higher-fidelity execution of quantum circuits.

Qubit Scaling and Noise Characteristics

In the noisy intermediate-scale quantum (NISQ) era, quantum devices typically feature 50 to over 1,000 physical qubits, with leading trapped-ion systems such as IonQ's Tempo processor reaching 100 qubits and Quantinuum's Helios at 98 qubits as of November 2025. In November 2025, Quantinuum announced the Helios system with 98 qubits, achieving record gate fidelities, while IBM unveiled the Nighthawk processor with 120 qubits and enhanced connectivity. Princeton engineers also demonstrated a superconducting qubit with coherence times exceeding 1 millisecond, supporting improved scalability toward larger systems. Projections as of late 2025 include scaling through modular designs, with IBM's Nighthawk enabling systems with over 1,000 connected qubits by 2028. However, achieving full qubit scaling remains constrained by connectivity limitations; most NISQ platforms employ nearest-neighbor or sparse coupling graphs rather than all-to-all connectivity, necessitating additional swap gates that increase circuit depth and error accumulation. Noise in NISQ hardware arises from multiple sources, including decoherence via T1 relaxation (amplitude damping) and T2 dephasing (phase damping), which limit coherence times to microseconds in superconducting systems and milliseconds in trapped-ion setups. Gate infidelity, typically stemming from imperfect control pulses, affects single-qubit gates at error rates below 0.1% but rises for two-qubit entangling gates, with average infidelities of 0.5–2% across platforms like IBM's and Google's processors. Readout errors, often 1–5% due to signal discrimination challenges, and crosstalk from unintended interactions between adjacent qubits further degrade performance, while control errors from calibration drift exacerbate these issues in multi-qubit operations. Key metrics highlight these limitations: two-qubit gate error rates \epsilon range from 0.5% to 2%, restricting viable circuit depths to approximately 100 layers before noise overwhelms coherent evolution, as observed in benchmarks on devices like IonQ's systems, where recent advances push two-qubit fidelities to 99.99% even though general NISQ averages remain higher. Noise is commonly modeled using Pauli error channels, where the error probability P_{\text{err}} = 1 - F quantifies gate infidelity, with F the average gate fidelity; for a two-qubit gate, this maps the ideal unitary into a probabilistic mixture of Pauli operators I, X, Y, and Z applied to the qubits. This model captures the depolarizing noise dominant in NISQ devices. The implications of these noise characteristics are profound: circuit fidelity decays roughly as (1 - \epsilon)^d for depth d, rendering deep computations infeasible without mitigation and compelling NISQ algorithms to rely on shallow circuits of limited depth to preserve quantum advantage.
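
To make the depth constraint concrete, the short sketch below evaluates the (1 - \epsilon)^d decay model from the paragraph above for a few illustrative two-qubit error rates; the error rates, depth range, and 10% signal threshold are assumptions chosen purely for illustration.

```python
# Evaluate the independent-error model: expectation values decay roughly as
# (1 - eps)^d with circuit depth d. All numbers here are illustrative.
import numpy as np

ideal_expectation = 1.0                      # noiseless <Z> of some circuit (assumed)
error_rates = np.array([0.005, 0.01, 0.02])  # eps = 0.5%, 1%, 2% per layer
depths = np.arange(0, 501, 20)

for eps in error_rates:
    signal = ideal_expectation * (1.0 - eps) ** depths
    below = depths[signal < 0.1]             # depths where signal drops under 10%
    first = below[0] if below.size else ">500"
    print(f"eps = {eps:.3f}: signal falls below 10% of ideal near depth {first}")
```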

Core Algorithms for NISQ Devices

Variational Quantum Eigensolver

The variational quantum eigensolver (VQE) is a hybrid quantum-classical algorithm designed to approximate the ground-state energy of a quantum system, making it particularly suitable for noisy intermediate-scale quantum (NISQ) devices due to its shallow circuit depths and tolerance to errors. Introduced in 2014, VQE leverages the variational principle from quantum mechanics, which states that the expectation value of the Hamiltonian for any trial wavefunction provides an upper bound to the true ground-state energy. In this approach, a parameterized quantum circuit prepares a trial state on the quantum processor, while a classical computer measures the energy expectation value and iteratively optimizes the parameters to minimize it. This structure allows VQE to address complex problems like molecular simulations that are intractable on classical computers alone. The workflow of VQE begins with mapping the problem Hamiltonian H (e.g., for a molecular system) into a qubit representation as a sum of Pauli terms, followed by the preparation of a trial wavefunction |\psi(\theta)\rangle using a parameterized ansatz circuit, where \theta denotes the variational parameters. The energy expectation value is then computed as E(\theta) = \langle \psi(\theta) | H | \psi(\theta) \rangle, which is equivalent to \mathrm{Tr}[\rho(\theta) H] with \rho(\theta) = |\psi(\theta)\rangle \langle \psi(\theta)| for the pure trial state. This value is minimized iteratively using classical optimization techniques, such as gradient descent or derivative-free algorithms like COBYLA, until convergence to an approximation of the ground-state energy. The quantum part involves repeated executions of the circuit to estimate expectation values of the Pauli terms in H, while the classical optimizer updates \theta based on these measurements. Central to VQE is the choice of ansatz, the parameterized form of the trial wavefunction, which must balance expressivity, circuit depth, and compatibility with hardware constraints. Hardware-efficient ansätze consist of alternating layers of single-qubit rotations and entangling gates tailored to the qubit connectivity of the device, enabling shallow circuits that mitigate error accumulation; for instance, such ansätze were used to simulate small molecules on superconducting processors. Problem-inspired ansätze, in contrast, incorporate domain-specific structure, such as the unitary coupled cluster singles and doubles (UCCSD) form for quantum chemistry, which mimics classical coupled-cluster theory by applying unitary transformations corresponding to excitations. UCCSD ansätze have been shown to achieve high accuracy for molecular ground states by capturing strong correlation effects relevant to chemical bonding. VQE has found primary applications in molecular simulations within the NISQ era, where exact diagonalization is infeasible classically. A landmark demonstration in 2017 involved computing the ground-state energy of the molecule \mathrm{H_2} on IBM's superconducting quantum processors using a hardware-efficient ansatz, approaching chemical accuracy despite device noise. More recent advances (2023–2025) have extended VQE to larger molecular systems through techniques like fragment-based approaches, enabling simulations of small molecular clusters such as hydrogen systems (H₃⁺ to H₂₄) by dividing them into qubit-efficient fragments; for example, the fragment molecular orbital-based VQE has facilitated ground-state calculations while reducing qubit requirements from potentially dozens to 4–16 qubits per fragment. As of 2025, parallel-VQE approaches have enhanced parameter optimization efficiency. These developments highlight VQE's role in advancing quantum chemistry toward practical utility on current hardware.
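
The following minimal sketch runs the VQE loop on a classical statevector simulator: a toy two-qubit Hamiltonian expressed as a sum of Pauli terms, a small hardware-efficient-style ansatz of Ry rotations and a CNOT, and a derivative-free classical optimizer closing the loop. The Hamiltonian coefficients and ansatz layout are illustrative assumptions, not the mapping of any real molecule.

```python
# Minimal statevector sketch of the VQE loop (toy Hamiltonian, assumed values).
import numpy as np
from scipy.optimize import minimize

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Toy Hamiltonian H = 0.4*Z(x)I + 0.4*I(x)Z + 0.2*X(x)X (coefficients assumed).
H = 0.4 * np.kron(Z, I2) + 0.4 * np.kron(I2, Z) + 0.2 * np.kron(X, X)

def ry(theta):
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

# CNOT with the first qubit as control.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def ansatz_state(params):
    """|psi(theta)> = (Ry(x)Ry) . CNOT . (Ry(x)Ry) |00>, a tiny hardware-efficient ansatz."""
    psi = np.zeros(4)
    psi[0] = 1.0
    psi = np.kron(ry(params[0]), ry(params[1])) @ psi
    psi = CNOT @ psi
    psi = np.kron(ry(params[2]), ry(params[3])) @ psi
    return psi

def energy(params):
    psi = ansatz_state(params)
    return float(psi @ H @ psi)   # <psi|H|psi> for a real statevector

# Classical outer loop: derivative-free optimizer with a few random restarts
# to guard against local minima in the small parameter landscape.
rng = np.random.default_rng(7)
best = min((minimize(energy, x0=rng.uniform(-np.pi, np.pi, 4), method="COBYLA")
            for _ in range(5)), key=lambda r: r.fun)

print(f"VQE estimate of ground energy: {best.fun:.4f}")
print(f"Exact ground energy          : {np.linalg.eigvalsh(H).min():.4f}")
```

On real hardware, the exact energy evaluation above would be replaced by repeated circuit executions that estimate each Pauli term from measurement statistics.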

Quantum Approximate Optimization Algorithm

The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical variational algorithm designed for solving combinatorial optimization problems on noisy intermediate-scale quantum (NISQ) devices. It aims to approximate solutions to NP-hard problems by preparing a parameterized quantum state and optimizing its parameters classically to minimize a cost function. QAOA is particularly suited to NISQ hardware due to its reliance on shallow circuits, which mitigate the effects of noise while leveraging superposition and entanglement for exploring solution spaces. Unlike fully quantum algorithms requiring deep, fault-tolerant circuits, QAOA uses a classical optimizer to iteratively refine parameters, making it practical for current devices with limited coherence times. The algorithm's structure consists of alternating layers of two unitary operators: the problem Hamiltonian H_C, which encodes the cost function of the optimization problem, and the mixer Hamiltonian H_M, typically a transverse-field operator that facilitates exploration of the solution space. These layers are parameterized by angles \gamma and \beta, where \gamma controls the evolution under H_C to align the state with low-cost solutions, and \beta governs the mixing to maintain superposition. For a p-layer QAOA state, the wavefunction is prepared as: |\psi(p)\rangle = e^{-i\beta_p H_M} e^{-i\gamma_p H_C} \cdots e^{-i\beta_1 H_M} e^{-i\gamma_1 H_C} |s\rangle, where |s\rangle is the equal superposition state over all possible bit strings, serving as the initial uniform ansatz. The expectation value \langle \psi(p) | H_C | \psi(p) \rangle is then measured on the quantum device and maximized (or minimized, depending on the formulation) via classical optimization techniques, such as gradient descent or COBYLA, to achieve an approximation ratio relative to the optimal solution. This hybrid loop—quantum state preparation, measurement, and classical feedback—enables QAOA to navigate the combinatorial landscape despite hardware noise. Early demonstrations around 2019 on superconducting quantum processors achieved viable results for small graphs (up to 8 vertices), demonstrating approximation ratios around 0.7 with p=1 layers, though noise limited deeper circuits. By 2024, advancements in QAOA on trapped-ion platforms had shown improved performance for portfolio optimization—a quadratic unconstrained binary optimization (QUBO) variant—with instances up to 10 assets, yielding approximation ratios around 0.8 for p=2–3 layers in some cases. In 2025, extensions like QAOA-GPT have improved handling of higher-order optimization problems. These benchmarks highlight QAOA's viability within NISQ constraints, with circuit depths typically under 100 gates for small-to-medium problem sizes. Theoretical analyses suggest QAOA offers potential quantum advantage through speedups over classical methods for certain NP-hard problems, particularly when restricted to shallow circuits (p \leq 10), where it can approximate solutions in polynomial time with high probability for problems like MaxCut on certain graph classes. For instance, on 3-regular graphs, QAOA with shallow layers achieves a constant-factor approximation better than some classical algorithms, potentially enabling practical utility on NISQ hardware before full fault tolerance. However, performance degrades with increasing noise, and the classical optimization step can become a bottleneck for large parameter spaces, underscoring the need for noise-robust implementations.
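
As a worked example of this alternating-operator structure, the sketch below prepares the p = 1 QAOA state for MaxCut on a four-node ring with dense matrix exponentials, using a coarse grid search over (\gamma, \beta) in place of the classical optimizer; the graph, angle grid, and use of exact statevectors are assumptions made purely for illustration.

```python
# Depth-1 QAOA for MaxCut on a 4-node ring, built with dense matrices.
import numpy as np
from scipy.linalg import expm
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # 4-node cycle (assumed example)
n = 4
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(single_qubit_ops):
    """Tensor together the listed single-qubit operators, one per qubit."""
    out = np.array([[1.0]])
    for op in single_qubit_ops:
        out = np.kron(out, op)
    return out

# Cost Hamiltonian H_C = sum over edges of 0.5*(I - Z_i Z_j); its expectation
# value equals the expected cut size. Mixer H_M = sum_i X_i.
H_C = sum(0.5 * (np.eye(2**n) - op_on([Z if q in e else I2 for q in range(n)]))
          for e in edges)
H_M = sum(op_on([X if q == i else I2 for q in range(n)]) for i in range(n))

plus = np.full(2**n, 1.0 / np.sqrt(2**n))   # uniform superposition |s>

def qaoa_expectation(gamma, beta):
    psi = expm(-1j * beta * H_M) @ expm(-1j * gamma * H_C) @ plus
    return float(np.real(psi.conj() @ H_C @ psi))

# Coarse grid search over (gamma, beta) stands in for the classical optimizer.
best = max(((g, b, qaoa_expectation(g, b))
            for g, b in product(np.linspace(0, np.pi, 25), repeat=2)),
           key=lambda t: t[2])
print(f"best p=1 expected cut ~ {best[2]:.3f} "
      f"(optimal cut is 4; gamma={best[0]:.2f}, beta={best[1]:.2f})")
```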

Error Mitigation Techniques

Zero-Noise Extrapolation

Zero-noise extrapolation (ZNE) is a post-processing error mitigation technique designed to estimate the results of an ideal, noise-free quantum computation from measurements performed on noisy hardware, without requiring modifications to the underlying quantum device or additional qubits. Introduced as a method to suppress errors in short-depth quantum circuits, ZNE operates by intentionally amplifying the noise in the executed circuits, collecting expectation values at several elevated noise levels, and then using classical fitting to extrapolate back to the zero-noise regime. This approach assumes that the noise can be parameterized by a scaling factor η, where η = 1 corresponds to the native noise level of the device, and higher values of η represent amplified noise. Noise amplification is typically achieved through practical modifications such as stretching pulse durations to increase coherent errors or inserting additional identity gates to boost incoherent errors like decoherence. The core of ZNE involves measuring the expectation value ⟨O⟩_η of an observable O across a set of noise factors η_i ≥ 1, often 3 to 5 points for stability. These measurements are then fitted to a low-order model of the form f(\eta) \approx \sum_{k=0}^{n} c_k \eta^k, where n is typically small (e.g., 1 or 2 for linear or quadratic fits) to avoid overfitting, and the mitigated estimate is obtained by evaluating f(0) at the zero-noise point. This leverages the fact that for many noise models, errors perturb the result perturbatively, allowing recovery of the leading-order noiseless value. Global ZNE applies uniform scaling across the entire circuit, assuming homogeneous error rates, while local variants target subsystems or specific partitions to address spatially varying noise, such as in multi-qubit gates or modular architectures. Local ZNE can improve accuracy for correlated errors by independently extrapolating subsystems, though it increases the number of required measurements. Early implementations demonstrated ZNE's efficacy in variational quantum algorithms; for instance, it was applied to the variational quantum eigensolver (VQE) on small superconducting devices. More recent extensions have scaled ZNE to larger systems, including 127-qubit circuits on IBM devices, enabling accurate expectation values for complex many-body physics simulations beyond brute-force classical capabilities. These advancements often combine ZNE with other post-processing steps like readout error correction to handle higher noise floors in scaled hardware. Despite its hardware-agnostic nature, ZNE has limitations rooted in its reliance on multiple circuit executions—typically 10–100 times more samples than a single run—to gather data for fitting, which can exacerbate statistical noise in high-depth or large-qubit scenarios. It is most effective for low-order errors where the polynomial approximation holds, but higher-order noise or non-Markovian effects can lead to instabilities, requiring careful selection of noise factors and fit orders to ensure reliability. Ongoing research focuses on adaptive noise scaling and hybrid fitting functions to extend its applicability in the NISQ era.
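
A minimal sketch of the extrapolation step, assuming synthetic expectation values that decay exponentially with the noise factor η plus a small amount of shot noise: the points are fit with a quadratic in η and the fit is evaluated at η = 0. The decay model, noise factors, and shot-noise level are illustrative assumptions.

```python
# Zero-noise extrapolation on synthetic data (decay model and numbers assumed).
import numpy as np

ideal_value = 0.90                          # noiseless expectation value (assumed)
noise_factors = np.array([1.0, 1.5, 2.0, 3.0])

# Pretend the device returns values damped by noise, plus shot noise.
rng = np.random.default_rng(0)
measured = ideal_value * np.exp(-0.25 * noise_factors) + rng.normal(0, 0.005, 4)

# Fit a quadratic in eta and evaluate the fit at the zero-noise point eta = 0.
coeffs = np.polyfit(noise_factors, measured, deg=2)
mitigated = np.polyval(coeffs, 0.0)

print(f"raw value (eta=1): {measured[0]:.3f}")
print(f"ZNE estimate     : {mitigated:.3f}")
print(f"ideal value      : {ideal_value:.3f}")
```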

Probabilistic Error Cancellation and Symmetry Verification

Probabilistic error cancellation (PEC) is a quantum error mitigation technique designed to counteract gate noise in near-term quantum devices by inverting the noise channel through quasi-probabilistic sampling. The approach begins by decomposing the inverse of the effective noise channel \Lambda acting on the quantum operation into a quasi-probability combination of implementable operations: \Lambda^{-1} = \sum_i q_i G_i, where the G_i are physically realizable quantum channels (such as Pauli gates or the identity), and the coefficients q_i satisfy \sum_i q_i = 1 but may include negative values, enabling the representation of the non-physical inverting map. To obtain an unbiased estimate of the ideal expectation value, one samples from the probability distribution p_i = |q_i| / \sum_j |q_j|, executes the corresponding circuit with G_i inserted followed by measurement of the observable, and applies sign corrections in post-processing via weighted averaging. The mitigated observable is given by O_{\text{mit}} = \sum_i \frac{q_i}{p_i} \langle O_i \rangle averaged over the sampled circuits, where \langle O_i \rangle is the measured expectation value from the i-th sampled circuit, and the variance scales with the l_1-norm of the quasi-probability distribution, \sum_i |q_i|, which quantifies the sampling overhead. Symmetry verification complements PEC by providing a low-overhead post-selection method to filter errors that violate inherent symmetries of the quantum problem, such as conservation of particle number or total spin in many-body Hamiltonians. This technique involves augmenting the circuit with measurements of symmetry-projecting operators (e.g., parity checks or number operators) and discarding outcomes outside the desired symmetry subspace, thereby mitigating relaxation-induced errors and readout biases without requiring noise model inversion. For instance, in systems conserving particle number, post-selection on fixed excitation sectors suppresses leakage errors that alter occupancy. When combined with PEC, symmetry verification enhances overall fidelity by ensuring sampled outcomes respect problem symmetries before quasi-probabilistic combination. Experimental demonstrations of PEC have validated its efficacy on NISQ hardware, with a 2022 implementation on superconducting processors using sparse Pauli-Lindblad noise models achieving bias-free mitigation for circuits up to 10 qubits affected by correlated noise, improving observable estimates by factors of 2–5. However, PEC's practical applicability remains constrained by an exponential sampling overhead in the gate error rate \epsilon, scaling roughly as (1 + O(\epsilon))^{d} for circuit depth d, thus limiting it to shallow, low-noise regimes.
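
The sketch below illustrates the PEC sampling loop for the simplest possible case: a single-qubit identity gate followed by depolarizing noise, whose inverse channel has a known quasi-probability decomposition over Pauli corrections. The noise strength and shot count are assumed for illustration; the coefficients follow from the standard quasi-probability decomposition of the inverse depolarizing channel.

```python
# Monte Carlo sketch of PEC for one qubit whose "gate" is an identity followed
# by depolarizing noise of strength p (assumed). The inverse channel decomposes
# over Pauli corrections {I, X, Y, Z} with quasi-probabilities
# q_I = 1 + p/lam and q_X = q_Y = q_Z = -p/(3*lam), where lam = 1 - 4p/3.
import numpy as np

p = 0.05
lam = 1.0 - 4.0 * p / 3.0
q = np.array([1.0 + p / lam] + [-p / (3.0 * lam)] * 3)
gamma = np.abs(q).sum()            # l1 norm = sampling overhead
probs = np.abs(q) / gamma

rng = np.random.default_rng(1)
shots = 100_000
z_ideal = 1.0                      # ideal <Z> of |0> through a noiseless identity

def one_shot():
    i = rng.choice(4, p=probs)     # sample which Pauli correction to insert
    z = lam * z_ideal              # depolarizing noise damps the Bloch-z component
    if i in (1, 2):                # an X or Y correction flips <Z>
        z = -z
    outcome = 1.0 if rng.random() < (1.0 + z) / 2.0 else -1.0   # projective Z measurement
    return np.sign(q[i]) * gamma * outcome                      # sign-and-rescale estimator

estimates = np.array([one_shot() for _ in range(shots)])
print(f"unmitigated <Z>: {lam * z_ideal:.4f}")
print(f"PEC estimate   : {estimates.mean():.4f}  (ideal 1.0)")
print(f"sampling overhead gamma = {gamma:.3f}")
```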

Quantum Utility and Advantage

Current Demonstrations and Benchmarks

In the noisy intermediate-scale quantum (NISQ) era, demonstrations of quantum utility have focused on tasks where quantum devices provide practical value over classical methods, particularly in quantum chemistry and many-body physics. A notable example is IBM's 2023 application of the variational quantum eigensolver (VQE) to simulate reactions on magnesium surfaces. This work used a hybrid quantum-classical embedding approach on devices including the 127-qubit ibmq_sherbrooke processor with error mitigation, yielding energies compatible with noiseless simulations and improving upon classical approximations in certain regimes for capturing complex electron correlation effects. Benchmarks have emerged to quantify NISQ device performance and progress toward utility. Metrics such as IBM's Quantum Volume, which captures the largest square circuit (equal width and depth) a device can execute reliably, and newer layer-fidelity measures provide application-oriented assessments of hardware capability; IBM's 2023 utility experiments executed circuits whose exact simulation lies beyond brute-force classical methods. Similarly, Google's Willow quantum chip, announced in December 2024, demonstrated beyond-classical performance in random circuit sampling (RCS), completing a benchmark computation in under five minutes that would require approximately 10^25 years on a classical supercomputer, underscoring scalable quantum advantage in sampling tasks despite noise. Specific experimental achievements illustrate NISQ potential in physics and optimization. In 2023, IBM executed error-mitigated simulations of the transverse-field Ising model's time dynamics on a 127-qubit superconducting processor, accurately measuring expectation values over 60 layers of two-qubit gates—equivalent to circuit depths infeasible for classical simulation without approximation—using zero-noise extrapolation to suppress errors. In 2025, IonQ, in collaboration with Oak Ridge National Laboratory, demonstrated a hybrid quantum-classical workflow on its 36-qubit Forte Enterprise system for solving unit commitment problems in power grid optimization, improving solution quality over classical heuristics for instances involving 24 time periods and 26 generators. These demonstrations come with trade-offs inherent to NISQ error mitigation. Techniques like zero-noise extrapolation and probabilistic error cancellation typically require 10–100 times more measurement shots to achieve reliable results, increasing computational overhead on current hardware. However, in select applications such as molecular energy calculations, mitigated NISQ simulations have shown improvements in accuracy over approximate classical methods in capturing correlations.

Theoretical Separations and Limitations

The noisy intermediate-scale quantum (NISQ) era operates within a restricted subset of the complexity class BQP, which encompasses problems solvable in polynomial time on a fault-tolerant quantum computer with bounded error. Unlike BQP, which allows for deep circuits enabling algorithms like Shor's for integer factorization or Grover's for unstructured search, NISQ computations are fundamentally limited by hardware noise and qubit counts, confining them to shallow-depth circuits that cannot implement these full-scale quantum algorithms. This restriction places NISQ problems in an intermediate complexity class between classical BPP (bounded-error probabilistic polynomial time) and BQP, where noise prevents the exponential speedups promised by ideal quantum computing. Theoretical separations highlight potential quantum speedups achievable even in noisy settings, distinguishing NISQ from classical computation. For instance, random quantum circuits that approximate unitary t-designs—ensembles mimicking the Haar-random distribution up to t moments—demonstrate classical hardness for sampling, as the output probabilities require exponential resources to compute classically under standard conjectures. This property underpins demonstrations of quantum advantage in sampling tasks, where shallow noisy circuits suffice to exceed classical capabilities. Similarly, boson sampling remains computationally hard in the presence of realistic noise levels, such as photon loss or distinguishability, preserving its #P-hardness and providing a pathway for NISQ-era speedups without full fault tolerance. Despite these separations, inherent limitations underscore the challenges of NISQ computations. Noise induces barren plateaus in the training landscapes of variational quantum algorithms, where gradients vanish exponentially with qubit number, rendering optimization intractable for large systems due to the concentration of the cost function around a constant value. Additionally, implementing fault-tolerant error correction on NISQ devices incurs a substantial overhead in physical qubits and gates, since the number of physical qubits per logical qubit grows quadratically with the code distance required to reach a target logical error rate, outstripping current device scales. For the quantum approximate optimization algorithm (QAOA), noise bounds the approximation ratio to 1 - O(1/p) for p layers, and performance saturates at classical limits under high noise, preventing convergence to the adiabatic optimum. These results emphasize that while NISQ enables hybrid quantum-classical workflows for specific tasks, broad separations from classical computing remain elusive without error suppression scaling toward fault tolerance.

Path Beyond NISQ

Transition to Fault-Tolerant Computing

The transition from the noisy intermediate-scale quantum (NISQ) era to fault-tolerant quantum computing relies on implementing quantum error correction (QEC) codes that suppress errors below critical thresholds, enabling reliable logical operations despite underlying physical noise. A prominent example is the surface code, which achieves effective error suppression when physical error rates are below approximately 1% for depolarizing noise models, allowing the construction of logical qubits from a large ensemble of physical ones. This typically requires an overhead of around 1,000 physical qubits per logical qubit to maintain low logical error rates, scaling with the desired code distance to protect against error propagation. NISQ devices play a crucial role in validating these QEC approaches by benchmarking decoders and syndrome extraction protocols on current hardware, providing empirical data to refine fault-tolerant designs. For instance, in 2024, Google Quantum AI demonstrated a distance-7 surface code memory using 49 data qubits on their Willow processor, operating below the threshold with real-time decoding and achieving exponential error suppression as the code distance increased. These experiments highlight how NISQ-scale systems can test the practical feasibility of error correction primitives, bridging the gap to larger-scale implementations despite inherent limitations in qubit fidelity and connectivity. Hybrid computational paradigms offer intermediate paths toward fault tolerance, leveraging NISQ-compatible resources while incorporating error-resilient elements. Measurement-based quantum computation (MBQC), which uses pre-entangled cluster states and adaptive measurements, enables fault-tolerant schemes through topological encodings that tolerate local errors up to a threshold, serving as a bridge by requiring fewer adaptive gates than gate-based models. Similarly, adiabatic quantum computing can transition to fault tolerance via protected evolutions in encoded subspaces, where slow Hamiltonian changes mitigate decoherence, as explored in hybrid methodologies that combine adiabatic paths with error-corrected stabilizers. Significant challenges persist in this transition, including the substantial overhead in physical qubit count and computational runtime required for syndrome processing and correction cycles, which can extend logical gate times by orders of magnitude. Prototypes addressing these issues, such as Microsoft's Majorana 1 topological processor unveiled in 2025, utilize Majorana zero modes in topoconductors to inherently suppress errors through non-local encoding, potentially reducing overhead compared to conventional codes (though the claims have faced scientific debate regarding their experimental validation). Key markers of progress beyond NISQ include achieving "beyond break-even" performance, where logical operations outperform the equivalent physical ones in fidelity, with projections indicating scalable demonstrations by 2030 through iterative improvements in hardware and decoding algorithms.
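
The below-threshold scaling argument can be made concrete with a short sketch: assuming a physical error rate under the surface-code threshold, the logical error rate per cycle is commonly modeled as falling exponentially with code distance d, while the physical-qubit overhead per logical qubit grows roughly as 2d². The error rate, threshold, and prefactor used here are illustrative assumptions, not measured values.

```python
# Illustrative below-threshold scaling: eps_L ~ A * (p / p_th)**((d + 1) / 2),
# with ~2*d^2 - 1 physical qubits (data plus measure) per rotated surface-code
# logical qubit. Parameters are assumptions for illustration only.
p, p_th, A = 3e-3, 1e-2, 0.1

for d in (3, 5, 7, 9, 11):
    eps_logical = A * (p / p_th) ** ((d + 1) / 2)
    physical_qubits = 2 * d * d - 1
    print(f"d = {d:2d}: ~{physical_qubits:4d} physical qubits, "
          f"logical error per cycle ~ {eps_logical:.1e}")
```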

Industry Roadmaps and Projections

IBM has outlined a detailed roadmap for advancing quantum computing, targeting the demonstration of the first scientific quantum advantage in 2026 through integration with high-performance computing, with a large-scale fault-tolerant quantum computer by 2029. In November 2025, IBM unveiled the Nighthawk and Loon processors, advancing the roadmap toward these goals. This modular approach emphasizes scalable error correction using quantum low-density parity-check codes, building toward larger systems. By 2029, IBM plans to deploy the Starling system with 200 logical qubits capable of 100 million quantum operations, serving as a foundation for the Blue Jay processor. Ultimately, by 2033, the company aims to scale fault-tolerant quantum computers to support circuits of 1 billion gates on up to 2,000 logical qubits via modular architectures. Google Quantum AI's roadmap focuses on achieving a useful, error-corrected quantum computer by 2029, with intermediate milestones including demonstrations of scalable logical qubits and below-threshold error correction. The company prioritizes applications in materials discovery, leveraging quantum simulations to accelerate breakthroughs in chemistry and materials modeling. Recent advancements, such as the Willow processor, have shown exponential improvement in error-corrected performance, supporting projections for practical utility in scientific simulations by the late 2020s. Other industry players are pursuing aggressive scaling targets. Rigetti released its 84-qubit Ankaa-2 superconducting processor in 2024, achieving 98% median two-qubit gate fidelity and enabling real-time demonstrations. The company aims to expand to over 100 physical qubits by late 2025, with long-term goals focused on fault-tolerant systems through hybrid quantum-classical architectures. In China, the Jiuzhang 3.0 photonic quantum computer demonstrated quantum computational supremacy in 2023 by processing Gaussian boson sampling tasks 10^16 times faster than classical supercomputers using 255 detected photons. Chinese projections emphasize continued photonic and superconducting advancements, supported by national initiatives to achieve global leadership in quantum technology by 2030. Global investments in quantum technologies have surpassed $44.5 billion in cumulative public funding by 2024, with private equity funding reaching $3.77 billion in the first three quarters of 2025 alone. In the United States, the Quantum Economic Development Consortium (QED-C) coordinates industry, academia, and government efforts to accelerate commercialization and workforce development. The European Union's Quantum Flagship initiative, with €1 billion allocated over 10 years since 2018, supports research in fault-tolerant quantum computing and quantum communication. Projections indicate that the NISQ era will peak in utility between 2025 and 2030, enabling hybrid quantum-classical applications in optimization and simulation, before transitioning to fault-tolerant systems post-2030. These fault-tolerant machines are expected to unlock transformative impacts in chemistry and materials science, through accurate molecular simulations, and in cryptanalysis, via algorithms like Shor's for factoring large numbers.
