
Quantum volume

Quantum volume is a benchmark metric that quantifies the performance of near-term quantum computers by measuring the largest random square circuit—equal in width (number of qubits) and depth (number of layers)—that a device can execute with sufficient fidelity to produce the expected output distribution. Introduced by IBM researchers in 2019, it serves as an architecture-neutral benchmark for noisy intermediate-scale quantum (NISQ) devices, capturing the interplay of qubit count, error rates, connectivity, and compilation efficiency in a single value, V_Q = 2^k, where k is the largest circuit size for which the benchmark succeeds.

To compute quantum volume, experiments generate ensembles of random model circuits (for example, with the QuantumVolume library in Qiskit) consisting of m qubits and d layers of random two-qubit unitaries (m/2 per layer), interleaved with single-qubit gates and preceded by random permutations of the qubit labels to emulate full connectivity. Success is determined by the heavy output probability (HOP), the fraction of measured bitstrings that fall among the outputs with the highest ideal probabilities; a circuit size passes if the average HOP exceeds 2/3 over at least 100 trials, with statistical confidence above 97.7% (using a z-score of 2). The value of k is then min(m, d(m)), maximized over possible m, providing a pragmatic measure of usable quantum computation volume despite imperfect hardware.

Since its proposal, quantum volume has become a key industry standard for tracking progress, with IBM's early systems achieving V_Q = 16 in 2019 and subsequent advancements pushing boundaries—such as Quantinuum reaching V_Q = 2^{25} = 33,554,432 in September 2025—highlighting improvements in error mitigation and scaling. While it emphasizes practical NISQ capabilities, limitations include reliance on classical simulation for validation and assumptions of balanced width-depth scaling, prompting extensions like volumetric benchmarks for broader testing. The metric underscores the path from current noisy devices toward fault-tolerant quantum computing, influencing hardware design and algorithmic development across major hardware providers such as IBM and Quantinuum.

Overview

Purpose and Significance

Quantum volume (QV) is defined as a single-number metric that quantifies the largest size of a square circuit—characterized by equal width n (number of qubits) and depth d (number of layers of quantum operations)—that a quantum computer can execute successfully, such that the average heavy output probability (HOP), the fraction of shots yielding the most probable output bitstrings, exceeds 2/3 over multiple trials with high statistical confidence. This metric encapsulates multiple hardware aspects, including qubit count, gate fidelity, connectivity, and measurement errors, providing a holistic benchmark rather than isolated figures of merit.

The primary purpose of quantum volume is to facilitate fair and standardized comparisons across diverse noisy intermediate-scale quantum (NISQ) devices, moving beyond simplistic metrics like raw qubit counts that fail to capture overall system performance. By incorporating factors such as two-qubit gate errors, readout errors, and crosstalk, QV evaluates how effectively a device can handle realistic workloads in the NISQ era, where noise limits computation without full error correction. Introduced in 2019 by IBM researchers amid the rapid proliferation of NISQ hardware, it addressed the need for a benchmark that reflects the compounded effects of hardware imperfections on computational utility.

The significance of quantum volume lies in its role as a bridge between raw hardware capabilities and the execution of practical NISQ algorithms, such as the variational quantum eigensolver (VQE) for molecular simulations and the quantum approximate optimization algorithm (QAOA) for combinatorial problems. A higher QV indicates greater potential for running the deeper and wider circuits required by these hybrid quantum-classical methods, thereby signaling progress toward useful quantum advantage in applications like quantum chemistry and optimization. The metric thus guides hardware development priorities, emphasizing balanced improvements in gate fidelity and connectivity over mere qubit scaling.

Relation to Quantum Computing Performance

Quantum volume serves as a key metric for assessing the overall usability of a quantum computer in executing quantum algorithms, particularly by quantifying the largest random square circuits that can be run with acceptable fidelity. This directly correlates with algorithm feasibility, as higher quantum volumes enable more complex computations before errors dominate. For example, a quantum volume of 2^{10} = 1024 corresponds to the reliable execution of circuits comprising 10 qubits and up to 10 layers of two-qubit gates, which is adequate for basic simulations in quantum chemistry, such as approximating molecular ground states via the variational quantum eigensolver (VQE).

In the noisy intermediate-scale quantum (NISQ) era, quantum volume facilitates benchmarking for practical applications where circuit depth is severely limited by noise accumulation. It evaluates system performance in tasks like energy calculations for small molecules or combinatorial optimization problems, providing a standardized way to gauge whether a device can support hybrid quantum-classical workflows without excessive post-processing overhead. By capturing the interplay of width, depth, and error thresholds, quantum volume highlights the practical utility of NISQ hardware for these error-prone yet valuable computations.

Achieving higher quantum volumes involves inherent trade-offs between scaling the number of qubits and maintaining high coherence times and gate fidelities. For instance, expanding qubit counts without parallel improvements in error mitigation techniques can diminish the effective quantum volume, as increased crosstalk and decoherence exacerbate error accumulation in deeper circuits. This balance underscores the need for holistic hardware-software co-design to push beyond current limitations.

Broader implications of quantum volume extend to forecasting pathways toward quantum advantage, where it acts as an indicator of a system's readiness for demonstrating computational superiority in hybrid setups. Companies such as IBM and Quantinuum leverage quantum volume milestones to guide roadmap planning, targeting enhanced performance on the path to fault-tolerant quantum computing in applications spanning quantum chemistry and optimization. Such metrics also inform strategic decisions at other hardware firms, aligning hardware advancements with the pursuit of scalable, advantage-yielding quantum processors.

Formulation

Original Definition

Quantum volume was originally proposed in 2018 by N. Moll and colleagues as a single-number metric to assess the capability of noisy intermediate-scale quantum (NISQ) devices in executing useful quantum circuits. The metric aims to quantify the largest "useful" volume of quantum circuits—defined as the product of circuit width and depth—that a device can reliably execute before errors dominate and render the computation ineffective, thereby highlighting key limitations in NISQ-era hardware.

The formulation assumes square circuits where width equals depth, employs random two-qubit gates applied to uniformly chosen pairs of qubits (assuming full connectivity), and defines success theoretically as maintaining output fidelity above 2/3 before errors dominate, based on error propagation models. Under these conditions, the core equation for quantum volume V_Q is given by V_Q = \max_{n < N} \left[ \min \left( n, \frac{1}{n \varepsilon_{\text{eff}}(n)} \right) \right]^2, where N is the total number of physical qubits available on the device, n is the circuit width (and depth), and \varepsilon_{\text{eff}}(n) represents the effective error rate per two-qubit gate in n-qubit circuits, incorporating both gate errors and readout inaccuracies.

The derivation stems from error propagation models in random circuits, where the effective depth d achievable before the output fidelity drops below the 2/3 threshold is approximated as d \approx 1 / (n \varepsilon_{\text{eff}}(n)), accounting for the accumulation of errors across the order of n d two-qubit gates in an n \times d circuit. Quantum volume then emerges as the square of the minimum between this effective depth and the width n, maximized over feasible n < N, providing a volume-like measure that balances qubit count against error susceptibility. This original definition laid the groundwork for benchmarking NISQ systems, though subsequent refinements by hardware providers like IBM adapted it for practical scalability.
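The maximization over n in the original definition is simple enough to evaluate numerically. The following sketch illustrates the formula under an assumed error model (a constant effective two-qubit error rate); the function name and error model are hypothetical, not taken from the original paper.

```python
def quantum_volume_original(num_qubits, eps_eff):
    """Original (Moll et al., 2018) quantum volume estimate.

    num_qubits : total physical qubits N on the device
    eps_eff    : callable n -> effective two-qubit error rate in n-qubit circuits
    Returns (V_Q, best_n).
    """
    best_vq, best_n = 0.0, 0
    for n in range(2, num_qubits):                     # maximize over n < N
        effective_depth = 1.0 / (n * eps_eff(n))       # d ≈ 1 / (n ε_eff(n))
        vq = min(n, effective_depth) ** 2              # square of the limiting dimension
        if vq > best_vq:
            best_vq, best_n = vq, n
    return best_vq, best_n

# Hypothetical error model: a fixed effective two-qubit error rate of 1%.
vq, n_opt = quantum_volume_original(num_qubits=50, eps_eff=lambda n: 0.01)
print(f"V_Q ≈ {vq:.0f} at optimal width n = {n_opt}")   # V_Q ≈ 100 at n = 10
```

With a 1% effective error rate, the width and the achievable depth cross at n = 10, so the metric saturates at V_Q ≈ 100 regardless of how many more physical qubits are available, illustrating how error rates rather than qubit count limit the original measure.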

IBM's Redefinition

In 2019, IBM researchers refined the quantum volume metric to better capture the performance of near-term quantum devices, as detailed in the work by Cross et al. The updated formulation expresses quantum volume on a logarithmic scale, defined as \log_2 V_Q = \max_{n \leq N} \min [n, d(n)], where N is the number of available qubits, n is the number of qubits used in the circuit, and d(n) represents the maximum circuit depth achievable for n-qubit circuits with a success probability exceeding 2/3 in the heavy output generation task. This yields V_Q = 2^k, where k is the integer value of the logarithm, emphasizing exponential growth in computational capability.

A primary change in this redefinition is the adoption of the logarithmic scale, which aligns quantum volume directly with the complexity of classical simulation: simulating a quantum volume of V_Q = 2^k requires approximately k^3 2^k operations on a classical computer. This linkage provides a concrete benchmark for when quantum advantage becomes feasible, moving beyond the original metric's simpler product of width and depth.

The protocol integrates randomized model circuits composed of single-qubit rotation gates and two-qubit entangling gates, designed to stress-test the full system including compilation and execution. If the hardware lacks full qubit connectivity, the benchmark emulates it through additional swap gates, ensuring the metric reflects realistic algorithmic performance rather than topology limitations. This redefinition enhances comparability across devices by focusing on verifiable sampling success rates, mitigating the original metric's sensitivity to minor error fluctuations and promoting scalable benchmarking for evolving quantum hardware.
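Under the 2019 definition, once the achievable depth d(n) has been measured for each candidate width, extracting V_Q is a one-line maximization. A minimal sketch, assuming a hypothetical table of measured depths:

```python
# Hypothetical measured results: for each circuit width n, the deepest
# square-circuit depth d(n) that passed the heavy-output test (HOP > 2/3
# with the required confidence). These numbers are illustrative only.
measured_depth = {2: 2, 3: 3, 4: 4, 5: 5, 6: 5, 7: 4, 8: 3}

# log2(V_Q) = max over n of min(n, d(n))
k = max(min(n, d) for n, d in measured_depth.items())
print(f"log2(V_Q) = {k}, quantum volume V_Q = {2**k}")   # here: k = 5, V_Q = 32
```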

Measurement Process

Circuit Depth and Width

In the quantum volume benchmark, circuits are constructed as square architectures with width n (number of qubits) and depth d(n) (number of layers), where the goal is to balance width and depth. Each layer consists of a random permutation of the qubit labels followed by random two-qubit unitaries sampled from the Haar measure on SU(4), applied to ⌊n/2⌋ disjoint pairs of qubits to create entangling interactions. In implementations such as the Qiskit QuantumVolume library, single-qubit gates (e.g., random rotations) are interleaved between the two-qubit layers, increasing circuit realism. The gates are selected randomly to generate diverse, representative circuits that probe the system's capabilities without favoring specific algorithms.

The width n represents the number of qubits engaged in the circuit and is maximized up to the total available qubits N on the device, though hardware connectivity often imposes limits by necessitating additional operations. In devices without all-to-all connectivity, such as linear or grid topologies, swap gates must be inserted to route qubits for the required two-qubit interactions, which increases the effective circuit depth and gate overhead. This connectivity constraint can reduce the feasible width, as swaps consume coherence time and amplify error accumulation. The random permutations before each layer average over possible pairings, mitigating some connectivity issues.

Circuit depth d(n) is defined as the maximum number of layers executable while maintaining sufficient fidelity for success, scaling upward with improved qubit coherence times but diminishing as n grows due to cumulative errors across more qubits and gates. For non-ideal topologies, the insertion of swaps further erodes achievable depth by extending the total gate count per layer. For example, on devices with limited connectivity such as linear topologies, realizing full width requires multiple swaps to enable distant interactions, often capping d(n) below n and thus limiting the overall benchmark. IBM's metric incorporates this interplay by taking V_Q = 2^{\min[n, d(n)]}, emphasizing balanced performance in both dimensions.
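The permutation-plus-SU(4) layer structure can be sketched without any quantum SDK. The following Python example is an illustrative sketch, not the Qiskit implementation; the function names are hypothetical, and the random unitaries are Haar-random up to a global phase, which does not affect the benchmark.

```python
import numpy as np

def haar_random_unitary(dim, rng):
    """Haar-random unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Rescale columns by the phases of r's diagonal so the distribution is Haar.
    return q * (np.diag(r) / np.abs(np.diag(r)))

def qv_model_circuit(m, depth, rng):
    """Return the layer structure of one quantum-volume model circuit:
    a list of (qubit pairs, two-qubit unitaries) tuples, one per layer."""
    layers = []
    for _ in range(depth):
        perm = rng.permutation(m)                              # random relabeling of qubits
        pairs = [(perm[2 * i], perm[2 * i + 1]) for i in range(m // 2)]
        unitaries = [haar_random_unitary(4, rng) for _ in pairs]
        layers.append((pairs, unitaries))
        # On hardware without all-to-all connectivity, each pair may additionally
        # require swap gates for routing, which adds depth not modeled here.
    return layers

rng = np.random.default_rng(seed=42)
circuit = qv_model_circuit(m=4, depth=4, rng=rng)
print(f"{len(circuit)} layers, {len(circuit[0][0])} two-qubit unitaries per layer")
```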

Error Rates and Sampling Requirements

The effective error rate \epsilon_\text{eff} in quantum volume circuits quantifies the cumulative impact of gate and measurement errors on overall performance. It incorporates the average per-gate error rates, with typical two-qubit gate errors \epsilon_{2qg} \approx 0.5\%–1\%, single-qubit gate errors \epsilon_{1qg} < 0.1\%, and readout errors \epsilon_\text{read} \approx 1\%–5\%. For typical circuit models and topologies, the effective rate scales approximately as \epsilon_\text{eff}(n) \approx (a \sqrt{n} + b) \epsilon_{2qg}, where a \approx 1.29 and b \approx -0.78 for a square grid, reflecting routing overhead from connectivity.

A key success criterion for validating quantum volume circuits is the fidelity threshold based on heavy output generation. The heavy outputs are the bitstrings with ideal probabilities greater than or equal to the median probability of the output distribution (roughly the most probable half of the 2^n bitstrings). A circuit is considered successful if the heavy output probability (HOP)—the measured probability of obtaining one of these heavy outputs—exceeds 2/3. This threshold robustly accounts for depolarizing noise, ensuring that the device's output distribution remains distinguishable from a fully mixed state.

The sampling protocol estimates the heavy output probability with sufficient statistical confidence by executing ensembles of at least 100 randomized circuits, with each circuit run for a number of shots scaling as roughly 2^{n+2} to 2^{n+4} (e.g., 200–5000 total shots depending on n). Heavy output generation (HOG) enables this verification efficiently, avoiding the computational overhead of full quantum state tomography while confirming that the circuit preserves the intended non-uniform probability distribution. Success requires the average HOP to exceed 2/3 with greater than 97.7% confidence (a z-score above 2).

Error mitigation techniques, such as readout error correction, can enhance the effective achievable depth d(n) by reducing measurement noise, thereby allowing slightly larger circuits to pass the threshold. However, standard quantum volume assessments demand unmitigated results to accurately reflect intrinsic device performance without post-processing aids.

Practical scalability is limited by error accumulation; for a two-qubit gate error rate \epsilon_{2qg} = 0.01, the maximum width is roughly n \approx 20 before errors dominate, even assuming low single-qubit and readout errors. This bound arises from the approximate relation d(n) \approx 1/(n \epsilon_\text{eff}), beyond which circuit fidelity drops below the required threshold.
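A sketch of the heavy-set definition and the pass/fail decision, assuming per-circuit heavy-output probabilities have already been measured. The two-sigma criterion here is a simplified normal approximation of the confidence bound used in practice, and the helper names are hypothetical.

```python
import numpy as np

def heavy_outputs(ideal_probs):
    """Indices of bitstrings whose ideal probability is >= the median probability."""
    median = np.median(ideal_probs)
    return set(np.flatnonzero(ideal_probs >= median))

def passes_qv_threshold(hop_per_circuit, z=2.0, threshold=2 / 3):
    """Pass criterion: mean HOP minus z standard errors must exceed 2/3.

    hop_per_circuit : measured heavy-output probabilities, one per random
                      circuit (at least 100 circuits in the protocol).
    """
    hop = np.asarray(hop_per_circuit)
    mean = hop.mean()
    std_err = hop.std(ddof=1) / np.sqrt(len(hop))
    return mean - z * std_err > threshold, mean

# Example: heavy set of a 2-qubit ideal distribution (median = 0.25).
probs = np.array([0.10, 0.40, 0.15, 0.35])
print(sorted(heavy_outputs(probs)))           # [1, 3] -> outputs '01' and '11'

# Illustrative data: 100 circuits with HOP clustered around 0.72.
rng = np.random.default_rng(0)
fake_hops = rng.normal(loc=0.72, scale=0.05, size=100)
ok, mean_hop = passes_qv_threshold(fake_hops)
print(f"mean HOP = {mean_hop:.3f}, passes threshold: {ok}")
```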

Historical Achievements

Early Milestones (2018–2022)

The Quantum Volume (QV) metric, introduced by IBM in 2019, enabled the first systematic benchmarking of near-term quantum hardware, with early demonstrations relying on small-scale superconducting systems. In 2018, IBM retrospectively calculated a QV of 8 (2^3) for its conceptual benchmarks on simulated 3-qubit circuits, reflecting initial explorations of circuit depth and fidelity limits. On physical hardware, early 5-qubit devices on the IBM Q Experience achieved QV values of 2 to 4, constrained by high error rates and limited connectivity in these prototype superconducting processors.

By 2019, advancements in gate fidelity and calibration allowed IBM's 20-qubit IBM Q System One to reach a QV of 16 (2^4), marking the first hardware demonstration under the formalized protocol and highlighting improvements in two-qubit gate fidelity. This milestone coincided with IBM's redefinition of QV to incorporate practical sampling thresholds, facilitating more reliable benchmarks on noisy intermediate-scale quantum (NISQ) devices.

In January 2020, IBM's 28-qubit Raleigh system, featuring an improved hexagonal lattice for better qubit connectivity, attained a QV of 32 (2^5), doubling the previous year's record through optimized error mitigation and faster execution times. Later that year, upgrades to a 27-qubit Falcon processor pushed QV to 64, demonstrating scalable circuit execution up to moderate depths. Meanwhile, Google's 53-qubit Sycamore processor, while achieving a quantum supremacy result in random circuit sampling tasks, did not formally report QV but explored analogous fidelity and depth metrics in its superconducting architecture.

IBM continued its progress in 2021 with a 27-qubit Falcon r5 system reaching a QV of 128 (2^7), enabled by refinements in dynamical decoupling and readout error correction that extended coherent circuit depths. In March 2021, Honeywell's System Model H1, a trapped-ion precursor to Quantinuum's later H-series models, demonstrated a QV of 512, showcasing all-to-all connectivity advantages over fixed superconducting layouts. Rigetti Computing's 80-qubit Aspen-M processor, a multi-chip superconducting design, emphasized modular scaling despite challenges in inter-chip coherence.

In 2022, IBM's Falcon r10 achieved a QV of 512 (2^9), driven by its heavy-hex lattice and improved single-qubit fidelities exceeding 99.9%. IBM had also unveiled its 127-qubit Eagle processor in late 2021, advancing scale but with QV performance aligned to ongoing improvements. Throughout this period, superconducting qubits dominated early QV achievements due to their rapid fabrication cycles and integration with cryogenic infrastructure, with leading demonstrations on platforms accessible via the IBM Quantum cloud. However, QV growth trailed exponential increases in qubit counts—such as from 20 to 127 qubits—primarily because error rates scaled unfavorably with system size, limiting effective circuit volumes to below 2^{10} despite architectural innovations.

Recent Advances (2023–2025)

In 2023, Quantinuum's H1-1 system, featuring 20 qubits, achieved a Quantum Volume of 524,288 (2^{19}) in June, marking a significant leap in trapped-ion performance. Meanwhile, IBM's Prague processor reached a Quantum Volume of 512 (2^9), highlighting continued refinements in superconducting systems. By 2024, Quantinuum's H-series advanced further to a Quantum Volume of 1,048,576 (2^{20}), driven by enhancements in gate fidelity and error mitigation techniques. IonQ's 25-qubit system reported 25 algorithmic qubits (#AQ 25), underscoring competitive strides in trapped-ion architectures for reliable circuit execution.

In 2025, Quantinuum's H2-1 processor, scaling to 56 qubits, attained a Quantum Volume of 8,388,608 (2^{23}) in May, demonstrating exponential progress through improved coherence times. IBM's Heron processor, with 133 qubits, focused on modular scaling for quantum-centric supercomputing. By September, Quantinuum's H2-2 variant, also at 56 qubits, set a new record with a Quantum Volume of 33,554,432 (2^{25}), enabled by reducing two-qubit gate error rates to below 0.1%. As of November 2025, no system has surpassed the H2-2 record, with industry attention increasingly shifting toward utility-scale demonstrations beyond volumetric metrics, though H2-2 remains the leader.

This period reflects a notable shift, in which trapped-ion platforms such as those from Quantinuum and IonQ have outperformed superconducting approaches, such as IBM's, primarily due to superior coherence and lower error accumulation in deeper circuits.

Extensions and Alternatives

Volumetric Benchmarks

Volumetric benchmarks extend the quantum volume concept to rectangular circuits, in which the number of qubits n (width) and the circuit depth d are decoupled, enabling a more nuanced evaluation of performance across diverse circuit shapes. This addresses the limitations of the square-circuit constraint in standard quantum volume by allowing the exploration of trade-offs between spatial and temporal resources, which is crucial for mapping practical algorithms that may require either broad parallelism or extended sequential operations.

The methodology involves executing test suites of random or structured circuits \mathcal{C}(n, d) for various pairs of n and d, assessing success based on criteria such as heavy or ideal outcome probabilities exceeding 2/3 after error mitigation. Feasible (n, d) pairs are plotted in a two-dimensional space, with the "volumetric frontier" defined as the Pareto envelope of points representing the boundary of reliable performance; points beyond this frontier indicate regions where the device fails to meet the success threshold, as illustrated in the sketch below. Unlike the single scalar value of quantum volume, this frontier provides a visual and quantitative profile, where an effective volume can be approximated as n \times d along frontier points to gauge overall capacity without reducing performance to a solitary metric.

These benchmarks reveal hardware-specific strengths, such as superior performance in high-depth, low-width regimes for time-series analysis algorithms or high-width, low-depth setups for highly parallel sampling tasks, aiding in optimal algorithm-to-hardware mapping. For instance, a device achieving a quantum volume of 2^{10} might support rectangular circuits like n=20, d=2 for wide simulations or n=5, d=20 for deeper computations, informing practical deployments. Since 2023, vendors such as Quantinuum have incorporated volumetric benchmarks in their evaluations to demonstrate beyond-square capabilities, integrating them with traditional quantum volume for comprehensive hardware profiling.
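A minimal sketch of extracting the volumetric frontier from a grid of pass/fail results; the data and helper name are hypothetical, and a real benchmark would derive each pass/fail entry from the heavy-output criterion described earlier.

```python
def volumetric_frontier(results):
    """Pareto envelope of successful (width, depth) pairs.

    results : dict mapping (n, d) -> bool (True if the circuit shape passed).
    Returns the frontier: successful shapes not dominated by another successful
    shape that is at least as wide and at least as deep.
    """
    passed = [shape for shape, ok in results.items() if ok]
    frontier = []
    for (n, d) in passed:
        dominated = any(n2 >= n and d2 >= d and (n2, d2) != (n, d)
                        for (n2, d2) in passed)
        if not dominated:
            frontier.append((n, d))
    return sorted(frontier)

# Hypothetical results for a small device: wide-shallow and narrow-deep shapes pass.
results = {(2, 16): True, (4, 8): True, (8, 4): True, (8, 8): False,
           (16, 2): True, (16, 4): False, (4, 16): False}
print(volumetric_frontier(results))   # [(2, 16), (4, 8), (8, 4), (16, 2)]
# Effective volume along the frontier: n * d at each point (here 32 for each).
```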

Comparisons to Other Metrics

Quantum volume (QV) differs from simple qubit counts by incorporating error rates and circuit depth, thereby penalizing systems with high noise levels; for instance, a device with 100 low-fidelity qubits might yield a lower QV than one with 20 high-fidelity qubits, emphasizing quality over mere scale. This approach addresses the limitations of raw qubit metrics, which can mislead assessments of practical utility in noisy intermediate-scale quantum (NISQ) devices.

In contrast to randomized benchmarking (RB), which quantifies average gate fidelity—such as 99.9% for two-qubit gates—without evaluating full-circuit performance, QV integrates RB-derived error rates (like the effective error per gate, ε_eff) into a broader assessment of scalable circuit execution. While RB provides a foundational measure of gate reliability, it lacks insight into system-wide factors such as crosstalk and connectivity that QV captures holistically.

CLOPS (Circuit Layer Operations Per Second) focuses on execution throughput for deep circuits, measuring how rapidly a processor handles layers of quantum operations, whereas QV prioritizes circuit size and fidelity over speed. These metrics are complementary: QV assesses the complexity of reliably executable circuits, while CLOPS evaluates runtime feasibility, together informing overall system utility for practical applications.

QED-C benchmarks, introduced in 2023 and expanded through 2025, establish cross-platform standards by computing medians over ensembles of submissions for application-oriented tasks, incorporating QV as one of their core metrics for device comparability. Unlike the device-specific nature of QV, QED-C emphasizes standardized, application-oriented evaluations, blending QV-like volumetric elements with broader algorithmic tests to enable fair inter-vendor comparisons.

QV has notable limitations in comparisons to other metrics, as it overlooks runtime overheads, cryogenic cooling requirements, and related operational challenges, focusing solely on executability rather than operational efficiency. It is particularly suited to NISQ-era evaluations but less relevant for fault-tolerant quantum computing, where metrics like logical fidelity become paramount for error-corrected operations.

By 2025, hybrid benchmarking combining QV, gate fidelities, and CLOPS has become standard practice, providing a multifaceted view of quantum hardware. For example, Quantinuum's System Model H2 achieved a QV of 2^{25} (33,554,432) alongside top-tier fidelity scores, including 99.921% two-qubit gate fidelity, demonstrating alignment between volumetric scale and gate-level reliability.

Limitations

Conceptual Shortcomings

The quantum volume metric is predicated on the assumption that Haar-random quantum circuits provide a representative proxy for overall system performance, yet this overlooks the distinct error profiles and structures of practical algorithms. Random circuits, drawn from the Haar measure over SU(4) unitaries, tend to be more sensitive to noise than structured algorithms such as the quantum approximate optimization algorithm (QAOA), potentially misrepresenting a device's capability for real-world applications that tolerate errors differently.

A key conceptual bias in quantum volume arises from its emphasis on square circuits, where the number of qubits equals the number of gate layers, which does not align with the rectangular shapes required by many quantum algorithms that demand greater depth relative to width. While extensions like volumetric benchmarks address some rectangular needs, the core quantum volume metric remains a single-point measure that inadequately represents these diverse circuit geometries.

The metric also inadequately models qubit connectivity, incorporating only partial adjustments for swap overhead due to hardware topology while assuming an idealized all-to-all connectivity that is rarely achieved in practice. This oversight can lead to inflated estimates of performance on devices with sparse or fixed connectivity graphs.

Scalability poses another theoretical limitation, as the logarithmic scale of quantum volume (log₂ QV = k) advances slowly with hardware improvements and fails to indicate transitions toward fault-tolerant regimes or the integration of hybrid error-correction schemes. It remains tied to noisy intermediate-scale quantum (NISQ) assumptions, without capturing the qualitative shifts needed for scalable, error-corrected computing.

Interpretability is hindered by the exponential formulation, where values like 2^{25} convey scale but obscure practical implications, such as whether the system can execute meaningful algorithms without extensive error mitigation. Community analyses often recommend quoting log₂ QV for clarity, underscoring the metric's abstract character.

Post-2020 literature has critiqued quantum volume as a NISQ benchmark, viewing it as a useful starting point for comparison but insufficient as a comprehensive measure due to its hardware-centric focus and limited relevance to software or application-specific performance.

Practical Challenges

Achieving and verifying quantum volume involves substantial verification overhead, as the heavy output sampling protocol requires thousands of shots per configuration (n, d) to estimate heavy output probabilities with sufficient statistical confidence, typically demanding at least 1,000 samples to achieve 95% reliability in distinguishing non-uniform distributions. This process becomes time-intensive, with complete data collection for a single quantum volume value often spanning hours to days on accessible hardware, exacerbated by queue delays on cloud platforms and the need for multiple circuit subsets. Statistical noise poses an additional risk, potentially invalidating entire runs if the signal-to-noise ratio is low, necessitating repeated executions to ensure robust results.

Hardware variability further complicates quantum volume measurements, as the metric is highly sensitive to environmental factors like cryogenic drifts, where coherence times in superconducting qubits can fluctuate suddenly over hours or days despite stable millikelvin temperatures, degrading circuit fidelity. Crosstalk between adjacent qubits introduces correlated errors that propagate through random circuits, amplifying deviations from ideal outputs. For example, 2025 achievements on stable trapped-ion systems, such as Quantinuum's H2 processor, outperformed fluctuating superconducting architectures by maintaining consistent performance under these conditions.

Reproducibility across devices remains a key barrier, stemming from vendor-specific protocols that differ in gate implementations and qubit connectivity; IBM's superconducting QPUs, for instance, rely on transmon-based two-qubit gates, while Quantinuum's ion traps offer native all-to-all connectivity, leading to discrepancies in quantum volume outcomes even for comparable hardware scales. The absence of standardized software for optimizing swaps and routing in diverse topologies fuels debates over fair comparisons, as seen in multi-vendor benchmarking studies evaluating up to 156 qubits.

Conducting full quantum volume assessments demands extensive cloud access, particularly on platforms like IBM Quantum, where benchmarking consumes significant quantum execution time under pay-as-you-go or subscription models, with costs accruing per job and limiting availability for non-premium users such as academic groups. This resource intensity restricts widespread replication, as queue priorities favor enterprise access over exploratory runs.

As of 2025, quantum volume continues to serve as a key benchmark for tracking hardware progress, with records such as Quantinuum's H2 system achieving 2^{25} = 33,554,432 in September 2025, though there is growing interest in application-specific benchmarks and scalable alternatives that better capture practical utility beyond exhaustive QV evaluations. Mitigation efforts include emerging automated tools such as the Benchpress suite, which streamline circuit generation and execution for quantum volume tests, reducing manual overhead and improving consistency. However, challenges persist at scales beyond 100 qubits, where amplified errors and growing classical verification demands continue to hinder reliable benchmarking.
