
TOP500

The TOP500 is a project that biannually ranks the 500 most powerful non-distributed supercomputers in the world based on their measured performance using the High-Performance LINPACK (HPL) benchmark, which evaluates the sustained floating-point operations per second (FLOPS) achieved when solving a dense system of linear equations. Launched in 1993 by researchers Hans Werner Meuer, Erich Strohmaier, Jack Dongarra, and Horst Simon to update and standardize earlier supercomputer statistics from the University of Mannheim, the TOP500 provides a reliable, comparable metric for tracking advancements in high-performance computing hardware, architectures, and vendors. The lists are published every June and November, coinciding with major international supercomputing conferences, and have become the de facto standard for assessing global HPC capabilities, revealing trends such as the shift toward accelerator-based systems and the progression toward exascale computing. While the HPL benchmark measures performance on a single dense linear algebra task under idealized conditions, it has been noted for not fully capturing diverse real-world workloads, though its consistency enables long-term trend analysis across decades of exponential growth in computational power.

Overview

Definition and Purpose

The TOP500 is a biannual compilation ranking the 500 most powerful non-classified systems worldwide, based on their measured performance using the High-Performance Linpack (HPL) benchmark. This benchmark evaluates sustained computational capability by solving a dense system of linear equations, reporting results as Rmax, the achieved floating-point operations per second (FLOPS) under standardized conditions. Unlike theoretical peak performance (Rpeak), which represents maximum hardware potential without workload constraints, Rmax captures realistic efficiency on a specific, compute-bound task, serving as a proxy for high-performance computing (HPC) hardware prowess rather than diverse real-world application performance. Initiated in 1993 by Hans Werner Meuer of the University of Mannheim, Erich Strohmaier, and Jack Dongarra, the project built upon earlier Mannheim supercomputer statistics to establish a consistent, verifiable metric for HPC progress. The ranking excludes classified military systems, focusing instead on publicly disclosed, commercially oriented installations to provide transparency into accessible technology frontiers. The primary purpose of the TOP500 is to deliver an empirical overview of evolving HPC landscapes, including dominant architectures, system scales, and performance trajectories, thereby enabling researchers, vendors, and policymakers to identify trends in hardware innovation and deployment. Lists are released every June during the International Supercomputing Conference (ISC) and every November at the Supercomputing Conference (SC), fostering community benchmarking and competition without prescribing operational utility beyond the HPL metric. This approach prioritizes standardized comparability over comprehensive workload representation, highlighting aggregate shifts like the rise of accelerator-based designs while acknowledging HPL's limitations in mirroring scientific simulations.

Ranking Methodology

The TOP500 list ranks supercomputers based on their performance in the High Performance Linpack (HPL) benchmark, which solves a dense system of linear equations Ax = b, where A is an n × n nonsymmetric matrix, using LU factorization with partial pivoting. The measured performance, denoted Rmax, represents the highest achieved floating-point rate in gigaflops (GFlop/s) from a valid HPL run, with the problem size Nmax selected to maximize this value while ensuring numerical stability and convergence. Theoretical peak performance, Rpeak, is calculated as the product of the number of cores, clock frequency in GHz, and the maximum double-precision floating-point operations per cycle per core (typically 8 for vectorized units or 16 with AVX-512 extensions), using advertised base clock rates without accounting for turbo boosts unless specified. System owners or vendors submit HPL results voluntarily via the official TOP500 portal, including detailed hardware specifications such as core count, processor architecture, interconnect topology, memory capacity, and power consumption measured at the facility level during the benchmark run. Submissions occur biannually, with deadlines preceding the June and November releases, a schedule maintained since the inaugural list in June 1993. Classified military systems are excluded, as their performance data is not publicly verifiable or submitted, ensuring the list reflects only disclosed, civilian-accessible installations. Rankings are determined by sorting submissions in descending order of Rmax; ties are resolved first by descending Rpeak, then by secondary attributes such as system memory size and alphabetical order of system name.
While HPL implementations may incorporate vendor-specific optimizations for libraries like BLAS or communication routines, the TOP500 requires reproducible results under standard conditions, with the project coordinators reserving the right to audit submissions for compliance. No formal efficiency threshold (e.g., 80% of Rpeak) is mandated; top-ranked systems typically achieve 70-90% through balanced provisioning of compute, memory bandwidth, and interconnect capacity. Metadata collected beyond Rmax and Rpeak enables trend analyses, such as aggregate installed capacity (sum of Rmax across all 500 entries) and shifts in processor families or operating systems.
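The arithmetic behind these figures reduces to a product for Rpeak and a lexicographic sort for the ranking. The sketch below, using made-up system entries for illustration (not actual list data), computes Rpeak from core count, clock, and per-core operations per cycle, then orders submissions by descending Rmax with Rpeak as the tiebreaker:

```python
# Sketch of the TOP500 ranking arithmetic (illustrative values, not real submissions).

def rpeak_gflops(cores: int, ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak: cores x base clock (GHz) x double-precision ops per cycle."""
    return cores * ghz * flops_per_cycle

# Hypothetical submissions: Rmax is the measured HPL rate in GFlop/s.
systems = [
    {"name": "SystemB", "rmax": 2.0e8, "cores": 8_000_000, "ghz": 2.0, "fpc": 16},
    {"name": "SystemA", "rmax": 2.0e8, "cores": 9_000_000, "ghz": 2.0, "fpc": 16},
    {"name": "SystemC", "rmax": 2.2e8, "cores": 11_000_000, "ghz": 1.8, "fpc": 16},
]
for s in systems:
    s["rpeak"] = rpeak_gflops(s["cores"], s["ghz"], s["fpc"])
    s["efficiency"] = s["rmax"] / s["rpeak"]  # top systems typically land at 0.70-0.90

# Rank: descending Rmax, then descending Rpeak, then system name alphabetically.
ranked = sorted(systems, key=lambda s: (-s["rmax"], -s["rpeak"], s["name"]))
```

Here SystemA and SystemB tie on Rmax, so the larger Rpeak of SystemA places it ahead, mirroring the tie-breaking order described above.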

History

Inception and Early Development

The TOP500 project originated in spring 1993, initiated by Hans Werner Meuer and Erich Strohmaier of the University of Mannheim, Germany, to systematically track advancements in supercomputing through biannual rankings of the world's most powerful systems based on the Linpack benchmark. Jack Dongarra, developer of the Linpack software, contributed to its methodology from the outset. The inaugural list was published on June 24, 1993, during the International Supercomputing Conference (ISC'93) in Mannheim, Germany, amid a period of increasing commercialization in supercomputing following the end of the Cold War, which facilitated greater transparency and reporting of system capabilities previously constrained by classification. The June 1993 list ranked systems using both vector and early massively parallel processors, with the top entry being the Thinking Machines CM-5/1024 at Los Alamos National Laboratory, delivering 59.7 GFLOPS of sustained Linpack performance. Early editions highlighted a pivotal shift from specialized vector processors, dominant in prior decades via vendors like Cray Research, to scalable massively parallel architectures, such as those from Thinking Machines and Intel, driven by the need for higher concurrency to handle growing computational demands in scientific simulations. This transition reflected underlying engineering realities: vector systems excelled in sequential floating-point operations but scaled poorly beyond certain limits, whereas parallel designs leveraged commodity-like components for cost-effective expansion, though initial implementations faced challenges in interconnect efficiency and programming complexity. By June 1997, the ninth list featured Intel's ASCI Red at Sandia National Laboratories as the first system to surpass 1 TFLOPS, achieving 1.068 TFLOPS with 7,264 processors, underscoring the viability of microprocessor-based clusters for terascale computing. Sustained submissions from global HPC sites enabled the lists to consistently reach 500 entries by the mid-1990s, transforming TOP500 into a widely watched indicator of technological leadership and institutional prestige in supercomputing.

Major Performance Milestones

The aggregate performance of the TOP500 list began modestly, totaling approximately 1.1 teraflops (TFLOPS) in June 1993. This marked the inception of tracked exponential growth in high-performance computing (HPC), roughly paralleling advancements in semiconductor scaling akin to Moore's Law, with performance doubling approximately every 14 months through the 1990s and early 2000s. A pivotal milestone occurred in June 2008 when the Roadrunner supercomputer achieved 1.026 petaflops (PFLOPS), becoming the first system to surpass the petaflop barrier on the High Performance LINPACK (HPL) benchmark and topping the TOP500 list. Roadrunner's hybrid architecture, combining AMD Opteron processors with IBM Cell chips, signaled the rise of heterogeneous designs, as commodity clusters began leveraging accelerators for superior scalability. By June 2019, every system on the TOP500 delivered at least 1 PFLOPS, establishing the list as a universal "petaflop club." The integration of graphics processing units (GPUs) post-2009 accelerated growth, with systems like China's Tianhe-1A in 2010 incorporating NVIDIA Fermi GPUs, contributing to sharper inflection points in aggregate performance. This shift propelled total TOP500 performance from under 100 petaflops (PFLOPS) in the early 2010s to multi-exaflop scales by the mid-2020s, while x86 architectures achieved near-total dominance over custom designs by the late 2010s, comprising over 95% of systems due to their cost-effectiveness and ecosystem maturity. The exaflop era dawned in June 2022 with the U.S. Department of Energy's Frontier supercomputer debuting at over 1 EFLOPS, specifically 1.102 EFLOPS on HPL, as the first verified exascale system. Frontier's AMD-based design underscored the efficacy of tightly coupled CPU-GPU nodes for extreme-scale HPC.
By June 2025, aggregate TOP500 performance approached 14 EFLOPS, driven by multiple exascale deployments, with El Capitan claiming the top spot at 1.742 EFLOPS, further exemplifying sustained scaling through advanced accelerators and interconnects.
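The doubling period quoted above maps directly to an annualized growth rate; this sketch is pure arithmetic on the stated 14-month figure, with no actual list data involved:

```python
# Convert a performance doubling period into an implied annualized growth rate.
def annual_growth_from_doubling(months: float) -> float:
    """If aggregate performance doubles every `months` months, return the annual rate."""
    return 2 ** (12.0 / months) - 1

rate = annual_growth_from_doubling(14)  # ~0.81, i.e. roughly 81% growth per year
```

A 12-month doubling period would correspond exactly to 100% annual growth, so the early lists' 14-month doubling implies a slightly slower but still dramatic pace.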

Top Systems as of June 2025

As of the June 2025 TOP500 list, the El Capitan supercomputer at Lawrence Livermore National Laboratory, operated by the U.S. Department of Energy's National Nuclear Security Administration, ranks first with a LINPACK Rmax performance of 1.742 exaFLOPS. This HPE Cray EX255a system employs AMD 4th Generation EPYC processors (24 cores at 1.8 GHz), AMD Instinct MI300A accelerators, Slingshot-11 interconnects, and the TOSS operating system, marking it as the third publicly verified exascale system following Frontier's deployment in 2022 and Aurora's in 2023. El Capitan's architecture emphasizes integrated CPU-GPU computing for nuclear stockpile stewardship and high-energy physics simulations. Frontier, at Oak Ridge National Laboratory under the DOE's Office of Science, holds the second position with 1.353 exaFLOPS Rmax, utilizing HPE Cray EX235a nodes with AMD 3rd Generation EPYC processors (64 cores at 2 GHz), AMD Instinct MI250X accelerators, and Slingshot-11 networking on HPE Cray OS. Aurora, installed at Argonne National Laboratory and also DOE-funded, remains third at approximately 1 exaFLOPS Rmax, based on HPE Cray EX architecture with Intel Xeon CPU Max processors and Intel Data Center GPU Max accelerators. These top three systems, all U.S. Department of Energy installations, represent the only verified exascale capabilities on the list, underscoring a concentration of leading-edge performance in American federally sponsored facilities amid global competition constraints. Beyond the top three, performance declines sharply; Japan's Fugaku at rank 7, a Fujitsu PRIMEHPC FX1000-class deployment operated by the RIKEN Center for Computational Science, achieves under 0.5 exaFLOPS Rmax using A64FX processors and Tofu interconnect D. No non-U.S. systems reach exascale thresholds, reflecting submission gaps from major competitors; for instance, China's Sunway TaihuLight, the list leader from June 2016 through November 2017, has been followed by no new verified flagship Chinese submissions in recent years, a withdrawal exacerbated by U.S. export controls limiting access to advanced semiconductors for benchmark validation. This pattern highlights reliance on transparent, reproducible testing protocols in TOP500 rankings, which prioritize empirical verifiability over unconfirmed domestic claims.
Rank | System | Site | Rmax (exaFLOPS) | Architecture | Cores (millions) | Country
1 | El Capitan | LLNL (DOE/NNSA) | 1.742 | HPE Cray EX255a (EPYC + Instinct MI300A) | ~11 | United States
2 | Frontier | ORNL (DOE/SC) | 1.353 | HPE Cray EX235a (EPYC + Instinct MI250X) | ~9.1 | United States
3 | Aurora | ANL (DOE/SC) | ~1.0 | HPE Cray EX (Intel Xeon Max + GPU Max) | ~9.3 | United States
7 | Fugaku | RIKEN (R-CCS) | <0.5 | Fujitsu PRIMEHPC FX1000 (A64FX) | ~7.6 | Japan

Aggregate Performance and Growth Rates

The aggregate Rmax performance of the TOP500 list reached 13.84 exaflops (EFlop/s) as of the June 2025 edition, surpassing the previous November 2024 total of 11.72 EFlop/s and marking a semi-annual increase of approximately 18%. This cumulative performance reflects the sustained scaling of high-performance computing (HPC) systems, driven primarily by accelerator integration and architectural optimizations, though constrained by power dissipation limits that have tempered growth in recent exascale-era lists. Historically, the total Rmax has exhibited exponential growth since the inaugural June 1993 list, which recorded 1.13 TFlop/s across the top systems. Over the subsequent 32 years, this represents a multiplication factor exceeding 12 million, implying a long-term compound annual growth rate (CAGR) of roughly 66%, calculated as (13.84 × 10^18 / 1.13 × 10^12)^(1/32) - 1, where the exponent derives from the number of years between lists. Early decades saw annual doublings or faster due to rapid advances in processor density and parallelism, outpacing Moore's Law; however, post-2022 exascale deployments have slowed this to semi-annual gains of 15-20%, or an annualized rate near 30-40%, attributable to diminishing returns from thermal and electrical power envelopes that cap feasible clock speeds and node densities. Efficiency metrics, measured as the ratio of achieved Rmax to theoretical Rpeak, have trended upward across the list, rising from averages below 50% in earlier eras to over 60-70% in recent GPU-accelerated systems. This improvement stems from specialized hardware like tensor cores and optimized linear algebra libraries that better exploit dense matrix operations in the High-Performance LINPACK (HPL) benchmark, with top entries routinely achieving 75-80% fractions.
Parallel scaling is evidenced by escalating core counts, with the average system concurrency reaching 275,414 cores in June 2025, up from 257,970 six months prior and a far cry from the thousands typical in 1990s lists. Aggregate cores across the list now exceed 100 million, enabling massive parallelism but highlighting reliance on heterogeneous computing to mitigate bottlenecks in communication overhead.
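The CAGR quoted above follows directly from the two endpoint aggregates; this minimal check uses the 1.13 TFlop/s (June 1993) and 13.84 EFlop/s (June 2025) figures from the list:

```python
# Compound annual growth rate of aggregate TOP500 Rmax, 1993 -> 2025.
start = 1.13e12   # June 1993 aggregate Rmax in FLOPS (1.13 TFlop/s)
end = 13.84e18    # June 2025 aggregate Rmax in FLOPS (13.84 EFlop/s)
years = 32        # number of years between the two June lists

factor = end / start              # multiplication factor, over 12 million
cagr = factor ** (1 / years) - 1  # ~0.66, i.e. roughly 66% per year
```

The same arithmetic applied to any two list editions gives the growth rate over that window, which is how the slower post-2022 annualized rates are derived.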

Distribution and Dominance

By Country

As of the June 2025 TOP500 list, the United States maintains overwhelming dominance in both the number of listed systems and their aggregate computational performance, reflecting sustained federal investments in high-performance computing through agencies like the Department of Energy. The U.S. hosts 171 systems, comprising 34% of the total entries, and accounts for over 60% of the list's combined Rmax performance, driven by exascale machines such as El Capitan, Frontier, and Aurora. This leadership underscores policy priorities favoring unrestricted access to cutting-edge technologies and substantial public funding, enabling rapid scaling to multi-exaflop capabilities. China's representation has sharply declined from its mid-2010s peak, when it held over 200 systems in November 2016, often comprising a mix of mid-tier installations that inflated entry counts but contributed modestly to performance shares. By June 2025, China fields only 7 systems, or 1.4% of entries, with a collective Rmax of approximately 158 PFlop/s, equating to under 2% of the total, a share that has stayed far below 10% since U.S. export controls on advanced chips took effect in 2019. These restrictions, aimed at curbing proliferation of high-end processors such as NVIDIA and AMD accelerators, have limited verified submissions of competitive systems, as Chinese supercomputers increasingly rely on domestic alternatives with inferior fabrication processes. Other nations trail significantly, with Europe's fragmented efforts, bolstered by EU-funded initiatives, yielding collective shares below U.S. levels despite standout entries like JUPITER at rank 4. Japan follows with 37 systems (7.4%), anchored by Fugaku at rank 7, while Germany has 47 (9.4%), France 23 (4.6%), and the United Kingdom 17 (3.4%). These distributions highlight how national policies on R&D funding and international tech collaborations shape outcomes, with no single non-U.S. country exceeding 10% of systems or performance.
Country | Systems | % of Systems | Approx. Total Rmax (PFlop/s) | % of Rmax
United States | 171 | 34.2 | 6,500 | >60
Germany | 47 | 9.4 | 1,200 | ~10
Japan | 37 | 7.4 | 900 | ~7
France | 23 | 4.6 | 400 | ~3
China | 7 | 1.4 | 158 | <2

By Institution and Funding Source

The leading positions in the TOP500 list are overwhelmingly occupied by supercomputers operated by U.S. Department of Energy (DOE) national laboratories, underscoring a heavy dependence on federal public funding for peak performance achievements. As of the June 2025 ranking, the top three exascale systems—El Capitan (1,742 PFlop/s at Lawrence Livermore National Laboratory), Frontier (1,353 PFlop/s at Oak Ridge National Laboratory), and Aurora (1,012 PFlop/s at Argonne National Laboratory)—are all deployed at DOE facilities under the Exascale Computing Project, a multiyear initiative that has secured over $1.8 billion in DOE appropriations since 2017 to deliver these systems for national security and scientific applications. Beyond DOE labs, other government-backed research entities play secondary but significant roles, with funding drawn from national or supranational public sources. Japan's RIKEN Center for Computational Science operates systems like Fugaku (which held the top spot from 2020 to 2022), supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT) investments exceeding $1 billion for prior generations, reflecting Japan's strategy of state-directed development. In Europe, the EuroHPC Joint Undertaking, a supranational entity co-funded by the European Union (contributing the majority via its multiannual budget) and participating member states, manages multiple TOP500 entrants, including the fourth-ranked JUPITER (deployed in Germany) and others in the top 50, with total program funding approaching €8 billion through 2027 for petascale and exascale infrastructure. Private industry involvement as operators remains marginal in the upper echelons of the TOP500, as commercial entities prioritize proprietary clusters optimized for workloads like AI model training over the High-Performance Linpack benchmark, often declining submissions to protect competitive advantages or due to internal classification.
While vendors such as HPE, Intel, and AMD supply hardware under government contracts, the scale of leading systems—requiring coordinated public subsidies in the billions across major economies—demonstrates that sustained dominance relies on taxpayer-funded programs rather than market-driven private investment alone.

Technical Specifications

Processor Architectures and Vendors

The x86 architecture remains predominant in TOP500 supercomputers, powering over 90% of total cores across listed systems due to its established ecosystem and performance in high-performance computing workloads. Intel processors equip 58.8% of the June 2025 list's systems, a decline from 61.8% in the prior edition, while AMD's EPYC series appears in 162 systems, including exascale machines like El Capitan and Frontier. ARM-based designs hold a niche role, exemplified by Japan's Fugaku, which briefly topped the list in 2020 but now ranks lower as x86 hybrids with accelerators dominate top performance tiers. Accelerators have become integral to top systems since the early 2010s, with CPU-GPU hybrids enabling exaflop-scale computing; 232 of the June 2025 entries incorporate such accelerators. NVIDIA GPUs historically lead adoption, powering a majority of accelerated systems across successive architecture generations, though AMD's Instinct MI300A has surged in compute share, notably in the top-ranked El Capitan with its integrated CPU-GPU design. This shift reflects vendor strategies prioritizing unified memory and high-bandwidth integration for dense floating-point operations. Processor vendors Intel and AMD control the bulk of CPU deployments, with accelerators split between NVIDIA's CUDA ecosystem and AMD's ROCm platform, the latter gaining traction in U.S. Department of Energy systems amid diversification efforts. System integrators like HPE, incorporating Cray EX platforms, dominate top placements, with seven of the top ten June 2025 systems using HPE hardware featuring Slingshot-11 interconnects for low-latency scaling. InfiniBand holds a 34% share of interconnects, favored for its RDMA capabilities, marking a transition from proprietary networks like older Cray designs to commoditized high-speed fabrics. Domestic Chinese processors, such as Phytium's ARM-derived chips and the Sunway line, face marginalization in global rankings due to U.S. export controls enacted since 2019, which restrict access to advanced fabrication and components, limiting scalability and performance against Western x86-GPU stacks. These sanctions, including Entity List placements, have prompted TSMC to halt orders from Phytium, forcing reliance on older process nodes and reducing China's presence in upper TOP500 echelons.

Operating Systems and Interconnects

Linux-based operating systems have dominated the TOP500 lists since November 2017, with every one of the 500 fastest supercomputers running a Linux variant as of June 2025. This complete market share reflects Linux's advantages in scalability, customizability, and open-source ecosystem support for high-performance computing (HPC) workloads. Common distributions include customized versions such as the Tri-Lab Operating System Suite (TOSS), developed for U.S. Department of Energy laboratories, SUSE Linux Enterprise Server for HPC, and Red Hat Enterprise Linux with HPC optimizations. Earlier lists featured Unix derivatives and proprietary systems, but these were supplanted by Linux by the mid-2010s due to superior flexibility and community-driven development. High-speed interconnects enable efficient communication among thousands of nodes, with InfiniBand holding primacy for low-latency, high-bandwidth needs in top-ranked systems. NVIDIA's InfiniBand solutions, including HDR variants post-Mellanox acquisition, power 254 of the TOP500 systems as of November 2024, outperforming Ethernet in performance-critical deployments. RoCE-enabled Ethernet connects 111 systems but trails in share among the highest performers, as InfiniBand's remote direct memory access (RDMA) features minimize overhead for parallel computing. Specialized alternatives like HPE Cray Slingshot-11 underpin U.S. exascale machines such as El Capitan and Frontier, delivering sub-microsecond latencies optimized for extreme-scale simulations. Recent trends emphasize ecosystem standardization, with containerization via tools like Apptainer gaining traction on Linux stacks to facilitate reproducible environments without compromising security or performance isolation. Remnants of non-Linux HPC operating systems, including Windows variants, have vanished from the lists, underscoring Linux's unchallenged position.

Energy Efficiency via the Green500

The Green500 list complements the TOP500 by ranking the same supercomputers according to their energy efficiency, calculated as HPL performance in gigaflops divided by power consumption in watts during the benchmark run (GFlops/W). This metric reveals the substantial electrical demands underlying supercomputing, which the TOP500's focus on raw performance omits, thereby emphasizing trade-offs in system design where power efficiency may conflict with peak throughput. In the June 2025 Green500 edition, the top-ranked system is JEDI (JUPITER Exascale Development Instrument), a module of the EuroHPC JUPITER project operated by Forschungszentrum Jülich in Germany, attaining 72.73 GFlops/W alongside 4.5 PFlops of performance. By contrast, El Capitan, the June 2025 TOP500 leader at 1.742 exaflops, ranks 25th on the Green500 at 58.89 GFlops/W, underscoring a weak correlation between peak performance and efficiency. Such disparities arise because HPL favors dense linear algebra computations that underutilize I/O, memory, and other subsystems critical to overall workload viability, allowing efficiency-optimized systems to outperform raw-power giants in flops-per-watt despite lower absolute speeds. Historical trends in the Green500 demonstrate energy efficiency roughly doubling with successive hardware generations, driven by advances in processors, accelerators, and cooling, though absolute power draw has escalated. Exascale systems exemplify this, typically requiring 20-30 megawatts (MW) at peak—such as Frontier's approximately 21 MW or El Capitan's 30 MW—potentially scaling to 60 MW for future iterations amid denser integrations and higher clock rates. This progression highlights ongoing challenges in balancing computational density with sustainable power envelopes, as efficiency gains lag behind performance scaling.
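The Green500 metric is a simple quotient of Rmax over measured power. The sketch below reproduces the figures quoted above; the ~29.6 MW value for El Capitan is an assumption back-derived from its published 58.89 GFlops/W, not an official power reading:

```python
# Green500 efficiency: HPL Rmax (in GFlop/s) divided by power draw (in watts).
def gflops_per_watt(rmax_gflops: float, power_watts: float) -> float:
    return rmax_gflops / power_watts

# El Capitan: 1.742 EFlop/s = 1.742e9 GFlop/s at roughly 29.6 MW (assumed figure).
el_capitan = gflops_per_watt(1.742e9, 29.6e6)   # ~58.9 GFlops/W

# JEDI: 4.5 PFlop/s = 4.5e6 GFlop/s; its 72.73 GFlops/W implies roughly 62 kW.
jedi_power_watts = 4.5e6 / 72.73
```

Inverting the quotient this way shows why a small, efficiency-tuned module like JEDI can top the Green500 while drawing three orders of magnitude less power than an exascale leader.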

Specialized Benchmarks for AI and Other Workloads

The HPL-MxP benchmark, an evolution of the HPL-AI proposal, adapts the High-Performance Linpack test for the mixed-precision floating-point operations common in AI training and inference. This variant measures sustained performance in lower-precision computations (e.g., FP16 or BF16), yielding higher throughput than double-precision HPL while better approximating AI workload demands. As of 2024, the Aurora system at Argonne National Laboratory topped HPL-MxP rankings with 11.6 Exaflop/s, followed by Frontier at 11.4 Exaflop/s, demonstrating exascale capabilities tailored for mixed-precision workloads. However, HPL-MxP submissions remain optional and sparse, with fewer than a dozen systems reporting results per TOP500 list, highlighting limited integration despite growing relevance. Complementary benchmarks expose gaps in TOP500's dense linear algebra focus, emphasizing I/O, irregular access patterns, and end-to-end pipelines. The IO500 suite assesses holistic storage performance through bandwidth, metadata operations, and I/O patterns representative of HPC and AI data movement, with production lists updated biannually at the ISC and SC conferences. Systems powered by DDN storage have dominated recent IO500 rankings, achieving superior results in real-world AI/HPC scenarios where data ingestion bottlenecks exceed compute limits. Similarly, the Graph500 evaluates breadth-first search and single-source shortest path kernels on large-scale graphs, targeting analytics workloads that stress irregular memory access over sustained floating-point throughput. Top performers, such as NVIDIA-based clusters, underscore hardware optimizations for graph traversal, contrasting TOP500's bias toward predictable, compute-bound tasks. MLPerf benchmarks provide rigorous, vendor-agnostic evaluations of training and inference across diverse models, including large language models and vision tasks, prioritizing time-to-train metrics over raw FLOPS. In MLPerf v5.0 (June 2025), NVIDIA's Blackwell GPUs set records for scaling to thousands of accelerators, reflecting hardware tuned for tensor operations and massive parallelism in training pipelines. Unlike TOP500, MLPerf incorporates full-stack system effects like interconnect latency and software efficiency, revealing divergences where HPL-optimized machines underperform in sparse, memory-bound scenarios. These adjunct lists illustrate HPC diversification: AI-driven submissions to TOP500 increasingly incorporate MxP testing but retain HPL primacy, with exascale systems like Frontier prioritizing simulation fidelity over iterative AI model training demands.

Criticisms and Limitations

Methodological Flaws in Linpack Benchmark

The High-Performance Linpack (HPL) benchmark, which underpins TOP500 rankings, primarily evaluates floating-point throughput by solving dense systems of linear equations via LU factorization with partial pivoting, emphasizing compute-intensive operations over other system capabilities. This focus renders HPL largely compute-bound: its matrix-matrix multiplications have high arithmetic intensity, imposing modest demands on memory bandwidth, typically requiring 40-80 GB/s per socket for optimal runs, while largely disregarding irregular memory access patterns, network latency sensitivities, and I/O dependencies prevalent in scientific simulations. Real-world high-performance computing (HPC) applications, such as climate modeling or computational fluid dynamics, often exhibit memory-bound or communication-bound behaviors, achieving sustained performance at 10-30% of a system's HPL-measured Rmax (the benchmark's reported rate), compared to HPL's own 50-90% efficiency relative to theoretical peak (Rpeak). This divergence arises because HPL's regular, predictable data access allows near-peak utilization of compute units, whereas applications involve sparse matrices, non-local dependencies, and filesystem interactions that amplify latency and bandwidth constraints, sometimes limiting effective throughput to fractions of HPL scores. Vendors and system integrators extensively optimize HPL implementations, tuning parameters like block sizes (NB), process grids, and BLAS libraries (e.g., via vendor-specific accelerations), to maximize Rmax, often at the expense of generalizability to untuned workloads. Such "benchmark gaming" has led to architectures prioritized for HPL over balanced I/O or sustained application performance, with reports of systems engineered specifically to inflate TOP500 entries rather than enhance broad HPC utility. Additionally, TOP500 submissions exclude classified supercomputers, which national security programs operate without public disclosure, thereby skewing rankings toward unclassified, often academic or open-science systems and underrepresenting total global HPC capacity. This omission favors transparent installations while potentially distorting perceptions of technological leadership in opaque domains like defense simulations.
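HPL's compute-bound character follows from its operation count. The sketch below uses the standard HPL flop count of 2/3·n³ + 2·n² to estimate run time at a given sustained rate; the problem size and machine rate are illustrative, not any actual submission:

```python
# HPL operation count and run-time estimate (illustrative parameters only).
def hpl_flops(n: int) -> float:
    """Standard HPL flop count for solving an n x n dense system: 2/3 n^3 + 2 n^2."""
    return (2.0 / 3.0) * n**3 + 2.0 * n**2

def runtime_seconds(n: int, rmax_flops: float) -> float:
    """Time to finish a run sustaining rmax_flops, ignoring startup and verification."""
    return hpl_flops(n) / rmax_flops

# Example: n = 10 million unknowns on a hypothetical 1 EFlop/s machine.
t = runtime_seconds(10_000_000, 1.0e18)  # ~667 seconds, roughly 11 minutes
```

Because flops grow as n³ while data grows only as n², large Nmax runs perform many operations per byte moved, which is precisely why HPL stresses compute units far more than memory, network, or I/O subsystems.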

Broader Interpretive and Geopolitical Issues

The TOP500 list is frequently misinterpreted as a comprehensive proxy for national or overall computing prowess, despite measuring only peak performance on the High-Performance Linpack benchmark, which correlates poorly with real-world scientific utility or broader compute capacity. This overreliance has fueled geopolitical narratives, such as viewing rankings as indicators of military or economic dominance, yet the list excludes undisclosed systems, private-sector deployments, and non-submitted entries, distorting assessments of aggregate national compute resources. The United States' sustained dominance in recent TOP500 rankings—holding the top three positions as of November 2024—owes significantly to export controls imposed since 2019, which have restricted China's access to advanced semiconductors and components, leading to a sharp decline in verified Chinese submissions from over 200 in 2016 to fewer than 10 by 2023. These measures, expanded under both the Trump and Biden administrations to target high-performance chips like NVIDIA GPUs, have isolated China's HPC sector and prompted non-participation in TOP500 submissions since around 2020, though smuggling and circumvention may partially offset impacts on unlisted systems. Pursuit of TOP500 prestige often prioritizes benchmark optimization over productive scientific output, with exascale systems like the U.S. Department of Energy's Frontier costing approximately $600 million yet yielding marginal advancements relative to input, as evidenced by the benchmark's narrow focus amid escalating expenses for power and maintenance exceeding $100 million annually per site. China's assertions of superior unlisted supercomputing capacity, estimated by TOP500 co-founder Jack Dongarra to potentially rival listed global totals, remain unverified due to opacity and lack of independent benchmarking, raising doubts about their scale and military applications amid U.S. sanctions. The list's emphasis on government-submitted, publicly benchmarked systems biases it toward state-funded initiatives, understating commercial capacity: private entities now control about 80% of global AI-oriented clusters, many of which, such as those from hyperscalers like Microsoft or undisclosed corporate training setups, eschew TOP500 participation to avoid revealing proprietary capabilities or due to incompatible workloads. This private-sector shift highlights how TOP500 captures only a fraction of deployable compute, particularly in AI-driven applications where clustered GPUs prioritize training efficiency over Linpack scores.

Impact and Future Outlook

Contributions to Scientific Computing

Supercomputers tracked by the TOP500 list have enabled empirical advancements in plasma physics, particularly for fusion energy research, by performing simulations that capture multiscale turbulence effects unattainable with prior computational scales. The Frontier system, ranked first on the TOP500 since May 2022, facilitated gyrokinetic modeling using the CGYRO code to simulate plasma temperature fluctuations driven by ion-temperature-gradient turbulence, yielding data on particle and heat transport that inform confinement optimization in tokamak devices. Similarly, Frontier-supported optimizations of fusion codes have pushed predictive modeling of energy losses in plasmas, aiding performance enhancements in experimental reactors. In , TOP500 systems like deliver exascale performance for molecular simulations, accelerating and binding affinity predictions that process terabytes of chemical in hours rather than years. This capability stems from heterogeneous architectures combining CPUs and GPUs, as seen in facilities, where such compute resolves protein-ligand interactions at resolutions previously limited by classical methods. 's role exemplifies how TOP500-tracked exascale platforms enable precision medicine workflows, with outputs validated in peer-reviewed studies on therapeutic candidates. Climate modeling benefits from TOP500 systems' capacity for high-fidelity, petabyte-scale simulations of atmospheric and oceanic dynamics, resolving fine-scale phenomena like microphysics that clusters cannot match in resolution or speed. Systems such as , ranked in the global top 10, integrate AI-driven parameterizations to refine ensemble forecasts, improving predictive accuracy for events. NVIDIA-accelerated TOP500 machines further these efforts by handling coupled Earth system models, producing verifiable hindcasts that align with observational data from satellites and ground stations. 
The standardization of GPU architectures in TOP500 environments has spurred optimizations that extend to broader scientific workflows, though cost-effectiveness relative to alternatives such as cloud-distributed systems requires case-specific economic analysis beyond raw performance metrics. Causal evidence for these contributions lies in domain-specific publications citing TOP500-listed systems, rather than rankings alone, underscoring the need for reproducible simulations over aggregate metrics.

Exascale Achievements and Challenges

The United States fielded the first verified exascale supercomputers on the TOP500 list, with Frontier at Oak Ridge National Laboratory reaching 1.102 EFlop/s on the High-Performance Linpack benchmark in June 2022, the first operational demonstration of sustained exascale performance at 64-bit precision. Aurora at Argonne National Laboratory followed, entering the TOP500 in 2023 and achieving 1.012 EFlop/s by June 2025, while El Capitan at Lawrence Livermore National Laboratory became the third system to surpass 1 EFlop/s, debuting at 1.742 EFlop/s in November 2024 and retaining the top position through June 2025. These three Department of Energy systems, all built by Hewlett Packard Enterprise, dominate the list's upper ranks, with no other nation reporting independently verified exascale capability in TOP500 submissions as of mid-2025. Europe and China have pursued exascale systems but lag in verified performance: EuroHPC's JUPITER, touted as Europe's first exascale machine, ranked in the global top 10 by June 2025 but did not exceed 1 EFlop/s on Linpack, while Chinese efforts remain unverified in the TOP500 despite prior claims of advanced prototypes. The U.S. lead stems from coordinated investments under the Exascale Computing Project, enabling full deployment of heterogeneous architectures that pair general-purpose processors with advanced accelerators. Exascale systems face persistent challenges in power consumption: Frontier operates at approximately 21 MW, and the long-standing target of 20 MW per exaflop implies efficiencies around 50 GFLOPS/W that remain difficult to scale uniformly. Fault tolerance poses another barrier, as systems with millions of cores (e.g., Frontier's 8.7 million) see mean time between failures drop to minutes during full-scale runs, requiring software mechanisms for checkpointing and recovery amid extreme parallelism involving billions of tasks.
Cooling innovations, such as the direct liquid cooling used in Frontier, address heat dissipation from dense node packing, reducing energy overheads compared with air-based methods but introducing maintenance and scalability complexities for future zettascale designs.
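The power and reliability figures above reduce to simple arithmetic, sketched below in Python. The Rmax and power values come from the text; the per-node MTBF and node count in the reliability example are illustrative assumptions, not published figures.

```python
# Back-of-envelope checks on the exascale power and reliability figures.

def gflops_per_watt(rmax_eflops: float, power_mw: float) -> float:
    """Energy efficiency in GFLOPS/W: convert EFlop/s to GFlop/s, MW to W."""
    return (rmax_eflops * 1e9) / (power_mw * 1e6)

def system_mtbf_hours(node_mtbf_hours: float, node_count: int) -> float:
    """Naive whole-system MTBF assuming independent, identical node failures."""
    return node_mtbf_hours / node_count

# The 20 MW-per-exaflop target corresponds to exactly 50 GFLOPS/W.
target = gflops_per_watt(1.0, 20)       # 50.0

# Frontier's figures from the text: ~1.102 EFlop/s at ~21 MW.
frontier = gflops_per_watt(1.102, 21)   # ~52.5

# Hypothetical: a 5-year node MTBF across 10,000 nodes leaves only a
# few hours between failures, which is why checkpointing is mandatory.
mtbf = system_mtbf_hours(5 * 8760, 10_000)  # ~4.4 hours
```

The same scaling is why failure intervals shrink to minutes on machines with millions of cores: system MTBF falls roughly linearly with component count.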

Evolving Role in AI and Geopolitical Competition

As AI workloads proliferate, the TOP500's reliance on the High Performance Linpack (HPL) benchmark, optimized for dense linear algebra in double precision, increasingly misaligns with demands for sparse operations and mixed-precision computing. Variants such as HPL-MxP (formerly HPL-AI), which emulate AI-style arithmetic through reduced precision, have gained traction in submissions; one leading system achieved 8.73 EFlop/s on HPL-MxP in June 2025 evaluations, highlighting HPC-AI convergence. Yet the TOP500 has not integrated these as core metrics, limiting its relevance amid commercial shifts in which NVIDIA's AI-optimized hardware, powering over half the top systems by 2024, favors industry benchmarks like MLPerf over standardized HPC tests. Geopolitical rivalries amplify these dynamics: U.S. export controls since 2022 have curtailed China's acquisition of advanced GPUs and interconnects, resulting in fewer disclosed Chinese entries and a pivot to indigenous chips such as Huawei's. This has preserved U.S. leadership, with American systems claiming the top three spots in June 2025, while Europe advances sovereignty via projects like JUPITER, Europe's first exascale machine, activated in September 2025 at Forschungszentrum Jülich and delivering over 1 exaFLOP/s for AI and simulation workloads under EU control. Prospects for zettaflop-scale systems face thermodynamic and cost barriers, with energy demands exceeding practical limits for on-premises deployment; cloud AI clusters, such as Oracle's Zettascale10 unveiled in October 2025 with a claimed 16 zettaFLOPS peak from 800,000 GPUs, exemplify a trend toward scalable, proprietary infrastructure that bypasses TOP500 scrutiny. If such clouds come to dominate AI innovation, the TOP500 risks marginalization, supplanted by workload-specific rankings that better reflect economic viability than raw peak performance.
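The mixed-precision idea that HPL-MxP rewards can be illustrated with classic iterative refinement: do the expensive solve in low precision, then recover double-precision accuracy through cheap residual corrections. The NumPy toy below is a sketch of that principle only, not the benchmark; it uses float32 where real systems use FP16/FP8 tensor cores, and re-solves instead of reusing a stored LU factorization as the actual benchmark does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned FP64 matrix
b = rng.standard_normal(n)

# "Fast" low-precision solve (stands in for reduced-precision factorization)
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

# Iterative refinement: residuals in FP64, corrections via the cheap FP32 solve
for _ in range(5):
    r = b - A @ x                                  # FP64 residual
    x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)

# The refined answer reaches near-FP64 accuracy despite the FP32 solves
assert np.linalg.norm(b - A @ x) / np.linalg.norm(b) < 1e-10
```

This is why mixed-precision scores can run several times higher than HPL Rmax: the dominant flops happen in cheap low precision while accuracy is restored afterward in double precision.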
