
Exascale computing

Exascale computing refers to high-performance computing systems capable of performing at least one exaFLOPS, or 10^18 floating-point operations per second, marking a significant advancement over prior petascale supercomputers. The United States achieved this milestone in May 2022 with the Frontier supercomputer at Oak Ridge National Laboratory, which delivered 1.102 exaFLOPS on the LINPACK benchmark and has since enabled breakthroughs in simulations for fusion energy, climate modeling, and materials science. By 2025, additional systems like the Aurora supercomputer at Argonne National Laboratory reached exascale performance, expanding capabilities for AI-driven research in quantum simulations and nuclear engineering. Developing these machines involved overcoming key technical hurdles, including extreme power consumption exceeding 20 megawatts, massive data movement across millions of cores, fault tolerance in highly parallel architectures, and programming for unprecedented scale. Exascale systems promise to accelerate empirical discoveries by enabling first-principles simulations of complex physical phenomena previously intractable, though their full realization demands ongoing innovations in hardware efficiency and software resilience.

Fundamentals

Definition and Performance Thresholds

Exascale computing refers to high-performance computing systems capable of performing at least one exaFLOPS of computational throughput, where one exaFLOPS equals 10^18 floating-point operations per second (FLOPS). This scale represents a thousandfold increase over petascale systems, which operate at 10^15 FLOPS, enabling simulations and analyses previously infeasible due to computational limits. The term emphasizes sustained performance in double-precision (64-bit) arithmetic, aligning with standards for scientific computing workloads in fields such as climate modeling, materials science, and drug discovery. The primary performance threshold for designating a system as exascale is sustained performance of at least 1 exaFLOPS on the High-Performance Linpack (HPL) benchmark, the measure used by the TOP500 list to rank supercomputers. HPL measures the solution of dense systems of linear equations, approximating real-world floating-point-intensive tasks, and requires verifiable results submitted with hardware details for validation. While peak theoretical performance may exceed this—often through mixed-precision arithmetic or specialized accelerators—the exascale designation hinges on HPL's conservative, double-precision metric to ensure broad applicability across scientific applications. Systems falling short on HPL, even with higher peak claims, do not qualify, underscoring the benchmark's role in establishing credible thresholds amid varying architectural efficiencies.
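The arithmetic behind these thresholds can be made concrete with a short calculation. The sketch below is a minimal Python illustration, not part of any benchmark suite; it uses the standard HPL operation count of roughly 2/3·n³ + 2·n² floating-point operations for a dense solve of order n, and the matrix order chosen is an arbitrary assumption rather than a reported configuration.

```python
# Rough HPL arithmetic: operation count for an n x n dense solve and the
# wall-clock time implied by a given sustained FLOP rate.

def hpl_flops(n: int) -> float:
    """Approximate floating-point operations for HPL at matrix order n."""
    return (2.0 / 3.0) * n**3 + 2.0 * n**2

def runtime_hours(n: int, sustained_flops: float) -> float:
    """Wall-clock hours to complete the solve at a sustained FLOP rate."""
    return hpl_flops(n) / sustained_flops / 3600.0

if __name__ == "__main__":
    exaflops = 1e18                 # 1 exaFLOPS = 10^18 FLOPS
    petaflops = 1e15                # 1 petaFLOPS = 10^15 FLOPS
    n = 24_000_000                  # hypothetical matrix order (illustrative)
    print(f"Total operations:     {hpl_flops(n):.2e}")
    print(f"Hours at 1 exaFLOPS:  {runtime_hours(n, exaflops):.1f}")
    print(f"Hours at 1 petaFLOPS: {runtime_hours(n, petaflops):.0f}")
```

At this illustrative problem size, a solve that takes a few hours at a sustained exaFLOPS would occupy a petascale machine for months, which is the thousandfold gap the definition captures.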

Benchmarks and Verification Standards

Exascale computing performance is primarily verified through the High-Performance Linpack (HPL) benchmark, which measures sustained double-precision floating-point operations per second (FLOPS) for solving dense systems of linear equations, as standardized by the TOP500 project. A system qualifies as exascale by achieving at least 1 exaFLOPS (10^18 FLOPS) on HPL under controlled conditions, including full-system utilization and reproducible results submitted biannually to the TOP500 list. For instance, the Frontier supercomputer at Oak Ridge National Laboratory first demonstrated exascale capability with an HPL score of 1.102 exaFLOPS in May 2022, later improving to 1.35 exaFLOPS by November 2024. The verification process requires submissions to adhere to specific HPL implementation rules, such as using the latest approved versions of the software and documenting configurations, optimizations, and run parameters to ensure comparability and prevent inflated claims. This methodology, while effective for ranking peak performance, has been critiqued for overemphasizing compute-bound operations at the expense of the memory access patterns typical of real applications, prompting the development of complementary standards. To address HPL's limitations, the High-Performance Conjugate Gradient (HPCG) benchmark serves as a more representative verification tool for exascale systems, focusing on sparse matrix-vector multiplications, irregular memory access, and preconditioned conjugate gradient solvers that mirror scientific workloads. HPCG scores are reported alongside TOP500 results; for example, El Capitan achieved 17.4 HPCG-PFLOPS in June 2025, highlighting sustained performance under data-intensive conditions. Unlike HPL, HPCG yields far lower efficiency relative to theoretical peak—often only a few percent for large systems—providing a more realistic gauge of application-relevant capability. Emerging standards like HPL-MxP extend verification to mixed-precision arithmetic, relevant for artificial intelligence and machine learning workloads on exascale platforms, by incorporating lower-precision factorizations and iterative refinement to reach higher throughput. Systems such as Aurora have recorded 11.6 exaFLOPS on HPL-MxP, underscoring the need for multifaceted benchmarks to fully validate exascale versatility beyond traditional double-precision metrics. These benchmarks collectively ensure claims of exascale attainment are empirically grounded, with ongoing refinements driven by the HPC community to align measurements with diverse computational demands.
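The contrast between compute-bound and memory-bound benchmarks can be expressed as efficiency relative to theoretical peak. The sketch below is illustrative only: the peak and benchmark values are round placeholder numbers, not official figures for any listed system.

```python
# Benchmark score as a fraction of theoretical peak (Rpeak).

def efficiency_pct(measured_flops: float, peak_flops: float) -> float:
    """Sustained benchmark score as a percentage of theoretical peak."""
    return 100.0 * measured_flops / peak_flops

if __name__ == "__main__":
    peak = 2.0e18        # assumed ~2 exaFLOPS theoretical peak (placeholder)
    hpl = 1.4e18         # dense, compute-bound HPL-class score (placeholder)
    hpcg = 0.02e18       # sparse, memory-bound HPCG-class score (placeholder)
    print(f"HPL efficiency:  {efficiency_pct(hpl, peak):5.1f}% of peak")
    print(f"HPCG efficiency: {efficiency_pct(hpcg, peak):5.1f}% of peak")
```

The order-of-magnitude gap between the two percentages, rather than the specific values, is the point: HPCG exposes the memory-bandwidth limits that HPL largely hides.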

Engineering Challenges

Power Consumption and Efficiency

Achieving exascale performance, defined as at least one exaFLOPS of double-precision floating-point operations per second, demands immense computational resources, exacerbating power consumption challenges. Projections from the early 2010s estimated that unchecked scaling could require 100 MW or more, equivalent to the energy needs of tens of thousands of households, due to the exponential growth in transistor counts and heat dissipation issues under conventional air cooling. To surmount this "power wall," system designers targeted a roughly 200-fold improvement in energy efficiency, from about 2 nJ per instruction to 10 pJ, combining advances in device physics, architecture, and software. Key innovations include heterogeneous architectures that integrate energy-efficient accelerators such as GPUs with CPUs, smaller process nodes (e.g., 5-7 nm), and high-bandwidth memory to reduce data movement overheads, which account for a significant portion of total energy use. Direct liquid cooling has become standard to manage power densities exceeding 1 kW per processor package, enabling sustained operation without throttling. The U.S. Department of Energy's Exascale Computing Project emphasized power caps of 20-30 MW for practical deployment, balancing performance with facility constraints and operational costs exceeding $1 million annually per MW at typical utility rates. The Frontier supercomputer at Oak Ridge National Laboratory, operational since 2022, exemplifies these efforts, delivering 1.1 exaFLOPS sustained on the HPL benchmark while consuming approximately 21-30 MW, depending on workload and cooling integration. Its efficiency reached 52.23 gigaflops per watt on the Green500 list, surpassing prior petascale systems through AMD EPYC CPUs and Instinct MI250X GPUs optimized for vector workloads. Subsequent systems like Germany's JEDI module for the JUPITER exascale project achieved 72.7 gigaflops per watt in 2024, highlighting ongoing refinements in power capping and dynamic voltage scaling that prioritize flops per joule over raw speed.
| System | Power Consumption (MW) | Efficiency (Gflops/W) | Deployment Year |
|---|---|---|---|
| Frontier (ORNL) | 21-30 | 52.23 | 2022 |
| JEDI (JUPITER module) | Not specified | 72.7 | 2024 |
Despite these advances, exascale facilities remain energy-intensive, with total facility power, including cooling and auxiliaries, adding substantially to IT loads, prompting research into waste heat recovery and renewable integration for sustainability. Operating under strict power constraints further necessitates resilient designs, as efficiency gains must not compromise reliability in million-node clusters.
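The efficiency figures in the table above follow directly from the quoted scores and power draws. The sketch below reproduces that arithmetic; the electricity price and the assumption of continuous operation are illustrative, not facility-reported values.

```python
# Power-efficiency arithmetic for the figures quoted in this section.

def gflops_per_watt(sustained_flops: float, megawatts: float) -> float:
    """Energy efficiency in gigaFLOPS per watt of IT power."""
    return (sustained_flops / 1e9) / (megawatts * 1e6)

def annual_power_cost_usd(megawatts: float, usd_per_mwh: float = 100.0) -> float:
    """Rough yearly electricity cost assuming continuous operation (assumed rate)."""
    hours_per_year = 24 * 365
    return megawatts * hours_per_year * usd_per_mwh

if __name__ == "__main__":
    frontier_hpl = 1.102e18         # HPL score quoted in this article
    frontier_mw = 21.0              # approximate IT power draw quoted above
    print(f"Frontier-class efficiency: {gflops_per_watt(frontier_hpl, frontier_mw):.1f} GF/W")
    print(f"Rough annual electricity cost at $100/MWh: "
          f"${annual_power_cost_usd(frontier_mw) / 1e6:.0f} million")
```

The result, roughly 52 GF/W and on the order of $1 million per megawatt per year, is consistent with the Green500 and operating-cost figures cited above.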

Scalability and Parallel Processing

Exascale systems demand unprecedented parallelism, typically involving millions of cores distributed across thousands of nodes to reach 10^18 floating-point operations per second (FLOPS). This scale amplifies challenges in coordinating computations, where communication overhead between processors can dominate execution time, necessitating optimized interconnect networks with low latency and high bandwidth, such as fat-tree or dragonfly topologies supporting hundreds of thousands of endpoints. For instance, the Frontier supercomputer at Oak Ridge National Laboratory, which achieved 1.102 exaFLOPS on the TOP500 Linpack benchmark in May 2022, employs a Slingshot-11 interconnect to enable efficient scaling across its 9,472 compute nodes and over 8.7 million CPU and GPU cores. Scalability in exascale environments is constrained by fundamental limits like Amdahl's law, which quantifies how non-parallelizable serial components restrict overall speedup despite adding processors; even a 1% serial fraction caps speedup at roughly 100 times with infinite parallelism. Strong scaling—solving fixed problem sizes with more processors—often yields diminishing returns due to increased communication and load imbalance, while weak scaling—enlarging problems proportionally—better suits many scientific workloads but requires algorithms with bounded communication volume per processor. Programming models like MPI and OpenMP must evolve to handle these regimes, with efforts focusing on asynchronous task-based parallelism and hybrid CPU-GPU heterogeneity to maximize throughput, as demonstrated in speculative task methods that predict and execute dependent operations early to hide latency. Efficiency at exascale also hinges on mitigating parallel I/O bottlenecks, typically through striped parallel file systems scaling to petabytes per second of aggregate bandwidth. Interconnects for upcoming systems, such as those in the RED-SEA project, target sub-microsecond latencies and terabit-per-second bandwidths to support exascale's energy-constrained scaling, where power delivery limits node counts to around 10^5-10^6. Real-world benchmarks reveal that while Linpack achieves near-peak performance, application-specific scaling efficiencies often drop below 20-30% at full system size due to irregular data dependencies and memory access patterns. Advances in algorithmic optimizations and system software are essential to approach Gustafson's scaled ideal, in which problem sizes grow with resources to sustain efficiency.
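The two scaling regimes discussed above can be compared with a minimal sketch of Amdahl's and Gustafson's laws, assuming a 1% serial fraction; the processor counts are arbitrary illustrations.

```python
# Amdahl's law (fixed problem size) versus Gustafson's law (scaled problem size).

def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Maximum strong-scaling speedup for a fixed problem size."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

def gustafson_speedup(serial_fraction: float, processors: int) -> float:
    """Scaled speedup when the problem grows with the processor count."""
    return processors - serial_fraction * (processors - 1)

if __name__ == "__main__":
    f = 0.01  # 1% inherently serial work
    for p in (1_000, 100_000, 10_000_000):
        print(f"P = {p:>10,}: Amdahl {amdahl_speedup(f, p):8.1f}x, "
              f"Gustafson {gustafson_speedup(f, p):14,.0f}x")
```

Strong scaling saturates near the 100x ceiling set by the serial fraction, while the weak-scaling figure keeps growing with the machine, which is why exascale workloads are usually formulated so the problem grows with the available resources.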

Data Management and Fault Tolerance

In exascale computing, data management confronts severe I/O bottlenecks arising from the generation of terabytes to petabytes of data per timestep across millions of compute elements, far outpacing storage subsystem bandwidths that typically achieve only tens to hundreds of GB/s of aggregate throughput. This disparity stems from the scaling of data volumes with computational output, rendering conventional file-based post-processing inefficient and leading to stalled workflows. Mitigation strategies emphasize in-situ and in-transit processing, where analysis or data reduction occurs concurrently with computation to minimize persistent writes; for instance, techniques like adaptive mesh refinement and selective sampling reduce output by orders of magnitude without sacrificing fidelity. Data compression algorithms, such as those integrating lossless methods with domain-specific approximations, further alleviate bandwidth constraints, enabling sustained performance in applications like climate modeling. Hierarchical storage architectures, combining burst buffers with parallel file systems like Lustre or GPFS, facilitate data staging and prefetching to optimize locality and reduce latency, though contention remains a challenge in bursty workloads. The Exascale Computing Project (ECP) has advanced tools like VeloC, which couples checkpointing with data reduction to achieve reductions of up to 90% on scientific datasets while preserving checkpoint integrity. These approaches prioritize efficiency—focusing on data provenance and minimal viable outputs—over exhaustive archiving, as empirical benchmarks on systems like Frontier demonstrate that unchecked data deluges can degrade overall system utilization by 20-50%. Fault tolerance in exascale environments is necessitated by the "reliability wall," where component failure rates—driven by shrinking feature sizes and enormous component counts—yield mean times between failures (MTBF) as low as 5-10 minutes for systems comprising over 10^6 cores. Coordinated checkpoint/restart remains the baseline, involving periodic global snapshots to disk, but it incurs overheads exceeding 10% of runtime due to I/O saturation and synchronization costs; silent data corruptions, undetected by standard hardware checks, compound the risks in long-running jobs. Algorithm-based fault tolerance (ABFT) addresses this by embedding redundancy in computations—such as checksums in linear algebra routines—to detect and correct errors with minimal recomputation, remaining effective at scales where MTBF falls below application tolerance. Application-level resilience techniques, including forward recovery and selective recomputation, reduce dependency on full restarts; for example, ECP efforts integrate these into MPI extensions like ULFM for dynamic failure recovery, sustaining progress amid failures without full-system halts. Modular hardware designs in deployed exascale machines, such as Frontier's node-level redundancy and error-correcting codes in memory and interconnects, enable hot-swapping of faulty units, empirically extending effective MTBF to hours in practice. These methods collectively ensure continuity in simulations, prioritizing verifiable error bounding over absolute failure elimination, as validated in benchmarks showing sub-1% productivity loss under projected failure regimes.
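The trade-off between checkpoint frequency and failure rate described above is commonly estimated with Young's first-order approximation, which picks the interval that balances checkpoint cost against the work expected to be lost when a failure strikes. The sketch below applies it to illustrative MTBF and checkpoint-write times that are assumptions, not measurements from any deployed machine.

```python
import math

# Young's approximation for the checkpoint interval that balances checkpoint
# overhead against the work expected to be lost on a failure.

def optimal_checkpoint_interval(checkpoint_seconds: float, mtbf_seconds: float) -> float:
    """First-order optimum: sqrt(2 * checkpoint_cost * MTBF)."""
    return math.sqrt(2.0 * checkpoint_seconds * mtbf_seconds)

if __name__ == "__main__":
    checkpoint_s = 300.0                     # assume 5 minutes to write a global checkpoint
    for mtbf_hours in (24.0, 1.0, 0.1):      # from a benign regime down to minutes-scale MTBF
        mtbf_s = mtbf_hours * 3600.0
        interval = optimal_checkpoint_interval(checkpoint_s, mtbf_s)
        overhead = checkpoint_s / interval   # rough fraction of runtime spent checkpointing
        print(f"MTBF {mtbf_hours:5.1f} h -> checkpoint every {interval / 60:6.1f} min "
              f"(~{100 * overhead:4.1f}% overhead)")
```

As the system-wide MTBF falls from a day toward minutes, the implied checkpoint overhead climbs from a few percent to a majority of the runtime, which is why exascale designs lean on in-memory checkpointing, ABFT, and ULFM-style recovery rather than global disk snapshots alone.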

Historical Development

Conceptual Foundations (Pre-2010)

The pursuit of exascale computing, defined as systems capable of at least 10^18 floating-point operations per second (FLOPS), originated in the mid-2000s amid projections of HPC scaling limits following the anticipated petascale milestone. As supercomputers like the IBM Blue Gene/L approached 0.478 petaFLOPS in 2007, researchers foresaw that conventional extrapolation of Moore's law and Dennard scaling would falter due to power density constraints, prompting early visions for radical architectural shifts to enable million-processor concurrency while keeping power below roughly 20 MW per exaFLOPS. This conceptual shift emphasized first-principles reevaluation of system design, prioritizing resilience against faults in massive parallelism and integration of heterogeneous processors to overcome the "power wall," where transistor scaling no longer yielded proportional performance gains without excessive heat dissipation. A pivotal document, the 2008 DARPA-sponsored Exascale Computing Study, articulated these foundations by analyzing pathways to a 1,000-fold performance leap from petascale by around 2015, highlighting challenges in energy and power, memory and storage, and software scalability for applications requiring extreme data movement. Sponsored by DARPA's Information Processing Techniques Office, the study—conducted by experts from industry, academia, and national laboratories—stressed factors like the end of voltage scaling in CMOS technology, projecting that exascale systems would demand innovations in packaging and integration, memory technology, and fault-tolerant algorithms to manage error rates exceeding 10^6 failures per hour. These ideas were driven by imperatives in large-scale simulation and scientific discovery, such as energy and climate modeling, where petascale resolutions proved insufficient for predictive fidelity. By 2009, the U.S. Department of Energy's Advisory Committee on Advanced Scientific Computing (ASCAC) Exascale Subcommittee reinforced these concepts in its report, urging the department to lead an exascale initiative to sustain U.S. primacy in HPC amid emerging international competition. The subcommittee outlined application-driven requirements, including sustained performance for multiphysics codes and data analytics at unprecedented scales, while cautioning against over-reliance on unproven architectures without empirical validation through prototypes. Concurrently, facilities such as the Oak Ridge Leadership Computing Facility began strategizing application readiness, projecting needs for adaptive mesh refinement and I/O hierarchies to handle exabyte-scale datasets, underscoring the foundational tension between hardware ambition and software ecosystem maturity. These pre-2010 efforts established exascale not as mere extrapolation but as a distinct research challenge requiring interdisciplinary breakthroughs in algorithms, devices, and systems to yield insight into complex phenomena.

National Programs and Milestones (2010-2021)

In the United States, the Department of Energy (DOE) formalized its Exascale Computing Initiative in 2016, co-led by the Office of Science and the National Nuclear Security Administration (NNSA), with the objective of delivering a capable exascale system by the early 2020s to support scientific simulations and national security applications. This effort built on earlier planning from 2014, which initially targeted deployment by 2023 but accelerated to aim for an initial system operational in 2021, including nine months for acceptance testing. The Exascale Computing Project (ECP), a collaborative R&D program involving DOE national laboratories, academia, and industry, launched the same year with a projected $1.7 billion budget over seven years to develop hardware, software, and applications for exascale performance exceeding 1 exaFLOPS on sustained benchmarks. By 2018, DOE committed an additional $1.8 billion toward constructing follow-on exascale systems at Oak Ridge, Argonne, and Lawrence Livermore National Laboratories, emphasizing energy-efficient architectures to address power constraints. China pursued parallel national efforts through state-backed institutions, announcing in 2017 plans for an exascale prototype by year's end as part of broader supercomputing advancements, including domestically developed processors to reduce reliance on foreign technology. By 2018, prototypes for three systems—Tianhe-3, a next-generation Sunway, and a Sugon design—had been revealed, signaling progress toward full exascale deployment, with the Sunway architecture detailed in technical publications emphasizing indigenous many-core designs for scalability. By November 2021, Chinese researchers claimed two operational exascale systems and a third, delayed one, based on internal High-Performance Linpack testing exceeding 1 exaFLOPS; however, these assertions remain unverified by independent benchmarks like the TOP500 list, raising questions about performance metrics and operational status due to limited transparency. Japan's RIKEN Center for Computational Science advanced the Post-K (later renamed Fugaku) project, initiated in the mid-2010s with government funding exceeding ¥100 billion, targeting exascale-class capabilities by 2021 through innovations in ARM-based processors and high-bandwidth memory. Key milestones included the system's 2019 renaming to Fugaku and public previews demonstrating pre-exascale performance, with Fugaku achieving 442 petaFLOPS sustained on Linpack by June 2020, positioning it as a bridge to full exascale while prioritizing fault-tolerant software ecosystems. In Europe, the EuroHPC Joint Undertaking was established in 2018 with €1 billion in public-private funding to procure pre-exascale and eventual exascale infrastructure, including calls for processor development under the European Processor Initiative to foster sovereignty in hardware. This initiative supported coordinated national contributions from member states, aiming for two exascale machines by the mid-2020s, with early milestones in 2019-2021 focused on system procurement and software co-design projects like those under the EuroHPC Phase 1 calls.

Breakthroughs and Deployments (2022-2025)

In May 2022, the Frontier supercomputer at Oak Ridge National Laboratory (ORNL) became the first publicly verified system to achieve exascale performance, reaching 1.102 exaFLOPS on the High-Performance Linpack (HPL) benchmark. This milestone, powered by AMD EPYC CPUs and Instinct MI250X GPUs in a heterogeneous architecture developed by Hewlett Packard Enterprise (HPE), marked a breakthrough in scalable parallel processing and energy-efficient computing for scientific simulations. Frontier's deployment under the U.S. Department of Energy's (DOE) Exascale Computing Project validated years of investment in overcoming power and interconnect challenges, enabling applications in climate modeling, materials science, and fusion energy research. By November 2024, the El Capitan supercomputer at Lawrence Livermore National Laboratory (LLNL), deployed for the National Nuclear Security Administration (NNSA), surpassed Frontier as the world's fastest, achieving 1.742 exaFLOPS on HPL with a theoretical peak of about 2.79 exaFLOPS. Featuring advanced direct liquid cooling and AMD Instinct MI300A accelerated processing units that combine EPYC CPU cores and GPUs in a single package, El Capitan represented a key engineering advance in thermal management and GPU density, supporting stockpile stewardship and high-fidelity simulations without nuclear testing. Its full operational status by early 2025 extended U.S. leadership in verified exascale capabilities, with performance gains attributed to optimized node designs and HPE Slingshot interconnects. Aurora, released to researchers at Argonne National Laboratory in January 2025, joined as the third U.S. exascale system, emphasizing AI-driven workloads alongside traditional simulations through Intel Xeon Max CPUs and Data Center GPU Max (Ponte Vecchio) accelerators across 10,624 nodes. This deployment highlighted breakthroughs in I/O subsystems and mixed-precision computing, facilitating exascale applications in cosmology, protein design, and other research under DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. Internationally, Europe's JUPITER supercomputer in Jülich, Germany, was inaugurated in September 2025 as the continent's first exascale system, incorporating hybrid CPU-GPU architectures to advance EuroHPC Joint Undertaking goals in sustainable computing. Chinese announcements of multiple exascale systems since 2021, including successors to Sunway TaihuLight, have lacked independent verification via public benchmarks or outside access, raising questions about performance claims amid restricted transparency. By mid-2025, verified exascale deployments remained dominated by U.S. systems, underscoring disparities in open validation standards.

Global Systems and Achievements

United States Leadership

![Frontier supercomputer at Oak Ridge National Laboratory][float-right] The Department of Energy (DOE) has spearheaded exascale computing through the Exascale Computing Project (ECP), a collaborative effort between DOE's Office of Science and the National Nuclear Security Administration (NNSA) aimed at delivering capable exascale systems by the early 2020s. This initiative has positioned the US as the global leader in deploying operational exascale supercomputers, with three systems achieving this milestone ahead of international counterparts. Frontier, hosted at Oak Ridge National Laboratory (ORNL), became the world's first officially recognized exascale supercomputer on May 30, 2022, when it topped the TOP500 list with a sustained performance of 1.1 exaFLOPS on the High-Performance Linpack benchmark. Built by Hewlett Packard Enterprise using AMD processors, Frontier's deployment marked the culmination of over a decade of investments in hardware, software, and applications, enabling breakthroughs in simulations for energy, materials science, and climate modeling. Following Frontier, El Capitan at Lawrence Livermore National Laboratory (LLNL) entered full operation in 2024 and was dedicated on January 9, 2025, achieving over 1.7 exaFLOPS and securing the top ranking on the June 2025 TOP500 list. Designed primarily for national security missions, including nuclear stockpile stewardship, El Capitan leverages advanced AMD GPUs and represents the National Nuclear Security Administration's first exascale system. Aurora at Argonne National Laboratory was released to researchers on January 28, 2025, delivering 1.012 exaFLOPS on the HPL benchmark and supporting applications in artificial intelligence, data-intensive science, and large-scale simulations. Powered by Intel processors and an HPE Cray EX architecture, Aurora complements the exascale ecosystem by focusing on data-intensive workloads and interdisciplinary research. As of mid-2025, these three DOE-operated systems—Frontier, El Capitan, and Aurora—collectively maintain dominance in exascale performance, outpacing global competitors and underpinning advancements in scientific discovery and computational capabilities.

European Initiatives

The European High Performance Computing Joint Undertaking (EuroHPC JU), established in 2018 as a public-private partnership between the European Union, member states, and industry partners, coordinates Europe's efforts to achieve exascale computing sovereignty and reduce reliance on non-European infrastructure. EuroHPC JU has procured and deployed multiple pre-exascale systems since 2021, including LUMI in Finland (delivering approximately 550 petaflops of peak performance as of 2023), Leonardo in Italy (around 250 petaflops), and MareNostrum 5 in Spain, which have ranked among the global top 10 supercomputers and support applications in climate modeling, drug discovery, and artificial intelligence. These systems, funded through EU budgets exceeding €7 billion by 2025, laid the groundwork for exascale by testing hybrid architectures combining CPUs, GPUs, and accelerators while addressing challenges inherent to scaling beyond petascale. Europe's breakthrough in exascale arrived with JUPITER, the continent's first system to surpass 1 exaflop of computing power, inaugurated on September 5, 2025, at the Jülich Supercomputing Centre in Germany. Hosted at Forschungszentrum Jülich and owned by EuroHPC JU, JUPITER integrates advanced ARM-based processors and GPU accelerators from European and international vendors, achieving a sustained performance that positioned it fourth on the June 2025 TOP500 list, behind three U.S. Department of Energy systems. Its modular design enabled rapid deployment—completed in under two years—and emphasizes energy efficiency, with metrics supporting operations at scales where cooling and power draw exceed 20 megawatts. JUPITER's architecture prioritizes hybrid quantum-classical computing interfaces and AI workloads, enabling simulations unattainable at petascale, such as atomic-level materials modeling and high-resolution Earth system models. Beyond JUPITER, EuroHPC JU initiatives include plans for additional exascale upgrades and AI-focused "factories," with six new sites selected in 2025 across Czechia, Lithuania, Poland, and other member states to expand capacity for AI training and federated computing. These efforts, co-funded by the Digital Europe Programme, aim to integrate exascale resources into a pan-European computing infrastructure, promoting technological sovereignty amid geopolitical tensions over technology supply chains. While JUPITER marks a milestone, Europe's exascale ecosystem faces ongoing hurdles in software readiness and indigenous chip design, with reliance on non-EU components highlighting vulnerabilities in achieving full sovereignty.

Chinese Claims and Asian Efforts

China has claimed the development and deployment of multiple exascale supercomputers since 2021, though these assertions lack independent verification through standard benchmarks like the TOP500 list, to which China ceased contributing amid U.S. export restrictions. Reports indicate two operational systems by early 2021: the Sunway OceanLight, achieving approximately 1.3 exaFLOPS peak performance using domestically produced SW26010P processors, and a second unnamed system with similar capabilities, both validated via High Performance LINPACK tests conducted internally in March 2021. A third system, potentially the Tianhe-3 at the National Supercomputing Center in Tianjin, is estimated to deliver 1.7 exaFLOPS peak or 1.57 exaFLOPS sustained performance, employing hybrid architectures with Phytium FeiTeng ARM-based CPUs and matrix accelerators, though details remain opaque due to security classifications. These claims, primarily sourced from Chinese state media and analysts like David Kahaner of the Asian Technology Information Program, suggest ambitions for up to 10 exascale systems by 2025, aggregating over 300 exaFLOPS in national compute power to support applications in AI, quantum simulation, and defense modeling. For instance, a Sunway-based system with over 40 million cores demonstrated exascale mixed-precision performance in 2023 and was utilized in October 2025 for quantum chemistry simulations on 37 million cores, achieving 92% strong-scaling efficiency. However, skepticism persists among international experts due to the absence of third-party audits, potential overstatement for strategic signaling, and reliance on indigenous hardware circumventing U.S. sanctions, which may prioritize quantity over verified sustained performance comparable to the U.S. Frontier system's 1.1 exaFLOPS LINPACK benchmark. Chinese authorities did not include exascale machines in their 2024 top-100 supercomputer list, fueling doubts about operational maturity or measurement standards. Beyond China, other Asian nations pursue exascale capabilities but lag in deployments. Japan's Fugaku supercomputer, operational since 2021, sustains around 442 petaFLOPS and serves as a platform for post-petascale research, with national plans targeting exascale integration by the late 2020s through extensions of the FLAGSHIP2020 project. South Korea aims for an exascale system by 2030 via the National Ultra High Performance Computing Innovation Center, emphasizing local chip development to reduce foreign dependency, though current systems like Aleph remain at petascale levels. India's efforts, coordinated through the National Supercomputing Mission, focus on expanding PARAM-series machines to multi-petaFLOPS scales by 2025 but have not announced exascale prototypes, prioritizing indigenous hardware amid resource constraints. These initiatives reflect regional investments in high-performance computing for scientific and industrial applications, yet none have matched China's claimed volume or timeline as of October 2025.

Applications and Capabilities

Scientific Simulations and Discovery

Exascale computing facilitates unprecedented fidelity in scientific simulations by performing over one quintillion floating-point operations per second, enabling models that incorporate complex physical processes at scales previously unattainable. This capability accelerates discoveries across disciplines by reducing simulation times from years to days or hours, allowing iterative refinement and integration of experimental data. In astrophysics, the Frontier supercomputer at Oak Ridge National Laboratory executed the largest cosmological hydrodynamics simulation to date in November 2024, modeling the interplay of dark matter, atomic matter, gas, and plasma across cosmic volumes with resolutions capturing thermal dynamics and feedback processes. This breakthrough provides foundational data for understanding galaxy formation and the universe's large-scale structure, surpassing prior gravity-only models by incorporating full hydrodynamic physics. Fusion energy research benefits from exascale simulations of plasma-facing materials, such as polycrystals under extreme conditions, elucidating brittle failure and plastic flow mechanisms critical for reactor design. The Whole Device Modeling Application, developed under the Department of Energy's Exascale Computing Project, integrates multi-physics models to predict plasma behavior across an entire fusion device, supporting advances toward sustainable fusion power. Materials science leverages exascale for atomic-scale predictions, exemplified by a June 2025 simulation of 5 million atoms aimed at optimizing carbon fiber composites, enhancing strength and reducing production costs through insights into novel processing routes. In molecular dynamics, exascale runs in July 2024 simulated systems with 2 million electrons, advancing quantum-level understanding of chemical reactions and biomolecular interactions. Climate modeling advances via frameworks like the Energy Exascale Earth System Model, which incorporates detailed aerosol chemistry and multi-scale atmospheric processes for improved long-term predictions. Exascale also propels biomolecular simulations, combining physics-based modeling with machine learning to explore protein dynamics and drug interactions at unprecedented scales.

Defense, Security, and AI Advancements

Exascale computing enables high-fidelity simulations critical to nuclear stockpile stewardship, allowing verification of weapon reliability without underground testing. The El Capitan supercomputer, deployed by the National Nuclear Security Administration at Lawrence Livermore National Laboratory in November 2024, delivers over 2.79 exaflops of peak performance and supports the Advanced Simulation and Computing program by modeling nuclear weapon physics, materials degradation, and safety protocols. These capabilities underpin U.S. national security by certifying the enduring stockpile's effectiveness amid aging components and evolving threats, with simulations resolving uncertainties in subatomic behaviors and hydrodynamic responses. Similarly, the Exascale Computing Project integrates fault-tolerant software to ensure resilient execution of defense-oriented applications, reducing downtime and energy costs in mission-critical computations. In security domains beyond nuclear applications, exascale systems advance biodefense and threat modeling. The Department of Defense activated a dedicated supercomputer in August 2024 for biodefense, leveraging exascale-scale simulations to analyze pathogen dynamics and develop countermeasures against biological weapons. This infrastructure supports predictive analytics for epidemic scenarios and vulnerability assessments, drawing on vast datasets to simulate real-world dispersal and mitigation strategies. Exascale platforms also accelerate artificial intelligence advancements vital for defense intelligence and autonomous systems. Through initiatives like ExaLearn under the Exascale Computing Project, machine learning toolkits enable scalable training of models for defense-relevant tasks, from management of military assets to optimization of operations in contested environments, deployable across a range of missions. Systems such as Aurora, which reached exascale performance in 2024 at Argonne National Laboratory, integrate AI with simulations for enhanced data analysis in security contexts, including analysis of sensor feeds and accelerated hypothesis testing. These developments, powered by quintillion-scale operations per second, outperform prior generations in handling multimodal data, thereby improving decision-making in defense and intelligence operations.

Broader Computational Impacts

Exascale computing catalyzes economic growth by underpinning innovations that enhance industrial productivity and global competitiveness. Investments in exascale systems, such as the U.S. Exascale Computing Project, target advancements that support high-fidelity predictive simulations, enabling sectors like manufacturing and energy to optimize designs and processes at scales unattainable with prior petascale technologies. This computational capability fosters job creation in high-tech industries and contributes to GDP expansion through accelerated R&D cycles, with supercomputing infrastructure recognized as essential for sustaining national competitiveness amid international rivalry. The spillover effects of exascale hardware and software ecosystems extend to commercial computing environments, promoting scalable architectures compatible with cloud and enterprise deployments. Developments like vendor-agnostic accelerator programming and resilient software stacks from exascale initiatives allow enterprises to handle massive datasets for forecasting, inventory optimization, and risk analysis, thereby reducing operational inefficiencies. These advancements democratize high-performance computing principles, bridging government-funded research with private-sector applications in finance, logistics, and energy management. Societally, exascale enables data-intensive computations that inform policy and planning, such as large-scale modeling for pandemic response or disaster preparedness, though realization depends on accessible infrastructure and workforce training. By integrating exascale with emerging paradigms like agent-based modeling, it supports complex socio-economic simulations that reveal causal dynamics in human systems, potentially aiding evidence-based decision-making in public health and economics. However, equitable access remains challenged by the concentration of systems in major powers, limiting diffuse societal benefits without international collaboration.

Criticisms and Geopolitical Context

Technical Limitations and Overhype Risks

Despite achieving peak performance exceeding 1 exaFLOPS (10^18 floating-point operations per second), exascale systems encounter fundamental technical limitations rooted in semiconductor scaling and system architecture. Primary challenges include excessive power consumption, inefficient data movement across components, vulnerability to faults, and the demands of parallelism involving millions of processing elements. These issues stem from the physical constraints of transistor scaling, where Dennard scaling has ended, forcing reliance on specialized accelerators like GPUs that exacerbate energy demands and interconnect latencies. Power efficiency represents a core bottleneck, as exascale architectures require tens of megawatts to sustain operations, far surpassing prior generations and complicating deployment in non-specialized facilities. For instance, the Frontier system at Oak Ridge National Laboratory consumes approximately 21-30 MW under load, highlighting the "power wall" that limits further raw scaling without breakthroughs in low-power computing or novel cooling technologies. Data movement poses another constraint, with the "communication wall" arising from the latency and energy cost of moving data across interconnects and deep memory hierarchies spanning billions of transistors, often bottlenecking performance more than compute itself. Reliability compounds these problems, as mean time between failures (MTBF) drops to minutes in systems with millions of components, necessitating resilient software that can checkpoint and recover without halting simulations. Programming and software adaptation further limit practical utility, demanding that applications exploit heterogeneous architectures, manage data locality, and tolerate asynchrony—challenges unmet by legacy codes reliant on traditional MPI and OpenMP paradigms. Exascale's extreme parallelism amplifies Amdahl's law effects, where inherently serial code fractions cap speedup regardless of added cores, requiring algorithmic redesigns that many scientific workloads have yet to undergo. Risks of overhype arise from conflating peak theoretical performance with sustained, application-relevant throughput; benchmarks like HPL Rmax yield exascale claims, but real-world simulations often achieve fractions thereof due to I/O imbalances, incomplete parallelization, and model inaccuracies. Early operational issues, such as daily hardware failures during Frontier's 2022 testing phase, underscore reliability gaps that delay productive use and inflate costs beyond initial projections. Moreover, exascale does not inherently resolve open scientific problems such as turbulence in climate models, which persist due to incomplete physical models and algorithmic intractability rather than compute deficits alone, potentially diverting resources from complementary advances in theory and data handling. Proponents' emphasis on raw performance overlooks these systemic hurdles, fostering expectations that undervalue the need for integrated reforms in software, algorithms, and validation.
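The gap between peak claims and sustained application throughput can be sketched with a toy roofline estimate, in which attainable performance is the lesser of the compute peak and the product of memory bandwidth and arithmetic intensity. The hardware numbers below are round illustrative values, not specifications of any deployed system.

```python
# Toy roofline model: performance is capped either by compute or by memory traffic.

def attainable_flops(peak_flops: float, bandwidth_bytes_per_s: float,
                     flops_per_byte: float) -> float:
    """Attainable FLOP rate for a kernel with the given arithmetic intensity."""
    return min(peak_flops, bandwidth_bytes_per_s * flops_per_byte)

if __name__ == "__main__":
    peak = 2.0e18          # ~2 exaFLOPS aggregate peak (illustrative)
    bandwidth = 5.0e16     # ~50 PB/s aggregate memory bandwidth (illustrative)
    kernels = [
        ("sparse solver",        0.25),   # FLOPs performed per byte moved
        ("stencil / climate",    1.0),
        ("dense linear algebra", 200.0),
    ]
    for name, intensity in kernels:
        perf = attainable_flops(peak, bandwidth, intensity)
        print(f"{name:22s}: {100 * perf / peak:6.1f}% of peak at {intensity} FLOP/byte")
```

Memory-bound kernels, which dominate many scientific codes, land at a few percent of peak no matter how large the machine, which is the substance of the sustained-versus-peak critique.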

Energy and Resource Debates

Exascale supercomputers demand significant electrical power, typically in the range of 20-30 megawatts per system, far exceeding petascale predecessors due to the scale of hardware required for 10^18 operations per second. The U.S. Frontier system, operational since 2022, draws about 21 megawatts for its IT load during peak performance, with total facility consumption including cooling reaching up to 30 megawatts—equivalent to powering roughly 10,000 households. These figures reflect efficiency gains from architectures like AMD's GPU-accelerated nodes, which achieved under 20 megawatts per exaFLOPS, surpassing early projections of 50 megawatts or more. Debates over energy use highlight tensions between computational power and sustainability, with concerns that exascale facilities contribute to rising electricity demands—now around 4% of some nations' totals—and associated carbon emissions if reliant on fossil fuels. Resource-allocation critiques question prioritizing such systems amid energy constraints and climate imperatives, arguing that the power equivalent of thousands of homes diverts capacity from immediate societal needs. In response, advocates emphasize downstream benefits: exascale enables simulations accelerating low-emission technology development, energy research, and climate forecasting, yielding long-term reductions in environmental costs that can justify the upfront energy inputs. Mitigation strategies include power capping, liquid cooling, and renewable sourcing, as demonstrated by Europe's JEDI prototype module for JUPITER, which ranks highly on Green500 lists and runs on renewable electricity. Ongoing research explores software-driven optimizations to cap energy per flop, though scaling toward zettascale may intensify these debates without proportional efficiency advances. Empirical data from facilities like Oak Ridge underscore that while absolute consumption is high, performance-per-watt metrics have improved dramatically, informing pragmatic trade-offs over alarmist narratives.
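The power figures above translate directly into the megawatts needed to sustain one exaFLOPS at a given Green500-style efficiency. The short sketch below performs that conversion for the two efficiency values quoted in this article; it is plain arithmetic, not a model of any facility's actual load.

```python
# IT power implied by a Green500-style efficiency rating for a sustained exaFLOP.

def megawatts_for_exaflops(gflops_per_watt: float) -> float:
    """Megawatts of IT power needed to sustain 1 exaFLOPS at a given GF/W."""
    watts = 1e18 / (gflops_per_watt * 1e9)
    return watts / 1e6

if __name__ == "__main__":
    for label, gfw in (("Frontier-class (52.23 GF/W)", 52.23),
                       ("JEDI-class (72.7 GF/W)", 72.7)):
        print(f"{label}: {megawatts_for_exaflops(gfw):5.1f} MW per sustained exaFLOPS")
```

Both results fall within the 20-30 MW envelope discussed earlier, and the step between the two efficiency ratings illustrates how much of the debate turns on generational gains in performance per watt.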

Strategic Competition and National Security Implications

Exascale computing has emerged as a focal point in the U.S.-China technological rivalry, with both nations viewing it as essential for military simulations, weapons development, and overall strategic deterrence. The U.S. Department of Energy and the NNSA emphasize that systems like El Capitan enable advanced weapons simulations and reliability assessments without physical testing, bolstering deterrence capabilities. Similarly, the Department of Defense has integrated exascale resources for modeling and simulation in weapon system analysis, underscoring its role in maintaining qualitative military edges. China's pursuit of exascale systems, including claims of operational deployment by 2021 for strategic applications, heightens U.S. concerns over Beijing's potential to accelerate weapons design, cyber operations, and AI-driven warfare. U.S. officials note that prior to American exascale achievements like Frontier in 2022, Chinese machines held top supercomputing positions, prompting fears of eroded U.S. leads in predictive modeling for defense scenarios. To counter this, the U.S. has imposed stringent export controls since 2022, targeting advanced integrated circuits and supercomputing components destined for China to impede its military modernization and buildup. These measures, expanded in 2023 and 2025, restrict foreign-produced items incorporating U.S. technology, aiming to preserve American advantages in exascale-relevant domains like quantum computing and AI integration. Such competition risks escalating an arms race in advanced computing, where rapid iterations could destabilize strategic stability by enabling faster development of autonomous systems and intelligence processing. U.S. policy frameworks, including the Exascale Computing Project, frame sustained leadership as a national imperative for economic competitiveness and security, while collaborations with allies on export controls and shared technology safeguards seek to counterbalance Chinese advances. Nonetheless, credible assessments warn that unchecked proliferation of exascale capabilities could empower adversarial decryption of encrypted data and enhanced surveillance, necessitating ongoing vigilance against technology diversion.

  77. [77]
    What is exascale computing? The fastest supercomputer coming to ...
    Sep 4, 2025 · The Jupiter supercomputer ranks fourth on the June 2025 TOP500 list of the world's fastest supercomputers and is the most energy-efficient ...
  78. [78]
    China Has Already Reached Exascale – On Two Separate Systems
    Oct 26, 2021 · Our source confirms these LINPACK results for both of China's exascale systems—the first in the world—were achieved in March 2021 ...
  79. [79]
    China publishes list of its most powerful supercomputers, with no ...
    Jan 7, 2025 · The Chinese Society of Computer Science has published a list of what it claims to be the 100 highest-performing supercomputers in the country in 2024.Missing: verified | Show results with:verified
  80. [80]
    China's secretive Tianhe 3 supercomputer uses homegrown hybrid ...
    Feb 12, 2024 · The Tianhe-3 is believed to achieve unprecedented computational performance, potentially reaching 1.57 ExaFLOPS on LINPACK benchmarks.
  81. [81]
    China Intends to Exceed 300 Exaflops Aggregate Compute Power ...
    Oct 10, 2023 · David Kahaner, director of the Asian Technology Information Program, said China is investing in 10 new exascale systems by 2025.Missing: 2022-2025 verified<|control11|><|separator|>
  82. [82]
    Chinese Exascale Sunway Supercomputer has Over 40 Million ...
    Aug 22, 2023 · Today, we have some information regarding the next-generation Sunway system, which is supposed to be China's first exascale supercomputer.
  83. [83]
  84. [84]
    Keeping Time On Asia's Race To Exascale - Asian Scientist Magazine
    Oct 12, 2022 · Looking ahead, exascale computing is expected to drive research across the globe with speedy and efficient calculations.
  85. [85]
    South Korea plans exascale supercomputer by 2030, potentially ...
    May 28, 2021 · South Korea hopes to launch an exascale supercomputer in the country by 2030. Ahead of that goal, the National Ultra High Performance Computing Innovation ...
  86. [86]
    Frontier - Oak Ridge Leadership Computing Facility
    Exascale is the next level of computing performance. By solving calculations five times faster than today's top supercomputers—exceeding a quintillion, or 1018, ...
  87. [87]
    Exascale Computing Project: Home Page
    The ECP ran from 2016–2024 and was the largest software research, development, and deployment project managed to date by the US Department of Energy (DOE).ECP · Understanding Exascale · 2022 ECP Annual Meeting · AboutMissing: examples 2022-2025
  88. [88]
    Record-breaking run on Frontier sets new bar for simulating the ...
    Nov 20, 2024 · The calculations set a new benchmark for cosmological hydrodynamics simulations and provide a new foundation for simulating the physics of atomic matter and ...
  89. [89]
    Supercomputer runs largest and most complicated simulation of the ...
    Feb 13, 2025 · Frontier, the second fastest supercomputer in the world, used dark matter and the movement of gas and plasma rather than just gravity to model the observable ...
  90. [90]
    Advancing Fusion Reactor Materials Through Exascale Simulations
    Jan 1, 2025 · The goal of this INCITE project is to make a breakthrough in our understanding of the brittle failure and plastic flow behavior of tungsten polycrystals.
  91. [91]
    Innovative fusion computer program receives national achievement ...
    Feb 14, 2025 · These new machines can perform 1 quintillion, or 1 million million million, operations per second and could power discoveries in a range of ...
  92. [92]
    5 Million Simulations: Frontier Exascale Supercomputer for Carbon ...
    Jun 20, 2025 · ORNL researchers simulated 5 million atoms to study a novel process for making carbon-fiber composites stronger and more cost efficient by ...
  93. [93]
    Exascale: Frontier Supercomputer Used in Molecular Dynamics ...
    Jul 18, 2024 · The exascale- class Frontier supercomputer set a new standard for calculating the number of atoms in a molecular dynamics simulation 1,000 ...
  94. [94]
    The Energy Exascale Earth System Model Version 3: 1. Overview of ...
    Oct 10, 2025 · The simulation of nitrate aerosols by MOSAIC strongly depends on the simulation of HNO3 by gas chemistry scheme (Wu et al., 2022, 2025). Note ...
  95. [95]
    Advancing biomolecular simulation through exascale HPC, AI and ...
    As we enter the Exascale era, this mini-review surveys the computational landscape from both the point of view of the development of new and ever more powerful ...
  96. [96]
    El Capitan High Performance Computing
    El Capitan is currently the world's fastest supercomputer, benchmarked at 1.742 exaFLOPs. This unprecedented power contributes to LLNL, Sandia and Los Alamos ...Missing: 2025 | Show results with:2025
  97. [97]
    FAIL-SAFE: Fault Aware IntelLigent Software for Exascale - DTIC
    The increased application resilience resulting from this research will lead to faster completion of Defense applications, and thus substantial energy savings as ...
  98. [98]
    DOD Introduces New Supercomputer Focused on Biodefense ...
    Aug 15, 2024 · The biodefense-focused system will provide unique capabilities for large-scale simulation and AI-based modeling for a variety of defensive ...
  99. [99]
    Exascale Machine Learning Technologies
    ExaLearn is succeeding in its goal to build a software tool set that can be applied to multiple problems within the DOE mission space, use exascale platforms ...<|separator|>
  100. [100]
    Artificial Intelligence | Department of Energy
    DOE and its National Labs are advancing AI through world-class supercomputers, cutting-edge algorithms and software stacks such as through the Exascale ...
  101. [101]
    A New Frontier: Sustaining U.S. High-Performance Computing ...
    Sep 12, 2022 · Continued leadership in high-performance computing (HPC) as it enters the exascale era remains a key pillar of US industrial competitiveness, economic power, ...
  102. [102]
    Exascale supercomputing is here and it will change the world | HPE
    Oct 18, 2022 · By advancing plasma simulations to make more powerful particle accelerators, researchers can conduct high-energy physics experiments to ...
  103. [103]
    [PDF] Leveraging the Future Potential of US Exascale Computing Project ...
    Jun 20, 2023 · 7 years building an accelerated, cloud-ready software ecosystem. • Positioned to utilize accelerators from multiple vendors that others ...<|separator|>
  104. [104]
    Exascale: Bringing Engineering and Scientific Acceleration to Industry
    Feb 21, 2024 · The Department of Energy's (DOE's) Exascale Computing Project (ECP) has developed a capable, high-performance computing (HPC) ecosystem.
  105. [105]
    Harnessing exascale computing and scalable AI for societal impact
    May 22, 2025 · Our Chief Science Officer, Vassil Alexandrov, delves into how exascale computing and scalable AI are helping to transform industry, accelerate innovation and ...Missing: economic | Show results with:economic
  106. [106]
    Exascale computing and 'next generation' agent-based modelling
    Sep 29, 2023 · The scale of performance of exascale computers means that there is scope to go beyond doing-more-of-what-we-already-do to thinking more deeply ...
  107. [107]
    Exascale: challenges and benefits | Shaping Europe's digital future
    Nov 7, 2013 · Five experts shared their views on the need for exascale computing ... consequences in the economy, politics, and societal health. He made ...Missing: impacts | Show results with:impacts<|separator|>
  108. [108]
    Exascale Computing | PNNL
    Exascale computing refers to the next milestone in the measurement of capabilities of the world's fastest supercomputers. The lightning speed of these ...Missing: definition | Show results with:definition
  109. [109]
    The demands and challenges of exascale computing: an interview ...
    Mar 26, 2016 · Challenges in applicability and application efficiency: the first challenge is how to use high-performance exascale computers efficiently so ...Missing: controversies | Show results with:controversies
  110. [110]
    Exascale Computing Technology Challenges - SpringerLink
    This article will describe the technology challenges on the road to exascale, their underlying causes, and their effect on the future of HPC system design.
  111. [111]
    At Long Last, HPC Officially Breaks The Exascale Barrier
    May 30, 2022 · Caveat One: All of those telecom and hyperscaler and cloud builder machines tested running HPL should not be included in the Top500 rankings.
  112. [112]
    Frontier supercomputer suffering 'daily hardware failures' during ...
    Oct 10, 2022 · Oak Ridge National Laboratory's (ORNL) upcoming exascale Frontier supercomputer is seeing daily hardware failures during its testing phase.
  113. [113]
    Kathy Yelick on Post-Exascale Challenges | Intersect360 Research
    Exascale was already harder than some of the previous milestones because of the broad adoption of GPUs that have been disruptive to the software stack, from ...
  114. [114]
    The Journey to Frontier | ORNL
    Nov 14, 2023 · Today's exascale supercomputer not only keeps running long enough to do the job but at an average of only around 30 megawatts. That's a little ...
  115. [115]
    European Exascale Supercomputer JUPITER Sets New Energy ...
    May 13, 2024 · The first module of the exascale supercomputer JUPITER, named JEDI, is ranked first place in the Green500 list of the most energy-efficient ...
  116. [116]
    Energy dataset of Frontier supercomputer for waste heat recovery
    Oct 3, 2024 · This paper reports power demand and waste heat measurements from an ORNL HPC data centre, aiming to guide future research on optimizing waste heat recovery.<|separator|>
  117. [117]
    Supercomputer Code Can Help Capture Carbon, Reduce Global ...
    Jul 19, 2022 · Exascale Computing Can Help Reduce Risk. At ORNL, Frontier recently became the first supercomputer to reach exascale, with 1.1 exaflops of ...Missing: debates | Show results with:debates
  118. [118]
    World's most energy-efficient AI supercomputer comes online - Nature
    Sep 12, 2025 · JUPITER, the European Union's new exascale supercomputer, is 100% powered by renewable energy. Can it compete in the global AI race?Missing: debates | Show results with:debates
  119. [119]
    Exploring the Frontiers of Energy Efficiency using Power ... - arXiv
    Aug 2, 2024 · In this study, we tackle the gap in understanding the impact of software-driven energy efficiency on exascale hardware architectures through a ...
  120. [120]
    Power Consumption and Exascale Computing: Toward a “Short ...
    The intended audience for this event are HPC system administrators looking to optimize the energy efficiency of their systems, developers of system software and ...
  121. [121]
    Don't Be Fooled, Advanced Chips Are Important for National Security
    Feb 10, 2025 · Advanced chips enable nuclear deterrence, intelligence analysis, and are vital for weapon systems, driving strategic military advantage and  ...Missing: implications | Show results with:implications
  122. [122]
    2 Disruptions to the Computing Technology Ecosystem for Stockpile ...
    Meanwhile, there is credible evidence that China was the first country to deploy exascale computing systems, targeting China's own national security interests.
  123. [123]
    Manchin Questions Witnesses on Rapid Development of Artificial ...
    Sep 7, 2023 · Before we authorized the Exa-scale Computing Program, China had the fastest computers. Now, the U.S. has regained the lead,” said Chairman ...
  124. [124]
    Implementation of Additional Export Controls: Certain Advanced ...
    Oct 25, 2023 · BIS imposed these new controls to protect U.S. national security interests by restricting certain exports to China that would advance China's ...
  125. [125]
    Reducing Strategic Risks of Advanced Computing Technologies
    Even as a U.S.-Chinese technology competition looms, policymakers must recognize the arms-racing risks to strategic stability and pursue policies, even if ...
  126. [126]
    [PDF] 2022-hpc-leadership-exascale-era.pdf
    Sep 8, 2022 · The advent of exascale computing will unlock a wealth of heretofore scarcely imaginable research opportunities across a variety of scientific, ...