
ASCI Red

ASCI Red was a supercomputer developed by Intel Corporation as part of the U.S. Department of Energy's Accelerated Strategic Computing Initiative (ASCI) and installed at Sandia National Laboratories in late 1996. Designed to simulate nuclear weapons performance for stockpile stewardship following the cessation of physical testing, it utilized a massively parallel architecture based on the Intel Paragon, with over 4,500 compute nodes and more than 9,000 Pentium Pro processors. In December 1996, ASCI Red became the first supercomputer to sustain over one teraflops (1 trillion floating-point operations per second) on the LINPACK benchmark, achieving 1.06 teraflops with three-quarters of its capacity. It held the top position on the TOP500 list of the world's fastest supercomputers from June 1997 until June 2000, marking the longest continuous reign at number one of any system to that point. Notable for its exceptional reliability and scalability, ASCI Red processed complex three-dimensional simulations that advanced stockpile stewardship and influenced the design of future supercomputing platforms before being decommissioned in 2006.

Development and History

Origins in the ASCI Program

The U.S. Department of Energy (DOE) established the Accelerated Strategic Computing Initiative (ASCI) in 1995 to advance computational capabilities for certifying the safety, reliability, and performance of the nuclear stockpile without reliance on underground nuclear testing, amid preparations for a comprehensive test ban treaty. This initiative formed a core component of the broader Stockpile Stewardship Program, responding to the policy shift toward simulation-based validation following the 1992 moratorium on U.S. nuclear testing and the anticipated zero-yield Comprehensive Test Ban Treaty signed in 1996. ASCI's strategic roadmap targeted progressive scaling of computing power, with a key milestone of roughly 100-teraflops performance by 2004 to support full three-dimensional, multi-physics simulations of weapon primaries and secondaries. These simulations prioritized causal mechanisms of weapon physics, such as implosion dynamics and thermonuclear burn initiation, grounded in empirical data from prior tests to ensure model fidelity and reduce uncertainties in stockpile predictions. The program emphasized hardware and software co-development to handle massive datasets and complex geometries, enabling predictive assessments that mirrored experimental outcomes without physical detonations. Within this framework, ASCI Red emerged as the first pathfinder system, selected to demonstrate tera-scale computing feasibility as a precursor to subsequent machines. Contracted to Intel Corporation for delivery to Sandia National Laboratories, it initiated the pathfinder series by focusing on scalable architectures capable of supporting early workloads, including initial 3D weapon simulations that required validation against historical test data for accuracy. This procurement underscored ASCI's emphasis on commodity components adapted for massively parallel computing, laying groundwork for empirical model certification in a test-ban era.

Design and Construction by Intel and Sandia

ASCI Red represented a collaborative effort between Intel Corporation and Sandia National Laboratories under the U.S. Department of Energy's Accelerated Strategic Computing Initiative (ASCI), with Intel responsible for design, fabrication, and initial testing. The system evolved from the Intel Paragon's architecture, which featured a scalable 2D mesh interconnect topology using i860 processors, by adapting commodity components for massively parallel processing while enhancing interconnect performance and reliability to meet teraflops-scale requirements. Intel received the ASCI platform development contract in August 1995, initiating construction of a system comprising over 4,500 compute nodes, each equipped with dual 200 MHz Pentium Pro processors and 128 MB of memory. Assembly occurred primarily at Intel facilities in Beaverton, Oregon, where the machine achieved initial benchmarks before shipment. Installation at Sandia National Laboratories in Albuquerque, New Mexico, began in late 1996, marking the transition from prototype scaling to site-specific integration. Key engineering decisions addressed scalability, reliability, and infrastructure constraints inherent to prior massively parallel systems. The interconnect employed a custom routing scheme in a 38x32x2 split-plane mesh, supporting bidirectional bandwidth up to 800 MB/s per link to minimize latency in large-scale message passing. Power consumption reached 850 kW excluding cooling, prompting an air-cooled design with modular packaging for efficient heat dissipation and maintenance, prioritizing off-the-shelf components over liquid cooling to enhance long-term reliability in a production environment.

Deployment Milestones and Operational Timeline

ASCI Red reached its initial performance milestone of one teraFLOPS during pre-operational testing at Intel's Beaverton, Oregon facility in December 1996. By June 1997, the system transitioned to full operational status, enabling its integration into Sandia's production computing environment for executing complex simulations in support of stockpile stewardship missions, including both classified nuclear weapons assessments and unclassified research applications. Over the ensuing years, ASCI Red received incremental hardware upgrades, such as processor replacements, to sustain its utility amid evolving computational demands while minimizing disruptions to ongoing workloads. These enhancements ensured continued service reliability, with the machine accumulating over 97% uptime across its lifespan and supporting terascale computations for multidisciplinary teams at Sandia. ASCI Red remained in active deployment until its decommissioning on June 29, 2006, marking the end of nearly nine years of operational tenure during which it served as a cornerstone of Sandia's simulation capabilities. Sandia National Laboratories director Bill Camp attributed its enduring performance to superior engineering, stating that ASCI Red exhibited the highest reliability of any supercomputer built up to that time.

Technical Architecture

Hardware Components and Scalability

ASCI Red utilized a distributed-memory, multiple-instruction, multiple-data (MIMD) architecture optimized for massively parallel processing, with processors organized into four distinct partitions: compute for primary calculations, service for user interaction and job launch, I/O for file and network handling, and system for administrative functions. The compute partition comprised 4,536 nodes, each equipped with two Pentium Pro processors, totaling 9,072 processing elements initially clocked at 200 MHz and later upgraded to 333 MHz via Pentium II OverDrive variants. This configuration leveraged commodity processors for cost-effective scaling while ensuring high parallelism through message-passing paradigms. The nodes were interconnected using Intel's custom high-performance fabric, implementing a split-plane mesh topology scaled to 38 by 32 by 2 dimensions, which facilitated low-latency communication and efficient data exchange across the system. This interconnect design supported the MIMD model's demands for independent instruction streams, enabling fault-tolerant operation by isolating failures to individual nodes without compromising overall scalability. Physically, ASCI Red occupied 104 cabinets covering approximately 2,500 square feet and drew about 850 kW of power, excluding cooling requirements, reflecting its emphasis on modular expansion and reliability for sustained large-scale computations. The architecture's scalability was inherent in its distributed design, allowing incremental addition of nodes and partitions while preserving communication efficiency and minimizing single points of failure.
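To make the scale of the 38 by 32 by 2 arrangement concrete, the sketch below maps a linear node-slot index onto mesh coordinates and counts worst-case hops under simple dimension-ordered routing. The mapping, the routing rule, and the hop-count formula are illustrative assumptions for a generic two-plane 2D mesh, not the actual ASCI Red routing logic.

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative dimensions of ASCI Red's split-plane mesh (38 x 32 x 2). */
#define DIM_X 38
#define DIM_Y 32
#define DIM_P 2   /* two interconnect planes */

/* Hypothetical mapping from a linear slot index to mesh coordinates. */
static void index_to_coords(int idx, int *x, int *y, int *p)
{
    *p = idx % DIM_P;
    *y = (idx / DIM_P) % DIM_Y;
    *x = idx / (DIM_P * DIM_Y);
}

/* Manhattan hop count under simple dimension-ordered (XY) routing. */
static int mesh_hops(int x0, int y0, int x1, int y1)
{
    return abs(x1 - x0) + abs(y1 - y0);
}

int main(void)
{
    int x0, y0, p0, x1, y1, p1;
    int last = DIM_X * DIM_Y * DIM_P - 1;

    index_to_coords(0, &x0, &y0, &p0);      /* first slot */
    index_to_coords(last, &x1, &y1, &p1);   /* last slot  */

    printf("slot 0    -> (%d,%d) plane %d\n", x0, y0, p0);
    printf("slot %d -> (%d,%d) plane %d\n", last, x1, y1, p1);
    printf("worst-case hops (corner to corner): %d\n",
           mesh_hops(0, 0, DIM_X - 1, DIM_Y - 1));
    return 0;
}
```

Even corner-to-corner traffic crosses only 68 links in a grid of this shape, which is one reason a modest-dimension mesh could keep latencies low across thousands of nodes.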

Core Specifications and Performance Metrics

ASCI Red featured 9,298 Pentium Pro processors operating at 200 MHz initially, organized into approximately 4,649 dual-processor nodes. Later upgrades increased clock speeds to 333 MHz for enhanced performance. The system provided a total of 1.2 terabytes of distributed memory, with roughly 256 megabytes per node. Its theoretical peak performance reached 1.8 teraFLOPS for double-precision floating-point operations in the initial configuration. On the High-Performance LINPACK benchmark, it sustained 1.068 teraFLOPS, yielding an efficiency of approximately 59% relative to peak. Storage consisted of two independent 1-terabyte disk systems for scalable I/O operations.
Metric | Value
Processors | 9,298 (Pentium Pro)
Clock speed (initial) | 200 MHz
Nodes | ~4,649 (dual-processor)
Total RAM | 1.2 TB
Theoretical peak | 1.8 TFLOPS
LINPACK sustained | 1.068 TFLOPS
Disk storage | 2 × 1 TB
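The peak and efficiency figures in the table can be cross-checked with simple arithmetic. The snippet below assumes, purely for illustration, one double-precision floating-point operation per clock per processor in the 9,072-processor compute partition; that assumption reproduces the quoted ~1.8 TFLOPS peak and ~59% LINPACK efficiency.

```c
#include <stdio.h>

int main(void)
{
    /* Figures from the specification table; the one-FLOP-per-cycle
       assumption for the 200 MHz Pentium Pro is an illustrative
       simplification, not a vendor datasheet value. */
    double compute_procs = 4536 * 2;   /* dual-processor compute nodes */
    double clock_hz      = 200e6;      /* initial 200 MHz clock        */
    double flops_per_clk = 1.0;        /* assumed floating-point rate  */

    double peak    = compute_procs * clock_hz * flops_per_clk; /* ~1.8 TFLOPS */
    double linpack = 1.068e12;                                  /* sustained   */

    printf("theoretical peak : %.2f TFLOPS\n", peak / 1e12);
    printf("LINPACK sustained: %.3f TFLOPS\n", linpack / 1e12);
    printf("efficiency       : %.0f%%\n", 100.0 * linpack / peak);
    return 0;
}
```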

Achievements and Recognition

Breakthrough to TeraFLOPS Performance

In December 1996, during final assembly and testing at Intel's Beaverton, Oregon facility, a three-quarter configuration of ASCI Red—utilizing approximately 6,912 of its 9,216 processors running at 200 MHz—sustained 1.06 teraflops on the MP-LINPACK benchmark. This performance shattered the prior record of 368.2 gigaflops and established ASCI Red as the first supercomputer to cross the 1 TFLOPS threshold. The engineering effort focused on optimizing the High-Performance LINPACK (HPL) implementation, incorporating custom tuning, assembly-coded routines, and efficient message-passing via MPI over the custom interconnection network to maximize computation-to-communication overlap. Load balancing in the parallel dense linear algebra algorithms was achieved through the system's scalable mesh interconnect, augmented with virtual lanes that mitigated contention in data redistribution phases. Numerical accuracy was validated by passing the benchmark's error checks, ensuring the computed solutions met specified tolerances despite the scale. This validated the architecture's scalability, proving that commodity processors interconnected in a MIMD configuration could deliver tera-scale sustained performance, a critical proof-of-concept for advancing beyond gigaflop-era limitations in computational capability.
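The computation-to-communication overlap described above is the pattern that nonblocking message passing makes possible. The fragment below is a generic MPI sketch of the idea, with hypothetical buffers and a simple ring exchange; it is not code from the actual HPL port run on ASCI Red.

```c
#include <mpi.h>
#include <stdio.h>

#define N 1000000   /* illustrative panel size (doubles) */

/* Generic overlap pattern: start the transfer of the next panel,
   do local work on the current one, then wait for the transfer. */
int main(int argc, char **argv)
{
    static double cur[N], next[N];
    int rank, size;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int up   = (rank - 1 + size) % size;  /* rank we receive the next panel from */
    int down = (rank + 1) % size;         /* rank we forward the current panel to */

    /* 1. Post the receive of the next panel before computing. */
    MPI_Irecv(next, N, MPI_DOUBLE, up, 0, MPI_COMM_WORLD, &req);
    MPI_Send(cur, N, MPI_DOUBLE, down, 0, MPI_COMM_WORLD);

    /* 2. Local floating-point work proceeds while data is in flight. */
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        sum += cur[i] * cur[i];

    /* 3. Only block once the local work is done; next would feed the
          following iteration of a real factorization loop. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d finished local work (checksum %.1f)\n", rank, sum);
    MPI_Finalize();
    return 0;
}
```

Posting the receive before the local loop lets the incoming data move while the processor computes; only the final MPI_Wait blocks, which is the essence of hiding communication cost at scale.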

TOP500 List Dominance and Benchmarks

ASCI Red debuted at the top of the TOP500 list in June 1997, achieving an Rmax of 1.068 teraflops on the High-Performance LINPACK benchmark, marking it as the world's fastest supercomputer at the time. This positioned it ahead of competitors like Japan's CP-PACS/2048 system, which ranked second with 0.614 teraflops. The system maintained the number-one ranking for seven consecutive biannual lists, from June 1997 through June 2000, a record run of consecutive number-one rankings. Subsequent upgrades enhanced its benchmark results, with Rmax increasing to approximately 1.3 teraflops by late 1997 following expansions and optimizations. A 1999 processor upgrade further boosted sustained Linpack performance to around 2 teraflops, allowing it to outperform emerging rivals such as the partial ASCI Blue-Pacific installation, which trailed despite a theoretical peak exceeding 3.9 teraflops. By June 2000, ASCI Red's final number-one entry recorded an Rmax of 2.379 teraflops, solidifying its edge in measured efficiency over the theoretical claims of competitors. Sustained TOP500 leadership stemmed from targeted incremental improvements, including processor expansions from the 7,264 units of its debut entry to more than 9,600 in its final configuration, and firmware tweaks that improved Linpack efficiency without full redesigns. Software optimizations, such as refined message-passing implementations and benchmark-specific tuning on its Cougar lightweight kernel, contributed to consistent Rmax-to-Rpeak ratios above 50%, enabling it to hold its ranking amid rapid global advances in vector and massively parallel architectures. These factors underscored ASCI Red's pragmatic design philosophy, prioritizing verifiable Linpack results over speculative peaks.

Software Environment

Operating System Implementation

ASCI Red employed a dual-operating-system strategy to optimize performance in its massively parallel processing (MPP) environment, with distinct kernels tailored to compute and non-compute nodes. The compute partition, comprising over 4,500 nodes, ran Cougar, a lightweight kernel derived from the Puma operating system originally developed by Sandia National Laboratories and the University of New Mexico. Cougar's minimal footprint—under 0.5 MB per node—enabled efficient scalability across thousands of processors by minimizing overhead and supporting bare-metal-like execution for parallel workloads. At its core, Cougar featured the Q-Kernel for direct hardware resource access, including interrupts and device drivers, layered above which was the Process Control Thread (PCT) responsible for process scheduling, creation, and termination on each node. This structure facilitated low-latency task dispatching and message handling without the bloat of full-featured UNIX kernels, ensuring that compute resources remained dedicated to application execution rather than OS services. I/O operations were offloaded to the service partition via a host OS dependency, preventing contention on compute nodes and leveraging high-bandwidth interconnects for data movement. Service, I/O, and interactive nodes utilized the TFLOPS Operating System (T/OS), a distributed variant of UNIX ported from Intel's Paragon XP/S architecture, compliant with POSIX 1003.1 and supporting a single-system image for administration and development tasks. T/OS handled user interactions, file systems, and system-wide services, integrating with Cougar through scalable platform services for job launch and I/O across partitions. Reliability was enhanced through integrated features like application-assisted checkpointing, capable of dumping the full system memory (1.2 TB) in approximately 5 minutes via dedicated I/O nodes, and fault isolation mechanisms in the Monitoring and Recovery Subsystem (MRS). These allowed dynamic bypassing of failed components, such as mesh router chips, and maintained availability with mean times between failures exceeding 50 hours per application and continuous runs over 4 weeks at more than 97% resource utilization. Redundant designs, including hot-swappable components and dual-plane inter-node communication fabrics, further supported fault-tolerant operation in the classified computing environment.
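Application-assisted checkpointing of the kind described generally means the application itself periodically writes its restartable state; the quoted full-memory dump of 1.2 TB in about 5 minutes implies roughly 4 GB/s of aggregate I/O bandwidth. The sketch below is a minimal, generic per-node checkpoint loop with a hypothetical file name and state layout, not Sandia's actual checkpoint service.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical restartable state for one compute node's share of a run. */
struct state {
    long    step;     /* current timestep                    */
    size_t  n;        /* number of local cells               */
    double *field;    /* simulation field owned by this node */
};

/* Write the state to a per-node file; returns 0 on success. */
static int checkpoint(const struct state *s, int rank)
{
    char path[64];
    snprintf(path, sizeof path, "ckpt_rank%04d.dat", rank);  /* illustrative name */
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    fwrite(&s->step, sizeof s->step, 1, f);
    fwrite(&s->n, sizeof s->n, 1, f);
    fwrite(s->field, sizeof *s->field, s->n, f);
    return fclose(f);
}

int main(void)
{
    struct state s = { 0, 1024, calloc(1024, sizeof(double)) };
    const long ckpt_interval = 100;   /* steps between checkpoints */

    for (s.step = 0; s.step < 1000; s.step++) {
        /* ... advance the simulation here ... */
        if (s.step % ckpt_interval == 0 && checkpoint(&s, 0) != 0) {
            fprintf(stderr, "checkpoint failed at step %ld\n", s.step);
            return 1;
        }
    }
    free(s.field);
    return 0;
}
```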

Programming and Application Support

ASCI Red's programming environment provided support for C, C++ with standard templates, Fortran 77, Fortran 90, and High Performance Fortran (HPF) on its compute nodes, enabling developers to write performance-critical code in established languages optimized for the system's Pentium Pro processors. Compilers from Portland Group Incorporated (PGI) handled these languages, facilitating compilation for both message-passing and shared-memory paradigms. C++ adoption marked an early use in large-scale HPC of object-oriented elements in parallel applications, alongside traditional Fortran for numerical computations. Parallel programming relied on a full MPI 1.1 implementation for inter-node communication, layered over the Portals messaging layer to achieve low latency across the system's mesh topology. This setup supported MIMD workloads, with shared-memory threading available through compiler flags like -Mconcur for intra-node parallelism on dual-processor nodes. HPF extended data-parallel models, abstracting distribution across thousands of nodes. Debugging parallel jobs utilized a scalable debugger, re-implementing Intel's IPD with graphical and command-line interfaces mimicking DBX, capable of handling the full 9,000+ processor configuration. Performance tools leveraged hardware counters from the CPUs and network interface cards for profiling. Visualization support included parallel tools such as polygon and volume renderers tailored for outputs from large-scale simulations. The emphasis on MPI and portable programming models in ASCI Red's environment promoted standardization in HPC software, paving the way for consistent message-passing interfaces and compiler optimizations in subsequent terascale and petascale systems.
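A minimal application in this environment combined a compiled language with MPI-1 message passing. The example below is a generic illustration of that model, not taken from any ASCI Red code; it would build with an MPI C compiler wrapper such as mpicc and run under a launcher such as mpiexec.

```c
#include <mpi.h>
#include <stdio.h>

/* Each rank computes a partial sum and the result is reduced to rank 0,
   using only MPI-1 calls of the kind available on ASCI Red. */
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Trivial local work: sum of this rank's slice of 1..1000000. */
    long chunk = 1000000 / size;
    long lo = rank * chunk + 1;
    long hi = (rank == size - 1) ? 1000000 : lo + chunk - 1;
    double local = 0.0, total = 0.0;
    for (long i = lo; i <= hi; i++)
        local += (double)i;

    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("sum over %d ranks: %.0f\n", size, total);

    MPI_Finalize();
    return 0;
}
```

Built as, for example, mpicc sum.c -o sum and launched with mpiexec -n 4 ./sum, the same source scales unchanged from a workstation to thousands of nodes, which was the portability argument behind standardizing on MPI.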

Applications and Scientific Impact

Primary Role in Nuclear Stockpile Stewardship

ASCI Red, deployed at Sandia National Laboratories in 1997 as the inaugural system under the Department of Energy's Accelerated Strategic Computing Initiative (ASCI), played a foundational role in the Science-Based Stockpile Stewardship Program established following the 1992 moratorium on U.S. underground nuclear testing. This initiative aimed to certify the safety, reliability, and performance of the nuclear arsenal through advanced computational simulations rather than empirical explosive tests, addressing the need to predict weapon behavior under aging, manufacturing variances, and operational stresses while maintaining national deterrence credibility. ASCI Red's teraFLOPS-scale processing capability enabled the transition from limited 2D models to comprehensive three-dimensional, full-physics representations of nuclear weapon effects, including coupled hydrodynamic, radiation transport, and material response phenomena. Central to its stewardship mission, ASCI Red facilitated hydrodynamic simulations that modeled implosion dynamics and high-pressure material behaviors in weapon primaries and secondaries, validated against declassified data from prior tests conducted before the moratorium. By 1999, it executed end-to-end simulations of a representative weapons system subjected to hostile environments, achieving fidelity sufficient to replicate historical experimental outcomes and inform surveillance assessments. These runs incorporated multi-physics codes to integrate neutronics, thermonuclear burn, and structural integrity analyses, allowing annual certification reviews by the nuclear weapons labs without resuming underground explosions. The system's contributions extended to predictive modeling of aging effects, such as plutonium pit degradation and booster gas leakage, which informed remediation strategies and extended the viable lifespan of legacy warheads like the W80 and W88. Through iterative validation against pre-1992 test archives—comprising over 1,000 historical detonations—ASCI Red helped establish computational margins of error below 5% for key performance metrics, bolstering confidence in the stockpile's operational readiness amid the absence of full-yield testing. This capability directly supported the DOE's mandate under the Stockpile Stewardship Program to ensure a safe and effective deterrent, reducing dependence on physical experimentation while prioritizing physics-based predictions over approximations.

Broader Contributions to High-Performance Computing

ASCI Red's architecture, built predominantly from commercial off-the-shelf (COTS) components such as Pentium Pro processors, demonstrated the viability of scaling commodity hardware to tera-scale performance, influencing the shift toward cost-effective, commercially driven supercomputing designs. This approach validated the use of standard processors in large clusters, reducing reliance on custom systems and encouraging broader industry adoption of commodity-cluster paradigms. Operational experiences with ASCI Red yielded key insights into scalable parallel applications, including algorithms optimized for thousands of processors, efficient parallel I/O handling for massive datasets, and methods for managing terabyte-scale outputs. These techniques advanced general methodologies for massively parallel computing challenges, such as load balancing and data partitioning in multiphysics simulations requiring coupled physical models. Reliability metrics from ASCI Red, including a targeted mean time between failures (MTBF) exceeding 50 hours for single applications across its 9,000+ processors, provided empirical data that shaped fault-tolerant designs in follow-on systems like ASCI Blue, emphasizing lightweight operating systems and proactive hardware monitoring. The system's decade-long operation—twice the norm for contemporaries—highlighted the benefits of modular COTS redundancy in achieving sustained uptime for production workloads. Collaborations during ASCI Red's development, particularly with Intel, facilitated transfer of scalable interconnect and system-software technology, bolstering commercial HPC offerings and contributing to subsequent architectures like Red Storm. This transfer process underscored the economic spillover, as validated hardware-software stacks lowered barriers for non-government sectors to deploy similar clusters.
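The load-balancing and data-partitioning lessons mentioned above start from the basic block decomposition that spreads a domain as evenly as possible over many processors. The sketch below shows the standard formula, using a hypothetical problem size and the 9,072-processor compute-partition count purely as an example.

```c
#include <stdio.h>

/* Split n cells over p ranks as evenly as possible: the first (n % p)
   ranks get one extra cell, so loads differ by at most one cell. */
static void block_range(long n, int p, int rank, long *lo, long *hi)
{
    long base = n / p, extra = n % p;
    *lo = rank * base + (rank < extra ? rank : extra);
    *hi = *lo + base + (rank < extra ? 1 : 0);   /* half-open [lo, hi) */
}

int main(void)
{
    long n = 1000000000L;   /* e.g. cells in a large 1-D mesh          */
    int  p = 9072;          /* processors in the compute partition      */
    long lo, hi;

    block_range(n, p, 0, &lo, &hi);
    printf("rank 0    owns [%ld, %ld)  (%ld cells)\n", lo, hi, hi - lo);
    block_range(n, p, p - 1, &lo, &hi);
    printf("rank %d owns [%ld, %ld)  (%ld cells)\n", p - 1, lo, hi, hi - lo);
    return 0;
}
```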

Decommissioning and Legacy

Retirement Process and Reasons

ASCI Red was decommissioned in June 2006 after nine years of service at Sandia National Laboratories. The system had been retired from active computational duties in September 2005, marking the end of its appearances on the TOP500 list after 17 entries spanning eight years. It was replaced by the more advanced Red Storm supercomputer, which offered superior scalability and power efficiency through a modern Cray architecture with a Linux-based service environment. The primary drivers for retirement included technological obsolescence relative to emerging systems, as ASCI Red's Pentium Pro-based design could no longer meet escalating demands for computational throughput and energy efficiency. Economic factors were significant, with the machine's power draw reaching 850 kW excluding cooling—potentially exceeding 1 MW total—and requiring substantial space across its extensive cabinet footprint, alongside rising maintenance burdens for an aging massively parallel system. These costs had become prohibitive as newer architectures delivered teraflops-scale performance at lower operational overheads. The shutdown proceeded in an orderly manner, including an informal gathering of personnel to reflect on the system's legacy, ensuring no abrupt failure but a planned transition. Critical data and validation results from its tenure in nuclear stockpile stewardship were archived for ongoing reference, facilitating continuity in Department of Energy programs without loss of historical outputs.

Long-Term Influence on Supercomputing Evolution

ASCI Red pioneered the integration of commodity x86 processors, specifically more than 9,000 Pentium Pro chips, into a scalable supercomputing platform, demonstrating that off-the-shelf components could achieve teraflops-scale performance without relying on the bespoke vector processors or custom ASICs prevalent in prior systems like the Cray T3E. This approach validated massive parallelism using standard hardware, accelerating the transition to x86-dominant clusters that supplanted vector supercomputers by the early 2000s and formed the basis for most leading systems thereafter. The machine's partitioned design—separating compute, I/O, service, and system nodes—anticipated heterogeneous computing motifs, with its message-passing paradigm leveraging early MPI implementations for inter-node communication, influencing standardization efforts that persist in exascale-era software stacks. Retrospectives, including a 2022 analysis marking the system's 25th anniversary, highlight these elements as foundational to ongoing heterogeneous HPC trends, where CPU-GPU hybrids build on ASCI Red's reliability precedents; Sandia director Bill Camp attested to its unmatched uptime, exceeding 99.99% over a decade, setting enduring benchmarks for fault-tolerant scaling. Lessons from ASCI Red's power consumption—peaking at over 500 kW for roughly 1.3 teraflops—underscored early constraints on linear power scaling, informing the thermodynamic and efficiency hurdles in pursuing exascale systems capable of 10^18 floating-point operations per second without prohibitive power demands exceeding gigawatts if unaddressed. These insights, drawn from its role in advancing simulation fidelity for complex physics, contributed indirectly to ASC program evolutions emphasizing balanced power-performance trade-offs and resilient architectures in U.S. exascale initiatives.
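The gigawatt-scale concern can be made concrete by taking the power and performance figures quoted above at face value: the short calculation below scales ASCI Red's roughly 2.6 MFLOPS per watt linearly to one exaFLOPS, yielding a power demand in the hundreds of gigawatts, which is why efficiency rather than raw scaling became the central exascale constraint.

```c
#include <stdio.h>

int main(void)
{
    /* Figures quoted above, taken at face value for illustration only. */
    double ascired_flops = 1.3e12;   /* ~1.3 TFLOPS sustained       */
    double ascired_watts = 5.0e5;    /* ~500 kW, excluding cooling  */
    double exa_flops     = 1.0e18;   /* one exaFLOPS                */

    double flops_per_watt = ascired_flops / ascired_watts;
    double exa_watts      = exa_flops / flops_per_watt;

    printf("ASCI Red efficiency        : %.1f MFLOPS/W\n", flops_per_watt / 1e6);
    printf("exascale at that efficiency: %.0f GW\n", exa_watts / 1e9);
    return 0;
}
```

The result, roughly 385 GW at ASCI Red-era efficiency, illustrates why later ASC and exascale planning treated power efficiency as a first-order design target rather than an afterthought.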
