
Computer cluster

A computer cluster is a group of interconnected standalone computers, known as nodes, that collaborate to function as a cohesive resource, often appearing to users as a single high-performance system through specialized software and networking. Typically, these nodes include a head node for managing job submissions and administration, alongside compute nodes dedicated to executing parallel tasks. Clusters enable the distribution of workloads across multiple processors to achieve greater computational power than a single machine could provide, leveraging high-speed interconnects for efficient communication between nodes.

The concept of clustering emerged from early efforts in parallel and distributed computing, with significant advancements popularized by the Beowulf project in 1994, which demonstrated the use of inexpensive commodity hardware to build scalable systems at NASA. Prior to this, rudimentary forms of clustered computing appeared through linked mainframes and minicomputers for specialized workloads, but the Beowulf approach made clusters accessible and cost-effective for widespread adoption in research and industry. Today, clusters form the backbone of supercomputing, with modern implementations incorporating thousands of nodes equipped with multi-core CPUs, GPUs, and high-bandwidth networks such as InfiniBand.

Key advantages of computer clusters include scalability, allowing seamless addition of nodes to handle increasing workloads; cost-efficiency through the use of off-the-shelf components; and fault tolerance, where the failure of one node does not halt the entire system thanks to redundancy and load balancing. These systems excel in applications requiring massive parallelism, such as scientific simulations in physics and climate modeling, big data analytics, machine-learning training, and bioinformatics. For instance, clusters process vast datasets far beyond the capacity of individual workstations, enabling breakthroughs across science and industry.

Fundamentals

Definition and Principles

A computer cluster is a set of loosely or tightly coupled computers that collaborate to perform computationally intensive tasks, appearing to users as a unified resource. These systems integrate multiple independent machines, known as nodes, through high-speed networks to enable coordinated processing beyond the capabilities of a single device. Unlike standalone computers, clusters distribute workloads across nodes to achieve enhanced performance for applications such as scientific simulations, data analytics, and large-scale modeling.

The foundational principles of computer clusters revolve around parallelism, resource pooling, and fault tolerance. Parallelism involves dividing tasks into smaller subtasks that execute simultaneously across multiple nodes, allowing for faster processing of complex problems by leveraging collective computational power. Resource pooling combines the CPU, memory, and storage capacities of individual nodes into a shared pool, accessible via interconnects, which optimizes utilization and scales resources dynamically to meet demand. Fault tolerance is ensured through redundancy, where the failure of one node does not halt operations, as tasks can be redistributed to healthy nodes, minimizing downtime and maintaining continuous service.

Key concepts in cluster architecture include the distinction from symmetric multiprocessing (SMP) systems, basic load balancing, and the roles of compute nodes and head nodes. While SMP involves multiple processors sharing a common memory within a single chassis for tightly integrated parallelism, clusters use distributed memory across independent machines connected by networks, offering greater scalability at the cost of communication overhead. Load balancing distributes workloads evenly among nodes to prevent bottlenecks and maximize efficiency, often managed by software that monitors resource usage and reallocates tasks as needed. In a typical setup, compute nodes perform the core processing, while a head node (or gateway node) orchestrates job scheduling, user access, and system management.
Although rooted in multiprocessing innovations aimed at distributing tasks across machines for reliability and capacity, clusters evolved distinctly from single-system multiprocessors by emphasizing networked, scalable ensembles of commodity hardware.
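As a rough illustration of the load-balancing principle described above, the following sketch (job costs and node names are hypothetical) assigns each job to the currently least-loaded node, the greedy strategy that many cluster schedulers approximate:

```python
import heapq

def assign_jobs(jobs, node_names):
    """Greedy least-loaded scheduling: each job (a cost estimate) goes
    to the node with the smallest total assigned work so far."""
    # Min-heap of (current_load, node_name) pairs.
    heap = [(0, name) for name in node_names]
    heapq.heapify(heap)
    placement = {name: [] for name in node_names}
    for job in sorted(jobs, reverse=True):  # largest jobs first (LPT rule)
        load, name = heapq.heappop(heap)
        placement[name].append(job)
        heapq.heappush(heap, (load + job, name))
    return placement

plan = assign_jobs([5, 3, 8, 2, 7, 4], ["node1", "node2", "node3"])
loads = {n: sum(js) for n, js in plan.items()}
```

Sorting jobs largest-first (the longest-processing-time rule) keeps the final loads close to even; here the three nodes end up with totals of 10, 10, and 9.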

Types of Clusters

Computer clusters can be classified based on their degree of coupling, which refers to the level of interdependence between the nodes in terms of memory sharing and communication. Tightly coupled clusters connect independent nodes with high-speed, low-latency networks to support workloads requiring frequent synchronization, such as applications using message-passing interfaces like MPI. In contrast, loosely coupled clusters consist of independent nodes, each with its own memory and storage, communicating via message-passing protocols over a standard network, which promotes scalability but introduces higher latency.

Clusters are also categorized by their primary purpose, reflecting their intended workloads. High-performance computing (HPC) clusters are designed for computationally intensive tasks like scientific simulations and numerical modeling, aggregating resources to solve complex problems in parallel. Load-balancing clusters distribute incoming requests across multiple nodes to handle high volumes of traffic, commonly used in web services and application hosting to ensure even resource utilization. High-availability (HA) clusters provide redundancy and failover mechanisms, automatically switching to backup nodes during failures to maintain continuous operation for critical applications.

Among specialized types, Beowulf clusters represent a cost-effective approach to HPC, utilizing off-the-shelf commodity hardware interconnected via standard networks to form scalable parallel systems without proprietary components. Storage clusters focus on distributed file systems for managing large-scale data, exemplified by Apache Hadoop's HDFS, which replicates data across nodes for fault-tolerant, parallel access in big data environments. Database clusters employ techniques like sharding to partition data horizontally across nodes, enabling scalable query processing and storage for relational or NoSQL databases handling massive datasets.
Emerging types include container orchestration clusters, such as those managed by Kubernetes, which automate the deployment, scaling, and networking of containerized applications across a fleet of nodes for microservices architectures. Additionally, AI and machine learning (AI/ML) training clusters are optimized for GPU parallelism, leveraging data parallelism—where model replicas process different data subsets—or model parallelism—where model components are distributed across devices—to accelerate training of large neural networks.
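The data-parallelism scheme described above can be sketched in a few lines: each replica computes gradients on its own data shard, the gradients are averaged (the role an all-reduce plays in practice), and every replica then applies the identical update. The parameter and gradient values here are illustrative:

```python
def data_parallel_step(params, grads_per_replica, lr=0.1):
    """One synchronous data-parallel update: average the gradients
    computed by each replica on its data shard, then apply the same
    update so all model copies stay identical."""
    n = len(grads_per_replica)
    avg = [sum(g[i] for g in grads_per_replica) / n
           for i in range(len(params))]
    return [p - lr * g for p, g in zip(params, avg)]

# Two replicas, each holding a gradient for the same 2-parameter model.
new_params = data_parallel_step([1.0, 2.0], [[0.2, 0.4], [0.6, 0.0]])
```

Because every replica receives the same averaged gradient, the model copies never diverge; model parallelism, by contrast, splits the parameters themselves across devices.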

Historical Development

Early Innovations

The roots of computer clustering trace back to early multiprogramming systems in the early 1960s, which laid the groundwork for resource sharing and parallel execution concepts essential to later distributed architectures. The Atlas computer, developed at the University of Manchester and operational from 1962, introduced virtual memory and multiprogramming capabilities, allowing multiple programs to run concurrently on a single machine and influencing subsequent designs for scalable computing environments. Similarly, the Burroughs B5000, released in 1961, featured hardware support for multiprogramming and stack-based processing, enabling efficient task switching and serving as a precursor to clustered configurations in its later iterations like the B5700, which supported up to four interconnected systems.

In the 1970s and 1980s, advancements in distributed systems and networking propelled clustering toward practical networked implementations, particularly for scientific applications. At Xerox PARC, researchers developed the Alto personal computer in 1973 as part of a vision for distributed personal computing, where multiple workstations collaborated over a local network, fostering innovations in resource pooling across machines. The introduction of Ethernet in 1973 by Robert Metcalfe at Xerox PARC provided a foundational networking protocol for high-speed, shared-medium communication, enabling the interconnection of computers into clusters without proprietary hardware. NASA employed parallel processing systems during this era for demanding space simulations and data processing, such as the Ames Research Center's Illiac-IV starting in the 1970s, an early massively parallel array processor used for complex aerodynamic and orbital computations.

The 1990s marked a pivotal shift with the emergence of affordable, commodity-based clusters, democratizing high-performance computing.
The Beowulf project, initiated in 1993 by researchers Thomas Sterling and Donald Becker at NASA's Goddard Space Flight Center, demonstrated a prototype cluster of off-the-shelf PCs interconnected via Ethernet, achieving parallel processing performance rivaling specialized supercomputers at a fraction of the cost. This approach spurred the development of the first terascale clusters by the late 1990s, where ensembles of hundreds of standard processors delivered sustained teraflops of computational power for scientific workloads. These innovations were primarily motivated by the need to reduce costs compared to expensive mainframes and vector supercomputers, fueled by Moore's law, which predicted the doubling of transistor density roughly every two years, driving down hardware prices and making scalable clustering economically viable.

Modern Evolution

The 2000s marked the rise of grid computing, which enabled the aggregation of distributed computational resources across geographically dispersed systems to tackle large-scale problems previously infeasible on single machines. This era also saw the emergence of early cloud prototypes, such as Amazon Web Services' Elastic Compute Cloud (EC2) launched in 2006, which provided on-demand virtualized clusters foreshadowing scalable infrastructure-as-a-service models. Milestones in the TOP500 list highlighted cluster advancements, with IBM's Blue Gene/L topping the ranking in November 2004 at 70.7 teraflops Rmax, establishing a benchmark for scalable, low-power cluster designs that paved the way for petaflop-scale performance by the decade's end.

In the 2010s, computer clusters evolved toward hybrid architectures, integrating on-premises systems with public cloud resources to enhance flexibility and resource bursting for high-performance workloads. Containerization revolutionized cluster management, beginning with Docker's open-source release in March 2013, which simplified application packaging and deployment across distributed environments. This was complemented by Kubernetes, introduced by Google in June 2014 as an orchestration platform for automating container scaling and operations in clusters. The proliferation of GPU-accelerated clusters for deep learning gained traction, exemplified by NVIDIA's DGX systems launched in 2016, which integrated multiple GPUs into cohesive units optimized for AI training and inference tasks.

The 2020s brought exascale computing to fruition, with the Frontier supercomputer at Oak Ridge National Laboratory achieving 1.102 exaflops Rmax in May 2022, becoming the world's first recognized exascale system and demonstrating cluster scalability beyond 8 million cores. Subsequent systems like Aurora at Argonne National Laboratory (2023) and El Capitan at Lawrence Livermore National Laboratory (2024, 1.742 exaflops Rmax as of November 2024) further advanced exascale capabilities.
Amid growing concerns over data center energy consumption contributing to carbon emissions—estimated to account for 1-1.5% of global electricity use—designs increasingly emphasized efficiency, as seen in Frontier's 52.73 gigaflops/watt performance, 32% better than its predecessor. Edge clusters emerged as a key adaptation for Internet of Things (IoT) applications, distributing processing closer to data sources to reduce latency and bandwidth demands in real-time scenarios like smart cities and industrial monitoring. Key trends shaping modern clusters include open-source standardization efforts, such as OpenStack's initial release in 2010, which facilitated interoperable cloud-based cluster management and has since supported hybrid deployments. The COVID-19 pandemic accelerated remote access to high-performance computing (HPC) resources, with international collaborations leveraging virtualized clusters for accelerated drug discovery and epidemiological modeling. Looking ahead, projections indicate the integration of quantum-hybrid clusters by 2030, combining classical nodes with quantum processors to address optimization problems intractable for current systems, driven by advancements from quantum hardware vendors.

Key Characteristics

Performance and Scalability

Performance in computer clusters is primarily evaluated using metrics that capture computational throughput, data movement, and response times. FLOPS (floating-point operations per second) quantifies the raw arithmetic processing capacity, with modern clusters achieving exaFLOPS scales for scientific simulations. Bandwidth measures inter-node data transfer rates, often exceeding 100 GB/s in high-end interconnects like InfiniBand to support parallel workloads, while latency tracks communication delays, typically in the microsecond range, which can bottleneck tightly coupled applications. For AI-oriented clusters, TOPS (tera operations per second) serves as a key metric, evaluating efficiency in matrix multiplications and inferences; systems like NVIDIA's DGX deliver up to 1,000 TOPS at low-precision formats to handle large-scale models. Scalability assesses how clusters handle increasing computational demands, distinguishing between strong and weak regimes. Strong scaling maintains a fixed problem size while adding processors, yielding speedup governed by Amdahl's law, which limits gains due to inherently serial components:
S = \frac{1}{f + \frac{1 - f}{p}}
where S is the speedup, f the serial fraction of the workload, and p the number of processors; for instance, with f = 0.05 and p = 100, S \approx 16.8, illustrating diminishing returns as processors increase, compounded in practice by communication overhead. Weak scaling proportionally enlarges the problem size with processors, aligning with Gustafson's law for more optimistic growth:
S = p - f(p - 1)
where speedup approaches p for small f, enabling near-linear scaling in tasks like climate modeling, though communication overhead remains a primary bottleneck in distributed clusters.
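Both scaling laws are straightforward to evaluate numerically; the sketch below reproduces the worked example above (f = 0.05, p = 100) and contrasts it with the weak-scaling prediction:

```python
def amdahl_speedup(f, p):
    """Strong scaling: fixed problem size, f = serial fraction,
    p = processor count. S = 1 / (f + (1 - f) / p)."""
    return 1.0 / (f + (1.0 - f) / p)

def gustafson_speedup(f, p):
    """Weak scaling: problem size grows with p. S = p - f * (p - 1)."""
    return p - f * (p - 1)

# The worked example: 5% serial work on 100 processors.
s_strong = amdahl_speedup(0.05, 100)    # ~16.8
s_weak = gustafson_speedup(0.05, 100)   # 95.05
```

The gap between 16.8 and 95.05 for the same serial fraction shows why weak scaling is the more forgiving regime for large clusters.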
Efficiency metrics further contextualize cluster performance by evaluating resource and energy utilization. Cluster utilization rates, defined as the fraction of allocated compute time actively used, often hover below 50% for CPUs in GPU-accelerated jobs, with around 15% idle GPU time reported across workloads, highlighting opportunities for better job scheduling to maximize throughput. Power usage effectiveness (PUE), calculated as the ratio of total facility power to IT equipment power, benchmarks energy efficiency; efficient HPC data centers achieve PUE values of 1.2 or lower, with leading facilities like NREL's ESIF reaching 1.036 annually, minimizing overhead from cooling and power delivery. Node homogeneity, where all compute nodes share identical specifications, enhances overall performance by ensuring balanced load distribution and reducing inconsistencies that degrade efficiency in heterogeneous setups.
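PUE as defined above is a simple ratio; a minimal sketch with hypothetical facility numbers makes the overhead explicit:

```python
def pue(total_facility_kw, it_kw):
    """Power usage effectiveness: total facility power over IT power.
    1.0 is the ideal; everything above it is cooling, power
    conversion, and lighting overhead."""
    return total_facility_kw / it_kw

def overhead_kw(total_facility_kw, it_kw):
    """Power spent on everything except computation."""
    return total_facility_kw - it_kw

# Hypothetical 1.2 MW facility whose IT load draws 1.0 MW.
ratio = pue(1200.0, 1000.0)           # 1.2
waste = overhead_kw(1200.0, 1000.0)   # 200 kW of non-IT draw
```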

Reliability and Efficiency

Reliability in computer clusters is fundamentally tied to metrics such as mean time between failures (MTBF), which quantifies the average operational uptime before a component fails, often measured in hours for individual components but scaling down significantly in large systems due to the increased failure probability across thousands of components. In practice, MTBF for cluster platforms can drop to minutes or seconds at exascale, prompting designs that incorporate redundancy levels like N+1 configurations, where one extra unit (e.g., a power supply or node) ensures continuity if a primary fails, minimizing downtime without full duplication. Checkpointing mechanisms further enhance fault tolerance by periodically saving job states to stable storage, enabling recovery from failures with minimal recomputation; for instance, coordinated checkpointing in parallel applications can restore progress after crashes, though it introduces I/O overhead that must be balanced against failure rates.

Efficiency in clusters encompasses energy consumption models, such as floating-point operations per second (FLOPS) per watt, which measures computational output relative to power draw and has improved dramatically in high-performance computing (HPC) systems. Leading examples include the JEDI supercomputer, achieving 72.7 GFlops/W through efficient architectures like NVIDIA Grace Hopper Superchips, highlighting how specialized hardware boosts energy proportionality. Cooling strategies play a major role, with air-based systems consuming up to 40% of total energy, while liquid cooling reduces this by directly dissipating heat from components, enabling higher densities and lower overall usage in dense clusters. Virtualization, used for resource isolation, incurs overheads of 5-15% in performance and efficiency due to hypervisor layers, though lightweight alternatives like containers mitigate this in cloud-based clusters. Balancing node count with interconnect costs presents key trade-offs, as adding nodes enhances parallelism but escalates expenses for high-bandwidth fabrics like InfiniBand, potentially limiting scalability if latency rises disproportionately.
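The interplay between system-level MTBF and checkpointing frequency can be made concrete. The sketch below uses the common 1/N approximation for the MTBF of a job spanning N identical, independent nodes, and Young's first-order formula for the checkpoint interval; the node count, per-node MTBF, and checkpoint cost are illustrative:

```python
import math

def system_mtbf(component_mtbf_hours, n_components):
    """With independent, identical components, a job spanning all of
    them fails when any one does, so its MTBF shrinks roughly as 1/N."""
    return component_mtbf_hours / n_components

def young_checkpoint_interval(checkpoint_cost_hours, mtbf_hours):
    """Young's first-order approximation for the optimal time between
    checkpoints: sqrt(2 * checkpoint_cost * MTBF)."""
    return math.sqrt(2 * checkpoint_cost_hours * mtbf_hours)

# 10,000 nodes, each with a 5-year (~43,800 h) MTBF.
mtbf = system_mtbf(43_800, 10_000)                # ~4.4 h for the job
interval = young_checkpoint_interval(0.1, mtbf)   # checkpoint roughly hourly
```

A six-minute checkpoint against a 4.4-hour system MTBF lands the optimal interval near one hour, showing how quickly checkpoint I/O becomes a standing tax at scale.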
Green computing initiatives address these concerns by promoting sustainability; post-2020, the EU Green Deal has influenced data centers through directives mandating energy-efficiency and transparency measures, aiming to cut sector emissions that contribute about 1% globally. Carbon footprint calculations for clusters factor in operational emissions from power sources and embodied carbon from hardware, with models estimating total impacts via location-specific energy mixes; integration of renewables, such as solar or wind power, can reduce this by up to 90% in hybrid setups, as demonstrated in frameworks optimizing workload scheduling around variable supply.
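Operational carbon footprint reduces to energy drawn times the grid's carbon intensity; the sketch below (illustrative power draw and intensity values) shows how a cleaner energy mix cuts emissions roughly tenfold, in line with the up-to-90% reductions cited above:

```python
def operational_co2_kg(power_kw, hours, grid_kg_per_kwh):
    """Operational emissions: energy consumed times the local grid's
    carbon intensity (kg CO2 per kWh, which varies by region and hour)."""
    return power_kw * hours * grid_kg_per_kwh

# A 500 kW cluster running for one month (~730 h) on two energy mixes.
coal_heavy = operational_co2_kg(500, 730, 0.7)    # ~255,500 kg CO2
renewable = operational_co2_kg(500, 730, 0.07)    # ~25,550 kg CO2
```

Embodied carbon from manufacturing would sit on top of these figures; full-lifecycle models add it as a fixed cost amortized over the hardware's service life.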

Advantages and Applications

Core Benefits

Computer clusters provide substantial economic advantages by utilizing commercial off-the-shelf (COTS) hardware, which leverages economies of scale and standardization to significantly lower acquisition and maintenance costs compared to custom-built supercomputers. This approach allows organizations to assemble high-performance systems from readily available components, reducing overall infrastructure expenses while maintaining reliability through proven technologies. Furthermore, clusters support incremental scalability, enabling the addition of nodes without necessitating a complete overhaul, which optimizes capital expenditure over time.

On the functional side, clusters enhance availability by redistributing workloads across nodes in the event of a node failure, achieving levels such as 99.999% uptime essential for mission-critical operations. For parallelizable tasks, they offer linear performance scaling, where computational throughput increases proportionally with the number of added nodes under ideal conditions, maximizing resource utilization. This attribute allows clusters to handle growing demands efficiently without proportional increases in complexity.

Broader impacts include the democratization of high-performance computing (HPC), empowering small organizations to access powerful resources previously limited to large institutions through affordable deployments in cloud environments. Clusters also provide flexibility for dynamic workloads by allocating resources across nodes on demand, adapting to varying computational needs in real time. In modern contexts, edge clusters reduce latency by processing data locally at the network periphery, minimizing transmission delays for time-sensitive applications. Additionally, cloud bursting models enable cost-effective scaling during peak loads by temporarily extending on-premises clusters to public clouds using pay-as-you-go pricing, avoiding overprovisioning while controlling expenses.
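Availability figures like the 99.999% ("five nines") cited above translate directly into annual downtime budgets; a two-line sketch:

```python
def annual_downtime_minutes(availability):
    """Minutes of downtime per year implied by an availability target,
    assuming a 365-day year."""
    return (1.0 - availability) * 365 * 24 * 60

five_nines = annual_downtime_minutes(0.99999)   # ~5.26 minutes/year
three_nines = annual_downtime_minutes(0.999)    # ~525.6 minutes/year
```

Each additional nine shrinks the budget tenfold, which is why five-nines targets generally require automatic failover rather than manual intervention.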

Real-World Use Cases

Computer clusters play a pivotal role in scientific computing, particularly for computationally intensive tasks like weather modeling and genomic analysis. The European Centre for Medium-Range Weather Forecasts (ECMWF) employs a supercomputing facility comprising four clusters with 7,680 compute nodes and over 1 million cores to perform high-resolution numerical weather predictions, enabling accurate forecasts by processing vast datasets of atmospheric data. In bioinformatics, clusters facilitate the parallelization of next-generation sequencing (NGS) pipelines, where raw sequencing data is processed into annotated genomes using distributed resources to handle the high volume of reads generated in large-scale studies.

In commercial applications, clusters underpin web hosting, finance, and analytics. Google's search engine relies on massive clusters of commodity PCs to manage the enormous workload of indexing and querying the web, ensuring low-latency responses through fault-tolerant software architectures. The financial sector benefits from high-performance computing (HPC) clusters to simulate complex economic scenarios. Similarly, Netflix leverages GPU-based clusters for training models in its recommendation engine, processing petabytes of user data to personalize content delivery at scale.

Emerging uses of clusters extend to artificial intelligence, autonomous systems, and distributed ledgers. Training large language models requires GPU clusters scaled to tens of thousands of accelerators for efficient end-to-end model optimization. In autonomous vehicle development, simulation platforms on HPC clusters replicate real-world driving conditions, enabling safe validation of AI-driven navigation through digital twins before physical deployment. For blockchain validation, cluster-based protocols enhance consensus mechanisms, such as random cluster practical Byzantine fault tolerance (RC-PBFT), which reduces communication overhead and improves block propagation efficiency in decentralized networks.
Post-2020 developments highlight clusters' role in addressing global challenges, including pandemic modeling and sustainable energy simulations. During the COVID-19 crisis, HPC clusters like those at Oak Ridge National Laboratory's Summit supercomputer powered drug discovery pipelines, screening millions of compounds via ensemble docking to accelerate therapeutic development. In sustainable energy, the National Renewable Energy Laboratory (NREL) utilizes HPC facilities to support 427 modeling projects in FY2024, simulating grid integration for renewables like wind and solar to optimize energy efficiency and reliability.

Architecture and Design

Hardware Components

Computer clusters are composed of multiple interconnected nodes, each serving distinct roles to enable parallel processing and data handling. Compute nodes form the core of the cluster, equipped with high-performance central processing units (CPUs), graphics processing units (GPUs) for accelerated workloads, random-access memory (RAM), and local storage to execute computational tasks. These nodes are typically rack-mount servers designed for dense packing in data center environments, allowing scalability through the addition of identical or similar units. Storage nodes, often integrated with compute nodes or dedicated, handle data persistence and access, while head or management nodes oversee cluster coordination, job scheduling, and monitoring without participating in heavy computation.

The storage hierarchy in clusters balances speed, capacity, and accessibility. Local storage on individual nodes, such as hard disk drives (HDDs) or solid-state drives (SSDs), provides fast access for temporary data but lacks sharing across nodes. Shared storage solutions like network-attached storage (NAS) offer file-level access over networks for collaborative environments, whereas storage area networks (SANs) deliver block-level access for high-throughput demands in enterprise settings. Modern clusters increasingly adopt SSDs and Non-Volatile Memory Express (NVMe) interfaces to reduce latency and boost I/O throughput, enabling NVMe-over-Fabrics (NVMe-oF) for efficient shared storage in distributed systems.

Power and cooling systems are critical for maintaining hardware reliability in dense configurations. Rack densities in high-performance computing (HPC) clusters can reach 100-140 kW per rack for AI workloads as of 2025, necessitating redundant power supply units (PSUs) configured in N+1 setups to ensure continuity without downtime. Cooling strategies, including air-based and liquid immersion, address heat dissipation from high-density racks, with liquid cooling supporting up to 200 kW per rack for sustained operation.
Efficiency trends post-2020 include ARM-based nodes like the Ampere Altra processors, which provide up to 128 cores per socket with lower power consumption compared to traditional x86 architectures, optimizing for power-constrained environments. Heterogeneous hardware integration enhances cluster versatility for specialized tasks. Field-programmable gate arrays (FPGAs) are incorporated as accelerator nodes alongside CPUs and GPUs, offering reconfigurable logic for low-latency applications such as network packet processing or cryptography, thereby improving efficiency in mixed workloads. This approach allows clusters to scale hardware resources dynamically, adapting to diverse computational needs without uniform node designs.

Network and Topology Design

In computer clusters, the network serves as the critical interconnect linking compute nodes, enabling efficient data exchange and collective operations essential for parallel computing. Design choices in interconnect types and topologies directly influence overall system performance, balancing factors such as throughput, latency, and cost to meet the demands of high-performance computing (HPC) and AI workloads.

Common network interconnects for clusters include Ethernet, InfiniBand, and Omni-Path, each offering distinct trade-offs in bandwidth and latency. Gigabit and 10 Gigabit Ethernet provide cost-effective, standards-based connectivity suitable for general-purpose clusters, delivering up to 10 Gbps per link with latencies around 5-10 microseconds, though they may introduce higher overhead due to protocol processing. In contrast, InfiniBand excels in low-latency environments, achieving sub-microsecond latencies and bandwidths up to 400 Gbps (NDR) or 800 Gbps (XDR) per port as of 2025, making it ideal for tightly coupled HPC applications where rapid message passing is paramount. Omni-Path, originally developed by Intel and continued by Cornelis Networks, targets similar HPC needs with latencies under 1 microsecond and bandwidths reaching up to 400 Gbps as of 2025, emphasizing high message rates for large-scale simulations while offering better power efficiency than InfiniBand in some configurations. These trade-offs arise because Ethernet prioritizes broad compatibility and lower cost at the expense of latency, whereas InfiniBand and Omni-Path optimize for minimal overhead in bandwidth-intensive scenarios, often at higher deployment expenses.

Cluster topologies define how these interconnects are arranged to minimize contention and maximize aggregate bandwidth. The fat-tree topology, a multi-level switched hierarchy, is prevalent in HPC clusters for its ability to provide non-blocking communication through redundant paths, ensuring full bisection bandwidth where the total capacity between any two node sets equals the aggregate endpoint bandwidth.
In a fat-tree, leaf switches connect directly to nodes, while aggregation and core switches consolidate uplinks, scaling efficiently to thousands of nodes without degradation. Mesh topologies, by comparison, employ direct or closely connected links between nodes, offering simplicity and low latency for smaller clusters but potentially higher complexity and wiring costs at scale. Torus topologies, often used in supercomputing, form a grid-like structure with wrap-around connections in multiple dimensions, providing regular, predictable paths that support efficient nearest-neighbor communication in scientific simulations, though they may underutilize bandwidth in irregular traffic patterns. Switch fabrics in these topologies, such as Clos networks underlying fat-trees, enable non-blocking operation by oversubscribing ports judiciously to avoid hotspots.

Key design considerations include bandwidth allocation to prevent bottlenecks, quality of service (QoS) mechanisms for mixed workloads, and support for remote direct memory access (RDMA) to achieve low-latency transfers. In fat-tree or Clos designs, bandwidth is allocated hierarchically, with higher-capacity links at aggregation levels to match traffic volumes, ensuring equitable distribution across nodes. QoS features, such as priority queuing and congestion notification, prioritize latency-sensitive tasks like AI training over bulk transfers in heterogeneous environments. RDMA enhances this by allowing direct memory-to-memory transfers over the network, bypassing CPU involvement to reduce latency to under 2 microseconds and boost effective throughput in bandwidth-allocated paths.

Recent advancements address escalating demands in AI-optimized clusters, including 800G Ethernet and emerging 1.6 Tbps standards for scalable, high-throughput fabrics alongside 400G Ethernet, and NVLink for intra- and inter-node GPU connectivity.
400G Ethernet extends traditional Ethernet's reach into HPC by delivering 400 Gbps per port with RDMA over Converged Ethernet (RoCE), enabling non-blocking topologies in large-scale deployments while maintaining compatibility with existing infrastructure. NVLink, NVIDIA's high-speed interconnect, provides up to 1.8 TB/s (1800 GB/s) bidirectional bandwidth per GPU for recent generations like Blackwell as of 2025, extending via NVLink switches for all-to-all communication across clusters, optimizing AI workloads by minimizing data movement latency in multi-GPU fabrics.
Interconnect Type   | Typical Bandwidth (per port) | Latency (microseconds) | Primary Use Case
Ethernet (10G-800G) | 10-800 Gbps                  | 2-5                    | Cost-effective scaling in mixed HPC/AI
InfiniBand          | 100-800 Gbps                 | <1                     | Low-latency HPC simulations
Omni-Path           | 100-400 Gbps                 | <1                     | High-message-rate large-scale computing
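Fat-tree capacity follows a closed form: a three-level fat-tree built from identical k-port switches supports k^3/4 hosts at full bisection bandwidth, with k pods and (k/2)^2 core switches. A short sketch of the standard switch counts:

```python
def fat_tree_capacity(k):
    """Switch and host counts for a three-level fat-tree built entirely
    from k-port switches (k must be even). Each of the k pods has k/2
    edge and k/2 aggregation switches; each edge switch serves k/2 hosts."""
    assert k % 2 == 0, "fat-tree radix must be even"
    return {
        "pods": k,
        "hosts": k ** 3 // 4,
        "edge_switches": k * (k // 2),
        "aggregation_switches": k * (k // 2),
        "core_switches": (k // 2) ** 2,
    }

# 48-port switches, a common data center building block.
spec = fat_tree_capacity(48)
```

With commodity 48-port switches this yields 27,648 hosts from identical hardware at every tier, which is precisely the economy that makes the topology attractive.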

Data and Communication

Shared Storage Methods

Shared storage methods in computer clusters enable multiple nodes to access and manage data collectively, facilitating high-performance computing and distributed applications by providing a unified view of storage resources. These methods typically involve network-attached or fabric-based architectures that abstract underlying hardware, allowing scalability while addressing data locality and access latency. Centralized approaches, such as Storage Area Networks (SANs), connect compute nodes to a dedicated pool of storage devices via high-speed fabrics like Fibre Channel, offering block-level access suitable for databases and virtualized environments. In contrast, distributed architectures spread storage across cluster nodes, enhancing fault tolerance and parallelism through software-defined systems.

Key file system protocols include the Network File System (NFS), which provides a client-server model for mounting remote directories over TCP/IP, enabling seamless file sharing in clusters but often limited by single-server bottlenecks in large-scale deployments. For parallel access, the Parallel Virtual File System (PVFS) stripes data across multiple disks and nodes, supporting collective I/O operations that improve throughput for scientific workloads on Linux clusters. Distributed object-based systems like Ceph employ a RADOS (Reliable Autonomic Distributed Object Store) layer to manage self-healing storage pools, presenting data via block, file, or object interfaces with dynamic metadata distribution. Similarly, GlusterFS aggregates local disks into a scale-out namespace using elastic hashing for file distribution, ideal for unstructured data in cloud environments without a central metadata server. In big data ecosystems, the Hadoop Distributed File System (HDFS) replicates large files across nodes for fault tolerance, optimizing for sequential streaming reads in MapReduce jobs.
Consistency models in these systems balance availability and performance, with strong consistency ensuring linearizable operations where reads reflect the latest writes across all nodes, as seen in SANs and NFS with locking mechanisms. Eventual consistency, prevalent in some distributed filesystems and object stores, allows temporary divergences resolved through background synchronization, prioritizing scalability for write-heavy workloads. These models trade off strict ordering for higher throughput, with applications selecting based on tolerance for staleness.

Challenges in shared storage include I/O bottlenecks arising from network contention and metadata overhead, which can degrade performance in high-concurrency scenarios; mitigation often involves striping and caching strategies. Data replication enhances fault tolerance by maintaining multiple copies across nodes, as in HDFS's default three-replica policy or Ceph's CRUSH algorithm for placement, but increases storage overhead and synchronization costs. Object storage addresses unstructured data like media and logs by treating files as immutable blobs with rich metadata, enabling efficient scaling in systems like Ceph without hierarchical directories.

Emerging trends include serverless storage in cloud clusters, where elastic object stores like InfiniStore decouple compute from provisioned capacity, automatically scaling for bursty workloads via stateless functions. Integration of NVMe-over-Fabrics (NVMe-oF) extends low-latency NVMe semantics over Ethernet or InfiniBand, reducing protocol overhead in disaggregated clusters for up to 10x bandwidth improvements in remote access.
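The metadata-free placement idea behind hash-based distribution (as in GlusterFS's elastic hashing or, more elaborately, Ceph's CRUSH) can be sketched with a simple hash ring. This is an illustration of the concept, not any system's actual algorithm; the node names and path are hypothetical:

```python
import hashlib

def replica_nodes(path, nodes, replicas=3):
    """Hash-based placement: the file path deterministically selects a
    starting node, and the remaining replicas go to the next nodes in
    ring order, so any client can locate data without consulting a
    central metadata server."""
    digest = hashlib.sha256(path.encode()).hexdigest()
    start = int(digest, 16) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

nodes = ["n0", "n1", "n2", "n3", "n4"]
placement = replica_nodes("/data/genome.fa", nodes)
```

Because placement is a pure function of the path and the node list, every client computes the same answer independently, eliminating the single-server metadata bottleneck noted above at the cost of rebalancing when the node list changes.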

Message-Passing Protocols

Message-passing protocols enable inter-node communication in computer clusters by facilitating the exchange of data between processes running on distributed nodes, typically over high-speed networks. These protocols abstract the underlying hardware, allowing developers to implement parallel algorithms without direct management of low-level network details. The primary standards for such communication are the Message Passing Interface (MPI) and the earlier Parallel Virtual Machine (PVM), which have shaped cluster computing since the 1990s. MPI is the de facto standard for message-passing in parallel computing, initially released in version 1.0 in 1994 by the MPI Forum, a consortium of over 40 organizations including academic institutions and vendors. Subsequent versions expanded its capabilities: MPI-1.1 (1995) refined the initial specification; MPI-2.0 (1997) introduced remote memory operations and dynamic process management; MPI-2.1 (2008) and MPI-2.2 (2009) consolidated and clarified the standard; MPI-3.0 (2012) added non-blocking collectives and enhanced one-sided communication; MPI-3.1 (2015) provided corrections and minor additions; MPI-4.0 (2021) introduced partitioned communication, persistent collectives, and session-based initialization, improving support for heterogeneous systems; MPI-4.1 (2023) provided corrections and clarifications; and MPI-5.0 (2025) is the latest major revision, further extending support for scalable and heterogeneous environments. MPI supports both point-to-point and collective operations, with semantics ensuring portability across diverse cluster architectures. In contrast, PVM, developed in the early 1990s at Oak Ridge National Laboratory, provided a framework for heterogeneous networked computing by treating a cluster as a single virtual machine.
PVM version 3, released in 1993, offered primitives for task spawning, messaging, and synchronization, but it was superseded by MPI owing to the latter's standardization and performance advantages; PVM's last major update was around 2000, and it is now largely of archival interest. MPI's core paradigms distinguish between point-to-point operations, which involve direct communication between two processes, and collective operations, which coordinate multiple processes for efficient group-wide data exchange. Point-to-point operations include blocking sends (e.g., MPI_Send), which return only once the send buffer can safely be reused, and non-blocking variants (e.g., MPI_Isend) that return immediately to allow overlap with computation. Collective operations, such as broadcast (MPI_Bcast) for distributing data from one process to all others or reduce (MPI_Reduce) for aggregating results (e.g., sum or maximum), require all processes in a communicator to participate and are optimized for topology-aware execution to minimize latency. Messaging in MPI can be synchronous or asynchronous, impacting performance and synchronization. Synchronous modes (e.g., MPI_Ssend) complete only after the receiver has posted a matching receive, providing rendezvous semantics that avoid buffer overflows but can introduce stalls. Asynchronous modes decouple sending from completion, using request handles tested with MPI_Test or completed with MPI_Wait, which enables better overlap in latency-bound clusters but requires careful management to prevent deadlocks. Popular open-source implementations of MPI include Open MPI and MPICH, both tracking recent revisions of the standard and supporting advanced features like fault tolerance and GPU integration. Open MPI, initiated in 2004 by a consortium including Cisco and IBM, emphasizes modularity via its Modular Component Architecture (MCA) for runtime plugin selection, achieving up to 95% of native network bandwidth in benchmarks.
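The group-wide combining behind collectives such as MPI_Reduce is typically organized as a tree, so P processes finish in O(log P) communication rounds rather than P-1 sequential steps. The sketch below models that pattern in plain Python, with a list standing in for per-rank values; it illustrates the algorithmic shape only and is not MPI itself:

```python
def tree_reduce(values, op):
    """Model of a collective reduce (cf. MPI_Reduce): pairwise combining
    over a binomial tree in log2(P) rounds. Index i stands in for rank i;
    a real implementation exchanges messages instead of list elements."""
    ranks = list(values)
    step = 1
    while step < len(ranks):
        for i in range(0, len(ranks) - step, 2 * step):
            # "Rank" i receives from rank i+step and combines locally.
            ranks[i] = op(ranks[i], ranks[i + step])
        step *= 2
    return ranks[0]  # the root (rank 0) holds the final result

# Summing one value per rank across 8 "processes":
# tree_reduce([1, 2, 3, 4, 5, 6, 7, 8], lambda a, b: a + b) -> 36
```

With 8 ranks the result is ready after 3 rounds instead of 7, which is why topology-aware tree and ring schedules dominate collective implementations.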
MPICH, originating from Argonne National Laboratory in 1993, prioritizes portability and performance, with derivatives like Intel MPI widely used in many top supercomputers, including several in the TOP500 list as of 2023. Overhead in these implementations varies significantly with message size: for small messages (<1 KB), latency dominates due to protocol setup and synchronization, often adding 1-2 μs of non-data communication cost even on fast interconnects, limiting throughput to thousands of messages per second. For large messages (>1 MB), bandwidth utilization prevails, with overheads below 5% on optimized paths, enabling gigabytes-per-second transfers that remain sensitive to network contention. These characteristics guide algorithm design, favoring collectives for small data dissemination to amortize setup costs. In modern AI and GPU-accelerated clusters, the NVIDIA Collective Communications Library (NCCL), released in 2017, extends MPI-like collectives to multi-GPU environments, supporting operations like all-reduce optimized for NVLink and InfiniBand topologies, with up to 10x speedup over CPU-based MPI for deep-learning workloads. NCCL integrates with MPI via bindings, allowing hybrid CPU-GPU messaging at scales exceeding 1,000 GPUs.
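The small-versus-large message behavior described above is commonly summarized by the alpha-beta cost model, in which a transfer costs a fixed latency term plus a bandwidth term proportional to size. The parameter values below are illustrative, not measurements of any particular network:

```python
def transfer_time(size_bytes: float, alpha_s: float, beta_s_per_byte: float) -> float:
    """Classic alpha-beta model: time = alpha (per-message latency)
    + beta * size (inverse bandwidth)."""
    return alpha_s + beta_s_per_byte * size_bytes

# Illustrative link: 1.5 us per-message latency, 25 GB/s bandwidth.
alpha, beta = 1.5e-6, 1 / 25e9
small = transfer_time(1024, alpha, beta)      # latency term dominates
large = transfer_time(1 << 30, alpha, beta)   # bandwidth term dominates
# Effective bandwidth achieved by the 1 KiB message sits far below the
# link rate, which is why small messages are batched or aggregated:
eff_small = 1024 / small
```

Setting the two terms equal gives a crossover size of roughly alpha/beta; messages well below it are latency-bound and well above it are bandwidth-bound, matching the 1 KB and 1 MB regimes discussed above.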

Management and Operations

Resource Allocation and Scheduling

Resource allocation and scheduling in computer clusters involve the systematic distribution of computational tasks across multiple nodes to optimize resource utilization, minimize wait times, and ensure efficient workload execution. This process is critical for handling diverse workloads in high-performance computing (HPC) environments, where resources like CPU cores, memory, and GPUs must be dynamically assigned to jobs submitted by users or applications. Effective scheduling balances competing demands from multiple users in multi-tenant setups, preventing bottlenecks and maximizing throughput. Scheduling in clusters is broadly categorized into batch and interactive types. Batch scheduling manages non-interactive jobs queued for execution, such as scientific simulations or data-analysis tasks, where jobs are submitted in advance and processed without user intervention. Interactive scheduling, in contrast, supports live user sessions, allowing immediate resource access for development or testing, often prioritizing low-latency responses over long-running computations. Common scheduling policies include First-Come-First-Served (FCFS), which processes jobs in submission order to ensure fairness but can lead to inefficiencies with long-running tasks blocking shorter ones; priority-based policies, which assign higher precedence to critical jobs based on user roles or deadlines; and fair-share policies, which allocate resources proportionally to historical usage to promote equitable access among users or groups over time. These policies are often combined in modern systems to address varying workload priorities. Key algorithms for resource allocation include gang scheduling, which coordinates the simultaneous allocation of resources to all processes of a parallel job across nodes to reduce overhead and improve efficiency for tightly coupled applications like MPI-based programs.
Bin-packing heuristics, inspired by the classic bin-packing problem, treat resources as bins and jobs as items to be packed, using approximations like First-Fit Decreasing to match job requirements to available node capacities while minimizing fragmentation. These approaches enhance packing density, particularly in heterogeneous clusters. Prominent tools for cluster scheduling include SLURM (Simple Linux Utility for Resource Management), a widely adopted open-source batch scheduler for HPC that supports advanced features like resource reservations and job arrays, handling millions of cores in supercomputers. PBS (Portable Batch System) and its derivative PBS Professional provide flexible job queuing with support for multi-cluster environments, emphasizing portability across systems. For containerized workloads, Kubernetes employs a scheduler that uses priority and affinity rules to place pods on nodes, enabling dynamic scaling in cloud-native clusters. These tools facilitate dynamic allocation, where resources are provisioned on-demand based on workload demands. Recent advancements incorporate AI-driven techniques, such as reinforcement learning (RL) optimizers, to enhance scheduling decisions in multi-tenant environments by learning from historical data to predict and mitigate scheduling inefficiencies. For example, reinforcement-learning approaches have demonstrated at least 20% reductions in average job completion times in large-scale clusters. These methods address multi-tenant efficiency by optimizing for metrics like average response time and resource utilization without predefined policies.
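A minimal sketch of the First-Fit Decreasing heuristic mentioned above, with jobs and nodes reduced to a single abstract resource dimension (real schedulers pack multiple dimensions such as cores, memory, and GPUs simultaneously):

```python
def first_fit_decreasing(jobs: list[int], capacity: int) -> list[list[int]]:
    """First-Fit Decreasing bin packing: sort jobs by demand (largest
    first), place each into the first node with enough slack, and open a
    new node otherwise. Demands and capacity are in abstract units."""
    nodes: list[list[int]] = []   # jobs placed on each node
    free: list[int] = []          # remaining capacity per node
    for job in sorted(jobs, reverse=True):
        for i, slack in enumerate(free):
            if job <= slack:
                nodes[i].append(job)
                free[i] -= job
                break
        else:
            nodes.append([job])             # provision a fresh node
            free.append(capacity - job)
    return nodes

# Packing demands [5, 7, 5, 2, 4, 2] onto 8-core nodes uses 4 nodes,
# which matches the lower bound ceil(25 / 8) = 4 for this instance.
```

Sorting in decreasing order is what keeps the large, hard-to-place jobs from being stranded after small jobs have fragmented the free capacity.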

Fault Detection and Recovery

Fault detection in computer clusters relies on mechanisms such as heartbeats, where nodes periodically send signals to a central monitor to confirm operational status, allowing the system to identify failures when signals cease. Logging techniques capture events and errors across nodes, enabling post-failure analysis to pinpoint root causes like hardware malfunctions or software crashes. Tools like Ganglia provide scalable monitoring by aggregating metrics such as CPU usage, memory load, and network traffic from cluster nodes, facilitating fault detection through distributed data collection. Recovery from detected faults involves techniques like checkpoint/restart, which periodically saves the state of running jobs to persistent storage, allowing them to resume from the last checkpoint on healthy nodes after a failure. DMTCP (Distributed MultiThreaded CheckPointing) exemplifies this by enabling transparent checkpointing of distributed applications without code modifications, supporting restart on alternative hardware in cluster environments. Job migration transfers active workloads to available nodes upon failure detection, minimizing downtime by leveraging checkpoint data to continue execution seamlessly. Failover clustering ensures high availability by automatically redirecting services from a failed node to a standby node within the cluster, maintaining continuous operation for critical applications. Advanced recovery strategies include predictive failure analysis using machine learning, which analyzes historical logs and sensor data to forecast node failures and preemptively migrate jobs, reducing overall system interruptions in large-scale HPC clusters. For data integrity, quorum-based replication requires a majority of replicas to acknowledge operations, ensuring reliable reads and writes even during partial node outages by guaranteeing intersection between read and write quorums. Post-2020 developments emphasize resilient designs for exascale systems, incorporating algorithm-level fault tolerance to handle silent data corruptions and frequent hardware errors at extreme scales.
Additionally, clusters face growing security threats, such as DDoS attacks that overwhelm resources and disrupt computations, prompting enhanced mitigation through traffic filtering and intrusion detection tailored to HPC environments.

Programming and Tools

Parallel Programming Models

Parallel programming models provide abstractions for developing software that exploits the computational resources of computer clusters, enabling efficient distribution of workloads across multiple nodes. These models address the inherent challenges of coordinating independent processors while managing data dependencies and communication. Key paradigms include Single Program Multiple Data (SPMD) and Multiple Program Multiple Data (MPMD), which define how code and data are replicated or varied across processes. In SPMD, the same program executes on all processors but operates on different data portions, facilitating straightforward parallelism for uniform tasks. MPMD, by contrast, allows different programs to run on different processors, offering flexibility for heterogeneous workloads but increasing coordination complexity. Distributed Shared Memory (DSM) systems create the illusion of a unified address space across cluster nodes, simplifying programming by allowing shared-memory semantics on distributed hardware. DSM achieves this through software or hardware mechanisms that handle remote memory accesses transparently, mapping local memories to a global space while managing coherence and consistency. This approach reduces the need for explicit message passing, making it suitable for legacy shared-memory applications ported to clusters, though it incurs overhead from page faults and protocol latencies. Prominent frameworks underpin these models, with the Message Passing Interface (MPI) serving as the de facto standard for distributed-memory clusters under SPMD paradigms. MPI enables explicit communication via point-to-point and collective operations, supporting scalable implementations across thousands of nodes. OpenMP, oriented toward shared-memory systems, uses compiler directives to parallelize loops and sections within a single node, occasionally extended to clusters via multi-node extensions. Hybrid models combine MPI for inter-node communication with OpenMP for intra-node parallelism, optimizing resource use on multi-core clusters by minimizing data movement across slower networks.
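The SPMD pattern can be made concrete with a block-decomposition helper: every rank runs the same code and derives its own slice of the iteration space from its rank and the process count, analogous to how MPI programs use MPI_Comm_rank and MPI_Comm_size. This is a generic sketch, not code from any particular framework:

```python
def block_partition(n: int, rank: int, size: int) -> range:
    """SPMD-style block decomposition of n iterations over `size` ranks.
    Remainder iterations are spread over the low ranks so no two ranks
    differ by more than one iteration."""
    base, extra = divmod(n, size)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return range(start, stop)

# 10 iterations over 4 ranks -> block sizes [3, 3, 2, 2], and the
# four ranges together cover 0..9 exactly once.
```

In a real MPI program each process would call this once with its own rank and then loop only over its range, with a final reduce combining the per-rank partial results.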
Domain-specific frameworks further tailor parallelism to application needs, such as Apache Spark's data-parallel model for large-scale analytics. Spark employs Resilient Distributed Datasets (RDDs) to partition data across nodes, enabling fault-tolerant, in-memory processing with high-level operators like map and reduce for implicit parallelism. For machine learning, Ray provides a unified distributed runtime supporting task-parallel and actor-based computations, scaling applications across clusters with dynamic resource allocation. Developments in Ray since 2016 address emerging AI workloads by integrating with libraries such as PyTorch for distributed training. Serverless parallelism extends these models to cloud clusters, where platforms such as AWS Lambda abstract infrastructure management for event-driven workloads. In serverless setups, functions execute in parallel across ephemeral containers, supporting distributed computation via asynchronous invocations without fixed provisioning, though limited by execution timeouts and cold starts. Some serverless training frameworks leverage stateless functions to reduce training time by up to 44% in distributed settings. Programming clusters involves challenges like load imbalance, where uneven task distribution leaves processors idle, and synchronization overheads that serialize execution and amplify communication costs. Load imbalance arises from data skew or irregular computations, significantly reducing efficiency in large-scale runs, while synchronization primitives like barriers can introduce wait times that dominate total execution. Auto-parallelization tools mitigate these by automatically inserting directives or partitioning code; for instance, AI-driven approaches like OMPar use large language models to generate OpenMP pragmas for C/C++ code, achieving parallel speedups on clusters with minimal manual intervention.
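The cost of load imbalance at a barrier can be quantified with a simple ratio: the fraction of aggregate compute capacity that sits idle while fast ranks wait for the slowest one. A minimal sketch of that metric:

```python
def imbalance_overhead(task_times: list[float]) -> float:
    """Fraction of aggregate capacity wasted waiting at a barrier:
    1 - mean/max over per-rank task times. 0.0 means perfect balance;
    values approaching 1.0 mean one straggler dominates the step."""
    slowest = max(task_times)
    return 1.0 - (sum(task_times) / len(task_times)) / slowest

# Four ranks finishing in [10, 10, 10, 20] seconds waste 37.5% of the
# machine for that step: every rank waits 20 s at the barrier, but the
# average useful work is only 12.5 s.
```

Schedulers and work-stealing runtimes aim to drive this number toward zero, since at scale even a few percent of per-step imbalance compounds across thousands of barrier-synchronized iterations.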

Development, Debugging, and Monitoring

Development of software for computer clusters relies on specialized compilers and libraries optimized for parallel and distributed execution. The Intel oneAPI toolkit, including the oneAPI Math Kernel Library (oneMKL), provides comprehensive support for cluster environments through highly optimized implementations of mathematical routines such as BLAS for basic linear algebra operations and LAPACK for advanced linear algebra computations, enabling efficient vectorization and threading across multi-node systems. These libraries are integral to compute-intensive tasks in scientific simulations and machine learning, reducing development time while maximizing performance on distributed architectures. Continuous integration and continuous delivery (CI/CD) practices have been adapted for high-performance computing (HPC) clusters to automate testing and deployment of parallel applications. In HPC environments, CI/CD pipelines often integrate containerization tools like Singularity with job schedulers such as Slurm, allowing automated builds and executions across cluster nodes to ensure reproducibility and reliability. For instance, GitLab CI/CD is employed in academic HPC centers to trigger builds upon code commits, facilitating seamless integration of parallel code changes into cluster workflows. Debugging parallel programs on clusters presents unique challenges due to the distributed nature of execution, necessitating tools that can handle multi-process and multi-threaded interactions. TotalView serves as a prominent parallel debugger, offering features like process control, memory debugging, and visualization of MPI communications to identify issues in large-scale applications running on HPC clusters. It supports fine-grained inspection of individual threads or processes across nodes, including reverse debugging capabilities for replaying executions. Extensions to GDB, such as those enabling parallel session management, allow developers to attach to distributed processes for core dump analysis and breakpoint setting in MPI-based programs.
Trace analysis is essential for detecting synchronization issues like deadlocks in parallel computing, where processes await resources held by others. Tools like the Stack Trace Analysis Tool (STAT) capture and analyze execution traces from MPI jobs to pinpoint deadlock locations by examining call stacks and resource dependencies across cluster nodes. Dedicated deadlock detectors, such as MPIDD for C++ and MPI programs, perform dynamic runtime monitoring to identify circular wait conditions without significant overhead. Monitoring cluster operations involves tools that provide visibility into resource utilization and system health in real time. Prometheus, a time-series database and monitoring system, excels in distributed environments by scraping metrics from cluster nodes and services, supporting alerting on anomalies like high CPU or memory usage in cluster workloads. Nagios complements this with plugin-based monitoring of infrastructure components, including network latency and node availability in HPC setups, though it is often integrated with Prometheus for enhanced scalability. Real-time dashboards, such as those built with Grafana on Kubernetes clusters, visualize aggregated metrics like RAM and CPU utilization across pods and nodes, enabling quick identification of bottlenecks. Integration of DevOps practices, particularly GitOps, has modernized cluster software development since the late 2010s by treating Git repositories as the single source of truth for declarative infrastructure management. In Kubernetes-based clusters, tools like ArgoCD automate synchronization of application deployments with Git changes, streamlining continuous delivery for parallel applications while ensuring traceability and rollback capabilities. Emerging AI-assisted techniques leverage large language models to analyze parallel program traces, providing explanations for runtime discrepancies and suggesting fixes for issues like race conditions in multi-node executions.
For complex systems, AI agents like DebugMate incorporate large language models to automate on-call debugging, reducing manual effort in tracing distributed faults.
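Deadlock detectors of the kind described above commonly search for cycles in a wait-for graph extracted from traces. A toy version, assuming each process blocks on at most one peer at a time (as with a blocking receive):

```python
def has_deadlock(wait_for: dict[str, str]) -> bool:
    """Detect a cycle in a wait-for graph where each process waits on at
    most one other process; a cycle of mutual waits is a deadlock. A toy
    model of the check trace-based detectors perform at larger scale."""
    done: set[str] = set()           # nodes proven cycle-free
    for start in wait_for:
        path: set[str] = set()       # nodes on the current walk
        node = start
        while node is not None and node not in done:
            if node in path:
                return True          # walked back onto this path: cycle
            path.add(node)
            node = wait_for.get(node)
        done |= path
    return False

# A waits on B and B waits on A -> deadlock;
# A waits on B, B waits on C, C is running -> no deadlock.
```

Real MPI deadlocks also arise from unmatched collectives and mismatched tags, so production detectors track communicator state and message envelopes, not just a single wait edge per process.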

Notable Implementations

HPC and Supercomputing Clusters

High-performance computing (HPC) clusters form the backbone of supercomputing, enabling massive-scale parallel computations through architectures like massively parallel processing (MPP), where thousands of processors execute tasks simultaneously across interconnected nodes. In MPP systems, workloads are divided into independent subtasks that run in parallel, optimizing for scalability in scientific simulations. Custom interconnects, such as the Slingshot network developed by Cray (now part of HPE), provide low-latency, high-bandwidth communication essential for coordinating these processors and minimizing bottlenecks in data transfer. Prominent examples include the IBM Summit supercomputer, deployed in 2018 at Oak Ridge National Laboratory, which achieved 148.6 petaflops on the High Performance Linpack (HPL) benchmark, topping the TOP500 list at the time. Frontier, also at Oak Ridge and operational since 2022, marked the first exascale system with 1.353 exaflops on HPL as of June 2025, leveraging HPE Cray EX architecture with AMD processors and Slingshot-11 interconnects. Aurora, deployed at Argonne National Laboratory and operational since 2023, represents another exascale system, achieving 1.012 exaflops on HPL as of June 2025 using Intel processors and Slingshot interconnects. By 2025, Lawrence Livermore National Laboratory's El Capitan surpassed these, delivering 1.742 exaflops on HPL and securing the top TOP500 spot, powered by AMD Instinct GPUs and advanced liquid cooling for sustained high performance. Many modern supercomputers derive from Beowulf concepts, which originated as cost-effective assemblies of commodity off-the-shelf hardware networked for parallel computation, now scaled up into production systems for enterprise-level HPC. These clusters support critical simulations in physics, such as modeling fluid dynamics and multi-physics phenomena at exascale resolution.
In climate science, national labs like Oak Ridge use them for high-resolution Earth system models, such as the Energy Exascale Earth System Model (E3SM), to forecast climate variability and its interactions with unprecedented detail. As of 2025, open-source trends in HPC emphasize portability across heterogeneous hardware, open standards for interconnects, and modular software stacks to ease deployment in diverse environments, as seen in initiatives enhancing CFD platforms for CPU-to-GPU transitions.

Cloud and Distributed Clusters

Cloud and distributed clusters represent a shift in computer clustering, leveraging virtualization and on-demand infrastructure to enable scalable, pay-as-you-go computing across geographically dispersed resources. These systems integrate virtual machines, containers, and serverless components to form dynamic clusters that can span multiple data centers or provider regions, contrasting with traditional on-premises setups by emphasizing elasticity and multi-tenancy. Virtualization technologies, such as hypervisors and container runtimes, abstract physical hardware to allow seamless resource provisioning, supporting workloads from data analytics to model training without dedicated physical infrastructure. Prominent examples include Amazon Web Services (AWS) EC2 clusters, which provide high-performance computing (HPC) capabilities through instance types optimized for parallel processing and low-latency networking. Similarly, Google Cloud offers HPC clusters via Compute Engine and Kubernetes Engine, enabling rapid deployment of turnkey environments for scientific simulations and machine learning with integrated tools like the Cluster Toolkit. For private deployments, OpenStack facilitates customizable cloud infrastructures, allowing organizations to build on-premises or hosted clusters that mimic public cloud features while maintaining control over their data. Key features of these clusters include elastic scaling, which automatically adjusts compute resources based on workload demands to optimize performance and cost, often achieving 30-40% savings through dynamic allocation. AWS Spot Instances exemplify this by offering spare capacity at discounts of 50-90% compared to On-Demand pricing, integrated into clusters for fault-tolerant, interruptible jobs like batch processing. Container orchestration further enhances distribution, with platforms like Kubernetes automating deployment, scaling, and management of containerized applications across nodes. Amazon Elastic Kubernetes Service (EKS) streamlines this by providing a managed control plane for clusters, supporting hybrid workloads with features like Auto Mode for automated infrastructure handling.
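Elastic scaling decisions of the kind these platforms automate are, at their core, proportional control. The sketch below follows the spirit of the Kubernetes Horizontal Pod Autoscaler's scaling rule; the function and parameter names are illustrative, not actual Kubernetes API fields:

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_replicas: int = 1, max_replicas: int = 100) -> int:
    """Proportional autoscaling rule in the spirit of the Kubernetes HPA:
    scale the replica count by the ratio of observed to target utilization,
    rounded up, then clamp to configured bounds."""
    raw = math.ceil(current * current_util / target_util)
    return max(min_replicas, min(max_replicas, raw))

# 4 replicas observed at 80% CPU against a 40% target:
# desired_replicas(4, 0.8, 0.4) -> ceil(4 * 0.8 / 0.4) = 8
```

Real autoscalers add stabilization windows and tolerance bands around this ratio so that noisy metrics do not cause replica counts to oscillate.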
These setups enable container clusters to run distributed applications efficiently, with built-in support for GPUs and high-throughput networking essential for machine learning and HPC tasks. At hyperscale, distributed clusters power massive AI infrastructures, such as Meta's deployments exceeding 100,000 GPUs, which utilize custom frameworks like NCCLX for collective communication and low-latency scaling across vast node counts. These systems demonstrate the feasibility of training trillion-parameter models through optimized resource utilization and multi-gigawatt data centers. By 2025, serverless extensions like Knative have matured into graduated Kubernetes-native platforms, enabling event-driven, auto-scaling workloads without managing underlying infrastructure. Complementing this, hybrid edge-cloud setups for IoT integrate on-premises edge nodes with central clouds to minimize latency for interactive and real-time applications, using multi-cloud architectures for seamless orchestration.

Alternative Approaches

Grid and Cloud Alternatives

Grid computing represents a decentralized approach to resource sharing, enabling coordinated access to distributed computational power, storage, and data across multiple institutions without the tight coupling characteristic of traditional computer clusters. Unlike clusters, which emphasize high-speed interconnects for homogeneous environments, grid systems focus on wide-area networks and heterogeneous resources to solve large-scale problems collaboratively. Seminal work defines grid computing as a system for large-scale resource sharing among dynamic virtual organizations, providing secure and flexible coordination. Projects like SETI@home exemplify early grid computing through public-resource sharing, where millions of volunteer computers worldwide analyzed radio telescope data for extraterrestrial signals, demonstrating volunteer-based decentralized computation. Similarly, the Enabling Grids for E-sciencE (EGEE) project created a reliable grid infrastructure uniting over 140 institutions to process over 200,000 jobs daily, primarily for scientific applications such as high-energy physics; the project ran from 2004 to 2010 and was succeeded by the European Grid Infrastructure (EGI). Cloud computing paradigms, particularly Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), serve as scalable alternatives to dedicated clusters by providing on-demand access to virtualized resources without the need for physical management. In IaaS, users rent virtual machines and storage, akin to provisioning a cluster but with elastic scaling across global data centers, while PaaS abstracts infrastructure further to focus on application deployment. Serverless models built on cloud functions extend this by executing code in response to events without provisioning servers, reducing overhead for bursty workloads compared to cluster maintenance.
However, clouds excel in scalability and elasticity—allowing seamless resource pooling across providers—but may incur higher costs for sustained high-performance tasks and offer less control over low-latency interconnects than clusters. Grids and clouds overlap with clusters in distributed processing but differ in scope: grids prioritize wide-area, loosely coupled heterogeneous systems for cross-organizational collaboration, while clouds offer managed, pay-per-use environments better suited for variable demands. Post-2020 developments in edge and fog computing further position them as micro-scale alternatives, processing data closer to its sources in IoT networks to minimize latency, effectively creating localized "micro-clusters" without central aggregation. Emerging blockchain-based grids enhance trust by using distributed ledgers for secure, decentralized resource trading and accounting, as seen in frameworks like SparkGrid for query scheduling in heterogeneous environments.

Emerging Distributed Systems

Emerging distributed systems represent innovative paradigms that extend beyond conventional computer clusters by emphasizing decentralization, event-driven execution, and integration of specialized hardware, enabling scalable computation without centralized control. These systems address limitations in traditional clusters, such as resource provisioning overhead and data locality issues, by leveraging serverless abstractions, privacy-preserving learning, and hybrid processing models. Serverless computing, particularly through Function-as-a-Service (FaaS) frameworks, allows developers to deploy stateless functions that execute on-demand across distributed infrastructures, abstracting away server management and enabling automatic scaling. In FaaS models like AWS Lambda or Google Cloud Functions, computations are triggered by events, with the underlying platform handling orchestration and scaling, reducing operational costs by up to 90% compared to provisioned clusters for bursty workloads. This approach facilitates event-driven architectures in distributed environments, where functions can be chained into complex workflows without maintaining persistent nodes. Federated learning clusters enable collaborative model training across decentralized devices or edges without centralizing raw data, preserving privacy while aggregating updates to a shared model. Introduced in seminal work on communication-efficient deep network learning, this approach uses iterative averaging of local updates, minimizing data movement and supporting heterogeneous datasets in privacy-sensitive scenarios. For instance, frameworks like TensorFlow Federated allow clusters of edge nodes to train models on-device, achieving convergence with 10-100x less communication than centralized methods. Decentralized systems such as blockchain networks treat nodes as pseudo-clusters for consensus-driven computation, where Ethereum's architecture distributes transaction validation and execution across thousands of peers using proof-of-stake mechanisms.
This model ensures fault tolerance through Byzantine agreement protocols, enabling applications such as smart contracts without a central authority. Complementing this, peer-to-peer (P2P) networks for content delivery, as in systems like IPFS, form dynamic overlays where nodes collaboratively replicate and route data chunks, reducing bandwidth costs by 50-70% over client-server models in large-scale distribution. Hybrid quantum-classical clusters integrate quantum processors with classical distributed systems via frameworks like IBM's Qiskit, allowing variational algorithms to optimize parameters across HPC nodes and quantum hardware. Qiskit Runtime enables seamless execution of hybrid workflows, such as quantum approximate optimization for NP-hard problems, partitioning circuits for parallel classical simulation and quantum sampling. In neuromorphic computing, Intel's Loihi chips emulate spiking neural networks in distributed setups, scaling to over 1 million neurons across multiple chips for energy-efficient inference, consuming 100x less power than GPU-based clusters for edge inference tasks. As of 2025, zero-trust distributed architectures emerge as a key trend, enforcing continuous verification in decentralized environments without implicit trust boundaries, using micro-segmentation and identity-based access across hybrid clouds. This model, applied across cloud and HPC clusters, integrates blockchain for auditability and AI for anomaly detection, mitigating insider threats in systems spanning classical, quantum, and neuromorphic components.
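The federated averaging step at the heart of the federated learning clusters described earlier can be sketched as a size-weighted mean of client parameters. This toy version uses plain lists of floats in place of real model tensors:

```python
def federated_average(client_weights, client_sizes):
    """One FedAvg-style aggregation step: the server combines client model
    parameters weighted by each client's local dataset size, so raw data
    never leaves the clients. `client_weights` is a list of equal-length
    float lists; `client_sizes` the matching sample counts."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for weights, size in zip(client_weights, client_sizes):
        for j in range(dim):
            avg[j] += weights[j] * (size / total)
    return avg

# Two clients with 100 and 300 samples: the larger client's parameters
# pull the global model three times as hard.
# federated_average([[1.0, 0.0], [5.0, 4.0]], [100, 300]) -> [4.0, 3.0]
```

In a full training loop this step alternates with local gradient updates on each client, and production systems add secure aggregation so the server never sees individual updates in the clear.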

References

  1. [1]
    What is a cluster? | Princeton Research Computing
    Each computer in the cluster is called a node (the term "node" comes from graph theory), and we commonly talk about two types of nodes: head node and compute ...
  2. [2]
    What is an HPC cluster | High Performance Computing
    An HPC cluster is a collection of many separate servers (computers), called nodes, which are connected via a fast interconnect.
  3. [3]
    Cluster Computing - FAS Research Computing - Harvard University
    In a cluster, these servers are dedicated to performing computations, as opposed to storing data or databases. They normally contain a larger number of cores ...
  4. [4]
    [PDF] The Roots of Beowulf - NASA Technical Reports Server
    The first Beowulf Linux commodity cluster was constructed at. NASA's Goddard Space Flight Center in 1994 and its origins are a part of the folklore of high-end ...
  5. [5]
    Computing History | HPC2 - hpc.msstate.edu
    Dec 4, 2024 · Many people think that cluster computing originated with Thomas Sterling and Donald Becker's work on the Beowulf Project in 1994. This project ...
  6. [6]
    [PDF] Introduction to High Performance Computing - Boston University
    • Computer cluster: connect many computers with high- speed networks. • Currently computer cluster is the best solution to scale up computer power.Missing: definition | Show results with:definition<|control11|><|separator|>
  7. [7]
    [PDF] What are Clusters? - Computer Science
    Aug 24, 2008 · Cluster Computing 10. DiSCoV. Cluster Advantages. • Capability Scaling. • Convergence Architecture -standard. • Price/Performance. • Flexibility ...
  8. [8]
    Why HPC? - Purdue's RCAC
    Ability to handle large datasets: HPC can process and analyze large amounts of data more easily than conventional computers, which is vital in many fields, ...
  9. [9]
    What is Cluster Computing? | IBM
    Cluster computing is a type of computing where multiple computers are connected so they work together as a single system to perform the same task.Missing: principles authoritative
  10. [10]
    What is a Computer Cluster? | Answer from SUSE Defines
    Mar 24, 2018 · A computer cluster is a set of connected computers (nodes) that work together as if they are a single (much more powerful) machine.
  11. [11]
    [PDF] Cluster-Based Scalable Network Services - cs.Princeton
    High Availability: Clusters have natural redundancy due to the independence of the nodes: Each node has its own busses, power supply, disks, etc., so it is ...
  12. [12]
    [PDF] Multiprocessors and Clusters - Northeastern University
    Because it is even harder to find applications that can take advantage of many processors, the challenge is greater for large-scale multiprocessors. Because ...
  13. [13]
    [PDF] 7.14 Historical Perspective and Further Reading
    Clusters were probably “invented” in the 1960s by customers who could not fit all their work on one computer, or who needed a backup machine in case ...
  14. [14]
    CS322: Multiple CPU Systems: Multiprocessing, Distributed ...
    The most basic distinction is between tightly coupled and loosely coupled systems: In a tightly coupled system, 2 or more CPU's share a common clock, share ...
  15. [15]
    [PDF] Distributed Systems and Clusters
    Tightly coupled ≈ parallel processing: processors share clock and memory, run one OS, communicate frequently. Loosely coupled ≈ distributed computing.
  16. [16]
    Computer Clusters, Types, Uses and Applications - Baeldung
    Mar 18, 2024 · 3. Types of Clustering · 3.1. Fail-over or High Availability Cluster · 3.2. Load Balancing · 3.3. High-Performance Computing – HPC.
  17. [17]
    An Overview of Cluster Computing - GeeksforGeeks
    Jul 23, 2025 · 1. High performance (HP) clusters · 2. Load-balancing clusters · 3. High Availability (HA) clusters ...
  18. [18]
    Introduction to Beowulf Cluster - GeeksforGeeks
    Jul 23, 2025 · A Beowulf cluster is a group of identical, commodity-grade computers networked to form a local area network, working like a single machine.
  19. [19]
    Apache Hadoop
    Apache Hadoop is open-source software for distributed computing, processing large datasets across clusters, and includes modules like HDFS and YARN.
  20. [20]
    What is Database Sharding? - Shard DB Explained - Amazon AWS
    Database sharding is the process of storing a large database across multiple machines. A single machine, or database server, can store and process only a ...
  21. [21]
    Overview | Kubernetes
    Sep 11, 2024 · Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both ...
  22. [22]
    Introduction to Model Parallelism - Amazon SageMaker AI
    Model parallelism is a distributed training method in which the deep learning model is partitioned across multiple devices, within or across instances.
  23. [23]
    1962 | Timeline of Computer History
    Atlas was the fastest computer in the world at the time and introduced the concept of “virtual memory,” that is, using a disk or drum as an extension of main ...
  24. [24]
    [PDF] Design of the B 5000 System - People @EECS
    A master control program (MCP) will be provided with the B 5000 system. It will be stored on a portion of the magnetic drum. During normal operations, a ...
  25. [25]
    Milestones:The Xerox Alto Establishes Personal Networked ...
    May 17, 2024 · “Between 1972 and 1980, the first distributed personal computing system was built at the Xerox Palo Alto Research Center. The system was ...
  26. [26]
    Milestones:Ethernet Local Area Network (LAN), 1973-1985
    May 17, 2024 · Ethernet wired LAN was invented at Xerox Palo Alto Research Center (PARC) in 1973, inspired by the ALOHAnet packet radio network and the ARPANET.
  27. [27]
    [PDF] NASA's Supercomputing Experience
    NASA's experience with advanced supercomputers began in the 1970s with the installation of the Illiac-IV parallel computer at Ames Research Center and the CDC ...
  28. [28]
    Overview -- History - Beowulf.org
    In late 1993, Donald Becker and Thomas Sterling began sketching the outline of a commodity-based cluster system designed as a cost-effective alternative to ...
  29. [29]
    [PDF] A Curriculum for Petascale Computing
    Terascale systems emerged in the late 1990's. At that time, teraflop scale performance could be achieved with thousands of processors housed in a few ...
  30. [30]
  31. [31]
    [PDF] A PARALLEL WORKSTATION FOR SCIENTIFIC COMPUTATION
    The Beowulf parallel workstation project is driven by a set of requirements for high performance scientific workstations in the Earth and space sciences ...
  32. [32]
    [PDF] An Overview of Supercomputers, Clusters and Grid Architecture ...
    Mar 18, 2005 · Cray X1, XD1, XT3 · SGI Altix · IBM Regatta · IBM Blue Gene/L · IBM eServer · Sun · HP · Dawning · Bull NovaScale
  33. [33]
    [PDF] HPC Cloud for Scientific and Business Applications - arXiv
    High Performance Computing (HPC) clouds are becoming an alternative to on-premise clusters for executing scientific applications and business analytics ...
  34. [34]
    [PDF] The TOP500 Project
    Jan 20, 2008 · BlueGene/L had been in first place since November 2004; however, the current system has been significantly upgraded so that it now achieves a ...
  35. [35]
    The journey to cloud as a continuum: Opportunities, challenges, and ...
    The 2010s highlight the rise of containers and hybrid consistencies at the edge. The 2020s emphasize a cloud continuum with multi-node topologies, distributed ...
  36. [36]
    Frontier - Oak Ridge Leadership Computing Facility
    Exascale is the next level of computing performance. By solving calculations five times faster than today's top supercomputers—exceeding a quintillion, or 10^18, ...
  37. [37]
    Meet Frontier, the world's first, fastest Exascale Supercomputer by HPE
    Frontier claims to be the world's most energy efficient supercomputer, as it is 32% more energy efficient compared to the previous number one system. The ...
  38. [38]
    [PDF] High Performance Computing (HPC) for Covid-19 - Asean
    The supercomputer serves as a platform for international research cooperation and for the development of artificial intelligence and quantum technology. Part of ...
  39. [39]
    Quantum computing futures | Deloitte Insights
    Aug 11, 2025 · Quantum computing vendors are projecting tangible business benefits by 2030 and accelerating their expected timelines to commercial scale over ...
  40. [40]
    What is High-Performance Computing (HPC)? - DataCore Software
    Performance in HPC systems is multi-dimensional. The most common metric is FLOPS (floating-point operations per second), which defines the raw compute capacity.
  41. [41]
    A guide to AI TOPS and NPU performance metrics | Qualcomm
    Apr 24, 2024 · TOPS is a measurement of the potential peak AI inferencing performance based on the architecture and frequency required of the NPU.
  42. [42]
    NVIDIA DGX and the Future of AI Desktop Computing | IDC Blog
    May 19, 2025 · This system is powered by the GB10 Grace Blackwell Superchip, which delivers up to 1,000 trillion operations per second (TOPS) of AI computing ...
  43. [43]
    Introduction to Parallel Computing Tutorial - | HPC @ LLNL
    Amdahl's Law​​ If none of the code can be parallelized, P = 0 and the speedup = 1 (no speedup). If all of the code is parallelized, P = 1 and the speedup is ...
  44. [44]
    Scalability: strong and weak scaling – PDC Blog - KTH
    Nov 9, 2018 · Weak scaling concerns the speedup for a scaled problem size with respect to the number of processors, and is governed by Gustafson's law. When ...
  45. [45]
    (PDF) Gustafson's Law - ResearchGate
    Gustafson's Law says that if you apply P processors to a task that has serial fraction f, scaling the task to take the same amount of time as before, ...<|separator|>
  46. [46]
    Analyzing Resource Utilization in an HPC System: A Case Study of ...
    May 10, 2023 · Our one-month usage analysis reveals that CPUs are commonly not fully utilized, especially for GPU-enabled jobs. Also, around 64% of both CPU ...
  47. [47]
    High-Performance Computing Data Center Power Usage ... - NREL
    Apr 10, 2025 · Data centers focusing on efficiency typically achieve PUE values of 1.2 or less. PUE is the ratio of the total amount of power used by a ...
  48. [48]
    What Is PUE (Power Usage Effectiveness) and What Does It Measure?
    PUE is a standard efficiency metric for power consumption in data centers. A simple definition of PUE is the ratio of total facility energy to IT equipment ...
  49. [49]
    [PDF] Achieving Performance Consistency in Heterogeneous Clusters
    As computing systems shift from parallel homogeneous hardware to heterogeneous commodity configurations, cluster nodes could easily deliver inconsistent ...
  50. [50]
    Application MTTFE vs. Platform MTBF - ACM Digital Library
    Platform MTBF: A Fresh Perspective on System Reliability and Application Throughput for Computations at Scale.
  51. [51]
    Too Big Not to Fail - Communications of the ACM
    Jun 1, 2017 · Availability = MTBF / (MTBF + MTTR). Since we assume one in five servers fail every year, our MTBF (Mean Time Between Failures) is relatively ...
  52. [52]
    Checkpointing strategies for parallel jobs - ACM Digital Library
    Checkpointing and rollback recovery are widely used techniques for handling failures in distributed systems.
  53. [53]
    November 2024 | TOP500
    Summary of top systems in the Green500 (November 2024).
  54. [54]
    Cooling the Data Center - ACM Queue
    Mar 10, 2010 · This article reviews some of the generic approaches to cooling and identifies opportunities for further innovation.
  55. [55]
  56. [56]
    Analyzing Cost-Performance Tradeoffs of HPC Network Designs ...
    Analyzing Cost-Performance Tradeoffs of HPC Network Designs under Different Constraints using Simulations.
  57. [57]
    [PDF] Trends in data centre energy consumption under the European ...
    The aim of this paper is to evaluate, analyse and present the current trends in energy consumption and efficiency in data centres in the European Union using ...
  58. [58]
    Quantifying Data Center Carbon Footprint - ACM Digital Library
    May 7, 2024 · In 2021, the global data center industry was responsible for around 1% of the worldwide greenhouse gas emissions.
  59. [59]
    Managing Server Clusters on Renewable Energy Mix
    We propose GreenWorks, a framework for HPC data centers running on a renewable energy mix. Specifically, GreenWorks features a cross-layer power management ...
  60. [60]
    Commodity Cluster - an overview | ScienceDirect Topics
    Commodity cluster hardware is, by definition, all commodity off the shelf (COTS) to maximize the benefit of economy of scale and achieve the best performance to ...
  61. [61]
    What is COTS (Commercial-off-the-Shelf)? - Trenton Systems
    Jun 17, 2022 · Some of the benefits of using COTS products are lower costs, reduced development time, faster insertion of new technology, and lower lifecycle ...
  62. [62]
    [PDF] Cluster Computing White Paper - arXiv
    This White Paper provides an authoritative review of all the hardware and software technologies that can be used to make up a cluster now or in the near future.
  63. [63]
    What Is High Availability? - Cisco
    Using high-availability clusters helps ensure there is no single point of failure for critical IT and reduces or eliminates downtime. High-availability clusters ...
  64. [64]
    Scaling - HPC Wiki
    Jul 19, 2024 · With Gustafson's law the scaled speedup increases linearly with respect to the number of processors (with a slope smaller than one), and there ...
  65. [65]
    Democratizing High Performance Computing: HPC Clusters - HPCwire
    Aug 1, 2019 · HPC in the cloud is the democratization of high performance computing – giving researchers the freedom to ask the questions and explore the ...
  66. [66]
    The Complete Guide to Edge Computing Architecture | Mirantis
    Sep 10, 2025 · Reduced Latency. One of the main benefits of edge computing is the reduction in latency due to local processing. Since data does not have to ...
  67. [67]
    Cloud Bursting Fundamentals | Definition, Benefits & Strategy
    Jul 8, 2025 · A thorough cost analysis is required, however, including assessments of pay-as-you-go models versus fixed commitments to cloud capacity.
  68. [68]
    Supercomputer facility | ECMWF
    ECMWF's supercomputer facility, based on Atos BullSequana XH2000, has four clusters, 7,680 compute nodes, and 448 GPIL nodes, with 1,040,384 cores.
  69. [69]
    From Sequencer to Supercomputer: An Automatic Pipeline for ...
    A computer cluster with the required software programs, genomes, and pipelines installed to process NGS raw data into a suitable result format including ...
  70. [70]
    [PDF] Web search for a planet: the google cluster architecture - Micro, IEEE
    To handle this workload, Google's architecture features clusters of more than 15,000 commodity-class PCs with fault-tolerant software. This architecture ...
  71. [71]
    [PDF] HPC Highlight 2019 Economics Modeling with HPC
    My research studies the policies of central banks in a low interest rate environment and their effect on macroeconomic variables such as gross domestic ...
  72. [72]
    Scaling Media Machine Learning at Netflix
    Feb 13, 2023 · We have developed a large-scale GPU training cluster based on Ray, which supports multi-GPU / multi-node distributed training. We precompute the ...
  73. [73]
    [PDF] Efficient Large-Scale Language Model Training on GPU Clusters ...
    In our experiments, we demonstrate close to linear scaling to 3072 A100 GPUs, with an achieved end-to-end training throughput of 163 teraFLOP/s per GPU ( ...
  74. [74]
    Autonomous Vehicle Simulation | Use Cases - NVIDIA
    This technology effectively simulates real-world conditions, allowing for vehicles' safe testing and validation through a digital twin before they're ...
  75. [75]
    Simulation Of Optimized Cluster Based PBFT Blockchain Validation ...
    This paper presents Random Cluster PBFT (RC-PBFT), an algorithm that runs on randomly selected clusters, to improve the original PBFT algorithm.
  76. [76]
    Supercomputing Pipelines Search for Therapeutics Against COVID-19
    We developed a new drug discovery pipeline using the Summit supercomputer at Oak Ridge National Laboratory to help pioneer this effort.
  77. [77]
    [PDF] High-Performance Computing at the National Renewable Energy ...
    The NREL HPC user facility supported 106 modeling and simulation projects advancing the DOE mission across the spectrum of energy efficiency and renewable ...
  78. [78]
    [PDF] HPC Security: Architecture, Threat Analysis, Security Posture
    Feb 2, 2024 · Compute nodes have the same components as a laptop, desktop, or server, including central processing units (CPUs), memory, disk space, and ...
  79. [79]
    PVFS: A Parallel Virtual File System for Linux Clusters
    A node can be thought of as being one or more of three different types: compute, I/O or management. ... As noted earlier, any I/O node or management node can also ...
  80. [80]
    What is a storage area network (SAN)? – SAN vs. NAS | NetApp
    A SAN is a high-speed network connecting servers to storage, centralizing block storage for business-critical applications needing high throughput and low ...
  81. [81]
    NVMe Storage Explained | NVMe Shared Storage, NVMe-oF, & More
    Sep 3, 2021 · NVMe® storage allows for lower latency over SAN as well as improved I/O metrics. NVMe® shared storage thus improves the overall performance.
  82. [82]
    Navigating Data Center Power Density in the Age of HPC and AI
    Jan 10, 2024 · Data center power density, simply put, expresses how much power, in kilowatts (kW), is consumed by a server rack.
  83. [83]
    How N+1 redundancy supports continuous data center cooling - Vertiv
    Aug 12, 2025 · Redundancy strategies for high-performance computing. Many operators mix redundancy levels. A facility might run 2N power with N+1 cooling.
  84. [84]
    Liquid Cooling Steps Up for High-Density Racks and AI Workloads
    Air-cooled systems are designed for rack densities up to approximately 15 kW, while liquid cooling systems are effective for rack densities up to 200+ kW.
  85. [85]
    Processors - Ampere Altra and AmpereOne
    Mar 18, 2025 · Ampere Altra platforms deliver extremely efficient, high-performance computing especially for power constrained racks and other sensitive environments.
  86. [86]
    [PDF] Axel: A Heterogeneous Cluster with FPGAs and GPUs
    Feb 23, 2010 · A heterogeneous computer cluster is more efficient than a homogeneous one since some kinds of processing units have better performance than ...
  87. [87]
    [PDF] A Survey on FPGA-Based Heterogeneous Clusters Architectures
    The classification and study of the architectures illustrate the trade-offs of the solutions and help identify open problems and research lines.
  88. [88]
    Aggregating and Consolidating two High Performant Network ...
    Jul 10, 2022 · When in need of high-bandwidth and low-latency communications, better options have emerged and are considered such as InfiniBand (IB), an ...
  89. [89]
    [PDF] Simplifying 400G for Data Centers - Arista
    From its origins as an ultra-high performance technology, reserved for a few organizations with extreme networking demands, 400 Gigabit Ethernet (400GbE) has ...
  90. [90]
    [PDF] Intel® Omni-Path Fabric Suite Fabric Manager
    systems are optimized to provide the low latency, high bandwidth, and high message rate needed by large scale High Performance Computing (HPC) applications.
  91. [91]
    FatPaths: routing - ACM Digital Library
    Nov 9, 2020 · Ethernet systems are also less efficient than Infiniband, custom, OmniPath, and proprietary systems; see Figure 1 (on the right). This is also ...
  92. [92]
    Designing an HPC Cluster with Mellanox InfiniBand Solutions
    The most widely used topology in HPC clusters is the fat-tree topology. This topology typically enables the best performance at a large scale ...
  93. [93]
    [PDF] Network Topology Scalability
    The fat tree, proposed by Leiserson [Lei85], is a tree topology with channels of increasing width from the leaves to the root. The fat tree solves the ...
  94. [94]
    Switched Fabric - an overview | ScienceDirect Topics
    Multistage topologies like ring, mesh, star, and Clos/fat-tree are used to address congestion, with methods including load balancing, buffering, flow control, ...
  95. [95]
    [PDF] Automated Design of Torus Networks - arXiv
    Jan 25, 2013 · This paper presents an algorithm to automatically design networks with torus topologies, such as ones widely used.
  96. [96]
    [PDF] Networking Design for HPC and AI on IBM Power Systems
    In our example, a non-blocking fat-tree topology is used, which means that a leaf switch must provide the same number of ports (as uplinks) and compute nodes.
  97. [97]
    IEEE 802 Nendica Report: Intelligent Lossless Data Center Networks
    The newer Remote Direct Memory Access (RDMA) protocol eliminates data copies and frees CPU resources ...
  98. [98]
    Configuring and Coordinating End-to-end QoS for Emerging ...
    Jan 15, 2024 · In this article, we define a new transport-level QoS mechanism for the network segment and demonstrate how it can augment and coordinate with the access-level ...
  99. [99]
    A Hybrid I/O Virtualization Framework for RDMA-capable Network ...
    Mar 14, 2015 · RDMA-capable interconnects, providing ultra-low latency and high-bandwidth, are increasingly being used in the con- text of distributed storage ...
  100. [100]
    NVLink & NVSwitch: Fastest HPC Data Center Platform | NVIDIA
    NVLink is a direct GPU-to-GPU interconnect, and NVLink Switch connects multiple NVLinks for all-to-all GPU communication, enabling high-speed data transfer.
  101. [101]
    [PDF] Object-Based Storage - Parallel Data Lab - Carnegie Mellon University
    This figure illustrates a SAN file system being used to share files among a number of clients. The files themselves are stored on a fast storage area network ...
  102. [102]
    [PDF] The Hadoop Distributed File System - cs.wisc.edu
    Abstract—The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to ...
  103. [103]
    [PDF] File Systems for Clusters from a Protocol Perspective
    In this paper we will describe some protocols in detail and make brief comparisons with other systems. We will first give a detailed outline of NFS and AFS.
  104. [104]
    [PDF] PVFS: A Parallel File System for Linux Clusters
    In this paper we describe how we designed and implemented PVFS to meet the above goals. We also present performance results with PVFS on the Chiba City cluster ...
  105. [105]
    [PDF] Ceph: A Scalable, High-Performance Distributed File System
    Abstract. We have developed Ceph, a distributed file system that provides excellent performance, reliability, and scalability.
  106. [106]
    Scale Out with GlusterFS
    GlusterFS is a versatile distributed filesystem, well supported on standard Linux distributions. It is trivial to install and simple to tune, as we have shown ...
  107. [107]
    Consistency in Distributed Storage Systems - SpringerLink
    Consistency in Distributed Storage Systems. An Overview of Models, Metrics and Measurement Approaches. Conference paper. pp 175–189; Cite this conference paper.
  108. [108]
    [1902.03305] Consistency models in distributed systems: A survey ...
    Feb 8, 2019 · This research proposes two different categories of consistency models. Initially, consistency models are categorized into three groups of data-centric, client- ...
  109. [109]
    Analysis of a Network IO Bottleneck in Big Data Environments Based ...
    This bottleneck is due to virtual network layers which adds a significant delay to Round Trip Time (RTT) of data packets.
  110. [110]
    [PDF] A Consistency in Non-Transactional Distributed Storage Systems
    In this paper we aim to fill the void in literature, by providing a structured and comprehensive overview of different consistency notions that appeared in ...
  111. [111]
    [PDF] The Case for Dual-access File Systems over Object Storage | USENIX
    Users and enterprises increasingly look to object storage as an economical and scalable solution for storing this unstructured data [42,44].
  112. [112]
    [PDF] InfiniStore: Elastic Serverless Cloud Storage - Yue Cheng
    In this paper, we rethink the fundamental design of cloud storage systems and propose ServerlessMemory, a new cloud storage service. ServerlessMemory ...
  113. [113]
    Performance Characterization of SmartNIC NVMe-over-Fabrics ...
    Sep 16, 2024 · The NVMe-over-Fabrics (NVMe-oF) is gaining popularity in cloud data centers as a remote storage protocol for accessing NVMe storage devices ...
  114. [114]
  115. [115]
    [PDF] MPI: A Message-Passing Interface Standard
    Jun 9, 2021 · This document describes the Message-Passing Interface (MPI) standard, version 4.0. The MPI standard includes point-to-point message-passing, ...
  116. [116]
    (PDF) Pvm 3 User's Guide And Reference Manual - ResearchGate
    This report is the PVM version 3.3 users' guide and reference manual. It contains an overview of PVM, and how version 3 can be obtained, installed and used.
  117. [117]
    3.4. Communication Modes - MPI Forum
    A send that uses the synchronous mode can be started whether or not a matching receive was posted. However, the send will complete successfully only if a ...
  118. [118]
    4.1. Introduction and Overview - MPI Forum
    The syntax and semantics of the collective operations are defined to be consistent with the syntax and semantics of the point-to-point operations. Thus ...
  119. [119]
    Asynchronous Communication Primitives of MPI - Emory CS
    A send operation is called synchronous when the operation will only complete after the message sent has been received. - i.e., A synchronous send operation ...
  120. [120]
    Open MPI: Open Source High Performance Computing
    Open MPI is an open-source, high-performance message passing library, developed by a consortium, with features like MPI-3.1 conformance and thread safety.
  121. [121]
    MPICH | High-Performance Portable MPI
    MPICH is a high performance and widely portable implementation of the Message Passing Interface (MPI) standard. MPICH and its derivatives form the most widely ...
  122. [122]
    [PDF] The Importance of Non-Data-Communication Overheads in MPI
    We observe that the MPI stack adds close to 1.1µs overhead for small messages; that is, close to 1000 cycles are spent for pre- and post-data-communication ...
  123. [123]
    [PDF] Performance Analysis of MPI Collective Operations * - NetLib.org
    Bell et al. [15] use extensions of LogP and LogGP models to evaluate performance of small and large messages on contemporary supercomputing ...
  124. [124]
    Overview of NCCL — NCCL 2.28.6 documentation - NVIDIA Docs
    The NVIDIA Collective Communications Library (NCCL, pronounced “Nickel”) is a library providing inter-GPU communication primitives that are topology-aware.
  125. [125]
    NVIDIA Collective Communications Library (NCCL)
    The NVIDIA Collective Communication Library (NCCL) implements multi-GPU and multi-node communication primitives optimized for NVIDIA GPUs and networking.
  126. [126]
    Fault tolerance in big data storage and processing systems
    Heartbeat detection and fault prediction are stable fault detection approaches used in large-scale systems. The heartbeat approach is designed based on an ...
  127. [127]
    Fault tolerance in computational grids: perspectives, challenges ...
    Nov 18, 2016 · Retry, replication, message logging, and check pointing are the fault tolerant techniques that are used in clustered and grid computing ...
  128. [128]
  129. [129]
    [PDF] DMTCP: Transparent Checkpointing for Cluster Computations and ...
    To recreate shared sockets, a single DMTCP restart process is created on each host. This single restart process will first restore all sockets, and then fork ...
  130. [130]
    (PDF) Job migration in HPC clusters by means of checkpoint/restart
    This opens the door to a whole new set of tools and scheduling possibilities based on the fact that jobs can be migrated, checkpointed, and restarted on a ...
  131. [131]
    Failover Clustering in Windows Server and Azure Local
    Jun 25, 2025 · Failover clustering is a powerful strategy to ensure high availability and uninterrupted operations in critical environments.
  132. [132]
    Machine Learning Approaches to Predict and Mitigate Failures in ...
    Mar 8, 2025 · To address these challenges, machine learning (ML) techniques have emerged as a powerful approach for predictive failure detection and proactive ...
  133. [133]
    MeteorShower: Minimizing Request Latency for Majority Quorum ...
    Because of the failure-prone nature of distributed storage systems, majority quorum-based data consistency algorithms become one of the most widely adopted ...
  134. [134]
    [PDF] Resiliency in numerical algorithm design for extreme scale simulations
    Sep 20, 2021 · This work is based on the seminar titled “Resiliency in Numerical Algorithm Design for Extreme. Scale Simulations” held March 1-6, 2020 at ...
  135. [135]
    [PDF] Security in HPC Centres
    Protecting against DDoS attacks is not an easy task. While a small scale attack can be mitigated by using software solutions such as precisely selected well ...
  136. [136]
    Programming for Exascale Computers - Marc Snir
    The programs are identical in a single program, multiple data (SPMD) model; they differ in a multiple programs, multiple data (MPMD) model.
  137. [137]
    Evaluating the performance limitations of MPMD communication
    Abstract. The MPMD approach for parallel computing is attractive for programmers who seek fast development cycles, high code re-use, and modular programming ...
  138. [138]
    [PDF] Distributed Shared Memory: Concepts and Systems - IEEE Parallel ...
    Each system cluster contains a physically local memory module, which maps partially or entirely to the DSM global address space. Regardless of the network ...
  139. [139]
    Distributed shared memory based on offloading to cluster network
    Distributed shared memory (DSM) is an important technology that provides programmers the underlying execution mechanism for shared memory programs.
  140. [140]
    [PDF] Comparing the OpenMP, MPI, and Hybrid Programming Paradigms ...
    The MPI and OpenMP programming models can be combined into a hybrid paradigm to exploit parallelism beyond a single level. The main thrust of the hybrid ...
  141. [141]
    Hybrid MPI/OpenMP Parallel Programming on Clusters of Multi-Core ...
    We describe potentials and challenges of the dominant programming models on hierarchically structured hardware: Pure MPI (Message Passing Interface), pure ...
  142. [142]
    RDD Programming Guide - Spark 4.0.1 Documentation
    RDDs are a collection of elements partitioned across nodes, operated on in parallel, and can be created from files or Scala collections. They can be persisted ...
  143. [143]
    [PDF] Ray: A Distributed Framework for Emerging AI Applications - USENIX
    Oct 8, 2018 · The next generation of AI applications will continuously interact with the environment and learn from these interactions.
  144. [144]
    Distributed Machine Learning with a Serverless Architecture
    SIREN is a serverless, asynchronous distributed machine learning framework using stateless functions in the cloud, reducing training time by up to 44%.
  145. [145]
    Distributed Double Machine Learning with a Serverless Architecture
    Apr 19, 2021 · This paper explores serverless cloud computing for double machine learning. Being based on repeated cross-fitting, double machine learning ...
  146. [146]
    Load Imbalance - an overview | ScienceDirect Topics
    Load imbalance can arise due to insufficient parallelism during certain phases of computation or the assignment of unequal size tasks, such as those found in ...
  147. [147]
    OMPar: Automatic Parallelization with AI-Driven Source-to ... - arXiv
    Sep 23, 2024 · This paper introduces OMPar, an AI-driven tool designed to automate the parallelization of C/C++ code using OpenMP pragmas.
  148. [148]
    Intel® oneAPI Math Kernel Library (oneMKL)
    Use this library of math routines for compute-intensive tasks: linear algebra, FFT, RNG. Optimized for high-performance computing and data science.
  149. [149]
    Intel oneAPI - Center for High Performance Computing - chpc.utah.edu
    MKL is a high performance math library containing full BLAS, LAPACK, ScaLapack, transforms and more. It is available as module load intel-oneapi-mkl. For ...
  150. [150]
    HPC Cluster CI/CD Image Build Pipelining
    HPC Cluster CI/CD Image Build Pipelining. Authors: Travis Cotton (Los Alamos National Laboratory) Abstract: Building images for HPC clusters tends to be a ...
  151. [151]
    Continuous Integration, and Deployment (CI/CD) - HPC
    HPC team uses GitLab as the Git repository management system thus to trigger the build and deployment process uses GitLab CI/CD.
  152. [152]
    TotalView Part 3: Debugging Parallel Programs - | HPC @ LLNL
    TotalView provides two options that control the behavior of breakpoints and barrier points. Individual breakpoint and barrier point behavior can be customized.
  153. [153]
    TotalView - NERSC Documentation
    TotalView from Perforce Software is a parallel software debugger for complex C, C++, Fortran, and CUDA applications. It provides both X Window-based Graphical ...
  154. [154]
    About Parallel Debugging in TotalView
    TotalView is designed to debug multi-process, multi-threaded programs, with a rich feature set to support fine-grained control over individual or multiple ...
  155. [155]
    Concurrent Deadlock Detection In Parallel Programs - ResearchGate
    Aug 6, 2025 · A deadlock detector, MPIDD, has been developed for dynamically detecting deadlocks in parallel programs that are written using C+ + and MPI. The ...
  156. [156]
    Concurrent deadlock detection in parallel programs
    A deadlock detector, MPIDD, has been developed for dynamically detecting deadlocks in parallel programs that are written using C++ and MPI.
  157. [157]
    Prometheus | Nagios Enterprises
    Prometheus actively pulls metrics from configured targets, providing reliable and consistent data collection across your infrastructure. Extensive Exporter ...
  158. [158]
    Kubernetes Cluster for RAM and CPU Utilization | Grafana Labs
    This dashboard shows the utilization, RAM and CPU, of a cluster at a specific point in time. It helps to answer those questions quickly.
  159. [159]
    Tools for Monitoring Resources - Kubernetes
    Jan 18, 2025 · You can examine application performance in a Kubernetes cluster by examining the containers, pods, services, and the characteristics of the overall cluster.
  160. [160]
    Deploying multicluster Kubernetes applications with GitOps - Red Hat
    Aug 9, 2022 · This article demonstrates how Red Hat Advanced Cluster Management (ACM) for Kubernetes addresses this challenge by showing an example application deployment.
  161. [161]
    GitOps on Kubernetes: how to manage your clusters? - Theodo Cloud
    Jun 23, 2021 · ArgoCD is a Kubernetes controller, which seeks to synchronize a set of Kubernetes resources in a cluster with the content of a Git repository.
  162. [162]
    [PDF] Enhancing AI-Assisted Debugging in Parallel Programs via Trace ...
    LLMs often hallucinate or overlook key runtime differences. This leads us to ask: Does providing trace-level differences help LLMs explain why parallel program ...
  163. [163]
    Debugmate: an AI agent for efficient on-call debugging in complex ...
    Aug 15, 2025 · DebugMate is an AI agent that integrates internal and external knowledge to help software engineers debug complex systems, using techniques ...
  164. [164]
    What Is Massively Parallel Processing (MPP)? How It Powers ...
    Feb 5, 2025 · Massively Parallel Processing (MPP) is a method of computing that divides large data processing jobs into much smaller tasks and executes them simultaneously ...
  165. [165]
    Massively parallel - Wikipedia
    Massively parallel is the term for using a large number of computer processors (or separate computers) to simultaneously perform a set of coordinated ...
  166. [166]
    A Deep Dive into Massively Parallel Processing (MPP) Architecture
    Sep 19, 2023 · Massively Parallel Processing, or MPP, is a distributed computing architecture designed to execute large-scale analytical workloads by ...
  167. [167]
    Cray's Slingshot Interconnect Is At The Heart Of HPE's HPC And AI ...
    Jan 31, 2022 · We do supercomputing to find limits and break through them, and it is reasonable to expect that the Cray team at HPE will sort it out. (They ...
  168. [168]
    November 2022 - TOP500
    1, Frontier - HPE Cray EX235a, AMD Optimized 3rd Generation EPYC 64C 2GHz, AMD Instinct MI250X, Slingshot-11, HPE DOE/SC/Oak Ridge National Laboratory United ...
  169. [169]
    Summit Supercomputer Ranked Fastest Computer in the World
    Jun 25, 2018 · The IBM Summit system reached a speed of 122.3 petaflops on the High-Performance Linpack benchmark test—the software used to evaluate and rank ...
  170. [170]
    ORNL's Summit remains world's most powerful supercomputer
    Jun 17, 2019 · Summit delivered a record 148.6 petaflops on a benchmark test called High Performance Linpack, or HPL, a TOP500 press release said Monday. That ...
  171. [171]
    El Capitan reigns supreme across three major supercomputing ...
    Jun 16, 2025 · Lawrence Livermore National Laboratory's (LLNL) flagship exascale machine El Capitan maintained its status as the fastest supercomputer on the ...
  172. [172]
    El Capitan Supercomputer at Lawrence Livermore National Lab - HPE
    El Capitan achieves 1.742 exaflops, becoming the most powerful supercomputer in the world. The supercomputer features 100% fanless direct liquid-cooling ...
  173. [173]
    Beyond Beowulf Clusters - ACM Queue
    May 4, 2007 · Midsize clusters are now about 100 machines in strength, big clusters consist of 1,000 machines, and the biggest supercomputers are even larger ...
  174. [174]
    Record-breaking run on Frontier sets new bar for simulating the ...
    Nov 20, 2024 · Researchers set a new benchmark, capturing the physics of dark and atomic matter with unmatched precision.
  175. [175]
    LLNL team accelerates multi-physics simulations with El Capitan ...
    Apr 24, 2024 · Researchers at Lawrence Livermore National Laboratory (LLNL) have achieved a milestone in accelerating and adding features to complex multi-physics simulations.
  176. [176]
    Pulling clouds into focus: Frontier simulations bring long-range ...
    Nov 14, 2023 · The world's first exascale supercomputer will help scientists peer into the future of global climate change and open a window into weather patterns.
  177. [177]
    DOE Announces $70 Million to Improve Supercomputer Model of ...
    Aug 31, 2022 · E3SM is an ultra-high-resolution model of Earth that is run on exascale supercomputers—digital computers like the Frontier at Oak Ridge National ...
  178. [178]
    From HPSFCon 2025: Status and Trends in the HPC Landscape Panel
    Aug 13, 2025 · Key trends include: portability, avoiding complexity, good tooling, incremental adoption, open standards, and models that are portable, ...
  179. [179]
    TRUST: the HPC open-source CFD platform – from CPU to GPU
    However, this kernel underwent a substantial rewrite in 2025, with enhancements such as improved data locality and the use of faster storage (e.g., registers ...
  180. [180]
    High Performance Computing (HPC) - Amazon AWS
    Using AWS, expedite your high performance computing (HPC) workloads & save money by choosing from low-cost pricing models that match utilization needs.
  181. [181]
    HPC solution | Google Cloud
    Use Cluster Toolkit to quickly customize and deploy an HPC environment, including partner integrations, or select from pre-built industry blueprints. Use Google ...
  182. [182]
    What is Amazon EC2? - Amazon Elastic Compute Cloud
    Use Amazon EC2 for scalable computing capacity in the AWS Cloud so you can develop and deploy applications without hardware constraints.
  183. [183]
    OpenStack: Open Source Cloud Computing Infrastructure
    OpenStack is an open source cloud computing infrastructure software project and is one of the three most active open source projects in the world.
  184. [184]
    Elasticity in Cloud Computing: The Complete 2025 Guide - BuzzClan
    Jul 18, 2025 · Elastic computing can reduce cloud costs by 30-40% through dynamic resource allocation, ensuring you only pay for actively used resources. It ...
  185. [185]
    Working with Spot Instances - AWS ParallelCluster
    AWS ParallelCluster uses Spot Instances if you have set SlurmQueues / CapacityType or AwsBatchQueues / CapacityType to SPOT in the cluster configuration file.
  186. [186]
    Managed Kubernetes Service - Amazon EKS - AWS
    With Amazon EKS Auto Mode, you can fully automate Kubernetes cluster management for compute, storage, and networking with a single click. Amazon EKS ...
  187. [187]
    Kubernetes concepts - Amazon EKS - AWS Documentation
    You can build and test individual containers on your local computer with Docker or another container runtime , before deploying them to your Kubernetes cluster.
  188. [188]
    Collective Communication for 100k+ GPUs - arXiv
    Oct 21, 2025 · The framework is designed to support complex workloads on clusters exceeding 100,000 GPUs, ensuring reliable, high-throughput, and low-latency ...
  189. [189]
    Powering AI for billions means innovating at every layer ... - Facebook
    Oct 10, 2025 · AI Superclusters: From 4,000 to a groundbreaking 100k+ GPU cluster pushing synchronous parallelism limits. ⚡ Multi-Gigawatt Data Centers: ...
  190. [190]
    Knative: Home
    Knative provides building blocks for cloud applications, enabling serverless workloads on Kubernetes. It has two main components: Serving and Eventing.
  191. [191]
    5G architecture for hybrid and multi-cloud environments - Ericsson
    Mar 29, 2022 · CSPs and network software vendors need a common hybrid/multi-cloud strategy for efficient operation of 5G services in public and private ...
  192. [192]
    SETI@home: An Experiment in Public-Resource Computing
    SETI@home uses computers in homes and offices around the world to analyze radio telescope signals. This approach, though it presents some difficulties, has ...
  193. [193]
    EGEE Portal: Enabling Grids for E-sciencE
    EGEE: Enabling Grids for E-sciencE is the largest grid infrastructure in the world, which brings together more than 140 institutions to produce a reliable ...
  194. [194]
    Globus Toolkit: Grid Computing Middleware | Argonne National ...
    Globus is open source grid software for distributed resource sharing, including software for security, resource management, monitoring, discovery, and data ...
  195. [195]
    SaaS vs PaaS vs IaaS – Types of Cloud Computing - Amazon AWS
    IaaS contains the basic building blocks for cloud IT and typically provides access to networking features, computers (virtual or on dedicated hardware), and ...
  196. [196]
    Serverless computing and applications | Microsoft Azure
    Serverless computing enables developers to build applications faster by eliminating the need for them to manage infrastructure.
  197. [197]
    Fog Computing and Edge Computing as Alternatives to the Cloud
    Jan 10, 2024 · Edge computing moves data processing to the network's edge, while fog computing extends cloud power to the network's periphery, both moving ...
  198. [198]
    SparkGrid: Blockchain Assisted Secure Query Scheduling and ...
    Feb 14, 2025 · Grid computing is an emerging technology that enables the heterogeneous collection of data and provision of services to users.
  199. [199]
    Serverless Computing: A Survey of Opportunities, Challenges, and ...
    Function as a service (FaaS) is a paradigm in which the customers can develop, run, and manage functions without the trouble of building and maintaining the ...
  200. [200]
    Serverless Computing - Communications of the ACM
    Sep 1, 2023 · Serverless computing is commonly understood as an approach to developing and running cloud applications without requiring server management. The ...
  201. [201]
    [1602.05629] Communication-Efficient Learning of Deep Networks ...
    Feb 17, 2016 · Communication-Efficient Learning of Deep Networks from Decentralized Data. Authors:H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, ...
  202. [202]
    [PDF] Blockchains from a Distributed Computing Perspective
    Jan 16, 2018 · It is helpful to start by reviewing a blockchain precursor, the so-called universal construction for lock-free data structures [12]. Alice runs ...
  203. [203]
    Review on variants of reliable and security aware peer ... - IEEE Xplore
    Review on variants of reliable and security aware peer to peer content distribution using network coding. Abstract: Content distribution system can transmit ...
  204. [204]
    The Evolution of IBM's Quantum Information Software Kit (Qiskit)
    Aug 17, 2025 · We show how Qiskit facilitates hybrid classical-quantum workflows and enables the deployment of algorithms on physical quantum hardware ...
  205. [205]
    Intel Builds World's Largest Neuromorphic System to Enable More ...
    Apr 17, 2024 · Loihi-based systems can perform AI inference and solve optimization problems using 100 times less energy at speeds as much as 50 times ...
  206. [206]
    Toward Decentralized Operationalization of Zero Trust Architecture ...
    Apr 11, 2025 · Our pioneering decentralized Zero Trust Architecture (dZTA), proposed in this paper, redefines protection for IoT and remote collaboration.