
IBM BladeCenter

The BladeCenter is a modular blade server platform developed by IBM, first introduced in November 2002, that enables high-density computing by integrating multiple thin server blades into a single chassis sharing common resources such as power supplies, cooling fans, networking, and management infrastructure. This design optimizes space utilization in data centers, reduces cabling complexity, and lowers operational costs through features like hot-swappable components and redundant systems. Key models include the BladeCenter E (7U, supporting up to 14 single-wide blades), H (9U, with high-speed I/O for up to 14 blades), S (7U, with integrated storage for small to medium enterprises), and T (a compact 8U design for telecommunications environments), each tailored for scalability and workload flexibility across x86 and POWER architectures.

Introduced as part of IBM's eServer xSeries lineup and later folded into the System x portfolio, the BladeCenter supported diverse processors such as Intel Xeon, AMD Opteron, and IBM POWER7, with memory capacities up to 1.28 TB in certain configurations, and I/O options including Gigabit Ethernet, Fibre Channel, InfiniBand, and 10 GbE. It facilitated advanced virtualization through compatibility with VMware, AIX, Linux, and Windows, while incorporating reliability features such as N+1 power redundancy, memory mirroring, and hot-swap bays for minimal downtime. The platform's management capabilities, powered by tools like the Advanced Management Module (AMM) and IBM Systems Director, enabled remote monitoring, firmware updates, and security protocols including SSL, VLANs, and access controls.

By 2014, enhancements like GPU expansion with NVIDIA cards and integration with zEnterprise for hybrid environments had further solidified its role in enterprise data centers. Production of new units ceased around that time: IBM sold its x86 server business, including the BladeCenter line, to Lenovo in 2014, with the platform succeeded by products like Flex System, while IBM continued its Power-based offerings within the Power Systems line.

Overview

Definition and Purpose

The IBM BladeCenter is a blade server system designed as a high-density, scalable platform that houses multiple thin server modules, known as blades, within a single enclosure. These blades are independent servers equipped with their own processors, memory, storage, and operating systems, but they share essential infrastructure including power, cooling, networking, and management resources through a high-availability midplane. This modular architecture allows for up to 14 hot-swappable blades per chassis in certain configurations, enabling efficient resource utilization and simplified deployment in rack-optimized environments. The primary purpose of the IBM BladeCenter is to support high-density computing in data centers by consolidating multiple servers into a compact chassis, thereby reducing cabling complexity, floor space requirements, and overall power consumption compared to traditional standalone rack or tower servers. This shared infrastructure lowers the total cost of ownership (TCO) while enhancing scalability and flexibility, as blades can be easily added, removed, or upgraded without disrupting operations. It is particularly suited for applications such as virtualization, high-performance computing (HPC), and server clustering, where dense compute resources are critical for handling demanding workloads efficiently. Initially targeted at midmarket and enterprise customers, the BladeCenter addresses the need for cost-effective solutions in power- and space-constrained IT environments, such as distributed enterprises, telecom facilities, and business-critical applications like e-business and hosted services. Unlike conventional 1U or 2U servers, which require individual power supplies, cooling units, and network connections, the BladeCenter's design promotes consolidation and centralized management, resulting in significant reductions in operational complexity and energy use.
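As a rough, illustrative check on these density claims, the arithmetic below uses the chassis figures cited in this article; the 42U rack height is an assumption about a standard enterprise rack:

    # Density arithmetic only: 7U chassis holding 14 single-wide blades,
    # stacked in an assumed standard 42U rack.
    RACK_UNITS = 42
    CHASSIS_UNITS = 7
    BLADES_PER_CHASSIS = 14

    chassis_per_rack = RACK_UNITS // CHASSIS_UNITS            # 6 chassis
    blades_per_rack = chassis_per_rack * BLADES_PER_CHASSIS   # 84 blades

    # The same 42U rack holds at most 42 conventional 1U servers, so the
    # blade layout roughly doubles the number of servers per rack.
    print(chassis_per_rack, blades_per_rack)   # -> 6 84

The 84-blade figure matches the per-rack scalability quoted for the original BladeCenter E announcement later in this article.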

Key Architectural Features

The IBM BladeCenter employs a shared midplane as its central interconnect, which links blade servers directly to power supplies, cooling fans, and I/O modules without requiring external cabling, thereby simplifying deployment and maintenance. This dual-redundant, high-availability midplane supports high-speed fabrics, including 10 Gbps Ethernet and InfiniBand connections, routed to switch modules for efficient data transfer across the chassis. The system's modular architecture accommodates up to 14 half-height or 7 full-height blades within a single chassis, promoting density and flexibility in data centers. Redundant power and cooling components, including configurations with multiple hot-swappable power supplies and blower modules, ensure continuous operation even during failures, minimizing downtime in production environments. For instance, power supplies rated at 2000 W provide scalable delivery tailored to workload demands. Management capabilities are centralized through the Advanced Management Module (AMM), which facilitates remote monitoring of system health, firmware updates, and light path diagnostics via LED indicators on the chassis and components to pinpoint faults rapidly. The AMM supports multiple interfaces, including web-based access, a command-line interface (CLI), and Simple Network Management Protocol (SNMP), with IPMI 1.5 compliance for standardized remote control. Scalability is enabled by chassis clustering and switch stacking features, allowing aggregation of multiple units into larger fabrics, such as up to 8 chassis in specialized configurations or stacking up to 8 switches for expanded networking. This design supports growth from small deployments to enterprise-scale clusters while maintaining centralized management. Energy efficiency is prioritized through optimized power distribution and airflow design, with 2000 W power supplies that adjust output based on load and front-to-back cooling paths that enhance thermal performance across the chassis. These elements reduce overall power consumption and operational costs in data center settings.
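As a minimal sketch of scripted AMM management, the following uses the Python paramiko library to run CLI commands over the SSH interface mentioned above. The hostname, the default-style credentials, and the exact command strings are illustrative assumptions; the commands supported vary by AMM firmware level.

    # Hedged example: drive the AMM command-line interface over SSH.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Hypothetical address and credentials, for illustration only.
    client.connect("amm.example.net", username="USERID", password="PASSW0RD")

    # "list -l a" enumerates chassis components; "health -l a" reports
    # their status (syntax depends on the AMM firmware level installed).
    for command in ("list -l a", "health -l a"):
        _stdin, stdout, _stderr = client.exec_command(command)
        print("---", command)
        print(stdout.read().decode())

    client.close()

The same health and inventory data is reachable through the web interface and SNMP, so scripting the CLI is a convenience rather than the only management path.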

History

Development and Introduction

The development of IBM BladeCenter originated in the fall of 1999, when members of IBM's xSeries server brand technical team—previously known as the Netfinity brand—conceived the system as a response to escalating demands for higher density, centralization, and simplified management in data center environments. This initiative was driven by the broader industry transition toward blade architectures, which promised to address challenges such as space constraints, power consumption, and remote management in rapidly expanding data centers, amid growing pressures to reduce operational costs and enhance system reliability for applications like Web serving and clustering. IBM formally announced the BladeCenter in November 2002, marking its entry into the market with the launch of the BladeCenter E chassis, designed to support up to 14 blades in a compact 7U enclosure and scalable to 84 blades per rack. The initial offering featured dual Intel Xeon processors, positioning it as the first major vendor's Xeon-based blade solution and emphasizing density in a compact form factor to meet enterprise needs without compromising on speed or efficiency. Spearheaded by IBM's server division, the project aimed to establish a competitive foothold against established blade offerings from vendors such as RLX Technologies and Hewlett-Packard, which had already gained traction in the market by providing modular, high-density alternatives to traditional rack servers. Key innovations focused on integrated design, shared infrastructure for power and cooling, and simplified provisioning to lower total cost of ownership while supporting robust workloads. Central to the BladeCenter's debut were initial partnerships, notably with Intel, which collaborated on system architecture, processors like Xeon and the future Itanium 2, chipsets, and management software to optimize performance and cost. Networking integration drew on partner-supplied switch modules that were compatible from the outset, enabling Ethernet connectivity within the BladeCenter environment.

Product Evolution

Following its initial launch, the IBM BladeCenter platform expanded in 2006 with the introduction of the BladeCenter H chassis, a 9U enclosure designed to support high-speed I/O requirements through dedicated bays for advanced modules such as 10 Gb Ethernet and 4X InfiniBand switches. This model, designated 8852-5Tx, accommodated up to 14 blades and enhanced networking flexibility with features like the Multi-Switch Interconnect Module, enabling up to eight network connections per blade while maintaining compatibility with existing BladeCenter E blades. The chassis addressed demands for high-bandwidth applications, such as data-intensive workloads, by incorporating redundant power supplies rated at 2,900 W or higher and improved cooling systems. In 2007, IBM further broadened the platform's applicability with the BladeCenter S chassis, a compact 7U model (8886-1Tx) optimized for rack integration in smaller environments, supporting up to six blades alongside integrated SAS/SATA storage for up to 12 hot-swap drives. This variant emphasized ease of deployment in office settings, featuring office-friendly acoustics, a Pass-thru Module for simplified connectivity, and compatibility with a wide range of switch modules. Concurrently, the platform evolved to support multi-architecture configurations, allowing mixed deployments of Xeon, Opteron, POWER, and Cell Broadband Engine-based blades within the same chassis, which expanded its utility across diverse computing needs like virtualization and high-performance computing. Management capabilities advanced in 2007 with the rollout of the Advanced Management Module (AMM), which provided enhanced firmware for BladeCenter chassis, including Protected Mode for isolated network management and Serial over LAN support to facilitate remote access in virtualized environments. A key milestone was the integration of BladeCenter into the Roadrunner supercomputer project, where QS22 blades leveraging PowerXCell 8i processors and LS21 AMD Opteron blades delivered hybrid performance for scientific simulations at Los Alamos National Laboratory. By 2008, IBM introduced energy-efficient variants, such as the HS21 XM blade with low-voltage quad-core Intel Xeon processors, achieving competitive SPEC CPU2006 benchmarks while reducing power draw in dense configurations. These updates, including second-generation AMD Opteron support in BladeCenter servers, prioritized energy efficiency for operations without compromising scalability.

Discontinuation and Legacy

In 2012, IBM announced the PureFlex System, a converged platform that effectively replaced the BladeCenter as the company's primary blade offering, signaling the end of active development and new marketing for the BladeCenter line. IBM provided end-of-support (EOS) dates for various BladeCenter components, with the BladeCenter E chassis reaching EOS on December 31, 2013, while the H and S chassis reached EOS on December 31, 2021. The 2014 sale of IBM's x86 server business to Lenovo included the BladeCenter portfolio, transferring primary support responsibilities and ongoing maintenance for x86-based BladeCenter products to the new owner. The BladeCenter's legacy endures in its influence on modern converged systems, where its modular chassis and integrated management features informed designs like the Flex System, enabling more streamlined hybrid environments. It remains deployed in legacy high-performance computing (HPC) and enterprise setups for reliable, dense computing needs. Additionally, BladeCenter advanced energy-efficient computing standards by demonstrating up to 30% lower power usage compared to competing blade systems through optimized power supplies and cooling, contributing to broader industry practices for reducing energy consumption.

Enclosures

BladeCenter E Chassis

The IBM BladeCenter E chassis, introduced in November 2002, represents the foundational enclosure in the BladeCenter family, designed as a cost-effective solution for high-density server deployments. This 7U rack-mounted unit supports up to 14 single-width (half-height) blade servers or 7 double-width (full-height) blade servers, enabling efficient consolidation of computing resources within a compact form factor. The chassis features a modular architecture with shared infrastructure, including bays for I/O modules, power supplies, and cooling components, to minimize cabling and operational overhead. Key to its design are four standard I/O bays for switch or pass-through modules, primarily supporting Ethernet connectivity up to 1 Gb/s alongside Fibre Channel options; unlike the later H chassis, it lacks the dedicated high-speed bays needed for 10 Gb fabrics. Power is provided by up to four hot-swappable 2000 W (200-240 V AC) power modules, installed in redundant pairs to ensure continuous operation, with a maximum system power draw of approximately 6,938 W depending on configuration. Cooling is managed by two hot-swappable variable-speed blowers delivering front-to-back airflow at up to 500 CFM total, with redundancy allowing one blower to sustain the full load if the other fails. Targeted at general-purpose computing environments such as server consolidation, e-business applications, and enterprise infrastructure including file/print services and collaboration tools, the BladeCenter E prioritizes density and cost-effectiveness over specialized performance needs. Its Ethernet-centric I/O architecture makes it well-suited for standard networked workloads but less ideal for high-bandwidth applications like bandwidth-intensive clustering or storage area networks, where it offers limited native support compared to higher-end chassis.
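Facilities planning for a fully loaded chassis is typically done in BTU per hour rather than watts; converting the maximum draw quoted above is a one-line calculation, with the standard factor 1 W ≈ 3.412 BTU/hr as the only assumption:

    # Convert the chassis's maximum electrical draw to heat output.
    WATTS_TO_BTU_PER_HR = 3.412   # standard conversion factor

    def heat_output_btu_per_hr(watts: float) -> float:
        return watts * WATTS_TO_BTU_PER_HR

    print(round(heat_output_btu_per_hr(6938)))   # -> about 23,672 BTU/hr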

BladeCenter H Chassis

The IBM BladeCenter H chassis, introduced in February 2006, was designed as a high-density enclosure to address demanding I/O-intensive workloads in enterprise and high-performance computing environments. It evolved from the earlier BladeCenter E by prioritizing enhanced networking over basic server density, enabling faster data transfer rates essential for applications like large-scale simulations and data processing. Occupying 9U of rack space, the chassis supports up to 14 half-height or 7 full-height blade servers, allowing for scalable deployment in standard 19-inch racks up to 28 inches deep. Its I/O architecture includes 10 bays—6 for standard switch modules (bays 1-6) and 4 dedicated to high-speed fabrics (bays 7-10)—facilitating connections via 10 Gigabit Ethernet (10GbE) and InfiniBand, which provide up to four 10 Gb data channels per blade for low-latency, high-throughput networking. This configuration supports fabrics for traditional Ethernet, Fibre Channel, and pass-through modules in the standard bays, while the high-speed bays accommodate advanced options like 4X InfiniBand switches to minimize bottlenecks in data-heavy operations. Power delivery is handled by up to four hot-swappable 2,900-watt (or variant 2,980-watt) power supply units, with a standard pair sufficient for the first seven blade bays and additional pairs required for full population; earlier models also supported 2,000-watt or 2,320-watt options for varied efficiency needs. Cooling is managed through two redundant hot-swap blower assemblies, supplemented by up to 12 fans integrated into the power supplies, ensuring front-to-back airflow and thermal redundancy even under maximum load. These features made the BladeCenter H suitable for high-performance computing (HPC) and financial trading applications, where rapid data access is critical. Notably, the chassis formed the foundational enclosure for the IBM Roadrunner supercomputer, deployed at Los Alamos National Laboratory, where clusters of BladeCenter H units housed custom TriBlade modules to achieve petaflop-scale performance for scientific simulations from 2008 to 2013. This integration highlighted its role in enabling hybrid architectures combining general-purpose processors with accelerator technologies for complex, bandwidth-sensitive tasks.
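The power-supply population rule described above reduces to a short sketch; the pairing logic follows the text, while treating each pair as an indivisible redundant unit is an assumption about typical configuration practice:

    # One redundant pair of supplies feeds blade bays 1-7; populating
    # bays 8-14 requires the second pair.
    def supplies_required(populated_bays: int) -> int:
        if not 0 <= populated_bays <= 14:
            raise ValueError("BladeCenter H has 14 blade bays")
        return 2 if populated_bays <= 7 else 4

    for bays in (4, 7, 8, 14):
        print(bays, "blades ->", supplies_required(bays), "power supplies")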

BladeCenter S Chassis

The IBM BladeCenter S chassis, introduced in 2007, represents a storage-optimized enclosure designed to integrate servers, storage, and networking capabilities into a compact package suitable for small and medium-sized businesses (SMBs). This 7U rack-mounted unit supports up to six hot-swap single-wide (30 mm) blade servers or three double-wide (60 mm) blades, enabling direct attachment of storage resources to simplify deployment in distributed or office environments. Like other BladeCenter models, it employs a shared midplane for interconnecting blades with I/O and management modules. Key to its storage focus, the BladeCenter S includes two dedicated expansion bays for SAS/SATA disk storage modules, accommodating up to 12 hot-swap 3.5-inch drives or 24 hot-swap 2.5-inch drives, providing a maximum capacity of approximately 5.4 TB using contemporary 450 GB drives at the time of launch. For I/O connectivity, it features four hot-swap bays for switch modules, with two bays dedicated to standard Ethernet and the remaining two supporting Ethernet, SAS, or Fibre Channel expansions to facilitate integrated SAN/NAS configurations. An integrated RAID controller module, installed in one of two battery-backed bays, enables RAID levels 0, 1, 5, and 10 for simplified storage management without external arrays. Power and cooling are provisioned for reliability and scalability in SMB settings, with support for up to four hot-swap 950 W (110-127 V AC) or 1450 W (200-240 V AC) power supplies arranged in redundant configurations to handle varying loads. Cooling is managed by four hot-swap redundant fan modules, each containing two fans for a total of eight, delivering airflow up to 400 CFM to maintain operational temperatures in rack or standalone tower modes via an optional Office Enablement Kit. This design emphasizes ease of use, with tools like the BladeCenter Start Now Advisor for quick setup and quiet operation suitable for non-data-center locations.
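As an illustrative capacity calculation for the integrated RAID levels, using the 12 x 450 GB drive configuration quoted above (idealized arithmetic with no hot spares; real arrays reserve additional overhead):

    # Rough usable capacity per RAID level for n identical drives.
    def usable_gb(drives: int, drive_gb: int, level: int) -> int:
        if level == 0:
            return drives * drive_gb          # striping only
        if level in (1, 10):
            return drives * drive_gb // 2     # mirrored pairs
        if level == 5:
            return (drives - 1) * drive_gb    # one drive's worth of parity
        raise ValueError("controller supports RAID 0, 1, 5, and 10")

    for level in (0, 1, 5, 10):
        print("RAID", level, "->", usable_gb(12, 450, level), "GB usable")
    # RAID 0 -> 5400 GB, the ~5.4 TB raw maximum cited above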

Specialized Enclosure Variants

The IBM BladeCenter T, introduced in 2004, is a specialized variant designed for telecommunications environments, offering carrier-grade reliability through compliance with Network Equipment-Building System (NEBS) Level 3 and ETSI standards. This chassis fits in 23-inch wide racks, with a compact 8U height and maximum depth of 600 mm, enabling high-density deployments in central office settings. It supports up to 8 hot-swappable blade servers, maintaining the modular architecture of standard BladeCenter designs while incorporating redundant power supplies and enhanced seismic protection for NEBS environments. The BladeCenter HT, announced in 2007, extends the high-performance H chassis for demanding conditions, particularly those with elevated ambient temperatures. It operates in standard environments up to 40°C but supports short-term excursions to 55°C, facilitated by four hot-swappable fan modules with variable-speed control and customer-replaceable filters for improved airflow in hot-aisle configurations. This 12U enclosure accommodates up to 12 single-wide blades or 6 double-wide blades, along with four standard I/O bays and four high-speed bays, while providing AC or DC power options for redundancy. Both variants target telecommunications use cases, such as VoIP gateways, media servers, and signaling systems in central offices for the T, and high-availability applications in warmer data centers for the HT, where traditional cooling limits apply. They preserve core BladeCenter modularity—shared I/O fabrics, management modules, and expansion options—but adapt mechanically and thermally for specialized reliability in telco-grade or environmentally challenged infrastructures.

Blade Servers

Intel-Based Blades

The Intel-based blades for the IBM BladeCenter represented the primary x86 computing nodes, evolving from early dual-processor designs to scalable multi-socket configurations optimized for general-purpose workloads such as web serving, databases, and enterprise applications. These blades utilized Intel Xeon processors, progressing through generations from Pentium-era multiprocessor (MP) variants to advanced Nehalem, Westmere, and Sandy Bridge architectures, with support for increasing core counts, memory capacities, and I/O capabilities like PCIe Gen2. They were designed as single-wide (30 mm) or double-wide (60 mm) form factors to fit within standard BladeCenter enclosures, enabling dense deployments of up to 14 blades per chassis in the E or H models.

The BladeCenter HS20, introduced in 2002, was the foundational Xeon-based blade, supporting up to two Xeon processors at speeds of 2.8 GHz or 3.6 GHz with an 800 MHz front-side bus and 1 MB of L2 cache per processor. It featured up to 8 GB of PC2-3200 DDR2 memory across four slots, two small-form-factor SCSI or IDE drives, and integrated dual Gigabit Ethernet ports, making it suitable for initial blade adoption in data centers. The HS20's design emphasized reliability, with compatibility with the chassis's 2,000-watt power supplies and redundant infrastructure for dependable multi-processor operation.

Succeeding the HS20, the HS21 launched in 2006 as a dual-core upgrade, accommodating up to two Xeon processors (dual-core at up to 3.0 GHz or quad-core at up to 2.66 GHz) with a 1333 MHz front-side bus and up to 8 MB of L2 cache. Memory capacity expanded to 32 GB of 667 MHz PC2-5300 DDR2 ECC via eight slots, alongside support for two hot-swap SAS/SATA drives and dual Gigabit Ethernet controllers. This model introduced enhanced virtualization support through Intel VT technology and was compatible with BladeCenter E and H chassis for improved scalability in mid-range server environments.

The HS22, released in 2009, advanced to quad-core Xeon 5500 or six-core 5600 series processors (Nehalem/Westmere architecture) at up to 3.6 GHz, with up to 12 cores total and integrated memory controllers for better multi-threaded performance. It supported up to 192 GB of DDR3 memory across 12 slots (using 16 GB modules), two hot-swap SAS/SATA/SSD drives, and PCIe Gen2 for faster expansion card integration, positioning it as a high-density option for virtualization and database applications. The HS22's 30 mm single-wide form factor maximized compatibility across BladeCenter enclosures.

In 2012, the HS23 and HS23E models marked the transition to Sandy Bridge-based Xeon E5 processors, with the HS23 offering up to two E5-2600 series CPUs (up to 8 cores each at 2.6 GHz or higher) for demanding virtualized workloads. The HS23 supported up to 512 GB of DDR3 via 16 slots, two hot-swap drives (SAS/SATA/SSD), and PCIe Gen2 x16 connectivity, emphasizing energy efficiency with 80 W TDP options. The entry-level HS23E variant used E5-2400 series processors (up to 8 cores at 2.0 GHz) with up to 192 GB of memory across 12 DIMMs, targeting cost-sensitive web serving and light virtualization, both in 30 mm form factors.

The BladeCenter HX5, introduced in 2010, provided scalable capabilities as a double-wide (60 mm) blade supporting 2 to 4 sockets with Xeon E7-4800 or E7-2800 series processors (up to 10 cores each at 2.0 GHz or higher), enabling configurations of up to 40 cores per blade. It featured memory expandable up to 1.25 TB of DDR3 using MAX5 memory expansion blades (40 slots per node), four hot-swap SAS/SATA drives, and PCIe Gen2 support for I/O-intensive workloads like large-scale databases. This model's form factor allowed seamless integration into BladeCenter H chassis, supporting up to 56 HX5 blades per rack for massive consolidation.
For entry-level workstation needs, the HC10 blade debuted in 2007 as a single-socket, 30 mm workstation-oriented blade powered by Intel Core 2 Duo processors (E6300 to E6700 series at up to 2.66 GHz with 4 MB L2 cache and a 1066 MHz FSB). It offered up to 8 GB of DDR2 memory across four slots, one 2.5-inch drive (up to 60 GB), integrated NVIDIA NVS 120M graphics, and dual Gigabit Ethernet ports, making it well suited to CAD and remote desktop applications hosted in BladeCenter S or E chassis.
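The memory maxima quoted for these blades all reduce to slots multiplied by the largest supported module; a small helper makes the arithmetic explicit (the 32 GB module size for the HS23 is inferred from its 512 GB/16-slot figures, not taken from a parts list):

    # Maximum memory = DIMM slots x largest supported DIMM.
    def max_memory_gb(slots: int, dimm_gb: int) -> int:
        return slots * dimm_gb

    print(max_memory_gb(12, 16))   # HS22: 12 x 16 GB = 192 GB
    print(max_memory_gb(16, 32))   # HS23: 16 x 32 GB = 512 GB (inferred module size)
    print(max_memory_gb(40, 16))   # HX5 + MAX5: 40 x 16 GB = 640 GB per node;
                                   # two joined nodes double this, matching
                                   # the ~1.25 TB maximum cited above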

AMD-Based Blades

The IBM BladeCenter LS series comprised a family of AMD Opteron-based blade servers designed for cost-effective, high-density x86 computing, emphasizing energy efficiency and scalability for workloads such as high-performance computing (HPC) and virtualization. These blades leveraged AMD's 64-bit Opteron processors to deliver competitive performance in memory-intensive applications while maintaining lower power consumption compared to contemporary Intel-based alternatives. The series was optimized for the BladeCenter's shared I/O architecture, allowing seamless integration into standard enclosures without dedicated per-blade networking or storage controllers.

The inaugural model, the BladeCenter LS20, was introduced in June 2005 as IBM's first Opteron-based blade server, supporting single- or dual-socket configurations with initial single-core processors like the 2.2 GHz model 248, and later dual-core variants such as the 2 GHz 270. It offered up to 16 GB of memory and integrated graphics via an ATI Radeon 7000M for basic console access, making it suitable for entry-level clustering and technical computing tasks. The LS20's design prioritized density, fitting 14 blades per BladeCenter E or H chassis, with a thermal design power (TDP) under 200 W per blade to support dense deployments.

Succeeding the LS20, the LS21 arrived in October 2006, featuring dual-socket dual-core Opteron processors at speeds of 2.0 to 2.6 GHz, each with 1 MB of L2 cache per core and full 64-bit support. Memory capacity expanded to 64 GB of DDR2-667 registered DIMMs with Chipkill error correction, enabling robust enterprise and scientific workloads. Like its predecessor, the LS21 was compatible with BladeCenter E and H chassis, drawing approximately 180 W to balance performance and efficiency in environments requiring up to 28 blades per rack.

The LS22, launched in 2008, advanced the series with quad-core Opteron "Barcelona" processors, such as the 2.3 GHz model 2356, delivering enhanced multi-threaded performance for HPC applications. It supported up to 64 GB of DDR2-667 memory across eight slots and maintained the low-power profile of earlier LS models, with a focus on scalability for clustered deployments. The LS22's architecture emphasized multi-core efficiency, allowing it to handle parallel tasks more effectively than prior generations while fitting the same E and H chassis form factors.

For multi-socket needs, the double-wide LS41 debuted alongside the LS21 in 2006, scalable from two to four dual-core Opteron processors, with support for up to 64 GB of DDR2 memory. Updated in 2007 with 68 W dual-core Opterons, it targeted enterprise workloads like data warehousing, offering up to two internal drives for local storage. The LS41's four-socket capability provided up to 30% better efficiency in idle states compared to similar blades, suiting dense deployments in BladeCenter H chassis.

The LS42, introduced in 2008, represented the series pinnacle with support for up to four quad-core Opteron "Shanghai" processors, such as the 2.7 GHz model 8384, and expanded memory to 128 GB of DDR2-800 very low-profile (VLP) DIMMs. This model excelled in HPC and virtualization, accommodating up to three hot-swap drives and integrated graphics options for management. With a power draw around 300 W, the LS42 was optimized for BladeCenter E and H enclosures, delivering scalable performance for demanding, multi-threaded environments.

Power-Based Blades

The IBM BladeCenter Power-based blades utilized IBM POWER processors to deliver reliable, scalable performance for enterprise environments, particularly suited to Unix-style workloads on AIX and Linux distributions. These blades emphasized scalability, virtualization through PowerVM, and resilience via features like Active Memory Expansion, enabling dense deployments in BladeCenter H and S chassis for tasks including consolidation and high-availability applications.

The JS series represented the initial lineup of Power-based blades, starting with the JS20 and JS21 models introduced around 2005 and 2006, respectively, both built on the PowerPC 970 architecture. The JS20 featured two single-core PowerPC 970 processors at 1.6 GHz or 2.2 GHz with 512 KB of integrated L2 cache per core, up to 8 GB of PC2700 memory, and options for up to 120 GB of internal ATA100 storage, while the JS21 advanced to dual-core 2.3 GHz or 2.5 GHz PowerPC 970MP processors with up to 16 GB of memory and support for drives up to 146 GB. These entry-level blades supported AIX 5L and Linux, targeting midrange enterprise applications with modular expansion via I/O cards for Ethernet and Fibre Channel connectivity.

Subsequent JS models built on this foundation with POWER6 processors. The JS22, released in 2007, incorporated dual 4.0 GHz dual-core POWER6 processors for four cores total, up to 32 GB of DDR2 Chipkill memory across four slots, and a single 73 GB or 146 GB SAS drive, with integrated dual Gigabit Ethernet and virtualization support via the Integrated Virtualization Manager (IVM). The JS23, introduced in 2009, offered a single-wide form factor with a quad-core 4.2 GHz POWER6 processor, up to 64 GB of DDR2 memory in eight VLP slots, and options for 73 GB to 300 GB SAS or SSD storage, emphasizing blade clustering for application and database workloads on AIX or Linux. Complementing these, the entry-level JS12 (2008) provided two 3.8 GHz POWER6 cores, up to 64 GB of DDR2 memory, and an integrated SAS controller, while the double-wide JS43 (2009) scaled to eight 4.2 GHz cores, 128 GB of memory in 16 slots, and up to 600 GB of storage with RAID 0/1 support, ideal for demanding ERP systems and consolidation with up to 320 micro-partitions.

The PS series shifted to POWER7 processors for enhanced multithreading and performance, with the PS700 and PS701 models announced in 2010, featuring 3.0 GHz POWER7 cores (four for the PS700, eight for the PS701), supporting up to 64 GB and 128 GB of DDR3 memory respectively, and single or dual 2.5-inch SAS drives with RAID 0/10. Evolving further, the PS702 (2010) doubled to 16 cores across two sockets with 256 GB maximum memory, while the PS703 and PS704 (2011) provided 16 and 32 cores at 2.4 GHz, up to 256 GB and 512 GB of DDR3 VLP memory (augmented by Active Memory Expansion), and advanced storage options including up to four 1.8-inch SSDs or dual SAS drives with RAID 0/5/6/10 support, suiting large-scale database and ERP deployments on AIX. These blades integrated seamlessly with PowerVM for live partition mobility and supported coexistence in mixed BladeCenter environments, prioritizing enterprise reliability for workloads like transaction processing.
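The micro-partition counts quoted for these blades follow from PowerVM's per-core limit; the sketch below assumes the 10-partitions-per-core ceiling (0.1 processing units minimum per partition) that applied to PowerVM micro-partitioning in this era:

    # PowerVM micro-partitioning: up to 10 partitions per physical core.
    MICROPARTITIONS_PER_CORE = 10

    def max_micropartitions(cores: int) -> int:
        return cores * MICROPARTITIONS_PER_CORE

    for cores in (4, 8, 16, 32):
        print(cores, "cores ->", max_micropartitions(cores), "micro-partitions")
    # 32 cores -> 320, the largest figure cited in this section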

Cell-Based Blades

Cell-based blades in the IBM BladeCenter lineup utilized the Cell Broadband Engine (Cell/B.E.) processor, a joint development by Sony, Toshiba, and IBM, optimized for tasks requiring intensive vector processing. These blades, available in single-wide (QS21) or double-wide (QS20, QS22) form factors and designed primarily for the BladeCenter H chassis, targeted applications in multimedia rendering and scientific simulations where parallel floating-point operations were critical. Introduced as part of IBM's effort to extend the Cell architecture beyond consumer systems like the PlayStation 3, they provided dense computational power in a compact footprint.

The QS series represented the progression of Cell-based offerings. The QS20, launched in 2006, featured two 3.2 GHz Cell/B.E. processors, 1 GB of XDRAM (512 MB per processor), and integrated dual Gigabit Ethernet with optional InfiniBand connectivity for high-speed clustering. It supported Linux distributions and was suited for initial deployments in compute-intensive environments. The QS21, released in 2007, built on this with enhanced I/O capabilities in a single-wide form factor, including dual-port DDR InfiniBand host channel adapters and increased memory of 2 GB XDRAM (1 GB per processor), enabling better scalability for networked workloads. By 2008, the QS22 advanced the line further with two 3.2 GHz PowerXCell 8i processors—each delivering up to 256 GFLOPS of single-precision floating-point performance—along with support for up to 32 GB of DDR2 SDRAM and PCIe expansion slots for additional accelerators like GPUs.

At the core of these blades was the Cell/B.E. architecture, comprising one Power Processing Element (PPE)—a general-purpose 64-bit PowerPC core—and eight Synergistic Processing Elements (SPEs) per processor, each with 256 KB of local store for SIMD vector operations. The PPE handled system management and I/O coordination, while the SPEs executed compute tasks, interconnected via an element interconnect bus for efficient data flow. Maximum memory configurations reached 32 GB in later models like the QS22, with PCIe interfaces allowing integration of coprocessors for hybrid acceleration. These blades were compatible with Power-based BladeCenter environments, sharing chassis infrastructure for mixed deployments.

Applications for Cell-based blades focused on domains leveraging their vector processing strengths, such as real-time video encoding and 3D rendering for surveillance and medical imaging, where the QS21 demonstrated efficient H.264 compression at scale. In scientific computing, they excelled in simulations like molecular dynamics and climate modeling, with the QS22 contributing to hybrid systems. Notably, QS22 blades formed a key component in the IBM Roadrunner supercomputer, providing the Cell processing elements in TriBlade modules that achieved petaflop-scale performance in 2008.

Specialized Blades

The IBM BladeCenter supported a range of specialized blades designed for niche computing and networking requirements beyond standard x86 or POWER architectures, enabling compatibility with diverse workloads in shared chassis environments. These blades addressed specific needs such as running legacy applications or enhancing network consolidation and security, often through partnerships with third-party vendors.

One prominent example is the Themis T2BC blade, introduced in 2008 as a single-wide blade based on the Sun UltraSPARC T2 processor. This uniprocessor blade featured an 8-core UltraSPARC T2 chip operating at 1.2 GHz, with each core supporting eight threads for a total of 64 simultaneous threads, integrated floating-point units, and hardware cryptographic acceleration for secure processing tasks. It supported up to 32 GB of fully buffered DIMM memory across eight slots and included two onboard Gigabit Ethernet ports, with options for Fibre Channel, InfiniBand, or Ethernet expansion via an expansion connector. Designed to run Solaris 10 natively, the T2BC facilitated migration of older Solaris 8 and Solaris 9 applications through compatibility tools, making it suitable for environments requiring SPARC-based legacy support within IBM BladeCenter H, T, or HT chassis.

Another specialized blade was the IBM PN41, released in October 2008 as a high-performance, programmable unit focused on deep packet inspection (DPI) and network optimization. This full-height blade utilized an Intel IXP2805 network processor to handle intensive packet processing, incorporating four 1 Gigabit Ethernet controllers and four 10 Gigabit Ethernet ports for high-throughput connectivity and pass-through functionality. It supported an open development environment with the Eclipse IDE, allowing customization for telecom and government applications such as traffic management, security enhancement, and revenue-generating services like usage-based billing. The PN41 was compatible with BladeCenter H and HT chassis, consuming up to 150 watts, and required integration with compatible high-speed modules for external connectivity.

These blades exemplified the BladeCenter's extensibility for niche deployments, such as workload consolidation in mixed environments or advanced networking to reduce infrastructure costs without disrupting standard server operations. Their adoption was limited to specialized scenarios, including Solaris migrations and custom high-speed networking architectures.

Expansion Modules

Switch Modules

Switch modules in the IBM BladeCenter provide essential networking and storage connectivity for blade servers within the chassis, enabling standard-speed data transfer without requiring external switches for basic operations. These modules occupy the standard I/O bays (typically bays 1 through 4) and are hot-swappable, allowing installation or replacement without powering down the chassis. They are managed through the BladeCenter Management Module (BCMM), which offers web-based, SNMP, or command-line interfaces for configuration, monitoring, and diagnostics, including VLAN setup, port status, and firmware updates.

For Ethernet connectivity, the Cisco Systems Intelligent Gigabit Ethernet Switch Module (CIGESM) serves as a representative option, delivering Layer 2 switching with 14 internal ports connecting to blade servers via the midplane and 4 external copper ports for uplink to the network. It supports up to 250 virtual LANs (VLANs) using IEEE 802.1Q trunking, enabling traffic segmentation, and EtherChannel aggregation for improved redundancy and bandwidth. Additional features include traffic filtering and quality-of-service controls to optimize traffic, making it suitable for enterprise LAN environments. Other vendors also offered similar 1GbE modules, such as those from Nortel (whose blade-switch line later became Blade Network Technologies), with comparable port configurations for basic Ethernet switching.

Storage area network (SAN) access is facilitated by Fibre Channel switch modules from vendors like QLogic and Brocade, providing dedicated paths for high-availability storage. The QLogic 20-port 4Gb SAN Switch Module features 14 internal ports for blade connectivity and 6 external ports supporting 1, 2, or 4 Gbps speeds, with options to license additional ports for scalability. For higher throughput, the QLogic 20-port 8Gb SAN Switch Module extends this to 2, 4, or 8 Gbps per port, ensuring compatibility with existing 4Gb transceivers while enabling faster SAN fabrics. Brocade's 20-port 8Gb SAN Switch Module similarly offers 14 internal and 6 external auto-sensing ports at up to 8 Gbps, operating in full-fabric mode for interoperability with IBM b-type SANs or in Access Gateway mode using NPIV for integration into standard fabrics, with low latency of around 0.74 µsec to support demanding storage workloads.

InfiniBand switch modules address low-latency requirements for high-performance clustering applications, such as scientific computing and financial analytics. The Cisco 4X InfiniBand Switch Module provides 14 internal 4X ports at 10 Gbps for blade-to-switch connectivity and 8 external 4X ports that autonegotiate up to 20 Gbps (DDR), with full non-blocking switching across 24 ports total to minimize bottlenecks in cluster environments. It includes an on-board subnet manager for distributed network intelligence and supports redundant topologies, delivering sub-microsecond latencies ideal for tightly coupled HPC tasks.

Pass-through and bridge modules offer simplified direct connections or Ethernet bridging without active switching logic, useful for integrating with external infrastructure. The IBM BladeCenter Copper Pass-thru Module includes 14 internal bi-directional ports to the blades and 3 external ports (each handling up to 5 channels at 1 Gbps), using CAT5E cables for straightforward cabling to upstream switches. Bridge modules, such as Ethernet variants, allow transparent extension of external networks into the chassis, bypassing internal switching for custom topologies. These modules install directly into the standard I/O bays via a slide-in design, with status verified through front-panel LEDs and BCMM integration for basic monitoring.
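Because these modules expose standard SNMP alongside the BCMM interfaces, external port state can be polled with generic tooling. A hedged sketch using the Python pysnmp library follows; the address and community string are assumptions, and the OID is the standard IF-MIB ifOperStatus column rather than anything IBM-specific:

    # Walk ifOperStatus (1.3.6.1.2.1.2.2.1.8): 1 = up, 2 = down.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, nextCmd)

    iterator = nextCmd(
        SnmpEngine(),
        CommunityData("public"),                         # assumed community
        UdpTransportTarget(("switch-module.example.net", 161)),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.8")),
        lexicographicMode=False)                         # stay inside the subtree

    for error_indication, error_status, _index, var_binds in iterator:
        if error_indication or error_status:
            break
        for oid, status in var_binds:
            print(oid, "=", status)                      # one line per switch port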

High-Speed Switch Modules

High-speed switch modules in the IBM BladeCenter are specialized I/O components designed for high-bandwidth, low-latency networking in performance-intensive environments, exclusively compatible with the BladeCenter H and HT chassis via dedicated double-height bays 7-10. These modules enable advanced fabrics beyond standard Gigabit Ethernet, supporting protocols like 10 Gb Ethernet and InfiniBand to facilitate scalable cluster interconnects for high-performance computing (HPC). By occupying the high-speed bays, they provide non-disruptive integration with existing chassis infrastructure while prioritizing throughput and reliability for demanding workloads.

A representative 10 Gigabit Ethernet option is the BNT 10-port 10 Gb Ethernet Switch Module, which delivers full Layer 2/3 switching and routing at line-rate speeds up to 10 Gbps per port across a total of 24 ports, including 10 external SFP+ uplinks (configurable for 1 Gb fallback) and 14 internal links to server blades. This module supports integration with standard Ethernet fabrics while offering enhanced scalability for applications in blade-dense setups, with no oversubscription under full load. Compatible high-speed pass-through modules can extend protocol support in conjunction with these Ethernet operations.

For InfiniBand-based high-speed connectivity, the Voltaire 40 Gb InfiniBand Switch Module—developed in partnership with Voltaire and later maintained under Mellanox after its acquisition of the company—provides 4X QDR performance at up to 40 Gbps bidirectional, featuring 14 internal 4X ports for direct blade connections and 16 external auto-sensing QSFP ports. This module includes integrated subnet management via an on-board CPU for fabric configuration and monitoring, making it suitable for HPC environments requiring efficient cluster scaling. It offers backward compatibility with DDR at 20 Gbps and SDR at 10 Gbps, enabling operation with lower-speed links for legacy integration without performance degradation.

Key features encompass quality of service (QoS) through fully non-blocking switching, high availability via redundant topologies, and ultra-low latency below 200 ns for blade-to-blade transfers, significantly reducing communication overhead in large-scale clusters of up to 126 nodes. Deployment in the H chassis high-speed bays ensures hot-swappable operation with minimal downtime.
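The quoted InfiniBand generations reduce to simple per-lane arithmetic: SDR, DDR, and QDR signal at 2.5, 5, and 10 Gbps per lane, a "4X" port aggregates four lanes, and the 8b/10b encoding used by these generations leaves 80% of the signalling rate as usable data. A worked example:

    # Signalling vs. usable data rate for 4X InfiniBand ports.
    LANE_GBPS = {"SDR": 2.5, "DDR": 5.0, "QDR": 10.0}
    LANES = 4          # "4X" port width
    ENCODING = 0.8     # 8b/10b: 8 data bits per 10 signal bits

    for gen, lane in LANE_GBPS.items():
        signal = lane * LANES
        print(gen, signal, "Gbps signalling,", signal * ENCODING, "Gbps data")
    # QDR: 40.0 Gbps signalling, 32.0 Gbps usable data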

Custom Modules

Beyond standard offerings, custom modules for the IBM BladeCenter were limited to proprietary integrations tailored for specific original equipment manufacturer (OEM) requirements, such as specialized storage or I/O extensions. These custom implementations underscored the BladeCenter's architectural flexibility, allowing adaptation for extreme environments like simulation and modeling, though such modules were not widely commercialized.
