
XDR DRAM

XDR DRAM, or eXtreme Data Rate DRAM, is a high-performance dynamic random-access memory (DRAM) architecture developed by Rambus Inc. that enhances standard CMOS DRAM cores with a high-speed interface to achieve superior bandwidth efficiency using fewer pins and lower power compared to contemporary memory technologies. Announced in July 2003 as a successor to Rambus's earlier RDRAM, XDR DRAM employs octal data rate (ODR) signaling, transmitting eight bits of data per clock cycle per pin, with an initial clock frequency of 400 MHz enabling per-pin data rates of 3.2 Gbps. This signaling approach, using Rambus's FlexPhase timing alignment, allows scalable operation up to 7.2 Gbps or higher in later implementations, providing peak bandwidths such as 9.6 GB/s from a single 2-byte-wide device at 4.8 Gbps. Key features include programmable interface widths, support for 8 internal banks, dynamic width control (x1 to x16), and request packet cycle times of roughly 2 to 3.33 ns, all powered at 1.8 V with options for power-down and self-refresh modes to optimize energy use.

The technology was first commercialized by Toshiba in December 2003 with 512-Mbit devices, followed by Samsung and Elpida (now part of Micron), targeting applications in consumer electronics, graphics processing, and networking where high bandwidth in compact, cost-sensitive systems is critical. By 2009, over 100 million XDR DRAM units had shipped, with notable adoption in Sony's PlayStation 3 console, which utilized 256 MB of XDR DRAM at 3.2 GHz to deliver 25.6 GB/s of system memory bandwidth shared between CPU and GPU. Despite its performance advantages, XDR DRAM saw limited mainstream PC uptake due to proprietary licensing requirements and competition from open-standard DDR SDRAM variants, though Rambus evolved the architecture into XDR2 (announced 2005) and later XDRn variants for mobile and high-end computing.

History and Development

Origins and Announcement

Rambus Inc. developed XDR DRAM as a successor to its earlier RDRAM technology, aiming to deliver significantly higher bandwidth for demanding applications. The technology, formerly code-named Yellowstone, was officially announced on July 10, 2003, in collaboration with Elpida Memory Inc. and Toshiba Corp., who committed to manufacturing the devices. This partnership marked a strategic effort to position XDR DRAM as a scalable, high-performance memory interface. The initial specifications targeted a data rate of 3.2 Gbps using octal data rate signaling, with a roadmap extending to 6.4 Gbps and beyond, enabling system bandwidths up to 100 GB/s, eight times that of contemporary PC memory. Toshiba began sampling 512 Mbit XDR DRAM devices in December 2003 at 3.2 Gbps per pin. Samples were slated to ship in 2004, with volume production ramping up in 2005 across densities from 256 Mbit to 8 Gbit and device widths from x1 to x32.

The motivation stemmed from the need to overcome bandwidth limitations in consumer electronics, graphics, and networking systems, providing a cost-effective alternative to specialty DRAMs while supporting emerging broadband architectures like Sony's Cell processor. This came after RDRAM's commercial challenges, including high costs and compatibility issues that limited its mainstream adoption despite its speed advantages. Key partnerships expanded beyond the initial collaborators, with Samsung beginning mass production of 256 Mbit XDR devices in early 2005, followed by 512 Mbit versions later that year. These Samsung devices, operating at up to 3.2 Gbps per pin, were claimed to be the world's fastest at the time, offering up to 9.6 GB/s of bandwidth in target applications and underscoring XDR's early production milestones.

Evolution and Variants

Following its initial introduction, XDR DRAM underwent iterative improvements in data rates to meet demands for higher bandwidth in high-performance computing applications. The technology began with a per-pin data rate of 3.2 Gbps in 2003, enabling peak bandwidths of approximately 6.4 GB/s per 16-bit device. By 2006, advancements allowed operation at 4.0 Gbps, increasing peak bandwidth to 8.0 GB/s per device through refinements in signaling and timing circuits. Further escalation occurred in 2008 with the achievement of 4.8 Gbps, delivering up to 9.6 GB/s per device and supporting sustained transfers in the 8-9.6 GB/s range for optimized architectures. Capacity enhancements paralleled these speed increases, with early 512 Mbit samples from 2003 joined by 256 Mbit and higher densities in volume production by 2005, which better accommodated varied memory configurations while maintaining the high-speed interface. This growth in density, combined with the core architecture's octal prefetch mechanism, enabled reliable high-throughput operation without proportional increases in power consumption.

In July 2005, Rambus proposed XDR2 as a successor variant, announced on July 7 with an initial target data rate of 8 Gbps to achieve even greater bandwidth, incorporating features like micro-threading for parallel access. Intended for licensing and potential shipping by 2007, particularly in graphics applications, XDR2 was never commercialized, remaining a conceptual extension of the XDR family. Market adoption reflected these refinements, with over 50 million XDR DRAM units shipped worldwide by March 2008, driven by integration in the PlayStation 3. Shipments surpassed 100 million by June 2009, underscoring the technology's niche scaling in specialized high-bandwidth systems despite competition from JEDEC-standard DDR variants.

Technical Architecture

Core Design Principles

XDR DRAM utilizes an architecture that integrates a conventional DRAM core with Rambus's proprietary high-speed signaling interface to deliver enhanced performance while maintaining compatibility with standard memory fabrication processes. This design leverages the reliability and cost-effectiveness of traditional DRAM arrays for data storage and retrieval, augmented by specialized circuitry for rapid I/O operations. The core operates on established principles of dynamic memory, including capacitor-based cells refreshed periodically to retain data, but the interface innovations enable significantly higher transfer rates without altering the fundamental storage mechanism.

Central to the architecture is the use of differential signaling for data and clock signals using Differential Rambus Signaling Level (DRSL), while address and control signals use single-ended Rambus Signaling Level (RSL). This approach transmits signals over paired true and complementary lines for DRSL, providing improved noise immunity and enabling bi-directional communication at multi-GHz speeds without dedicated ground pins, thus optimizing pin efficiency compared to single-ended methods. DRSL supports octal data rate (ODR) encoding, where eight bits are transferred per clock cycle on each lane, allowing a 400 MHz clock to achieve effective data rates up to 3.2 Gbps initially, with scalability to higher frequencies. A key design principle emphasizes minimizing the number of high-speed pins to maximize per-pin bandwidth and simplify board routing, contrasting with the wider parallel buses in synchronous DRAMs. For instance, a x16 configuration typically employs 32 data pins, comprising 16 differential pairs (DQ and DQN), to handle narrow but ultra-fast channels, reducing crosstalk and power dissipation while supporting programmable widths such as x8, x16, or x32 for flexibility in system integration. This serialized, point-to-point topology facilitates higher aggregate throughput in bandwidth-intensive applications.

Signal integrity in multi-device environments is ensured through on-die termination (ODT), a programmable feature that matches channel impedance directly at the receiver to minimize reflections and stubs. ODT resistors, typically valued at 40-60 Ω, are integrated into the device and calibrated for varying loads, enabling robust operation in daisy-chain or multi-drop topologies without external components. This innovation, rooted in Rambus's earlier signaling work, addresses the challenges of high-frequency signaling over longer traces. The architecture inherently supports multi-channel configurations to scale bandwidth, with devices organized into up to eight internal banks for interleaved access, allowing systems to aggregate multiple independent channels, such as the dual-channel setup in the Cell processor, for overall system throughput exceeding 25 GB/s in practical deployments.
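
To make the relationship between clock rate, ODR signaling, and programmable width concrete, the sketch below (Python, with illustrative helper names rather than anything from Rambus documentation) computes peak device bandwidth for the x8/x16/x32 widths mentioned above, assuming the initial 400 MHz clock.

```python
def odr_data_rate_gbps(clock_mhz: float, bits_per_clock: int = 8) -> float:
    """Per-pin data rate under octal data rate: eight bits per clock per lane."""
    return clock_mhz * bits_per_clock / 1000.0  # MHz * bits/clock -> Gbps

def device_bandwidth_gbs(clock_mhz: float, width_lanes: int) -> float:
    """Peak device bandwidth in GB/s for a programmed width such as x8/x16/x32."""
    return odr_data_rate_gbps(clock_mhz) * width_lanes / 8.0

if __name__ == "__main__":
    for width in (8, 16, 32):
        print(f"400 MHz clock, x{width}: {device_bandwidth_gbs(400, width):.1f} GB/s")
    # x16 at 400 MHz -> 3.2 Gbps per lane -> 6.4 GB/s peak
```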

Interface Specifications

The XDR DRAM interface utilizes a compact 144-ball fine-pitch ball grid array (FBGA) package to accommodate high-density pin assignments while minimizing footprint for integration in space-constrained systems. This package supports dedicated pins for key signals, including a differential clock pair (CFM and CFMN) for precise timing synchronization, 16 differential data pin pairs (DQ[15:0] and corresponding DQN[15:0]) for bidirectional transfers, and 12 multiplexed pins (RQ[11:0]) that handle address, command, and control information in a serialized format. The request bus allows parallel (multi-drop) connection to multiple memory devices, enabling shared access to commands and addresses. Additional pins manage termination voltage (VTERM), reference voltage (VREF), and a low-speed serial interface (SDI/SDO with CMD and SCK) for device configuration and initialization, enabling robust operation without external configuration hardware.

Signaling in the XDR DRAM interface employs proprietary standards optimized for multi-gigabit speeds, with Differential Rambus Signaling Level (DRSL) used for the data lines to provide noise immunity and high bandwidth through low-voltage differential pairs, akin to LVDS but tailored to the XDR protocol. Rambus Signaling Level (RSL), a single-ended low-voltage complementary metal-oxide-semiconductor (LVCMOS-like) scheme, drives the request and control pins for simpler, lower-power transmission of commands and addresses. The interface operates at octal data rates, transferring 8 bits per clock cycle per pin via a combination of dual-edge sampling (transfers on both clock edges) and internal prefetching, achieving up to 4 Gbps per pin at a 500 MHz clock while maintaining signal integrity through on-die termination (ODT) features.

The channel architecture of XDR DRAM is fundamentally point-to-point for the high-speed data paths, ensuring minimal reflections and optimal signal quality by directly connecting each device to the memory controller without shared buses for data. This design supports dynamic bus width adjustment from x1 to x16 via programmable registers, allowing flexibility for varying system bandwidth needs, and interleaves transactions across eight internal banks for sustained throughput. The low-speed serial configuration bus, however, employs a daisy-chain topology connecting multiple devices in series (RST, SCK, and CMD driven in parallel to all chips, with SDI/SDO chained), facilitating initialization and mode setting without impacting the primary data channel.

Electrical specifications emphasize low-voltage operation to reduce power consumption, with a core supply voltage (VDD) of 1.8 V ±0.09 V for internal logic and array operations, and I/O signaling at 1.2 V ±0.06 V for both DRSL termination (VTERM,DRSL) and reference levels. This dual-voltage approach separates core and interface domains, enabling efficient power delivery while supporting the high-speed differential clock with cycle times as low as 2.00 ns for maximum performance. The interface incorporates fly-by elements in the clock and command distribution to minimize stubs and timing skew in multi-device configurations, though the primary data path remains point-to-point to preserve signal integrity.
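
As a minimal sketch of the dual-voltage envelope quoted above, the snippet below checks measured rail voltages against the 1.8 V ±0.09 V core and 1.2 V ±0.06 V DRSL windows; the rail names and helper function are illustrative, not part of any Rambus tool.

```python
# Supply-rail windows quoted above; names and helper are illustrative only.
SUPPLY_WINDOWS = {
    "VDD":   (1.8 - 0.09, 1.8 + 0.09),   # core supply
    "VTERM": (1.2 - 0.06, 1.2 + 0.06),   # DRSL termination voltage
    "VREF":  (1.2 - 0.06, 1.2 + 0.06),   # receiver reference level
}

def rail_in_spec(rail: str, measured_volts: float) -> bool:
    """Return True if a measured rail voltage falls inside its spec window."""
    low, high = SUPPLY_WINDOWS[rail]
    return low <= measured_volts <= high

print(rail_in_spec("VDD", 1.85))    # True  (1.71 V to 1.89 V window)
print(rail_in_spec("VTERM", 1.30))  # False (outside 1.14 V to 1.26 V)
```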

Performance Characteristics

Bandwidth and Throughput

XDR DRAM achieves a peak bandwidth of up to 9.6 GB/s per device when operating at 4.8 Gbps per pin in x16 mode, leveraging its 16-bit data interface with differential signaling across 32 pins (16 DQ and 16 DQN pins forming 16 pairs). This configuration enables high per-pin data rates through octal data rate (ODR) signaling, where data is transferred on multiple clock phases to maximize throughput efficiency. The theoretical maximum can be derived as follows: with 4.8 Gbps per pair across 16 pairs (32 pins total), the aggregate is 4.8 Gbps × 16 = 76.8 Gbps, or 9.6 GB/s when divided by 8 bits per byte.

Sustained throughput typically ranges from 6.4 GB/s to 8.0 GB/s in multi-burst operations, depending on the per-pin data rate (3.2 Gbps to 4.0 Gbps) and system configuration, achieving over 95% bus utilization in optimized scenarios. This performance is supported by a burst length of 16 words, allowing sequential data transfers without bank conflicts in ideal conditions, which minimizes idle time on the bus. Channel aggregation further scales bandwidth; for instance, a three-channel setup can reach 28.8 GB/s by interleaving accesses across multiple independent channels. The architecture achieves effective data rates of 3.2 to 4.8 Gbps per pin through ODR signaling, transmitting eight bits per clock cycle and building on multi-data-rate techniques from prior Rambus architectures.
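
The derivation above is plain arithmetic; the following lines reproduce the quoted 76.8 Gbps aggregate, 9.6 GB/s per-device, and 28.8 GB/s three-channel figures.

```python
per_pin_gbps = 4.8                           # per-pin data rate
dq_pairs = 16                                # DQ/DQN differential pairs (x16 device)

aggregate_gbps = per_pin_gbps * dq_pairs     # 76.8 Gbps on the wire
device_gbs = aggregate_gbps / 8              # 9.6 GB/s per device (8 bits per byte)
three_channel_gbs = 3 * device_gbs           # 28.8 GB/s across three channels

print(aggregate_gbps, device_gbs, three_channel_gbs)
```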

Latency and Timing

XDR DRAM's latency profile is defined by key timing parameters that balance high-speed data transfer with reliable access times. The column access (CAS) latency, denoted tCAC, is programmable in units of the request clock, whose cycle time typically ranges from 2.0 to 3.33 ns depending on the speed bin and operating frequency, allowing the memory to deliver the first data word a fixed number of cycles after the column command while accommodating the signaling overhead of the octal data rate interface.

Row-related command timings further shape access patterns in XDR DRAM. The row-to-column delay (tRCD) is generally around 15 ns for read operations, representing the minimum interval between a row activate (ACT) command and a subsequent read (RD) or write (WR) command, with values spanning 5 to 7 cycles depending on the speed bin. Similarly, the row precharge time (tRP), which specifies the delay before a new row can be activated in the same bank, is approximately 10-15 ns (6 to 7 cycles depending on the speed grade). These timings support bank-level parallelism across the device's 8 internal banks. Refresh operations occur to maintain data integrity, with the full array requiring refresh within a 64 ms retention window, distributed across multiple refresh commands to minimize disruption.

To handle the demands of high-frequency operation, XDR DRAM incorporates deep pipelining, enabling up to 4 outstanding transactions to overlap across banks and reduce effective wait states. However, this results in higher cycle-count latencies compared to SDRAM technologies like DDR2, as the faster clock and differential signaling require additional cycles for serialization and synchronization. The interface's periodic calibration, performed via dedicated commands to adjust output driver impedances, introduces negligible overhead, typically on the order of a few cycles per event, but ensures precise timing margins under varying voltage and temperature conditions.
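
A rough way to relate these nanosecond timings to the interface clock is shown below, assuming a 400 MHz (2.5 ns) request clock and, for the refresh spread, a hypothetical count of 8192 refresh commands per 64 ms window; actual datasheet quantization may differ.

```python
import math

def to_cycles(timing_ns: float, t_cycle_ns: float) -> int:
    """Round an analog timing constraint up to whole interface clock cycles."""
    return math.ceil(timing_ns / t_cycle_ns)

T_CYCLE_NS = 2.5                      # 400 MHz request clock (3.2 Gbps per pin)
print(to_cycles(15.0, T_CYCLE_NS))    # tRCD ~15 ns -> 6 cycles at this clock

# Spreading the 64 ms retention window across an assumed 8192 refresh commands
# gives an average interval of roughly 7.8 microseconds between refreshes.
print(64e-3 / 8192 * 1e6)             # ~7.81 us
```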

Operational Features

Power Management

XDR DRAM employs a core operating voltage of 1.8 V ± 0.09 V for both the memory core and interface logic, paired with a 1.2 V ± 0.06 V termination voltage for its Differential Rambus Signaling Level (DRSL) I/O interface. This dual-voltage architecture balances high-speed performance with reduced power dissipation in the signaling domain. Under active operation, a typical XDR DRAM device consumes approximately 2-3 W at 4 Gbps per-pin data rates, with read currents around 1.2 A and write currents near 1.12 A at 1.8 V, translating to roughly 2.16 W and 2.02 W respectively for a 256 Mbit x16 device. At maximum speeds up to 4.8 Gbps, consumption scales to about 4 W per device due to increased switching activity. Standby power remains low at around 0.61 W (340 mA), dropping further to approximately 17 mW in power-down self-refresh mode, where internal refresh maintains cell contents without external clocking.

Key power management features include dedicated power-down modes, entered via a PDN command in the column address packet (with XOP[3:0] = 1100), which deactivates the high-speed interface while enabling self-refresh through the Refresh Bank Control Register. This mode, analogous to a traditional DRAM's CKE-low power-down, requires a minimum of 16 clock cycles post-command for entry and up to 4096 cycles before subsequent commands can be issued upon exit, allowing significant energy savings during idle periods. In terms of efficiency, XDR DRAM delivers superior bandwidth per watt compared to its predecessor RDRAM, achieving 2-3 GB/s per watt at peak operation; for instance, 8 GB/s at ~3 W yields over 2.6 GB/s per watt, thanks to optimized signaling and low-power PLL/DLL designs that enhance overall energy proportionality. The high-speed interface described under the core design principles supports this by enabling sustained transfers with reduced overhead, contributing to up to 40% lower power than comparable graphics memory systems at equivalent bandwidth.
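
The wattage figures above follow directly from the quoted currents and the 1.8 V core rail, as the short check below illustrates, along with the bandwidth-per-watt example; the helper names are illustrative only.

```python
def power_watts(current_a: float, voltage_v: float) -> float:
    """Power draw from supply current and supply voltage."""
    return current_a * voltage_v

print(power_watts(1.20, 1.8))    # ~2.16 W active read
print(power_watts(1.12, 1.8))    # ~2.02 W active write
print(power_watts(0.34, 1.8))    # ~0.61 W standby

def gbs_per_watt(bandwidth_gbs: float, power_w: float) -> float:
    """Bandwidth-per-watt efficiency metric used in the text."""
    return bandwidth_gbs / power_w

print(gbs_per_watt(8.0, 3.0))    # ~2.67 GB/s per watt at 4.0 Gbps operation
```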

System Integration Aspects

XDR DRAM's channel architecture supports flexible configurations, allowing up to four devices per channel through a daisy-chain for serial interfaces like the SDO and SDI lines, where the output of one device connects to the input of the next, and the final output links back to the controller. This daisy-chaining reduces the number of printed circuit board (PCB) traces required, simplifying board layout and lowering manufacturing costs compared to parallel configurations that demand dedicated lines for each device.

On-chip features significantly ease system design by minimizing the need for external components. Programmable on-die termination (ODT) with adaptive calibration adjusts automatically, providing internal termination resistance for data pins typically between 40 Ω and 60 Ω to mitigate reflections without additional discrete resistors. Similarly, integrated voltage regulation support, including dedicated VTERM pins for Differential Rambus Signaling Level (DRSL) termination at 1.2 V ± 0.06 V, reduces reliance on board-level regulators and enhances stability across process, voltage, and temperature variations.

The packaging of XDR DRAM devices, such as the 144-ball FBGA used in typical implementations, facilitates better thermal dissipation by allowing more efficient heat spreading across the package and board. However, operating at high data rates up to 4 Gbps per pin necessitates careful routing and power delivery design to prevent thermal hotspots. Junction temperatures are specified to remain below 100°C under normal operation to ensure reliability. Compatibility with standard controllers is achieved through Rambus-provided intellectual property (IP) blocks, including the XDR PHY (XIO) and clock generator (XCG), which integrate into system-on-chip (SoC) designs for consumer and graphics applications. This enables pin-count reduction and supports high-bandwidth interfaces without major redesigns, as demonstrated in XDR variants that deliver over 17 GB/s from a single device while aligning with existing manufacturing processes.
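
The daisy-chained serial path can be pictured as a simple enumeration pass, sketched below with hypothetical Python types; the real initialization sequence assigns device addresses over the SDI/SDO chain, but the identifiers and helper here are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class XdrDevice:
    serial_id: int | None = None   # unassigned until enumeration

def enumerate_chain(devices: list[XdrDevice]) -> None:
    """Assign sequential serial IDs in wiring order, so a controller can later
    address each device individually over the shared CMD/SCK lines."""
    for position, device in enumerate(devices):
        device.serial_id = position

channel = [XdrDevice() for _ in range(4)]          # up to four devices per channel
enumerate_chain(channel)
print([device.serial_id for device in channel])    # [0, 1, 2, 3]
```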

Protocol and Commands

Data Transfer Commands

The XDR DRAM protocol employs a set of core commands for managing data transfers, utilizing a 12-bit command bus (RQ[11:0]) that multiplexes addresses over two clock cycles per command to enable efficient high-speed operation. This structure supports the transmission of 24-bit request packets, including opcodes and address information, across the request (RQ) signals during complementary clock phases (CFM/CFMN).

The row activate command, denoted by the ACT opcode in the ROWA packet, selects and opens a specific row within a designated bank, preparing it for subsequent column accesses. It includes the bank address (BA) and row address (R) fields, multiplexed over the two-cycle packet, and establishes the row buffer for data availability. Following issuance, the minimum row-to-column delay (tRCD) must elapse before a read or write can target that bank, 5-7 clock cycles for reads and 1-3 for writes depending on the speed grade, ensuring internal array stabilization.

Read commands utilize the RD opcode within the COL packet, specifying the column address and initiating data output from the activated row. The command supports a burst length of 16 transfers, with prefetch mechanisms allowing sequential column data to be queued for output on the differential DQ/DQN lines, optimizing throughput in multi-bank interleaving scenarios. Column addresses (C) and sub-column bits (SC) are provided in the packet, enabling fine-grained access to the row buffer contents.

Write commands employ the WR opcode in the COL packet for unmasked transfers, or the masked-write variant (WRM) for byte-level masking, directing data input to the specified columns. Masked writes use an 8-bit mask in the command to selectively enable or disable individual bytes within each burst transfer, preventing overwrites on non-targeted data lanes and supporting partial updates. Like reads, writes operate with a burst length of 16, adhering to a write-to-read delay (tWTR) after completion to maintain protocol integrity.
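
The byte-masking behavior of WRM can be illustrated with a small sketch; the mapping of one mask bit per byte over an eight-byte group below is a simplification for illustration rather than the exact datasheet lane ordering.

```python
def masked_write(stored: bytearray, incoming: bytes, byte_mask: int) -> None:
    """Apply an 8-bit byte-enable mask: bit i set -> byte i is written,
    bit i clear -> the existing byte is preserved."""
    for i in range(8):
        if (byte_mask >> i) & 1:
            stored[i] = incoming[i]

row_buffer = bytearray(8)                                     # all zeros
masked_write(row_buffer, b"\x11\x22\x33\x44\x55\x66\x77\x88", 0b00001111)
print(row_buffer.hex())   # '1122334400000000': only the low four bytes updated
```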

Control and Maintenance Commands

XDR DRAM employs specific control commands to manage operations and ensure data integrity through precharge and refresh mechanisms. The precharge command, denoted PRE, closes an active row in the bank specified by the bank address (BA) bits within a ROWP packet, initiating the precharge phase with a row precharge delay of tRP, 6-7 clock cycles depending on the speed grade. This command is essential for deactivating open rows to prepare for subsequent activations in the same bank. For refresh operations, the refresh command, encoded in the ROWP packet using the ROP field (such as REFA for all-bank refresh or REFI for incremental refresh), performs auto-refresh across the banks, maintaining cell contents with a refresh interval of 64 ms and a per-bank refresh time of tRFC, consistent with the parameters detailed in the device specifications. These commands prevent data loss in the volatile cells by periodically restoring charge levels.

Calibration and power management commands in XDR DRAM facilitate signal integrity and energy efficiency. The impedance calibration command, issued via a COLX packet with the XOP field set to a calibration encoding such as CALZ, adjusts on-die termination (ODT) and output driver impedances to match external conditions; it is executed periodically, on the order of every 100 ms, with a calibration duration of approximately 12 tCYCLE to ensure optimal output driver strength and input matching. Power-down entry is controlled through the PDN command in the COLX packet (XOP = 1100), transitioning the device to a low-power state while preserving data, with an entry latency of 16 tCYCLE and exit managed through power-management register settings before normal operation resumes. These features allow XDR DRAM to reduce power consumption during idle periods without compromising data accessibility.

Register settings configure key operational parameters in XDR DRAM during initialization and runtime adjustments. Register write commands program parameters such as the burst length (fixed at 16 transfers) and clock-synchronization (PLL/DLL) enable bits that align internal clocks with the external clock, reducing skew for high-speed operation. Read latency is set via the DLY register, specifying additive latency values such as 6 tCYCLE for read-to-output timing, ensuring precise data timing aligned with the controller. These settings are loaded via the serial or request interfaces post-reset, enabling flexible adaptation to different system bandwidth needs.

The low-speed serial bus in XDR DRAM provides a dedicated sideband channel for initialization, mode programming, and maintenance tasks like status reporting, operating independently of the high-speed data paths. This bus uses a multi-wire interface including reset (RST), serial clock (SCK), command (CMD), serial data in (SDI), and serial data out (SDO) signals in a daisy-chain topology, allowing broadcast or targeted access to multiple devices at a clock rate of around 50 MHz. Commands follow a structured format with a 4-bit opcode (e.g., SBW for serial broadcast write, SDR for serial device read), followed by address, payload data, and a cyclic redundancy check (CRC) for error detection, typically spanning 32 SCK cycles per transaction to configure registers or report status without interrupting main memory operations. This bus ensures reliable setup and diagnostics, particularly during power-up sequences and periodic maintenance.
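
To show how a roughly 32-cycle serial transaction could be framed, the sketch below packs a 4-bit opcode, an address, a payload, and an 8-bit CRC into one 32-bit word; the field widths and CRC polynomial are assumptions for illustration, not the actual Rambus serial protocol.

```python
def crc8(value: int, nbits: int, poly: int = 0x07) -> int:
    """Bitwise MSB-first CRC-8 over the top `nbits` bits of `value`
    (illustrative polynomial, not the one used by the real serial bus)."""
    crc = 0
    for i in reversed(range(nbits)):
        bit = (value >> i) & 1
        crc = ((crc << 1) & 0xFF) ^ (poly if ((crc >> 7) ^ bit) else 0)
    return crc

def pack_serial_frame(opcode: int, address: int, data: int) -> int:
    """Pack a 4-bit opcode, 8-bit address and 12-bit payload, then append an
    8-bit CRC, giving a 32-bit frame shifted out over ~32 SCK cycles."""
    assert 0 <= opcode < 16 and 0 <= address < 256 and 0 <= data < 4096
    body = (opcode << 20) | (address << 12) | data     # 24 content bits
    return (body << 8) | crc8(body, 24)                # 32 bits total

frame = pack_serial_frame(opcode=0b0001, address=0x2A, data=0x5F3)
print(f"{frame:032b}")   # one bit per SCK cycle
```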

Applications and Legacy

Commercial Adoption

The primary commercial application of XDR DRAM was in Sony's PlayStation 3 console, launched in 2006, which utilized 256 MB of XDR DRAM clocked at 3.2 GHz to achieve a system memory bandwidth of 25.6 GB/s. This implementation leveraged XDR's high-speed differential signaling to support the console's demanding graphics and processing requirements. Beyond gaming consoles, XDR DRAM found use in networking equipment and graphics accelerators, where its superior bandwidth addressed high-throughput needs in specialized systems. However, adoption in personal computers remained limited due to competition from more cost-effective JEDEC standards. Major manufacturers including Toshiba, Elpida Memory, and Samsung produced XDR DRAM devices, with total global shipments surpassing 100 million units by 2009, driven largely by console demand. By the early 2010s, XDR DRAM had been phased out as DDR3 and GDDR5 technologies dominated mainstream consumer, graphics, and computing markets, offering better scalability and lower costs without proprietary licensing requirements.

Comparison to Competing Technologies

XDR DRAM represents an evolution from its predecessor, RDRAM, primarily through enhanced bandwidth and improved power efficiency that mitigates the thermal challenges inherent in RDRAM designs. While RDRAM systems, such as the 32 MB configuration in the PlayStation 2, delivered peak bandwidths of 3.2 GB/s, XDR DRAM scaled to 25.6 GB/s in the PlayStation 3's 256 MB setup, enabling sustained high-throughput operations without the excessive heat generation that plagued RDRAM due to its higher operating voltages and less efficient signaling. This improvement stems from XDR's adoption of differential signaling and octal data rate techniques, which reduce power dissipation per bit transferred compared to RDRAM's earlier architecture.

In comparison to DDR2 SDRAM, XDR DRAM excels in per-pin data rate, achieving 4.8 Gbps versus DDR2-800's 0.8 Gbps, allowing a single XDR device to match the aggregate output of six DDR2-800 x16 devices for equivalent 9.6 GB/s throughput. However, this advantage comes at the expense of higher access latency and elevated costs, as XDR's specialized interface demands custom controllers and licensing, making it less suitable for general-purpose computing where DDR2's lower latency (typically 4-6 cycles) and standardization support broader, more affordable integration. XDR thus found favor in bandwidth-intensive applications like gaming consoles, where its superior peak performance justified the trade-offs.

Against GDDR4, XDR DRAM offered comparable high-speed capabilities, with data rates up to 4.8 Gbps per pin, but its serial-like interface simplified multi-device configurations on narrow buses, potentially easing integration in compact systems. GDDR4, however, gained prevalence in graphics processing units due to its alignment with JEDEC standards, which facilitated widespread manufacturer support and cost reductions, ultimately leading to its adoption in AMD's Radeon HD 2000-4000 series before being supplanted by GDDR5. Overall, XDR DRAM achieved niche success in console hardware, such as the PlayStation 3, where its high bandwidth supported demanding real-time rendering, but its proprietary architecture, lacking JEDEC compliance, restricted scalability and ecosystem development, contrasting with DDR technologies whose open standards enabled ubiquitous adoption across PCs and servers.
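
The six-to-one device comparison above is easy to verify numerically, as the short check below shows, using the per-pin rates and x16 widths quoted in the text.

```python
import math

XDR_DEVICE_GBS = 4.8 * 16 / 8     # 9.6 GB/s: one x16 XDR device at 4.8 Gbps/pin
DDR2_DEVICE_GBS = 0.8 * 16 / 8    # 1.6 GB/s: one DDR2-800 x16 device

# Number of DDR2-800 x16 devices needed to match a single XDR device.
print(math.ceil(XDR_DEVICE_GBS / DDR2_DEVICE_GBS))   # 6
```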

    Aug 26, 2008 · The XDR memory solution extends the Qimonda specialty RAM portfolio to better serve high-performance and high-bandwidth applications for the ...<|control11|><|separator|>