
DIMM

A Dual In-line Memory Module (DIMM) is a type of computer memory hardware that consists of a small printed circuit board populated with multiple random-access memory (RAM) chips, featuring pins on both sides for connecting to a motherboard and enabling a 64-bit data path for efficient temporary data storage and retrieval in desktops, laptops, workstations, and servers. DIMMs evolved from earlier single in-line memory modules (SIMMs) in the early 1990s to support the 64-bit architecture of processors like Intel's Pentium, addressing the limitations of 32-bit SIMMs by doubling the data bandwidth through independent pin connections on each side of the module. Initially featuring 168-pin connectors, DIMMs were standardized by JEDEC for interoperability and quickly became the dominant memory form factor for PC upgrades by the mid-1990s. Over time, they have incorporated advancements in DRAM technology, progressing from synchronous DRAM (SDRAM) to double data rate (DDR) variants, with modern iterations like DDR5 supporting capacities up to 128 GB per module and clock speeds exceeding 8,000 MT/s. Key types of DIMMs include unbuffered DIMMs (UDIMMs), which are non-buffered modules commonly used in consumer desktops and laptops for cost-effective performance; registered DIMMs (RDIMMs), which incorporate a register to reduce electrical load and enhance stability in multi-module configurations; and load-reduced DIMMs (LRDIMMs), designed for high-capacity environments by using a memory buffer to minimize signal degradation. Smaller variants, known as SO-DIMMs, adapt the DIMM design for compact devices like laptops, measuring about half the length of standard DIMMs while maintaining similar pin counts in later generations (e.g., 260 pins for DDR4). DIMMs often include features like error-correcting code (ECC) for data integrity in mission-critical applications, heat spreaders for thermal management in high-density setups, and support for multi-channel configurations to boost overall system throughput.

Overview

Definition and Purpose

A Dual In-line Memory Module (DIMM) is a type of random-access memory (RAM) module consisting of multiple dynamic random-access memory (DRAM) chips mounted on a printed circuit board, featuring independent pins on both sides of the board to enable separate electrical connections and addressing. This design allows for a wider data pathway compared to earlier modules, facilitating efficient data transfer within computer systems. The primary purpose of a DIMM is to serve as high-capacity, high-speed main memory that temporarily stores data and instructions for quick access by the central processing unit (CPU), supporting the operational needs of various computing devices. It enables users to easily upgrade and expand system memory by installing additional modules into motherboard slots, thereby improving overall performance without requiring complex modifications. DIMMs are commonly used in personal computers, workstations, and servers to handle demanding workloads and multitasking. Variants have evolved for use in laptops, adapting the form factor while retaining core functionality. At its core, a DIMM operates by storing data in its integrated DRAM chips, which are organized to provide access via a standard 64-bit wide data bus for transferring information to and from the system's memory controller. This configuration ensures reliable, low-latency retrieval of volatile data essential for running applications and operating systems.

Advantages Over Predecessors

DIMM modules introduced significant improvements over their predecessors, particularly Single In-line Memory Modules (SIMMs), by enabling independent electrical contacts on both sides of the circuit board. This design allows for a native 64-bit data path without the need for interleaving or pairing modules, effectively doubling the data path width compared to the 32-bit paths of SIMMs. In terms of scalability, DIMMs supported higher capacities per module, reaching up to 128 MB in their initial implementations during the mid-1990s, compared to the typical 16-32 MB limits of SIMMs at the time. This advancement facilitated easier multi-module configurations in systems, allowing for greater overall memory expansion without the constraints of paired installations required by SIMMs. The DIMM architecture was specifically tailored for compatibility with 64-bit processors, such as Intel's Pentium series, which featured a 64-bit external data bus. Unlike SIMMs, which necessitated the use of two modules in tandem to achieve full bus utilization, a single DIMM could populate the entire data bus, streamlining system design and reducing complexity. From a manufacturing and efficiency standpoint, the standardized dual-sided layout of DIMMs simplified assembly processes and minimized signal interference through independent electrical contacts on each side of the module. This resulted in lower power consumption—operating at 3.3 V versus SIMMs' 5 V—and enhanced reliability in high-density configurations, making DIMMs more cost-effective for production and deployment.

Historical Development

Origins in the 1990s

The Dual In-Line Memory Module (DIMM) emerged in the early 1990s as a response to the evolving demands of computing architectures requiring wider memory interfaces. The Intel Pentium processor, released in March 1993, featured a 64-bit external data bus, necessitating a shift from the 32-bit Single In-Line Memory Module (SIMM) design, which required pairing two modules to achieve the necessary bandwidth. This transition addressed the limitations of SIMM configurations in supporting higher data throughput without increasing complexity in system design. JEDEC, the Joint Electron Device Engineering Council, played a pivotal role in formalizing the SDRAM standard in 1993, with the 168-pin DIMM mechanical specification following in 1995 as a standardized successor to SIMMs specifically tailored for 64-bit systems. The initial DIMM designs incorporated Extended Data Out (EDO) Dynamic Random-Access Memory (DRAM) chips, which improved access times over prior Fast Page Mode (FPM) memory by allowing data output to begin before the next address was fully latched. JEDEC's standardization efforts focused on establishing interoperability through precise electrical characteristics, such as signal timing and voltage levels, and mechanical features like pin layouts and connector notches to prevent incorrect insertions. Early commercial adoption of DIMMs began in 1994, primarily in personal computers and workstations equipped with Pentium processors, where they simplified memory expansion by providing a single module for 64-bit access. The 168-pin configuration quickly gained prominence as the standard for subsequent Synchronous DRAM (SDRAM) implementations, enabling broader compatibility across vendors. JEDEC's collaborative process involved industry stakeholders in iterative reviews to refine these specifications, ensuring reliable performance in emerging 64-bit environments without proprietary variations.

Key Milestones and Transitions

The transition to Synchronous Dynamic Random-Access Memory (SDRAM) marked a pivotal shift in DIMM technology during the mid-1990s, with widespread adoption of 168-pin SDR DIMMs occurring between 1996 and 1997 as they replaced earlier Fast Page Mode (FPM) and Extended Data Out (EDO) modules. This change synchronized memory operations with the system clock, enabling higher speeds and better performance in personal computers and early servers compared to asynchronous predecessors. The introduction of Double Data Rate (DDR) SDRAM in 2000 represented the next major evolution, launching 184-pin DDR DIMMs that effectively doubled data transfer rates over SDRAM by capturing data on both rising and falling clock edges. This standard, formalized as JESD79-1 in June 2000, quickly gained traction in consumer and enterprise systems. Subsequent generations followed: DDR2 SDRAM in 2003 with 240-pin DIMMs under JESD79-2, offering improved power efficiency and higher bandwidth; and DDR3 SDRAM in 2007, also using 240-pin configurations via JESD79-3, which further reduced operating voltages to 1.5 V while supporting greater module capacities. More recent advancements include DDR4 SDRAM, standardized in September 2012 under JESD79-4 and entering the market in 2014 with 288-pin DIMMs designed for higher densities and speeds up to 3200 MT/s. DDR5 SDRAM followed in July 2020 via JESD79-5, retaining the 288-pin form factor but incorporating an on-module power management integrated circuit (PMIC) to enhance power delivery and efficiency, with initial speeds reaching 4800 MT/s and later updates supporting speeds up to 9200 MT/s as of October 2025. These transitions have profoundly influenced industry adoption, particularly in servers, where registered DIMMs became prevalent in the 2000s to handle higher channel populations and ensure signal integrity in multi-socket environments. Capacity growth per DIMM module, driven by density advancements aligned with Moore's law, evolved from a typical 256 MB in early SDRAM eras to up to 512 GB per module in DDR5 configurations as of 2025, enabling scalable server architectures.

Physical Design

Form Factors and Dimensions

The standard full-size Dual In-Line Memory Module (DIMM) measures 133.35 mm in length, 31.25 mm in height, and approximately 4 mm in thickness, adhering to mechanical outline specifications such as MO-309 for DDR4 variants. This form factor features a gold-plated edge connector with 240 pins for DDR3 modules and 288 pins for DDR4 modules, ensuring reliable electrical contact and compatibility with sockets and motherboards. The dimensions provide a balance between component density and ease of insertion into standard sockets, with tolerances defined by JEDEC to maintain interchangeability across manufacturers. A compact variant, the Small Outline DIMM (SO-DIMM), is designed for laptops and space-constrained systems, measuring 67.6 mm in length while retaining a height of approximately 30 mm and a thickness of 3.8 mm, as outlined in JEDEC standards for SO-DIMMs. SO-DIMMs use 200 pins for DDR2, 204 pins for DDR3, 260 pins for DDR4, and 262 pins for DDR5, depending on the generation, offering a thinner profile to fit into narrower enclosures without compromising performance in mobile applications. Unbuffered DIMMs (UDIMMs) and registered DIMMs (RDIMMs) share the core form factor but differ slightly in height due to the additional register chip on RDIMMs, which can increase the overall module height by up to 1-2 mm in some designs for better thermal dissipation. Both types include optional heat spreaders—aluminum or copper plates attached to the PCB—for enhanced thermal management in high-load scenarios, though these add minimal thickness (typically 0.5-1 mm) and are not part of the base JEDEC outline. Notch positions on the edge connector serve as keying mechanisms: the primary notch differentiates unbuffered (right position), registered (middle), and reserved/future use (left) configurations to prevent incompatible insertions, while a secondary voltage key notch ensures proper voltage alignment. JEDEC specifications also define precise mechanical tolerances, including a PCB thickness of 1.27 mm ±0.1 mm and edge connector lead spacing of 1.0 mm for DDR3 and 0.85 mm for DDR4 DIMMs, ensuring robust mechanical integrity and alignment during socket insertion. These parameters, along with guidelines for mounting-hole spacing in manufacturing, support consistent production and prevent issues like warping or misalignment in assembled systems.

Pin Configurations

The pin configurations of Dual In-line Memory Modules (DIMMs) define the electrical interfaces between the module and the system motherboard, encompassing signal lines for data, addresses, commands, clocks, power, and ground, while deliberately preventing cross-generation insertion through distinct keyed layouts. These configurations evolve with each DDR iteration to support higher densities, faster signaling, and improved signal integrity, standardized by the Joint Electron Device Engineering Council (JEDEC). The 168-pin synchronous DRAM (SDRAM) DIMM, introduced for single data rate operation, features 84 pins per side of the printed wiring board (PWB), operating at 3.3 V. It allocates 12 to 13 address pins for row and column selection (A0–A12), 64 data input/output pins (DQ0–DQ63) for the primary 64-bit wide bus, and dedicated control pins including Row Address Strobe (RAS#), Column Address Strobe (CAS#), and Write Enable (WE#), along with clock (CLK), chip select (CS#), and bank address lines (BA0–BA1). Power (VDD) and ground (VSS) pins are distributed throughout for stable supply, with additional pins for optional error correction (ECC) in 72-bit variants using check bits (CB0–CB7). Succeeding it, the 184-pin Double Data Rate (DDR) SDRAM DIMM maintains a similar structure but increases to 92 pins per side, reducing voltage to 2.5 V for VDD and VDDQ to enable higher speeds while preserving compatibility with the 64-bit data bus (DQ0–DQ63). Key enhancements include differential clock pairs (CK and CK#) for reduced noise, along with strobe signals (DQS and DQS#) per byte lane for data synchronization, and multiplexed address/command pins (A0–A12, BA0–BA1) that combine row/column and bank addressing. Control signals like RAS#, CAS#, and WE# persist, with power and ground pins similarly interspersed, and an optional ECC extension to 72 bits. The 240-pin configurations for DDR2 and DDR3 SDRAM DIMMs expand to 120 pins per side, supporting 1.8 V operation for DDR2 and 1.5 V for DDR3, with provisions for additional bank addressing (up to BA0–BA2) via extra pins (A13, A14 in higher densities) to handle increased internal banks (up to 16). Both retain the 64-bit DQ bus with per-byte DQS/DQS# pairs and differential clocks, but DDR3 introduces a fly-by topology where address, command, and clock signals daisy-chain across ranks on the module for improved signal integrity and reduced skew, compared to the T-branch topology in DDR2. Control pins (RAS#, CAS#, WE#, ODT for on-die termination) and power/ground distribution evolve accordingly, with 72-bit ECC support. Modern 288-pin DDR4 and DDR5 DIMMs use 144 pins per side, operating at 1.2 V for DDR4, with DDR5 introducing further refinements such as dual 32-bit sub-channels per module for better efficiency. DDR4 employs a fly-by topology with POD (Pseudo Open Drain) signaling on data lines for lower power and reduced voltage swing, featuring 17 row address bits (A0–A16), bank groups (BG0–BG1), and banks (BA0–BA1), alongside the 64-bit DQ bus with DQS/DQS# and differential CK/CK#. DDR5 builds on this with on-die ECC integrated into each DRAM device (eliminating module-level ECC pins in base configurations), refined signaling across the command/address lines, and dedicated pins for the Power Management Integrated Circuit (PMIC), which regulates voltages like VDD (1.1 V) and VPP from a 12 V input. Control signals include enhanced CS#, CKE, and parity bits for command/address reliability, with chip-select and clock-enable pins optimized for multi-rank support up to 8 ranks.
To prevent cross-compatibility issues, DIMMs incorporate keying notches at specific positions along the pin edge: for example, the notch for the 168-pin SDR DIMM is positioned differently from the offset notch in 184-pin DDR modules (around pin 92), while 240-pin DDR2/DDR3 notches are shifted further (near pin 120), and 288-pin DDR4/DDR5 notches are offset again (around pin 144) to ensure a physical mismatch with prior sockets.
Generation | Pin Count | Voltage (VDD) | Key Signals | Topology/Signaling Notes
SDR (168-pin) | 168 | 3.3 V | A0–A12, DQ0–DQ63, RAS#/CAS#/WE# | Single-ended clock; T-branch
DDR (184-pin) | 184 | 2.5 V | A0–A12, BA0–BA1, DQ0–DQ63, DQS/DQS# | Differential clock pairs; T-branch
DDR2/DDR3 (240-pin) | 240 | 1.8 V / 1.5 V | A0–A14, BA0–BA2, DQ0–DQ63 | Fly-by (DDR3); increased banks
DDR4/DDR5 (288-pin) | 288 | 1.2 V / 1.1 V | A0–A16/17, BG/BA, DQ0–DQ63 (dual sub-channels in DDR5) | Fly-by; POD signaling; PMIC pins (DDR5)

Memory Architecture

Internal Organization

A Dual In-Line Memory Module (DIMM) internally organizes its DRAM chips to provide a standardized 64-bit (or 72-bit for ECC variants) data interface to the system memory controller. The chips are arranged along the length of the printed circuit board, with their data pins (DQ) connected in parallel to form the module's data width. Typically, unbuffered DIMMs use 8 to 18 chips to achieve this width, depending on the chip's data organization—x4 (4 bits per chip, requiring 16 per rank for 64 bits), x8 (8 bits per chip, requiring 8 per rank), or x16 (16 bits per chip, requiring 4 per rank)—and the presence of error-correcting code (ECC) chips, which add one extra chip per rank in x8 configurations to store the 8 check bits. The total capacity of a DIMM is calculated based on the number of chips, each chip's density (expressed in gigabits, Gb), and the overall structure, converting total bits to bytes via division by 8. For a single-rank unbuffered non-ECC DIMM using x8 organization, the formula simplifies to total capacity (in GB) = (number of chips × chip density in Gb) / 8; for example, 8 chips each of 8 Gb density yield (8 × 8) / 8 = 8 GB. This scales with higher-density chips or additional ranks, enabling modules from 1 GB to 128 GB or more in modern configurations. Addressing within a DIMM follows the standard DRAM row-and-column multiplexed scheme, where the memory controller sends row addresses followed by column addresses over shared pins to select data locations. In DDR4, each chip includes 16 banks divided into 4 bank groups (with 4 banks per group), supporting fine-grained parallelism by allowing independent access to different groups while minimizing conflicts; DDR5 extends this to 32 banks organized into 8 bank groups. Row addresses typically span 14 to 18 bits (16K to 256K rows), and column addresses use 9 to 10 bits (512 to 1K columns), varying by density and organization. DIMM rank structure defines how chips are grouped for access: a single-rank module connects all chips to the same chip-select (CS) and control signals, treating them as one accessible unit for simpler, lower-density designs. In contrast, a dual-rank module interleaves two independent sets of chips—often placed on opposite sides of the PCB—with distinct CS signals, enabling the controller to alternate accesses between ranks for higher effective throughput and density, though at the potential cost of slightly increased latency due to rank switching.
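The capacity arithmetic above translates directly into a short helper. This is a minimal sketch, assuming x8 chip organization for the ECC case (9 chips per 72-bit rank, of which one holds check bits); the function name and parameters are illustrative, not drawn from any standard.

```python
def dimm_capacity_gb(chips_per_rank: int, chip_density_gbit: float,
                     ranks: int = 1, ecc: bool = False) -> float:
    """Usable DIMM capacity in GB: data bits / 8, per the formula above.

    For an ECC rank of x8 chips (9 chips per 72-bit rank), one chip in
    nine stores check bits and contributes no usable capacity.
    """
    data_chips = chips_per_rank * 8 // 9 if ecc else chips_per_rank
    return data_chips * chip_density_gbit * ranks / 8

# Worked example from the text: 8 chips x 8 Gbit, single rank -> 8 GB
print(dimm_capacity_gb(8, 8))             # 8.0
# Dual-rank module built from 16 Gbit x8 chips -> 32 GB
print(dimm_capacity_gb(8, 16, ranks=2))   # 32.0
```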

Channel Ranking

In memory systems, channel ranking refers to the organization of memory ranks across one or more DIMMs connected to a single memory channel, where a rank constitutes a 64-bit (or 72-bit with ECC) wide set of DRAM chips that can be accessed simultaneously via shared chip-select signals. Single-rank configurations, featuring one such set per DIMM, prioritize simplicity and potentially higher operating speeds due to lower electrical loading on the memory bus, while multi-rank setups—such as dual-rank or quad-rank DIMMs—enable greater density by allowing multiple independent 64-bit accesses per channel through interleaving, though they introduce overhead from rank switching. Common configurations include dual-channel architectures, prevalent in consumer and entry-level platforms, where two independent 64-bit channels operate in parallel to achieve an effective 128-bit data width and double the bandwidth of a single-channel setup; this typically involves populating one or two DIMMs per channel for balanced performance and cost. In high-end workstations and servers, quad-channel configurations extend this to four 64-bit channels for 256-bit effective width, quadrupling bandwidth and supporting denser populations, such as multiple multi-rank DIMMs per channel to maximize system-scale capacity. Increasing ranks per channel enhances overall capacity but can degrade maximum achievable speeds owing to heightened bus loading, which amplifies signal integrity challenges and necessitates timing adjustments like extended all-bank row active times. Unbuffered DIMMs (UDIMMs) are generally limited to 2-4 total ranks per channel to mitigate excessive loading, restricting them to one or two DIMMs in most setups. To address this, registered DIMMs (RDIMMs) employ a register to buffer command and address signals, reducing the electrical load on those lines and enabling up to three DIMMs per channel without proportional speed penalties. Load-reduced DIMMs (LRDIMMs) further optimize by fully buffering data, command, and address signals via an isolation memory buffer, which supports daisy-chained topologies and allows up to three DIMMs per channel even with higher-rank modules, prioritizing density in large-scale servers.
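To make these population limits concrete, here is a toy rule-of-thumb checker built only from the figures quoted above (UDIMMs at roughly 2 DIMMs and 4 total ranks per channel, RDIMMs/LRDIMMs at up to three DIMMs per channel). Actual limits vary by memory controller and platform, so the table of limits here is illustrative, not authoritative.

```python
# Illustrative per-channel DIMM limits drawn from the text, not a universal rule.
MAX_DIMMS_PER_CHANNEL = {"UDIMM": 2, "RDIMM": 3, "LRDIMM": 3}

def channel_capacity_gb(dimm_type: str, dimms: int, ranks_per_dimm: int,
                        gb_per_rank: int) -> int:
    """Total capacity on one channel, enforcing rough population limits."""
    limit = MAX_DIMMS_PER_CHANNEL[dimm_type]
    if dimms > limit:
        raise ValueError(f"{dimm_type}: at most {limit} DIMMs per channel")
    if dimm_type == "UDIMM" and dimms * ranks_per_dimm > 4:
        raise ValueError("UDIMM channels are typically limited to ~4 total ranks")
    return dimms * ranks_per_dimm * gb_per_rank

# Three dual-rank RDIMMs at 16 GB per rank -> 96 GB on a single channel
print(channel_capacity_gb("RDIMM", 3, 2, 16))
```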

Performance Specifications

Data Speeds and Transfer Rates

Data speeds for DIMMs are typically measured in mega-transfers per second (MT/s), which indicates the number of data transfers occurring per second on the memory bus. This metric reflects the effective data rate, accounting for the double data rate (DDR) mechanism where data is transferred on both the rising and falling edges of the clock signal. For instance, a DDR4-3200 DIMM operates at 3200 MT/s, enabling high-throughput data movement between the memory modules and the system memory controller. The evolution of DIMM speeds has progressed significantly across generations, starting from synchronous DRAM (SDRAM) DIMMs with clock speeds of 66-133 MHz (equivalent to 66-133 MT/s due to single data rate operation). Subsequent generations doubled and then multiplied these rates: DDR reached 266-400 MT/s, DDR2 advanced to 533-800 MT/s, DDR3 to 800-2133 MT/s, and DDR4 to 2133-3200 MT/s. DDR5, the current standard as of 2025, begins at 4800 MT/s and extends up to 9200 MT/s per JEDEC specifications, representing a substantial increase in transfer capabilities for modern computing demands.
Generation | Standard MT/s Range | Peak Bandwidth per DIMM (GB/s)
SDRAM | 66-133 | 0.53-1.06
DDR | 266-400 | 2.1-3.2
DDR2 | 533-800 | 4.3-6.4
DDR3 | 800-2133 | 6.4-17.1
DDR4 | 2133-3200 | 17.1-25.6
DDR5 | 4800-9200 | 38.4-73.6
Bandwidth, or the maximum data transfer rate per DIMM, is calculated using the formula: \text{Bandwidth (GB/s)} = \frac{\text{MT/s} \times 64 \text{ bits}}{8 \times 1000} This simplifies to MT/s × 8 bytes per transfer divided by 1000 for gigabytes, assuming a standard 64-bit wide bus. For example, a DDR5-6400 DIMM achieves 51.2 GB/s, doubling the effective bandwidth of comparable DDR4 modules through higher transfer rates and architectural optimizations like dual 32-bit sub-channels per DIMM. To exceed JEDEC-standard speeds, users often employ overclocking via Intel Extreme Memory Profile (XMP) technology, which embeds pre-configured profiles in the DIMM's Serial Presence Detect (SPD) EEPROM. These profiles automatically adjust clock speeds, timings, and voltages—such as increasing DRAM voltage from 1.2 V to 1.35 V or higher—for non-standard operation, like pushing DDR5 beyond 6400 MT/s to 8000 MT/s or more, provided the motherboard and cooling support it. However, overclocking requires stability testing to avoid data corruption or crashes.
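The bandwidth formula translates directly into code. A minimal sketch follows; the function name and the multi-channel extension are illustrative conveniences, not part of any specification.

```python
def dimm_bandwidth_gbs(mt_s: float, bus_bits: int = 64, channels: int = 1) -> float:
    """Peak bandwidth in GB/s = MT/s x (bus width in bytes) / 1000, per channel."""
    return mt_s * (bus_bits / 8) * channels / 1000

print(dimm_bandwidth_gbs(6400))              # DDR5-6400: 51.2 GB/s per DIMM
print(dimm_bandwidth_gbs(3200))              # DDR4-3200: 25.6 GB/s per DIMM
print(dimm_bandwidth_gbs(3200, channels=2))  # dual channel: 51.2 GB/s aggregate
```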

Timings and Latency

DIMM performance is significantly influenced by timing parameters that dictate the delays in accessing and refreshing data within the memory modules. These timings, specified in clock cycles, determine how quickly the memory can respond to read or write requests from the memory controller. The primary timing metrics include CAS latency (CL), which measures the number of clock cycles between a column address strobe (CAS) command and the availability of the first data bit; row-to-column delay (tRCD), representing the cycles needed to activate a row and then select a column within it; and row precharge time (tRP), the cycles required to close the current row and prepare for the next row activation. For instance, a typical DDR4 DIMM might operate at CL22, tRCD 22, and tRP 22, as standardized by JEDEC for modules like DDR4-3200. To convert these cycle-based timings into practical measures of responsiveness, the effective latency is calculated in nanoseconds using the formula: \text{Effective Latency (ns)} = \frac{\text{CL}}{\frac{\text{Data Rate (MT/s)}}{2000}} This accounts for the memory's clock frequency, where the data rate in mega-transfers per second (MT/s) is divided by 2000 to yield the gigahertz-equivalent clock frequency. For example, a DDR4-3200 module with CL22 yields an effective CAS latency of approximately 14 ns (22 / (3200 / 2000) = 13.75 ns), providing a benchmark for comparing responsiveness across different speeds. Similarly, tRCD and tRP can be converted using the same divisor to assess full access times, often resulting in latencies around 11-14 ns for standard DDR4 configurations. This metric highlights how higher data rates can mitigate the impact of increased cycle counts in faster generations. Trade-offs in these timings balance latency improvements against power consumption and stability. Lower CL, tRCD, or tRP values reduce access delays, enhancing responsiveness for latency-sensitive applications, but they demand higher voltage or more robust signaling, increasing power draw—typically by 10-20% for aggressive timings. Generational advancements have progressively lowered these latencies; DDR5 DIMMs, for example, achieve CL values of 32-40 cycles at speeds up to 6400 MT/s, translating to effective latencies of about 10-12.5 ns, thanks to on-die error correction and refined bank architectures that allow tighter timings without excessive power penalties. Timings are formally defined by JEDEC standards to ensure interoperability, with modules tested for compliance at specified voltages and temperatures, but real-world performance often varies due to overclocking or system-specific factors. Memory benchmarking tools measure actual read/write latencies, revealing differences of 1-3 ns between JEDEC-compliant operation and optimized setups on modern platforms, underscoring the gap between standardized specs and practical deployment.
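The cycle-to-nanosecond conversion is likewise a one-liner; this sketch simply reproduces the worked examples above.

```python
def timing_to_ns(cycles: int, data_rate_mts: float) -> float:
    """Effective latency (ns) = cycles / (MT/s / 2000); the memory clock in
    GHz is half the transfer rate because DDR moves data on both clock edges."""
    return cycles / (data_rate_mts / 2000)

print(timing_to_ns(22, 3200))  # DDR4-3200 CL22 -> 13.75 ns
print(timing_to_ns(40, 6400))  # DDR5-6400 CL40 -> 12.5 ns
print(timing_to_ns(32, 6400))  # DDR5-6400 CL32 -> 10.0 ns
```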

Integrated Features

Serial Presence Detect (SPD)

Serial Presence Detect (SPD) is a standardized feature on DIMM modules that enables automatic configuration by storing essential module parameters in a non-volatile EEPROM chip. This chip, typically ranging from 256 bytes for older generations to 512 bytes for DDR4, is accessible via the System Management Bus (SMBus) or I2C protocol, allowing the host system to query the module without manual intervention. The primary function of SPD is to provide the BIOS or UEFI firmware with accurate details about the installed memory, ensuring compatibility and optimal operation by preventing configuration mismatches that could lead to instability or boot failure. The JEDEC-defined SPD format organizes data into structured fields within the EEPROM, including the module's manufacturing information such as the manufacturer ID (a JEDEC-assigned code) and serial number for unique identification. Key operational parameters stored encompass the module's capacity (e.g., total density in gigabits), supported speeds (e.g., maximum clock frequency), and timings (e.g., CAS latency and row access times). Additional fields cover supported operating voltages (e.g., 1.2 V for DDR4) and optional profiles like Extreme Memory Profile (XMP), which encode overclocking settings for enhanced performance beyond standard JEDEC limits. These fields are encoded in binary or ASCII formats, with the first 128 bytes dedicated to core JEDEC parameters and subsequent bytes for vendor-specific or extended data. During system initialization, the motherboard's firmware reads the SPD data from the EEPROM over the dedicated SMBus lines (typically pins on the DIMM connector) at boot time, using the address assigned to the SPD device (e.g., 0x50 or 0x51). This process allows the system to automatically program the memory controller with the appropriate voltage, frequency, and timing values derived from the SPD, thereby configuring the memory subsystem for reliable operation. If multiple modules are present, the firmware compares their SPD data to determine common supported settings, avoiding incompatibilities such as differing speeds or ranks. This read-only interaction (with write protection enabled post-manufacture) ensures data integrity and simplifies installation for end users. The SPD specification has evolved to accommodate advancing memory technologies, with DDR5 introducing an expanded 1024-byte EEPROM capacity under the JESD400-5 standard to support more complex configurations. As of the October 2025 update to version 1.4, enhancements include additional fields for power management integrated circuit (PMIC) data, enabling finer control over on-module power delivery, alongside support for higher speeds up to DDR5-9200. These updates reflect the growing demands of DDR5's dual-channel architecture and integrated features, while maintaining backward compatibility with core SPD principles.
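As an illustration of how software can inspect SPD contents, the sketch below reads the JEDEC "key byte" (byte 2, the DRAM device type) from an SPD dump exposed by a Linux EEPROM driver. The sysfs path, and the assumption that a suitable driver (such as ee1004 for DDR4 modules) is already bound at SMBus address 0x50, are hypothetical setup details; the type-code table lists only a few common values.

```python
# Minimal sketch: identify the DIMM generation from its SPD contents.
# Byte 2 of the SPD is the JEDEC-defined DRAM device type key byte.
DEVICE_TYPES = {0x0B: "DDR3 SDRAM", 0x0C: "DDR4 SDRAM", 0x12: "DDR5 SDRAM"}

def spd_device_type(path: str = "/sys/bus/i2c/devices/0-0050/eeprom") -> str:
    """Read an SPD dump (path assumes a bound EEPROM driver) and decode byte 2."""
    with open(path, "rb") as f:
        spd = f.read(256)  # core JEDEC fields live in the first bytes
    return DEVICE_TYPES.get(spd[2], f"unknown type 0x{spd[2]:02X}")

print(spd_device_type())  # e.g. "DDR4 SDRAM"
```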

Error Correction and Reliability

Error-Correcting Code (ECC) enhances the reliability of DIMMs by incorporating check bits that enable the detection and correction of data errors arising from transient faults. In standard implementations, ECC appends 8 check bits to 64 data bits, creating a 72-bit codeword based on an extended Hamming code that corrects single-bit errors and detects double-bit errors. ECC is typically realized through dedicated parity chips mounted on the DIMM or, in newer designs, integrated directly into the DRAM devices. Registered DIMMs (RDIMMs), optimized for server workloads, incorporate ECC as a standard feature to safeguard against errors in high-density configurations. Unbuffered DIMMs (UDIMMs), prevalent in desktop and client systems, provide ECC support optionally, allowing flexibility based on application needs. By correcting single-bit soft errors—transient bit flips often induced by cosmic rays or alpha particles—ECC significantly reduces the soft error rate (SER), which quantifies error occurrences per unit of memory and time. Complementary memory scrubbing periodically reads data blocks, applies ECC correction if needed, and rewrites the verified content, preempting multi-bit error escalation and bolstering overall system dependability. DDR5 DIMMs introduce on-die ECC, which internally corrects single-bit errors within individual DRAM chips to enhance manufacturing yields and operational stability independent of external ECC. In server-grade setups, Chipkill ECC advances reliability further by tolerating multi-bit failures, such as those from an entire failing chip, through redundant data distribution and advanced Reed-Solomon coding across modules.
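To make the 72-bit SECDED scheme concrete, here is a self-contained sketch of an extended Hamming code over 64 data bits: seven check bits at power-of-two codeword positions plus one overall parity bit for double-error detection. Real DIMM ECC hardware uses a specific JEDEC/vendor-defined parity-check matrix, so the exact bit layout here is illustrative only.

```python
CHECK_POSITIONS = (1, 2, 4, 8, 16, 32, 64)  # Hamming check bits; 72 = overall parity

def secded_encode(data: int) -> int:
    """Pack 64 data bits plus 8 check bits into a 72-bit SECDED codeword."""
    code = [0] * 73                      # 1-indexed positions 1..72
    pos = 0
    for i in range(1, 72):
        if i & (i - 1):                  # non-power-of-two positions carry data
            code[i] = (data >> pos) & 1
            pos += 1
    for p in CHECK_POSITIONS:            # check bit p covers positions with bit p set
        parity = 0
        for i in range(1, 72):
            if i & p and i != p:
                parity ^= code[i]
        code[p] = parity
    code[72] = sum(code[1:72]) & 1       # overall parity enables double detection
    return sum(b << (i - 1) for i, b in enumerate(code[1:], start=1))

def secded_decode(word: int) -> int:
    """Correct any single-bit error; raise on a detected double-bit error."""
    code = [0] + [(word >> (i - 1)) & 1 for i in range(1, 73)]
    syndrome = 0
    for p in CHECK_POSITIONS:
        parity = 0
        for i in range(1, 72):
            if i & p:
                parity ^= code[i]
        if parity:
            syndrome |= p                # syndrome = position of a single flip
    overall = sum(code[1:73]) & 1
    if syndrome and overall:
        code[syndrome] ^= 1              # single-bit error: flip it back
    elif syndrome:
        raise ValueError("double-bit error detected (uncorrectable)")
    data, pos = 0, 0
    for i in range(1, 72):
        if i & (i - 1):
            data |= code[i] << pos
            pos += 1
    return data

word = secded_encode(0xDEAD_BEEF_0123_4567)
assert secded_decode(word ^ (1 << 10)) == 0xDEAD_BEEF_0123_4567  # flip corrected
```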

Types and Variants

Standard Full-Size DIMMs

Standard full-size DIMMs represent the baseline for desktop computers and entry-level servers, featuring unbuffered and basic registered implementations that prioritize compatibility and cost-effectiveness across memory generations. These modules adhere to JEDEC standards, providing a standardized interface for integrating dynamic random-access memory (DRAM) into mainstream computing systems. With a typical physical footprint of 133.35 mm in length and 31.25 mm in height, they support efficient memory access in non-buffered configurations suitable for consumer-grade applications. Key characteristics include a 288-pin connector for the DDR4 and DDR5 generations, enabling high-density data paths without additional buffering in unbuffered variants (UDIMMs). These DIMMs support capacities up to 64 GB per module for DDR4 UDIMMs under JEDEC specifications, extending to 128 GB for DDR5 unbuffered modules through advancements in die stacking and error correction integration. Primarily deployed in desktops and entry-level servers, they facilitate reliable performance in environments requiring up to dual-processor support without the overhead of advanced buffering. Evolutionary generations of standard full-size DIMMs trace from the obsolete SDR variants, which utilized a 168-pin layout for early synchronous DRAM implementations in late-1990s systems. Subsequent DDR3 unbuffered DIMMs, with 240 pins, became the standard for PCs in the mid-2000s, operating at 1.5 V nominally or 1.35 V in low-voltage modes to balance power and performance. DDR4 unbuffered DIMMs, introduced in 2014, maintain the 288-pin form while reducing operating voltage to 1.2 V, enhancing power efficiency by approximately 20% over DDR3 equivalents. DDR5 further refines this with 1.1 V operation, supporting higher capacities and speeds while preserving the 288-pin interface for backward-compatible designs. In applications, standard full-size DIMMs excel in single- or dual-channel configurations, where they maximize bandwidth in desktop and entry-level server setups by populating two modules per channel for optimal interleaving. A keying notch positioned approximately 60 pins from one edge on DDR4 modules ensures proper orientation and prevents insertion into incompatible slots, such as those for DDR3. This form factor supports seamless upgrades within compatible platforms, though compact alternatives like SO-DIMMs are preferred for space-constrained laptops. Limitations of these DIMMs include a maximum of 4 to 8 modules per system in unbuffered configurations, constrained by signal loading and electrical tolerances to avoid degradation in timing and signal integrity. Beyond this, buffering becomes necessary for higher densities. Heat dissipation is managed through optional heatsinks attached to the module, particularly in high-speed DDR4/DDR5 variants operating above 3200 MT/s, to maintain thermal thresholds under sustained workloads.

Small Outline DIMMs (SO-DIMMs)

Small Outline DIMMs (SO-DIMMs) represent a compact variant of dual in-line memory modules optimized for space-limited environments, measuring approximately half the length of standard full-size DIMMs at about 67.6 mm compared to 133.35 mm. This reduced form factor enables their integration into portable and embedded systems without compromising core functionality. SO-DIMMs adhere to the same underlying JEDEC standards as full-size DIMMs but adapt the pin configuration to suit their smaller footprint: 204 pins for DDR3 implementations, 260 pins for DDR4, and 262 pins for DDR5. These modules are predominantly deployed in laptops, where their diminutive size facilitates slim designs, as well as in printers and routers that require reliable memory in constrained enclosures. DDR5 SO-DIMMs, in particular, support module capacities up to 64 GB, enabling higher memory densities in modern portable systems as of 2025. Unlike their full-size counterparts, SO-DIMMs often incorporate lower power specifications, with DDR5 variants operating at 1.1 V to contribute to extended runtime in battery-powered devices. In ultra-thin laptops and tablets, memory is often soldered directly onto the motherboard instead of using SO-DIMM slots, further minimizing thickness by eliminating removable sockets. SO-DIMMs ensure broad compatibility with full-size DIMMs by employing identical Serial Presence Detect (SPD) protocols and timing parameters defined under JEDEC standards, allowing systems to automatically configure memory settings upon detection. The compact nature of SO-DIMM-based motherboards inherently features shorter trace lengths between the memory controller and modules, which improves signal integrity by reducing propagation delays and minimizing crosstalk in high-speed environments. This design adaptation supports stable operation in densely packed layouts without necessitating additional buffering for most consumer applications.

Registered and Buffered Variants

Registered DIMMs (RDIMMs) incorporate a register, typically a registering clock driver (RCD), that retimes and buffers command and address signals to reduce the electrical load on the memory controller, enabling support for up to three DIMMs per channel in server systems. This buffering isolates the controller from the capacitive load of multiple modules, improving signal integrity and allowing higher operating frequencies compared to unbuffered DIMMs. However, the register introduces an additional clock cycle of latency, as commands are held and retransmitted on the next cycle. Load-Reduced DIMMs (LRDIMMs) extend this buffering approach by incorporating isolation memory buffers (iMBs) that handle not only command and address signals but also data input/output (DQ) lines, presenting a single low electrical load to the memory controller for each signal line. This load reduction allows for three or more DIMMs per channel without significant signal degradation, supporting greater memory density—such as up to 768 GB total capacity in configurations using LRDIMMs across multiple channels in enterprise servers. Other buffered variants include 3D-stacked (3DS) DIMMs, which use through-silicon vias (TSVs) to vertically stack multiple DRAM dies, achieving higher densities like 128 GB or 256 GB per module while maintaining compatibility with RDIMM or LRDIMM architectures. Some systems support an online spare feature on RDIMMs or LRDIMMs, which uses firmware-managed sparing to automatically disable failed modules and activate spares, minimizing downtime in mission-critical environments without hot-swapping hardware. These variants are primarily deployed in data centers and high-performance computing (HPC) applications, where high memory capacity and reliability are essential. Emerging options like Clocked Unbuffered DIMMs (CUDIMMs) for DDR5 integrate a clock driver to support stable operation at higher speeds, with capacities up to 128 GB per module as of 2025. In DDR5 implementations, LRDIMMs integrate a power management integrated circuit (PMIC) to regulate the 1.1 V operating voltage, enhancing energy efficiency by up to 20% over DDR4 while supporting dense configurations for demanding server workloads as of 2025.

    DDR5 has higher speed (up to 7200 Mbps), improved capacity, and reliability, with 20% greater power efficiency than DDR4. It has a higher base speed and higher ...Missing: count | Show results with:count