Volatile memory

Volatile memory is a type of computer memory that requires a continuous supply of electrical power to maintain the stored data; without power, the information is lost almost immediately. It serves as the primary working memory for processors, enabling rapid read and write operations essential for executing programs and holding temporary data during computation. The most common forms of volatile memory are dynamic random-access memory (DRAM) and static random-access memory (SRAM), each optimized for different performance needs in computing systems. DRAM stores each bit of data as charge in a capacitor within a memory cell, typically using one transistor and one capacitor per bit, which makes it denser and more cost-effective for large capacities but requires periodic refreshing to counteract charge leakage. In contrast, SRAM uses bistable latching circuitry, often with six transistors per bit, to hold data without refresh cycles, resulting in faster access times at the expense of higher power consumption and larger physical size per bit. These technologies form the backbone of main memory (primarily DRAM) and CPU caches (primarily SRAM), balancing speed, density, and cost in the memory hierarchy. Unlike non-volatile memory such as flash memory or magnetic storage, which retains data without power, volatile memory's temporary nature makes it ideal for high-speed operations but necessitates frequent backups to persistent storage to prevent data loss. Its role is critical in modern computing, from personal devices to supercomputers, where advancements in fabrication processes continue to increase capacities—reaching tens of gigabytes in typical systems—while reducing latency to support demanding applications like artificial intelligence and real-time data processing.

Definition and Characteristics

Definition

Volatile memory refers to a type of computer memory that requires a continuous supply of power to retain its stored data, with all contents being lost immediately upon interruption or removal of power. This characteristic distinguishes it fundamentally from non-volatile memory, which preserves information indefinitely without any power source. In the context of computer systems, volatile memory occupies a critical position in the memory hierarchy, serving as high-speed, temporary storage for data and instructions actively processed by the CPU. Its proximity to the processor enables rapid read and write operations, facilitating efficient execution of programs, though it necessitates reloading data after power cycles.

Key Characteristics

Volatile memory is fundamentally defined by its volatility, the property that causes all stored data to be lost immediately upon power interruption or removal. This inherent trait stems from the reliance on active electrical states—such as charged capacitors or biased transistors—to represent binary data, which dissipate without continuous power. While this enables rapid data access and manipulation without the encumbrances of persistence safeguards, it imposes the critical requirement for regular backups to non-volatile storage to prevent data loss in practical systems. A key performance hallmark of volatile memory is its exceptionally fast read and write speeds, often achieving access times in the range of 10 to 100 nanoseconds. These speeds arise from the purely electronic nature of data storage and retrieval, which avoids mechanical components and enables direct electrical signaling within integrated circuits. Such low-latency operations make volatile memory ideal for applications demanding frequent, real-time data handling, like processor caches and main system memory. The volatility of these devices also influences power dynamics, as maintaining stored states requires constant energy input to counteract natural decay mechanisms, resulting in steady background power draw. In high-density configurations, this ongoing consumption contributes to elevated heat generation, potentially exacerbating cooling challenges in compact systems like servers or mobile devices. Periodic refresh operations, essential for data retention in DRAM, further amplify this power overhead, accounting for a notable portion of overall energy use in idle or lightly loaded states. Regarding scalability, volatile memory strikes a balance between performance and density, offering high speed at the expense of generally lower bit densities and higher costs per bit compared to non-volatile storage technologies like NAND flash, which facilitates large-scale deployment in cost-sensitive environments. However, the refresh requirements introduce operational overheads that can limit effective throughput and efficiency, particularly as array sizes grow. This trade-off underscores volatile memory's role as a high-performance choice for temporary data storage, distinct from the persistence-focused attributes of non-volatile memory.

History

Early Developments

The earliest forms of volatile memory emerged in the mid-20th century as electronic computers required fast, random-access storage that lost data without power. One pioneering technology was the Williams–Kilburn tube, developed in 1947 by Freddie Williams and Tom Kilburn at the University of Manchester. This cathode-ray tube stored binary data as electrostatic charges on its screen, enabling the Manchester Baby—the world's first stored-program computer—to operate in 1948; however, the charges decayed over time, necessitating periodic refreshing to maintain data integrity. Another significant pre-semiconductor example was acoustic delay line memory, which propagated data as sound waves through a medium like mercury; the UNIVAC I, delivered in 1951 as the first commercial computer produced in the United States, utilized mercury delay lines for its 1,000-word main memory, offering reliable but serial-access volatile storage. The advent of semiconductor technology in the 1960s revolutionized volatile memory by enabling integrated circuits with higher speed and density. The first integrated static random-access memory (SRAM) was invented in 1963 by Robert H. Norman at Fairchild Semiconductor, employing bipolar transistors in a flip-flop configuration to hold data stably without refresh cycles. This was followed in 1964 by John Schmidt at the same company, who designed the first metal-oxide-semiconductor (MOS) SRAM—a 64-bit p-channel device that reduced power consumption and paved the way for scaled integration. A parallel breakthrough came with dynamic random-access memory (DRAM), invented by Robert Dennard at IBM in 1967. Dennard's single-transistor cell, patented in 1968, stored each bit as charge in a capacitor paired with one transistor, allowing roughly four times the density of contemporary SRAM at lower cost, though it required periodic refreshing to counteract leakage. The technology reached commercialization in 1970 with Intel's 1103 chip, the first 1-kilobit DRAM, which rapidly displaced older memory forms in computers due to its compact size and affordability.

Modern Advancements

The evolution of dynamic random-access memory (DRAM) since the 1990s has centered on synchronous and double data rate architectures to enhance data transfer efficiency and speed. Synchronous DRAM (SDRAM), standardized by JEDEC in 1993, synchronized memory operations with the system clock, enabling higher transfer rates compared to earlier asynchronous designs. This was followed by the introduction of double data rate SDRAM (DDR SDRAM), which transferred data on both rising and falling clock edges, doubling effective bandwidth; DDR1 emerged commercially around 2000. Subsequent generations progressed rapidly: DDR2 in 2003 offered higher densities and lower power consumption, DDR3 in 2007 improved further with on-die termination, DDR4 in 2014 increased speed to 3,200 MT/s while reducing voltage to 1.2 V, and DDR5, published by JEDEC in July 2020, supports up to 8,400 MT/s with a dual-subchannel architecture per module for enhanced performance in servers and PCs. Parallel to these, low-power variants tailored for mobile devices evolved under JEDEC standards; LPDDR4, released in 2014, achieved 3,200 Mb/s per pin at 1.1 V, while LPDDR5 in 2019 reached 6,400 Mb/s, and LPDDR5X in 2023 extended speeds to 8,533 Mb/s with improved signaling for 5G and AI-enabled smartphones. Static random-access memory (SRAM) advancements have focused on integration within processors as high-speed caches, with scaling challenges addressed through process node shrinks and low-power optimizations. By the 2010s, embedded SRAM became ubiquitous in CPU and GPU designs, forming multi-level caches (L1, L2, L3) to bridge the speed gap between processors and main memory. Scaling to advanced nodes like 7 nm, achieved by TSMC and adopted in processors such as AMD's Zen 2 (2019) and Intel's comparable 10 nm Ice Lake (2019), reduced bitcell sizes to approximately 0.027 μm², enabling denser caches while maintaining stability. In the 2020s, further refinements to sub-7 nm nodes (e.g., 5 nm and 3 nm) introduced low-power SRAM variants with asymmetric cells and voltage scaling, reducing leakage by up to 30% for mobile and edge applications; for instance, TSMC's N3E node in 2023 showed minimal density gains but improved power efficiency through FinFET enhancements. These developments support larger on-chip caches, such as the 96 MB L3 in AMD's 3D V-Cache processors (2022), prioritizing reliability over aggressive scaling. DRAM capacity has scaled dramatically from the 1980s, when chips held megabits (e.g., 1 Mb in 1987), to the 2020s, where single modules reach terabits through die stacking and process improvements. This growth adhered to Moore's law, roughly doubling transistor density every two years until physical limits emerged around 2025, with per-die capacities rising from 64 Kb in 1981 to 4 Gb by 2010 and 32 Gb by the mid-2020s. Module-level capacities followed suit, evolving from 1 MB systems in the 1980s to 128 GB DIMMs in the 2010s and multi-TB configurations in data centers by 2025, driven by 3D integration and EUV lithography. However, scaling slowed post-2020 due to quantum tunneling and cost barriers, shifting focus from planar shrinks to vertical architectures. As of 2025, recent trends in volatile memory emphasize 3D-stacked architectures to overcome planar limits, particularly high-bandwidth memory (HBM) for AI workloads. HBM3E, introduced in 2023 and widely adopted by 2025, stacks up to 12 dies vertically using through-silicon vias (TSVs), delivering over 1.2 TB/s bandwidth per stack at 9.2 Gb/s pin speeds and capacities up to 36 GB, as seen in Micron's 12-high configurations for AI GPUs. This addresses the end of Dennard scaling—where power density no longer decreases with node shrinks—by improving bandwidth density and energy efficiency, reducing off-chip data movement by 50% compared to GDDR6. Mitigation strategies include hybrid bonding and advanced packaging, enabling continued performance gains without proportional power increases, though thermal management remains a key challenge.

Principles of Operation

Read and Write Processes

In volatile memory, the read process involves accessing the stored state of a cell—in DRAM destructively, by discharging and restoring the capacitor charge via the access transistor, and in SRAM non-destructively, by sensing the voltage differential—on bit lines connected to the cell. A sense amplifier detects this small signal difference, amplifies it to a full logic level (high or low), and outputs the corresponding bit to the system's data bus. This readout maintains data integrity during retrieval, with the entire operation relying on precise timing to avoid interference from adjacent cells. The write process modifies the cell's state by applying specific voltages to the bit lines, which propagate through access transistors to set the storage element—such as charging or discharging a capacitor or flipping a bistable latch. Row and column selection lines activate the target cell, allowing new data to overwrite the existing state while other cells remain isolated. This operation requires careful voltage control to ensure reliable state transitions without disturbing neighboring cells. Addressing in volatile memory uses row and column decoders to enable random access to any cell in the array. The address bus provides binary coordinates: upper bits select a row via the row decoder, activating a word line, while lower bits select a column via the column decoder, routing data through specific bit line pairs. This two-dimensional decoding scheme allows constant-time, O(1) access complexity, independent of array size, making it efficient for large-scale memories. To handle errors during read and write operations, volatile memory systems incorporate basic error detection mechanisms like parity bits, which add a single redundant bit per word to check for odd or even parity and detect single-bit flips. For more robust protection, error-correcting codes (ECC) such as Hamming codes extend this by including multiple parity bits, enabling correction of single-bit errors and detection of double-bit errors without halting operations. These techniques are applied at the module level to maintain data reliability in noisy environments. While the core mechanisms are shared, read and write processes vary slightly between static and dynamic implementations, with static favoring latch-based sensing and dynamic relying more on capacitor charge detection.
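
To make the two-dimensional addressing and parity schemes above concrete, the following sketch splits a flat cell address into row and column coordinates and attaches a single even-parity bit to a data word; the array geometry and word width are illustrative assumptions, not a specific device's organization.

```python
# Minimal sketch of row/column address decoding and parity checking.
# The array geometry (1024 x 1024) and 8-bit word are assumptions
# chosen for illustration, not a particular device's layout.

ROW_BITS = 10   # upper address bits select one of 1024 word lines
COL_BITS = 10   # lower address bits select one of 1024 bit-line pairs

def decode_address(addr: int) -> tuple[int, int]:
    """Split a flat cell address into (row, column) coordinates."""
    row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
    col = addr & ((1 << COL_BITS) - 1)
    return row, col

def even_parity_bit(word: int) -> int:
    """Redundant bit that makes the total number of 1s even."""
    return bin(word).count("1") & 1

# Example: decode one cell address and protect an 8-bit word with parity.
row, col = decode_address(523_776)
data = 0b1011_0010
stored = (data << 1) | even_parity_bit(data)   # data plus its parity bit

# On readback, recompute parity; a mismatch signals a single-bit flip.
flipped = stored ^ (1 << 4)                    # simulate a soft error
assert even_parity_bit(stored >> 1) == (stored & 1)    # clean word passes
assert even_parity_bit(flipped >> 1) != (flipped & 1)  # flip is detected
print(f"row={row}, col={col}: parity ok on clean word, flip detected")
```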

Refresh and Retention Mechanisms

Volatile memory, particularly dynamic random-access memory (DRAM), relies on refresh mechanisms to counteract the inherent instability of its storage elements, which lose data over time without power. In DRAM cells, data is represented as electrical charge stored in capacitors, but leakage currents cause this charge to dissipate gradually, typically within milliseconds to seconds depending on the cell's characteristics. To prevent data loss, periodic refresh operations are essential, involving reading the data from each cell and rewriting it to restore the charge level. These operations must occur frequently enough to ensure all cells retain their information, addressing the power-dependent nature of retention where continuous supply voltage is required to maintain the stored states. The standard refresh cycle in DRAM is initiated by the memory controller and occurs at intervals of 64 milliseconds, dictated by the worst-case retention time across all cells to guarantee reliability. During this cycle, refresh is performed in bursts, where the controller issues a series of activate commands to open specific rows, allowing sense amplifiers to detect and restore the charge, followed by precharge commands to close the rows and prepare for subsequent operations. This process systematically covers the entire memory array, ensuring comprehensive data preservation without interrupting normal access too severely, though it introduces performance penalties during execution. Retention times in DRAM cells exhibit significant variations, primarily influenced by environmental factors such as temperature and operating voltage, as higher temperatures accelerate leakage and lower voltages reduce the charge-holding capacity. For instance, elevated temperatures can shorten retention from seconds to mere milliseconds in vulnerable cells, necessitating adjustments in refresh frequency. To mitigate power consumption in low-activity scenarios, self-refresh modes enable the DRAM device to autonomously handle refresh internally, entering a low-power state where the controller is disconnected, thus preserving data during idle periods without external intervention. Refresh operations impose notable power overheads, accounting for approximately 10-40% of DRAM energy consumption, particularly pronounced in idle states where background refresh dominates over active accesses. This overhead arises from the energy required for repeated row activations and charge restorations across the array, exacerbating power draw in densely packed modern devices. To address this, optimizations such as partial array refresh target only active or leaky sub-arrays for refresh, excluding unused portions to reduce unnecessary energy expenditure while maintaining data integrity in powered regions. In contrast, static random-access memory (SRAM) avoids such mechanisms entirely, as its flip-flop-based cells retain state without leakage-driven refresh.
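
As a rough worked example of the refresh timing discussed above, the following sketch derives the average refresh interval from the 64 ms retention window and estimates the fraction of time a device spends refreshing; the per-command refresh latency (tRFC) is an assumed DDR4-class figure, not a datasheet value for any specific part.

```python
# Back-of-the-envelope refresh overhead using the 64 ms retention window
# described above. tRFC is an illustrative assumption for a modern
# DDR4-class die, not a specific device's specification.

RETENTION_WINDOW_S = 64e-3   # all rows must be refreshed within 64 ms
REFRESH_COMMANDS = 8192      # typical auto-refresh commands per window
T_RFC_S = 350e-9             # assumed busy time per refresh command

t_refi = RETENTION_WINDOW_S / REFRESH_COMMANDS   # average refresh interval
overhead = T_RFC_S / t_refi                      # fraction of time refreshing

print(f"tREFI = {t_refi * 1e6:.2f} us")            # ~7.81 us between commands
print(f"refresh busy fraction = {overhead:.1%}")   # ~4.5% of device time
```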

Types

Static Random-Access Memory (SRAM)

Static random-access memory (SRAM) employs a six-transistor (6T) cell as its fundamental storage unit, comprising two cross-coupled inverters that form a bistable flip-flop and two n-type access transistors connected to bit lines. This configuration stores each bit as one of two stable voltage states—high (representing logic 1) or low (representing logic 0)—maintained by the positive feedback between the inverters. The bistable nature of the flip-flop enables SRAM to retain data indefinitely without periodic refresh cycles, as the active transistor circuitry continuously reinforces the stored state while power is applied. Access times are typically on the order of 1 ns, supporting high-speed operations in performance-critical applications. However, the 6T design results in higher static and dynamic power consumption compared to alternatives, along with lower bit density due to the six transistors required per cell. SRAM achieves roughly one-sixth the density of DRAM, limiting its use in large-capacity storage. SRAM variants include asynchronous designs, which respond directly to address and control signals without an external clock, and synchronous types, which align operations to a system clock for pipelined burst modes and integration with clocked processors. SRAM macros are widely integrated into systems-on-chip (SoCs) to serve as caches, buffers, and scratchpad memories, benefiting from process-compatible fabrication with logic circuits. SRAM cells are manufactured using CMOS processes, with modern implementations scaled to 2-3 nm nanosheet or FinFET transistors as of 2025, achieving densities up to 38 Mb/mm² in advanced nodes like TSMC's N2, to meet demands for denser integration. At these nodes, challenges arise from elevated subthreshold leakage currents, which degrade energy efficiency and necessitate techniques like multi-threshold voltage devices or power gating.
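
The bistable behavior described above can be illustrated with a toy numerical model of the cross-coupled inverter pair: any small imbalance between the two storage nodes is amplified by positive feedback into a full, self-reinforcing logic level. The inverter gain and voltage values below are illustrative assumptions, not device physics.

```python
# Toy model of the cross-coupled inverter pair at the heart of a 6T SRAM
# cell: whatever state the two storage nodes start in, positive feedback
# drives them to one of two stable complementary states.

def inverter(v_in: float, vdd: float = 1.0, gain: float = 4.0) -> float:
    """Idealized inverter: high gain around the switching threshold."""
    out = vdd / 2 - gain * (v_in - vdd / 2)
    return min(max(out, 0.0), vdd)   # clamp to the supply rails

def settle(q: float, q_bar: float, steps: int = 20) -> tuple[float, float]:
    """Iterate the feedback loop until the latch reaches a stable state."""
    for _ in range(steps):
        q, q_bar = inverter(q_bar), inverter(q)
    return q, q_bar

# A small imbalance (e.g., left by a write through the access transistors)
# is amplified into a full logic level and then held as long as power lasts.
print(settle(0.55, 0.45))   # -> (1.0, 0.0): logic 1
print(settle(0.45, 0.55))   # -> (0.0, 1.0): logic 0
```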

Dynamic Random-Access Memory (DRAM)

Dynamic random-access memory (DRAM) utilizes a one-transistor, one-capacitor (1T1C) cell to achieve high density, where each bit is represented by the presence (logical 1) or absence (logical 0) of electrical charge stored in a capacitor. This compact design enables DRAM chips to pack billions of bits into a single die, offering significant density advantages over other volatile memory types by minimizing the number of components per bit. However, the stored charge leaks over time due to various mechanisms, necessitating periodic refresh operations to restore the data and prevent loss, typically every 64 milliseconds under standard conditions. In operation, the MOS transistor serves as a gate to connect the capacitor to the bitline during read or write cycles. For a read, the transistor opens, allowing the capacitor's charge to share with the precharged bitline, producing a small differential voltage—on the order of tens of millivolts—that is detected and amplified by sense amplifiers to full logic levels. Write operations similarly involve charging or discharging the capacitor through the transistor to set the desired state, followed by sense amplifier restoration to maintain bitline integrity. These processes support random access but require refresh cycles to counteract leakage, balancing DRAM's density benefits with ongoing power and performance overheads. Common variants of DRAM include synchronous DRAM (SDRAM), which operates in sync with the system clock for improved timing, and double data rate (DDR) generations such as DDR4 and DDR5, which transfer data on both clock edges to double bandwidth while adhering to the 1T1C architecture. Graphics-oriented GDDR variants, like GDDR6, optimize for high-speed interfaces in GPUs, achieving data rates exceeding 20 Gbps per pin. To enable scaling, capacitors are fabricated as either trench structures etched into the silicon substrate or stacked configurations built above the access transistor, with the latter increasingly favored for maintaining capacitance in advanced nodes. As of 2025, DRAM scaling faces limitations below the 10 nm node, where maintaining sufficient charge storage becomes challenging due to reduced cell area and material constraints, prompting explorations into 3D DRAM architectures like vertical channel transistors and stacked layers to extend density gains. These developments aim to sustain DRAM's role as the primary main memory in computing systems.
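
The charge-sharing read described above can be checked with a short calculation: connecting the cell capacitor to a precharged bitline shifts the bitline voltage by only tens of millivolts, which is why sense amplifiers are required. The capacitance and voltage values below are representative assumptions, not parameters of any particular process.

```python
# Charge sharing during a DRAM read: the cell capacitor is connected to a
# precharged bitline, and the resulting small swing is what the sense
# amplifier must resolve. All values are representative assumptions.

C_CELL = 25e-15      # storage capacitor, ~25 fF (assumed)
C_BITLINE = 250e-15  # bitline capacitance, ~10x the cell (assumed)
VDD = 1.1            # array supply voltage (assumed)
V_PRECHARGE = VDD / 2

def bitline_after_share(v_cell: float) -> float:
    """Bitline voltage after the access transistor connects the cell."""
    total_charge = C_CELL * v_cell + C_BITLINE * V_PRECHARGE
    return total_charge / (C_CELL + C_BITLINE)

for v_cell, label in [(VDD, "stored 1"), (0.0, "stored 0")]:
    swing = bitline_after_share(v_cell) - V_PRECHARGE
    print(f"{label}: bitline swing = {swing * 1e3:+.0f} mV")
# Both swings land around +/-50 mV here, i.e. tens of millivolts, which
# the sense amplifier restores to full logic levels (also rewriting the
# cell, since the read destroyed the stored charge).
```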

Applications

In Computing Systems

Volatile memory serves as the foundational working storage in computing systems, including desktops, servers, and high-performance computing setups, where it enables rapid data access essential for executing instructions and managing workloads. Primarily, dynamic random-access memory (DRAM) functions as system RAM, holding operating system kernels, running applications, and transient data to support seamless multitasking and real-time processing. Complementing this, static random-access memory (SRAM) is integrated into processor architectures to accelerate access to critical data, minimizing delays that would otherwise bottleneck performance. DRAM in computing systems is commonly deployed via dual in-line memory modules (DIMMs), which provide scalable capacities tailored to user needs; as of November 2025, personal computer configurations typically range from 8 GB for basic tasks to 128 GB for demanding applications like video editing and virtualization. These modules ensure that the operating system and applications can load and manipulate data efficiently, with higher capacities preventing swapping to slower storage during intensive sessions. For example, midrange desktops often ship with 16 GB or 32 GB as standard, balancing cost and capability for everyday use. Within the cache hierarchies of central processing units (CPUs) and graphics processing units (GPUs), SRAM constitutes the L1, L2, and L3 caches, which store frequently accessed data closer to the processing cores to drastically cut access latencies—from over 100 cycles for main memory to as low as 1-10 cycles for L1 hits. L1 caches, being the smallest (typically 32-64 KB per core) and fastest, handle immediate instruction and data needs, while larger L2 (256 KB to 2 MB per core) and shared L3 (up to 100 MB across cores) levels provide progressive backups, collectively reducing the average memory access time for core operations by caching hot data and prefetching likely accesses. This setup is vital in high-performance environments, where even minor latency reductions translate to substantial throughput gains. The bandwidth and latency of volatile memory profoundly shape system performance, particularly in multitasking and gaming scenarios. DDR5 at 6400 MT/s delivers peak bandwidth of approximately 51.2 GB/s per channel, or over 100 GB/s in dual-channel configurations, enabling faster data movement that enhances multitasking by allowing concurrent applications to share resources without significant slowdowns; in gaming, it supports higher frame rates in bandwidth-limited titles by streamlining level loading and asset streaming. Latency optimizations, such as tighter timings in high-end modules, further improve responsiveness in latency-bound workloads, though gains are workload-specific and more pronounced in professional simulations than casual use. In server and virtualization contexts, volatile memory underpins large-scale data processing by allocating expansive memory pools for virtual machines and containerized environments, often exceeding hundreds of gigabytes per system. Error-correcting code (ECC) variants of DRAM are ubiquitous here, employing algorithms to detect and correct single-bit errors in real time while identifying multi-bit faults, thereby ensuring data reliability across distributed workloads like cloud computing and big data analytics without interrupting operations. This capability is critical for enterprise servers handling petabyte-scale processing, where even rare errors could cascade into failures.
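
Two of the figures above follow from short calculations, sketched below: peak DDR5-6400 bandwidth from the transfer rate and bus width, and an average memory access time (AMAT) illustrating how SRAM caches hide DRAM latency. The hit rates and cycle counts are illustrative assumptions, not measurements of a specific CPU.

```python
# Quick checks on the figures above. Cache latencies and miss rates are
# assumed illustrative values, not measurements of any particular CPU.

# Peak bandwidth: 6400 MT/s x 8 bytes per transfer on a 64-bit channel.
transfers_per_s = 6400e6
bytes_per_transfer = 8
peak_gb_s = transfers_per_s * bytes_per_transfer / 1e9
print(f"DDR5-6400 peak: {peak_gb_s:.1f} GB/s per channel")   # 51.2 GB/s

# AMAT in cycles, walking down the L1 -> L2 -> L3 -> DRAM hierarchy.
l1_hit, l2_hit, l3_hit, dram = 4, 12, 40, 200   # access latencies (cycles)
m1, m2, m3 = 0.05, 0.30, 0.40                   # miss rates at each level

amat = l1_hit + m1 * (l2_hit + m2 * (l3_hit + m3 * dram))
print(f"AMAT = {amat:.1f} cycles vs {dram} cycles without caches")  # 6.4
```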

In Embedded and Mobile Devices

In embedded and mobile devices, volatile memory is optimized for low power consumption and compact form factors to accommodate battery life and space constraints. Low-power DDR (LPDDR) variants, such as LPDDR4 and LPDDR5X, are widely used in smartphones and tablets, providing high bandwidth while minimizing energy use. For instance, LPDDR5X supports data rates up to 8.5 Gb/s or higher and capacities reaching 16 GB in select 2025 flagship devices like the Samsung Galaxy S25 Ultra (1 TB storage models), enabling efficient handling of multitasking and AI workloads. These memories incorporate deep power-down and self-refresh modes that reduce standby power consumption by up to 40% compared to LPDDR4, allowing devices to enter ultra-low-power states during idle periods and extend runtime. Static random-access memory (SRAM) plays a critical role in embedded systems, particularly on-chip implementations integrated with microcontrollers based on Arm Cortex-M cores. In wearables such as smartwatches and fitness trackers, on-chip SRAM provides fast, low-latency access for real-time tasks, including sensor data processing and activity recognition algorithms. For example, microcontrollers based on Cortex-M4 and M7 cores commonly feature 512 KB to 2 MB of embedded SRAM, enabling deterministic performance in power-sensitive environments without the refresh overhead of DRAM. This configuration supports on-device machine learning in devices like the Alif Ensemble E3 MCU, where SRAM handles inference for human activity monitoring with minimal energy draw. In automotive and Internet of Things (IoT) applications, volatile memory is engineered for reliability under extreme conditions. Automotive-grade DRAM, such as Micron's LPDDR4 variants, operates reliably from -40°C to 85°C, making it suitable for advanced driver-assistance systems (ADAS) that process camera and radar data in harsh vehicle environments. Battery-backed SRAM is employed in IoT data loggers to retain critical information during power interruptions, with devices like Infineon's micropower asynchronous SRAM ensuring non-volatility through integrated battery support for applications such as industrial monitoring. These adaptations involve inherent trade-offs between speed, power, and size in resource-limited settings. LPDDR5X modules typically consume less than 1 W under typical loads due to reduced operating voltages of around 1 V, but achieving higher speeds often requires dynamic voltage scaling to avoid excessive drain on batteries. In embedded designs, SRAM's static nature offers superior speed over DRAM but at the cost of larger die area per bit, necessitating careful partitioning to balance responsiveness with overall system efficiency in wearables and IoT nodes.

Comparison with Non-Volatile Memory

Advantages and Disadvantages

Volatile memory exhibits significant advantages in performance and usability when compared to non-volatile alternatives like NAND flash, particularly in scenarios demanding rapid data access and modification. Its read and write operations are substantially faster, with typical access latencies for DRAM ranging from 50 to 100 ns, in contrast to 25–50 µs for NAND flash, enabling volatile memory to achieve 250–500 times lower latency for random operations. This speed superiority stems from the simpler cell structures in volatile technologies, such as capacitor-based cells in DRAM, which avoid the charge trapping and tunneling processes inherent to non-volatile cells. Furthermore, volatile memory supports virtually unlimited write cycles without physical degradation or wear, unlike NAND flash, which is limited to 10,000–100,000 program/erase cycles per cell, making it highly suitable for workloads involving frequent data rewriting. In terms of density and cost efficiency for temporary storage roles, volatile memory benefits from advanced fabrication; modern DRAM dies reach up to 32 Gb (4 GB) per chip, as announced in November 2025, with stacked high-bandwidth memory (HBM) configurations achieving 48 GB per package in 2025 designs, though this comes with refresh overhead that consumes additional energy. While the cost per bit for volatile memory is higher than for bulk non-volatile storage—approximately $5–10 per GB versus under $0.50 per GB for NAND flash—it remains economically viable for high-performance temporary data handling due to optimized fabrication for speed rather than persistence. Despite these strengths, volatile memory's volatility poses critical drawbacks relative to non-volatile options, primarily the risk of complete data loss upon power interruption, which mandates integration with backup mechanisms or persistent storage to prevent information loss in practical systems. This limitation restricts its use to powered environments, precluding standalone applications where data retention without a continuous supply is required. Additionally, volatile memory incurs higher idle power consumption due to periodic refresh operations—typically several milliwatts per gigabyte for DRAM, compared to near-zero in NAND flash—contributing to elevated overall energy use in systems with intermittent activity. Reliability concerns further compound these issues, as volatile memory cells are susceptible to soft errors induced by cosmic rays or alpha particles, which can alter stored data; these transient faults are commonly mitigated through error-correcting codes (ECC), adding overhead but enabling robust operation in error-prone environments.

Emerging Hybrid Technologies

Emerging hybrid technologies seek to mitigate the volatility of traditional memory like DRAM and SRAM by integrating non-volatile elements, enabling faster persistence without fully sacrificing performance. One prominent example is persistent memory, exemplified by Intel's Optane using 3D XPoint technology, which combined the speed of DRAM with the data retention of non-volatile storage to serve as a byte-addressable tier between volatile RAM and slower SSDs. This hybrid approach allowed systems to maintain larger working sets in memory during power loss, reducing latency for data-intensive applications, though Intel discontinued Optane shipments by late 2025 due to market challenges. Despite its phase-out, 3D XPoint's architecture influenced subsequent designs by demonstrating how cross-point arrays could bridge volatile and non-volatile properties, offering up to 10 times the endurance of NAND flash while approaching DRAM latencies. Computational random-access memory (RAM) hybrids, such as processing-in-memory (PIM) integrated with high-bandwidth memory (HBM), further address volatility limitations by embedding computation directly within or near volatile memory arrays to minimize data movement overheads. In PIM architectures, logic units are placed inside DRAM chips or the logic layers of 3D-stacked HBM, enabling in-situ operations like bitwise computations or neural network inferences that reduce energy costs by up to 87% compared to traditional CPU-DRAM setups. For instance, designs leveraging HBM's high internal bandwidth have shown 13.8× performance gains in graph processing workloads by performing bulk operations directly within the memory, hybridizing volatile storage with computational elements to tolerate brief volatility in data-intensive tasks. These systems often incorporate non-volatile buffers for checkpointing, enhancing reliability in volatile environments without full non-volatility. Battery-backed hybrid approaches provide short-term persistence for volatile memory in critical systems by pairing it with supercapacitors, which supply power during outages to flush data to non-volatile storage. This configuration, seen in secure non-volatile memory architectures like SecPB, uses supercapacitors alongside SRAM caches to maintain their contents for seconds to minutes after power loss, avoiding the leakage issues of traditional batteries while enabling instant-on recovery. In mission-critical applications, such as RAID controllers, supercapacitor-backed DRAM stacks prevent data loss by sustaining write operations, offering higher cycle life and robustness over battery alternatives. Looking toward future directions, MRAM-SRAM hybrids are poised to enable non-volatile caches by 2030, combining MRAM's retention with SRAM's speed to eliminate volatility in cache hierarchies without significant performance penalties. Advanced spin-orbit torque (SOT)-MRAM designs achieve bit-flipping speeds rivaling SRAM (<1 ns) with endurance over 10¹⁵ cycles, allowing hybrid caches where MRAM handles retention-critical data while SRAM manages frequent accesses; recent tungsten-based SOT-MRAM developments in October 2025 further enhance speed and energy efficiency for such applications. In GPU architectures, MRAM-based caches have demonstrated up to 50% energy reductions and improved hit rates for graphics workloads, paving the way for scalable, power-efficient systems. These developments, including sustainable SOT-MRAM variants, target broader adoption in embedded and data-center caches, potentially displacing pure volatile memory in energy-constrained environments.
