
zram

zram is a Linux kernel module that creates compressed RAM-based block devices, named /dev/zram<id> where <id> starts from 0, enabling data written to these devices to be compressed and stored directly in system memory for faster I/O and reduced latency compared to traditional disk-based storage. Originally developed as an in-kernel implementation of the user-space tool compcache by Nitin Gupta to improve memory utilization in resource-constrained environments like virtual machines, zram evolved through intermediate forms such as ramzswap before being added to the kernel's staging area in version 2.6.33 and promoted to mainline in version 3.14. Commonly used as a swap device to extend effective memory capacity—achieving compression ratios around 2:1—or for volatile storage such as /tmp and application caches under /var, zram supports multiple compression algorithms including LZO, LZ4, and Zstandard (zstd), which can be selected and tuned via sysfs and module parameters for optimal performance based on workload. Configuration involves loading the module with options like num_devices to specify the number of devices (defaulting to 1), setting disksize for each device's capacity (e.g., 512M), and optionally limiting total memory usage with mem_limit; tools like zramctl from util-linux simplify management, including dynamic addition and removal of devices. Advanced features include idle page tracking for age-based eviction, writeback to backing storage to prevent memory exhaustion, and multi-algorithm recompression for better compression ratios, with detailed statistics available under /sys/block/zram<id>/ for monitoring ratios, I/O activity, and memory usage. Originally authored by Nitin Gupta and maintained in the mainline tree with contributors such as Minchan Kim and Sergey Senozhatsky, zram remains a key component of the Linux kernel's memory-management ecosystem, particularly beneficial in embedded systems, mobile devices, and low-RAM servers where disk I/O is a bottleneck.

Background

Swap Space in Linux

Swap space in Linux functions as a virtual memory extension, utilizing disk or RAM-based storage to accommodate memory pages when physical RAM is depleted, thereby enabling processes to operate beyond the limits of available physical memory. This mechanism supports the illusion of ample memory by serving as backing storage specifically for anonymous or private pages that cannot be discarded. The operation of swap involves a page-out process triggered by memory pressure, where the kernel identifies inactive pages—typically based on their age and usage patterns—and relocates them to the swap device to reclaim RAM for active processes. This is managed primarily by the kswapd daemon for background reclamation or by direct reclaim during allocation failures, involving steps such as allocating a slot in the swap area, writing the page content, and updating the corresponding page table entry (PTE) with a swap entry containing the device type and offset. Pages are read back into RAM on demand when accessed, replacing the swap entry in the PTE with a physical frame reference. Swap devices in Linux come in two primary types: traditional disk-based options, which include dedicated partitions formatted for swap or ordinary files on a filesystem, and RAM-based alternatives such as ramdisks created via the brd module or tmpfs mounts. The kernel limits the system to a maximum of 32 active swap areas (MAX_SWAPFILES), each tracked by a swap_info_struct that details the device's extent map and priority. Disk partitions offer persistent storage but require initialization via tools like mkswap, while swap files provide flexibility without repartitioning; RAM-based setups, however, directly consume physical memory and are typically used in constrained environments to avoid disk I/O. Performance implications of disk-based swap are significant, as the latency of I/O operations—often milliseconds, compared to nanoseconds for RAM access—can severely slow the system, especially under heavy pressure leading to thrashing, where constant page faults overwhelm productive work. On solid-state drives (SSDs), frequent writes to swap accelerate media wear due to limited write-endurance cycles, potentially shortening device lifespan in swap-intensive workloads. To mitigate these issues, optimizations like clustering related pages (SWAPFILE_CLUSTER) group swap-outs for efficient I/O, but overall, excessive reliance on disk swap remains a performance liability. The historical evolution of swap in the Linux kernel traces back to early versions in the 1990s, where initial implementations drew from Unix traditions and supported basic paging to disk partitions amid limited hardware. By the 2.4 series (around 2001), swap management included clustered I/O and priority-based device selection, but suffered from fragmented code and inefficient reclaim on large-memory systems. The pivotal shift occurred with the 2.6 series (2003 onward), which introduced a unified memory-management subsystem that integrated swap with general page allocation, enhanced kswapd for proactive reclaim, and supported file-backed swap alongside partitions. Modern kernels (4.x and later) further refined this through NUMA-aware policies, multi-device clustering, and improvements for massive memory configurations, culminating in a robust, unified framework.
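For contrast with the zram-based configuration discussed later, the disk-backed path above can be sketched with standard utilities; this is only an illustrative sequence, and the file path, size, and priority are arbitrary choices:

```bash
#!/bin/bash
# Illustrative setup of a traditional disk-backed swap file (run as root).
# The path /swapfile and the 2 GiB size are arbitrary examples.

fallocate -l 2G /swapfile          # reserve space on the filesystem
chmod 600 /swapfile                # restrict access, required by swapon
mkswap /swapfile                   # write the swap signature
swapon --priority 10 /swapfile     # activate with a modest priority

# Verify: each active swap area (partition, file, or zram device) is listed here
cat /proc/swaps
```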

Compressed Memory Techniques

Memory compression is a technique employed in operating systems to reduce the physical size of data stored in RAM, allowing more memory pages to be retained in volatile memory rather than being evicted to slower storage devices. This approach compresses inactive or less frequently accessed pages on the fly, thereby extending the effective capacity of RAM without relying on immediate disk input/output operations. Early implementations of memory compression distinguished between hardware-assisted methods, prevalent in mainframe systems, and software-based approaches integrated into operating system kernels. Hardware-assisted compression, such as IBM's zEnterprise Data Compression (zEDC) feature available in mainframes like the z13, leverages dedicated hardware to perform compression with minimal CPU involvement, enabling efficient data handling in high-throughput environments. In contrast, software-based compression relies on kernel-level algorithms to process data streams, as seen in early research systems where custom modules compressed page data before storage, though at the expense of additional processing cycles. In virtual memory systems, compression plays a critical role by targeting pages slated for eviction, transforming them into a compact form that can remain in RAM longer, thus minimizing the latency associated with writing to disk. This integration adds a compressed layer to the memory hierarchy, effectively delaying or reducing page faults and improving overall system responsiveness under memory pressure. A key trade-off in memory compression involves balancing the computational overhead of compression and decompression against the benefits of reduced I/O activity. While compressing data incurs CPU cycles—potentially 10-20% overhead in some kernel implementations—the savings in disk access times can outweigh this cost, especially on systems with slow storage, leading to net performance gains of up to 2x in memory-bound workloads. Prior to advancements like zram, operating systems employed compressed caching in various forms; for instance, Microsoft's ReadyBoost in Windows Vista and later versions uses compression to cache frequently accessed data on flash storage, enhancing hybrid storage performance. In the Linux kernel, zswap (introduced in version 3.11) implements a compressed cache for swap pages in RAM, using algorithms like LZ4 to reduce disk writes before evicting pages to backing swap devices. zram serves as a Linux-specific evolution of these techniques, focusing on fully in-RAM compressed block devices for swap and other uses.
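Since zswap is mentioned above as a related mechanism, a brief, hedged illustration of how it is typically toggled through its module parameters may help; the values shown are examples rather than recommended settings:

```bash
#!/bin/bash
# Inspect and adjust zswap, the compressed swap cache mentioned above (run as root).
# Values below are illustrative; defaults vary by kernel and distribution.

# Check whether zswap is currently enabled (Y/N)
cat /sys/module/zswap/parameters/enabled

# Enable it and pick a compressor and pool-size cap
echo 1   > /sys/module/zswap/parameters/enabled
echo lz4 > /sys/module/zswap/parameters/compressor
echo 20  > /sys/module/zswap/parameters/max_pool_percent   # use at most 20% of RAM
```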

Development and History

Initial Creation

zram originated from the need to provide efficient swap space on resource-constrained devices, where traditional disk-based swap incurs significant performance penalties due to slow I/O operations. In March 2009, Nitin Gupta proposed the initial implementation, known as ramzswap (previously compcache), as a series of patches to the Linux kernel mailing list (LKML). This driver created a RAM-based block device that compressed swapped-out pages before storing them in memory, thereby avoiding disk access and reducing wear on flash storage in embedded systems and netbooks. The motivation was particularly acute for low-RAM environments, such as thin clients or embedded devices, enabling them to run more applications without physical swap partitions while maintaining responsiveness. The initial ramzswap design featured on-demand memory allocation using the xvmalloc allocator, allowing the compressed swap to dynamically grow or shrink up to a configurable limit, typically set to a fraction of available RAM to prevent overcommitment. It supported the LZO compression algorithm for its balance of speed and ratio, compressing pages in memory during swap-out and decompressing them on swap-in. Incompressible pages could be forwarded to a backing swap device if needed. Early benchmarks demonstrated that ramzswap significantly outperformed disk swap in latency-sensitive workloads. Community discussions on LKML centered around trade-offs between memory gains and added complexity, including debates on the suitability of the xvmalloc allocator compared to SLAB or SLUB, and concerns over CPU overhead from compression. Andrew Morton, the memory-management (MM) maintainer, requested detailed metrics on performance loss to evaluate its viability for mainline inclusion. These threads highlighted the driver's potential for swapless systems but also its experimental nature, leading to iterative refinements over several patch versions. Ramzswap was renamed to zram in 2010 to better reflect its general-purpose block device capabilities beyond just swapping. After spending four years in the kernel's staging area starting from Linux 2.6.33, zram was finally promoted to mainline in kernel version 3.14, released on March 30, 2014, marking its official integration as a stable driver under drivers/block. This milestone came after extensive testing and contributions from maintainers like Minchan Kim and Sergey Senozhatsky, who addressed lingering concerns about stability and scalability.

Key Milestones and Updates

In 2017, with the release of Linux kernel 4.14, zram gained support for the Zstandard (zstd) compression algorithm, offering improved compression ratios compared to previous options like LZO and LZ4 while maintaining reasonable performance for memory-constrained systems. This addition allowed users to configure higher compression levels via module parameters, enhancing zram's efficiency for swap usage without requiring kernel recompilation. During the 2015–2018 period, zram also saw enhancements for multi-device handling, enabling the creation of multiple /dev/zram instances to distribute load across CPU cores and improve parallelism in compression operations. This feature, building on the multiple compression streams introduced in kernel 3.15, facilitated better scalability on multi-core systems by allowing independent block devices for different workloads. From 2019 onward, zram and zswap can be configured together on systems with limited RAM, though enabling zswap as a cache on top of a zram swap device may reduce the effectiveness of zram due to overlapping compression. This setup aims to reduce overall memory pressure but requires careful tuning to avoid conflicts. Concurrently, Android kernels adopted zram as the default swap mechanism, leveraging it to optimize memory management on mobile devices with 4–8 GB of RAM, where traditional disk swap is impractical due to flash wear. In recent developments through 2025, zram received optimizations for ARM64 architectures in the kernel 6.x series, including better alignment with ARMv8 extensions for faster compression on devices like smartphones and embedded systems. Kernel 6.2 introduced support for multiple compression algorithms per device via the CONFIG_ZRAM_MULTI_COMP option, allowing primary and secondary compressors (e.g., LZ4 and zstd) to be combined based on page characteristics to balance speed and ratio. Additionally, kernel updates in the 6.x line added priority-based recompression logic, prioritizing high-gain pages for reprocessing to minimize CPU overhead during swap evictions. In kernel 6.12 (released November 2024), changes to runtime compression algorithm configuration improved flexibility but caused compatibility issues with existing user-space tooling in some distributions, resolved in subsequent updates such as 6.12.7 and kernel 6.13 (January 2025). Bug fixes and deprecations have focused on security and performance; for instance, older algorithms like LZO remain supported but are no longer the sole default, with emphasis shifting to more robust options amid concerns over side-channel vulnerabilities in memory compression. Mitigations for compression-based side-channel risks, such as those highlighted in timing-attack research, were incorporated through randomized allocation and per-stream isolation in kernel 5.10 and later. Vendor contributions have been pivotal: Google engineers, including Sergey Senozhatsky, submitted patches for multi-stream and multi-algorithm support, optimizing zram for ChromeOS and Android environments to handle bursty memory demands efficiently. Red Hat provides support for the zram driver in recent RHEL versions, including optimizations suitable for enterprise environments.
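The recompression feature introduced with CONFIG_ZRAM_MULTI_COMP is driven through sysfs; the following is a rough sketch based on the attribute names in the kernel's zram admin guide, and the algorithm, priority, and page types shown are illustrative:

```bash
#!/bin/bash
# Sketch of multi-algorithm recompression (requires CONFIG_ZRAM_MULTI_COMP, run as root).
# Attribute names follow the kernel zram admin guide; adjust to your kernel version.

DEV=/sys/block/zram0

# Register a secondary (higher-ratio, slower) algorithm with priority 1
echo "algo=zstd priority=1" > "$DEV/recomp_algorithm"

# Later, ask zram to recompress pages the primary algorithm stored inefficiently
# ("huge" pages) or pages that have been marked idle
echo "type=huge" > "$DEV/recompress"
echo "type=idle" > "$DEV/recompress"
```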

Technical Implementation

Core Mechanism

zram operates as a kernel module that provides compressed RAM-based block devices, named /dev/zram<id> where <id> starts from 0, functioning as virtual block devices backed entirely by physical RAM. These devices are created by loading the zram module, optionally specifying the number of devices via the num_devices parameter, with a default of 1; additional devices can be created dynamically by reading /sys/class/zram-control/hot_add, which allocates a new device and returns its id. At its core, zram handles data through on-the-fly compression and decompression within RAM. When a page is written to the device—typically during memory pressure when the kernel swaps out inactive pages—the content is compressed using the selected algorithm and stored in an allocated memory pool, avoiding the need for slower disk I/O. Upon reading the page back—such as during swap-in to restore it to active use—the compressed data is decompressed and returned to the requesting process, enabling access times comparable to uncompressed RAM. This process trades computational overhead for substantial memory efficiency, as compressed pages occupy less space than their originals. Memory allocation for zram is managed dynamically through sysfs interfaces, with the device's virtual capacity defined by writing to the disksize attribute (e.g., echo 1G > /sys/block/zram0/disksize), which sets the maximum uncompressed data the device can handle. The actual memory used for the compressed pool is constrained by an optional mem_limit attribute, unlimited by default, allowing the virtual disksize to exceed available physical RAM through overcommitment; for instance, a 2:1 compression ratio effectively doubles usable swap space relative to memory consumption. Idle initialization consumes negligible memory, approximately 0.1% of the disksize. When the compressed memory pool fills—reaching the mem_limit or exhausting available memory—zram rejects further allocations, causing write operations to fail unless a backing device is configured for writeback. In setups with writeback enabled (via CONFIG_ZRAM_WRITEBACK), idle or incompressible pages can be evicted to a designated backing device on disk, such as /dev/sda5, to reclaim space in the pool; this is triggered manually through the writeback attribute (e.g., echo idle > /sys/block/zram0/writeback). As of kernel 6.16, page index ranges can be specified (e.g., page_indexes=1-100) for more granular writeback control. Without writeback, the kernel's memory management may invoke out-of-memory killing or fall back to other swap spaces if available. zram integrates directly with the swap subsystem as a standard block device, initialized using mkswap /dev/zram0 followed by swapon /dev/zram0, allowing the kernel to treat it equivalently to traditional disk-based swap while benefiting from in-memory compression.
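The pieces described above (algorithm selection, optional backing device, disksize, mem_limit, and writeback) can be strung together in a short shell sequence; this is a sketch under the assumption that CONFIG_ZRAM_WRITEBACK is enabled, and /dev/sdb2, the sizes, and the priority are placeholders:

```bash
#!/bin/bash
# Illustrative zram setup with optional writeback (run as root).
# /dev/sdb2, the sizes, and the algorithm are placeholders for this sketch.

modprobe zram                             # creates /dev/zram0 by default
DEV=/sys/block/zram0

echo zstd      > "$DEV/comp_algorithm"    # must be set before disksize
echo /dev/sdb2 > "$DEV/backing_dev"       # optional: backing device, set before disksize
echo 4G        > "$DEV/disksize"          # virtual (uncompressed) capacity
echo 1G        > "$DEV/mem_limit"         # cap actual RAM used by the pool

mkswap /dev/zram0
swapon --priority 100 /dev/zram0          # prefer zram over disk swap

# Later, under pressure: mark long-untouched pages idle, then write them back
echo all  > "$DEV/idle"
echo idle > "$DEV/writeback"
```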

Compression Algorithms

zram defaults to lzo-rle in the mainline kernel, with LZ4 a widely used alternative chosen for its speed; LZ4 typically yields compression ratios of 2 to 3 times on mixed workloads while sustaining throughput of hundreds of megabytes per second per core (around 5-7 CPU cycles per byte on modern processors). Supported alternatives include Zstandard (zstd), which delivers higher ratios of approximately 3 to 4 times—especially at configurable levels from 1 to 22—and is suitable for scenarios prioritizing storage efficiency over raw speed, with compression requiring about 10-20 CPU cycles per byte but decompression maintaining throughput above 1 GB/s. LZO serves as a legacy option, providing speeds comparable to or exceeding LZ4 in some cases (under 5 cycles per byte for decompression) but with slightly lower ratios around 2 to 2.5 times, making it viable for older or resource-constrained systems. When the CONFIG_ZRAM_MULTI_COMP kernel configuration is enabled, zram supports multiple compression algorithms per device, allowing it to attempt recompression of pages using a secondary algorithm if the primary one fails to achieve sufficient compression. The primary algorithm is chosen via the comp_algorithm sysfs attribute, while secondary algorithms and their priorities are registered through the recomp_algorithm attribute (e.g., echo "algo=zstd priority=1" > /sys/block/zram0/recomp_algorithm). This feature enhances overall memory savings by handling a broader range of data types more effectively. Users select the primary compression algorithm through the sysfs interface at /sys/block/zram<id>/comp_algorithm, where <id> denotes the device index (e.g., 0 for the first device), by echoing the desired algorithm name prior to device initialization; available options are listed by reading the file. Runtime switching is possible only after resetting the device via /sys/block/zram<id>/reset, ensuring no active data is lost during reconfiguration. For zstd, additional parameters such as the compression level can be set using /sys/block/zram<id>/algorithm_params. These algorithms impact system throughput by influencing CPU utilization during swap operations: LZ4 and LZO enable higher page-handling rates (up to several thousand pages per second) due to their low computational cost, whereas zstd may reduce effective throughput by 20-50% under heavy load but saves more memory overall. Compression effectiveness in zram depends on page content, performing better on text or structured data (ratios up to 4:1 with zstd) owing to repetitive patterns, while binary or executable pages often achieve only 1.5-2:1 due to lower redundancy. Pages filled with identical bytes (e.g., zero-filled), tracked as same_pages in mm_stat, are handled specially with shared storage to minimize allocation overhead. Pages that do not compress effectively are stored in their uncompressed form, occupying a full page slot in the pool.
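A brief illustrative sequence for selecting an algorithm and checking how well it compresses; the device, algorithm, and sizes are examples, and the mm_stat column order follows the kernel documentation:

```bash
#!/bin/bash
# Choose a compression algorithm and inspect the resulting sizes (run as root).
DEV=/sys/block/zram0

cat "$DEV/comp_algorithm"        # list available algorithms; the current one is bracketed
echo zstd > "$DEV/comp_algorithm"
echo 2G   > "$DEV/disksize"

# mm_stat columns (per the kernel docs): orig_data_size compr_data_size
# mem_used_total mem_limit mem_used_max same_pages pages_compacted huge_pages ...
read -r orig compr used _ < "$DEV/mm_stat"
echo "uncompressed: $orig bytes, compressed: $compr bytes, pool: $used bytes"
```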

Configuration and Usage

Kernel Module Setup

The zram kernel module is loaded into the running Linux kernel using the modprobe command, with an optional num_devices parameter to specify the number of virtual block devices to create. For example, modprobe zram num_devices=4 generates four devices named /dev/zram0 to /dev/zram3, each capable of independent configuration. If omitted, the default creates a single device, /dev/zram0. After loading, each device requires initialization by setting its virtual disk size via the sysfs interface at /sys/block/zram<id>/disksize, where <id> is the device number. The command echo 1G > /sys/block/zram0/disksize allocates a 1 GiB capacity for /dev/zram0, accepting byte values or suffixes like K, M, or G for convenience. Before this step, the compression algorithm can be selected by writing to /sys/block/zram0/comp_algorithm, such as echo lzo > /sys/block/zram0/comp_algorithm to use LZO, with available options listed by cat /sys/block/zram0/comp_algorithm. To enable the device as swap space, format it using mkswap /dev/zram0, which prepares the block device for swapping. Then activate it with swapon /dev/zram0, optionally specifying a priority via -p (e.g., swapon -p 100 /dev/zram0) to influence the kernel's swap selection order. Multiple devices can be enabled similarly, allowing layered swap configurations. Runtime monitoring of zram devices is performed through sysfs attributes under /sys/block/zram<id>/, providing key metrics such as mem_used_total for total allocated memory, compr_data_size for the size of stored compressed pages, and orig_data_size for the equivalent uncompressed data volume. Additional files like mm_stat offer detailed breakdowns of memory usage, while io_stat tracks I/O operations. For boot-time automation and persistent configuration, systemd can manage zram setup via the zram-generator utility, which loads the module, initializes devices, and enables swap by generating units from a configuration file at /etc/systemd/zram-generator.conf. This approach ensures devices are ready early in the boot process without manual intervention.
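A minimal zram-generator configuration might look like the following sketch; the keys mirror those documented in zram-generator.conf(5), while the particular size expression, algorithm, and priority are illustrative choices rather than packaged defaults:

```bash
#!/bin/bash
# Write an illustrative zram-generator configuration (keys per zram-generator.conf(5)).
# The size expression, algorithm, and priority below are examples, not packaged defaults.
cat > /etc/systemd/zram-generator.conf <<'EOF'
[zram0]
zram-size = min(ram / 2, 4096)
compression-algorithm = zstd
swap-priority = 100
EOF

systemctl daemon-reload                             # re-run generators with the new config
systemctl start systemd-zram-setup@zram0.service    # unit template provided by zram-generator
```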

System Integration Examples

In desktop distributions, zram is commonly integrated via packages like Ubuntu's zram-config, which automatically configures a compressed swap device allocating approximately 50% of available RAM to improve responsiveness on systems with limited physical memory. This setup is particularly beneficial for everyday multitasking, as it reduces reliance on slower disk-based swap while keeping page access fast. In mobile and embedded environments, Android employs zram as the default swap mechanism for low-RAM devices to manage memory pressure without frequent disk I/O. For instance, on low-RAM phones such as those with 1 GB of RAM, Android typically configures a zram pool sized to a significant fraction of available RAM (often around half), compressing inactive pages to extend effective memory capacity and prevent app kills during bursts of activity. On servers, zram can enhance workloads with variable memory demands, such as those experiencing sudden spikes, by providing fast compressed swap. Administrators can use tools like zram-generator, a systemd unit generator, to create zram swap devices tailored to the host's memory without dedicated swap partitions. Custom scripts enable dynamic zram sizing at boot, adapting to the system's total RAM for optimal allocation. A representative script might calculate 50% of available memory and configure the device accordingly, as shown below (run as root during early boot via systemd or an init script):
```bash
#!/bin/bash
# Load zram module with one device
modprobe zram num_devices=1

# Get total RAM in kB from /proc/meminfo
total_ram_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')

# Set zram size to 50% of RAM in bytes
zram_size=$((total_ram_kb * 1024 / 2))
echo $zram_size > /sys/block/zram0/disksize

# Set compression algorithm (e.g., lz4 for balance)
echo lz4 > /sys/block/zram0/comp_algorithm

# Initialize as swap
mkswap /dev/zram0

# Enable swap with high priority
swapon --priority 100 /dev/zram0
```

This approach uses standard kernel interfaces for flexibility across distributions. Integrating zram often involves addressing over-allocation, where setting the device size too large relative to realistic compression ratios can exhaust physical memory when data compresses poorly, triggering the out-of-memory (OOM) killer and process termination. To monitor usage and prevent such issues, the zramctl tool provides real-time statistics on compression ratios, memory consumption, and swap activity; for example, running zramctl displays device status, helping users adjust sizes proactively.
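zramctl can also perform the device setup itself; a sketch using options documented in zramctl(8), with an arbitrary size and algorithm:

```bash
#!/bin/bash
# Use zramctl from util-linux to create and inspect a device (run as root).
# The size and algorithm are illustrative.

DEV=$(zramctl --find --size 1G --algorithm lz4)   # picks the first free /dev/zramN
mkswap "$DEV"
swapon --priority 100 "$DEV"

zramctl            # show all devices: algorithm, disk size, data, compressed, total
swapon --show      # confirm the device is active as swap
```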

Performance and Comparisons

Advantages Over Traditional Swap

zram offers significant performance improvements over traditional disk-based swap by operating entirely within RAM, eliminating the need for slow disk I/O operations. Traditional swap relies on writing pages to persistent storage like hard drives or SSDs, which introduces substantial latency due to seek delays or wear-leveling. In contrast, zram compresses pages and stores them in memory, allowing near-instantaneous reads and writes at speeds comparable to regular memory operations. This in-memory approach avoids the bottlenecks of disk access, making swap operations orders of magnitude faster in latency-sensitive scenarios. Another key advantage is the effective increase in available memory through on-the-fly compression, which can achieve ratios around 2:1 on average for typical workloads, effectively doubling the usable swap without additional hardware. As of 2025, enhancements like multi-stream compression improve throughput on multi-core processors, with recent benchmarks indicating up to 3:1 ratios using zstd on diverse workloads. This is particularly beneficial on memory-constrained systems, such as embedded devices or low-RAM desktops, where traditional swap might exhaust disk bandwidth or degrade performance due to frequent I/O. zram trades CPU cycles for compression and decompression, an overhead that is typically low on modern hardware and does not significantly impact overall system responsiveness under normal loads. By avoiding disk writes altogether, zram also reduces wear on storage devices, extending their lifespan in environments like flash-based or SSD-only systems. Furthermore, zram enables swap functionality on systems lacking dedicated swap partitions or files, providing a RAM-backed block device that integrates seamlessly with the swap subsystem. Unlike traditional swap, which can lead to thrashing on slow storage during high memory pressure, zram maintains higher throughput by keeping all swap activity in memory, resulting in smoother multitasking and reduced latency penalties. This makes it especially suitable for scenarios where disk bandwidth is limited or unavailable, such as virtualized environments or battery-powered devices.

Limitations and Trade-offs

While zram provides memory efficiency through compression, it introduces notable CPU overhead from the compression and decompression processes, which can lead to utilization spikes of 5-15% on low-end CPUs during periods of heavy swap activity. This overhead arises because every page swapped to zram must be processed algorithmically, potentially impacting overall system responsiveness in resource-constrained environments where CPU cycles are already limited. As a RAM-based block device, zram exclusively consumes physical memory for its storage pool, which can starve active applications of available RAM if the pool is configured oversized relative to expected compression ratios. Kernel documentation advises against allocating more than twice the physical RAM size, as typical ratios hover around 2:1, rendering larger pools inefficient and counterproductive by reserving memory that could otherwise support running processes. zram handles incompressible data by storing it uncompressed within the allocated pool, which diminishes the effective memory savings and can lead to suboptimal space utilization when workloads include non-compressible content like encrypted or random data. Such pages are tracked separately as "huge_pages" in zram statistics, and while they can be offloaded to backing storage if writeback is configured, this fallback reduces the overall benefits of in-RAM compression. Compressed pages in zram remain stored in physical RAM, exposing them to security risks such as cold-boot attacks, where an adversary with physical access can extract data remnants from memory after power-off using cooling techniques. Unlike disk-based swap, which can be encrypted at the block layer, zram lacks native encryption, relying instead on broader system-level protections like full-disk encryption, which do not inherently secure the compressed pool. On systems with ample RAM, such as high-end servers, zram's benefit is limited because swapping becomes infrequent and disk I/O remains negligible, making the compression overhead unjustified in the absence of memory pressure. In these scenarios, dedicating memory to the zram pool offers little advantage compared to simply leaving the physical RAM available to applications.
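One practical guard against the over-allocation risk described above is to cap the pool with mem_limit and keep an eye on the achieved ratio; the following sketch uses an arbitrary cap and assumes a single already-initialized device:

```bash
#!/bin/bash
# Cap zram's physical memory use and report the achieved compression ratio (run as root).
DEV=/sys/block/zram0

echo 1G > "$DEV/mem_limit"    # writes to zram fail rather than consume more than 1 GiB

# mm_stat: orig_data_size compr_data_size mem_used_total ...
read -r orig compr _ < "$DEV/mm_stat"
if [ "$compr" -gt 0 ]; then
    awk -v o="$orig" -v c="$compr" 'BEGIN { printf "compression ratio: %.2f:1\n", o / c }'
fi
```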

Adoption and Ecosystem

Distribution Support

zram is available as a kernel module in all major Linux distributions since the feature's promotion to mainline in kernel version 3.14, allowing users to enable compressed RAM-based swap on virtually any modern system. The module is compiled into most distribution kernels by default, though activation typically requires additional configuration or packages depending on the distro. Fedora has utilized zram as the default swap mechanism since Fedora 33, employing the zram-generator tool to automatically create a compressed swap device sized at the amount of system RAM or 8 GB, whichever is smaller, without requiring a traditional swap partition or file. This approach prioritizes performance on systems with limited storage, and the configuration can be customized via /etc/systemd/zram-generator.conf. In Ubuntu and its derivatives like Linux Mint, zram is not enabled by default; instead, these distributions rely on zswap for memory compression combined with a swap file. However, users can enable zram by installing the zram-config package, which sets up a swap device using the lz4 compression algorithm and allocates space equivalent to half the physical RAM. Discussions within the Ubuntu community have explored adopting zram as a default for future releases, citing performance benefits on low-RAM hardware, but as of Ubuntu 24.04, zswap remains the standard. Debian provides zram support through the systemd-zram-generator package, which includes a default configuration allocating 50% of RAM for the compressed swap device, but it is not activated out of the box and requires manual installation and service enabling. The Debian Wiki recommends this setup for systems with 4 GB or less of RAM to improve responsiveness without disk I/O. Arch Linux includes the zram module in its stock kernel and supports easy activation via the zram-generator package or manual commands, though it is not enabled by default, allowing users to customize compression algorithms like lzo or zstd. Users often configure it during installation for lightweight setups, with statistics accessible via /sys/block/zram0. Pop!_OS, an Ubuntu-based distribution from System76, has enabled zram by default since version 22.04, using a custom configuration that compresses memory in the background to enhance multitasking. The default setup allocates up to 100% of RAM for the device, adjustable via pop-zram.conf, and integrates with the distribution's systemd-based boot process. openSUSE incorporates zram support through the systemd-zram-service package, which can be installed and enabled for automatic swap creation, but it is not active by default in either Tumbleweed or Leap releases, reflecting a preference for traditional swap on higher-RAM systems. For low-memory scenarios, the service uses lzo-rle compression by default, with options to switch to higher-ratio algorithms like zstd. Other distributions, such as Gentoo and Alpine Linux, offer zram via kernel configuration and optional packages like zram-init, allowing compile-time enabling or runtime setup, but default installations typically defer to user preference without pre-enabling the module. Overall, while zram's kernel-level availability ensures broad compatibility, adoption as a default varies by distribution philosophy, with performance-oriented ones like Fedora and Pop!_OS leading in out-of-the-box integration.

Use Cases in Modern Systems

In modern Linux-based systems, zram is primarily employed as a compressed swap backend to enhance memory efficiency in resource-constrained environments, where traditional disk-based swap would introduce significant latency or wear on storage devices. By compressing inactive memory pages within RAM, zram allows systems to maintain more active processes without resorting to slower I/O operations, achieving compression ratios often around 2:1 to 3:1 depending on the workload and algorithm used, such as LZ4 or zstd. This makes it particularly valuable for extending effective memory capacity without additional hardware. A key application is in embedded systems, such as single-board computers like the Raspberry Pi, where physical RAM is limited (e.g., 1-4 GB) and storage often relies on flash media prone to wear from frequent writes. Here, zram serves as swap to handle bursts of memory demand during tasks like running lightweight servers or multimedia applications, reducing flash degradation and providing read/write speeds up to 10 times faster than SSDs. For instance, configuring zram to allocate 50% of available RAM with LZ4 compression enables seamless operation of RAM-intensive workloads without performance thrashing. In mobile devices running Android, zram functions alongside the low-memory killer mechanisms, compressing pages to keep more applications in a cached state for quicker resumption. This is crucial for devices with 4-8 GB of RAM, where multitasking (e.g., switching between apps or handling background services) benefits from zram's ability to store up to twice the data in the same physical space, minimizing app closures and improving user responsiveness under pressure. Android kernels have integrated enhancements like multi-compression algorithms to optimize this further. For desktops and workstations, particularly those with modest RAM configurations (e.g., 4-16 GB), distributions like Fedora and Ubuntu leverage zram to replace or augment disk swap, boosting interactivity during memory-intensive activities such as web browsing with multiple tabs or light content creation. Fedora Workstation, for example, has enabled zram by default since version 33, allocating up to the full amount of RAM (capped at 8 GB) for compressed swap, which reduces swap latency by an order of magnitude compared to SSD-based alternatives and enhances overall system stability without requiring swap partitions. Similarly, Ubuntu users on low-memory hardware install zram via packages like zram-config to mitigate slowdowns, ensuring smoother performance in everyday computing scenarios. Beyond swap, zram finds niche uses as a high-speed, compressed block device for temporary storage, such as mounting /tmp or application caches in memory-limited servers or desktops; a sketch of such a setup follows below. This arrangement provides fast I/O for transient data while conserving disk space, though it trades some CPU overhead for the compression/decompression cycles. Overall, zram's adoption underscores a shift toward in-memory compression for balancing speed and capacity in diverse modern computing paradigms.
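As an illustration of the non-swap use just mentioned, a zram device can be formatted with a filesystem and mounted as scratch space; the size, filesystem, and mount point below are arbitrary examples:

```bash
#!/bin/bash
# Use a zram device as a compressed scratch filesystem instead of swap (run as root).
# The size, filesystem, and mount point are illustrative.

DEV_ID=$(cat /sys/class/zram-control/hot_add)   # creates a new device, prints its id
echo lz4 > /sys/block/zram${DEV_ID}/comp_algorithm
echo 2G  > /sys/block/zram${DEV_ID}/disksize

mkfs.ext4 -q /dev/zram${DEV_ID}                 # any filesystem works; ext4 as an example
mkdir -p /mnt/scratch
mount /dev/zram${DEV_ID} /mnt/scratch
```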

References

  1. [1]
    zram: Compressed RAM-based block devices
    The zram module creates RAM-based block devices named /dev/zram<id> (<id> = 0, 1, ...). Pages written to these disks are compressed and stored in memory itself.
  2. [2]
    Kernel development - LWN.net
    Apr 3, 2013 · ... kernel memory management (MM) subsystem: zram, zcache, and zswap. ... Nitin Gupta was the original author of compcache and I was the original ...Avoiding Game-Score Loss... · A Vfs Deadlock Post-Mortem · In-Kernel Memory Compression
  3. [3]
    zcache: a compressed page cache - LWN.net
    Jul 27, 2010 · compcache became ramzswap which then became zram. zram is a compressed block device. It can be used for swap or as a backing store for a ...
  4. [4]
    Chapter 11 Swap Management - The Linux Kernel Archives
    Virtual memory and swap space allows a large process to run even if the process is only partially resident. As “old” pages may be swapped out, the amount of ...
  5. [5]
    Tmpfs — The Linux Kernel documentation
    ### Summary: Can tmpfs or RAM-based Files Be Used as Swap Devices in Linux?
  6. [6]
    swapon(8) - Linux manual page - man7.org
    swapon is used to specify devices on which paging and swapping are to take place. The device or file used is given by the specialfile parameter.
  7. [7]
    Chapter 15. Swap Space | Red Hat Enterprise Linux | 7
    Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages ...
  8. [8]
    None
    Nothing is retrieved...<|separator|>
  9. [9]
    [PDF] The Case for Compressed Caching in Virtual Memory Systems
    Compressed caching uses part of the available RAM to hold pages in compressed form, effectively adding a new level to the virtual memory hierarchy.
  10. [10]
    [PDF] IBM zEnterprise Data Compression
    IBM Resource Management Facility (IBM RMF™) support for hardware compression includes IBM System Management Facilities (SMF) Type 74 SubType 9 records and a new.<|separator|>
  11. [11]
    [PDF] Adaptive Main Memory Compression - USENIX
    This paper describes a memory compression solution to this problem that adapts the allocation of real memory between uncompressed and compressed pages and also.<|control11|><|separator|>
  12. [12]
    [PDF] Compute vs. IO tradeoffs for MapReduce Energy Efficiency
    Mar 29, 2010 · Compression offers a way to decrease IO demands. (compressed data has smaller size) through increased. CPU work (required for compression and ...
  13. [13]
    EnableCompression | Microsoft Learn
    Jan 24, 2019 · EnableCompression specifies whether the Microsoft ReadyBoost™ cache uses compression. Disabling compression can improve CPU usage and decrease ...
  14. [14]
    Nitin Gupta: [PATCH 0/3] compressed in-memory swapping take5
    Mar 30, 2009 · It allows creating a RAM based block device which acts as swap disk. Pages swapped to this device are compressed and stored in memory itself.Missing: proposal | Show results with:proposal
  15. [15]
    Compcache: in-memory compressed swapping - LWN.net
    May 26, 2009 · It creates a virtual block device (called ramzswap) which acts as a swap disk. Pages swapped to this disk are compressed and stored in memory itself.
  16. [16]
    Andrew Morton: Re: [PATCH 0/3] compressed in-memory swapping ...
    Apr 1, 2009 · > It allows creating a RAM based block device which acts as swap disk. > Pages swapped to this device are compressed and stored in memory itself ...Missing: proposal | Show results with:proposal
  17. [17]
    Linux_3.14 - Linux Kernel Newbies
    Mar 30, 2014 · Summary of the changes and new features merged in the Linux kernel during the 3.14 development cycle.Missing: mainline lkml
  18. [18]
    ZRAM Finally Promoted Out Of Staging In Linux Kernel - Phoronix
    Dec 18, 2013 · The change moving zRAM out of the Linux staging area in linext-next, so it should be in good shape for hitting Linux 3.14 within Linus Torvalds ...
  19. [19]
    Linux_4.14 - Linux Kernel Newbies
    Nov 12, 2017 · Summary of the changes and new features merged in the Linux kernel during the 4.14 development cycle.
  20. [20]
    zram - ArchWiki
    Aug 28, 2025 · zram, formerly called compcache, is a Linux kernel module for creating a compressed block device in RAM, ie a RAM disk with on-the-fly disk compression.
  21. [21]
    Memory allocation among processes | App quality
    Feb 19, 2025 · zRAM is a partition of RAM used for swap space. Everything is compressed when placed into zRAM, and then decompressed when copied out of zRAM.
  22. [22]
    Linux 6.2 Lands Support For Multiple Compression Streams With ...
    Dec 21, 2022 · Merged last week to Linux 6.2 as part of Andrew Morton's memory management related patches is support within ZRAM for multiple compression streams.<|separator|>
  23. [23]
    [PDF] Practical Timing Side Channel Attacks on Memory Compression
    Nov 16, 2021 · We demonstrate that a dictionary attack with. 100 guesses on ZRAM decompression can leak a 6B-secret co-located with attacker data in a page ...
  24. [24]
    Google Engineer Experimenting With ZRAM Handling For Multiple ...
    Oct 6, 2022 · There are patches that provide support for ZRAM to be able to handle multiple compression streams on a per-CPU basis.
  25. [25]
    The zram driver: Supportability in Red Hat Enterprise Linux
    Aug 6, 2024 · The zram driver is present inside "staging" directory but it is fully supported in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 by ...
  26. [26]
    sysfs-block-zram
    What: /sys/block/zram<id>/writeback Date: November 2018 Contact: Minchan Kim <minchan@kernel.org> Description: The writeback file is write-only and trigger idle ...
  27. [27]
    Smaller and faster data compression with Zstandard
    Aug 31, 2016 · The fastest algorithm, lz4, results in lower compression ratios; xz, which has the highest compression ratio, suffers from a slow compression ...Missing: LZO | Show results with:LZO
  28. [28]
    Comparison of Compression Algorithms - LinuxReviews
    zstd, appears to be the clear winner, with leading compression speed, decompression speed, and acceptable compression ratio. · We tested decompressing using a ...Introduction · Compressing The Linux Kernel · Notable Takeaways
  29. [29]
    zram-generator(8) — systemd-zram-generator — Debian unstable — Debian Manpages
    ### Summary of zram-generator Setup at Boot Time Using Systemd
  30. [30]
    ZRAM as default - Foundations - Ubuntu Community Hub
    Dec 5, 2023 · What is the consensus on enabling ZRAM by default going forward in Ubuntu? I have personally seen excellent results in 22.04 with it.Missing: Unix | Show results with:Unix
  31. [31]
    Systemd unit generator for zram devices - GitHub
    This generator provides a simple and fast mechanism to configure swap on /dev/zram* devices. The main use case is create swap devices.
  32. [32]
    zram: Compressed RAM-based block devices — The Linux Kernel documentation
    ### Summary of zram from https://www.kernel.org/doc/html/latest/admin-guide/blockdev/zram.html
  33. [33]
    Compressed swap - LWN.net
    Mar 26, 2014 · Zram acts like a special block device which can be mounted as a swap device; zswap, instead, uses the "frontswap" hooks to try to avoid swapping ...Missing: traditional | Show results with:traditional
  34. [34]
    In-kernel memory compression - LWN.net
    Apr 3, 2013 · In-kernel memory compression aims to keep more data compressed in RAM, using idle CPU cycles to compress and decompress byte sequences.
  35. [35]
    Swapfile vs Zswap vs ZRAM on VPS in 2025: Performance ... - Onidel
    Sep 3, 2025 · ZRAM excels for responsive applications requiring low-latency memory access, while zswap provides balanced performance for mixed workloads.
  36. [36]
    ZRAM and VMs | etbe - Russell Coker
    Aug 27, 2025 · Those are Intel 120G 2.5″ DC grade SATA SSDs. For most servers ZRAM isn't a good choice as you can just keep doing IO on the SSDs for years. A ...
  37. [37]
    [SOLVED] What is ZRAM (in layman's terms) and should I use it?
    Jul 17, 2025 · ZRAM compresses RAM usage, making it faster than swap, but may not be beneficial unless the system is under memory stress. It uses RAM to avoid ...
  38. [38]
    How do I use zRam? - Ask Ubuntu
    Aug 11, 2012 · zRam is a code inside kernel, that once activated, creates a RAM based block device which acts as a swap disk, but is compressed and stored in memory.
  39. [39]
    ZRam - Debian Wiki
    Jun 28, 2023 · zram (previously called compcache) can create RAM based compressed block devices. It is a module of the Linux kernel since 3.2. If physical swap ...
  40. [40]
    pop-os/default-settings - GitHub
    This repo contains Pop!_OS distribution default settings. For easy management of all Pop!_OS related source code and assets, see the main Pop! repo.
  41. [41]
    Changes/SwapOnZRAM - Fedora Project Wiki
    Oct 13, 2020 · This means swap-on-zram is not as effective at page eviction as swap-on-drive, the eviction rate is ~50% instead of 100%. But it is at least ...
  42. [42]
    zram in Embedded Linux - Circuit Cellar
    Apr 2, 2024 · This article covers how to use zram for memory swap, so as to create a drive/disk mounted in RAM memory itself, using on-the-fly data compression and ...How To Maximize Ram In A... · What Is Zram? · Zram In A Raspberry Pi
  43. [43]
    What really happens when your phone runs out of RAM?
    Jun 28, 2025 · Discover how Android uses zRAM and swap space to manage RAM, and see how Samsung RAM Plus and Xiaomi Memory Extension compare.
  44. [44]
    Kernel release notes | Android Open Source Project
    The 6.1 kernel brings multiple improvements for the ARM64 architecture, including: Support for the ARMv8.6 timer extensions; Support for QARMA3 pointer- ...
  45. [45]
    How to Install ZRAM to Boost Ubuntu Performance - Tecmint
    Oct 14, 2024 · A common practice is to set it to 50-100% of your RAM size. Now start and enable the ZRAM service with the following commands. sudo systemctl ...