zswap

Zswap is a Linux kernel feature that serves as a lightweight compressed cache for swap pages, intercepting pages during the swap-out process to compress them and store the results in a RAM-based memory pool rather than writing them directly to a backing swap device. Introduced in kernel version 3.11, it aims to mitigate the performance overhead of swapping by minimizing disk I/O, which is particularly beneficial for systems with limited RAM, overcommitted virtual machines, or solid-state drives whose longevity is degraded by frequent writes. By using compression algorithms such as LZO, LZ4, or zstd, zswap can store multiple compressed pages in the space typically occupied by a single uncompressed page, effectively extending available memory. The mechanism relies on the zsmalloc allocator to manage the compressed pool dynamically, mapping swap entries to compressed data via red-black trees (XArrays in newer kernels) for efficient retrieval. When the pool reaches its configured maximum size (typically a percentage of total RAM, such as 20%), zswap evicts the least recently used pages to the backing swap device on a write-back basis. It also optimizes storage for same-filled pages, like zero pages, by avoiding full compression and instead using a compact representation to further enhance efficiency. An optional shrinker mechanism allows proactive reclamation of cold compressed pages under memory pressure, integrating with the kernel's global reclaim process. Configuration of zswap is flexible, supporting boot-time enabling via kernel parameters like zswap.enabled=1 or runtime toggling through sysfs interfaces in /sys/module/zswap/parameters/. Users can select the compressor algorithm, set the maximum pool percentage with max_pool_percent, and adjust an acceptance threshold to control hysteresis in pool usage. Unlike the related zram feature, which creates a standalone compressed block device, zswap requires a backing swap device and complements traditional swapping by acting as a front-end cache; disabling it does not immediately evict stored pages, allowing gradual drainage. This design makes zswap particularly suitable for environments seeking to balance memory efficiency with minimal overhead.

Introduction

Overview

Zswap is a Linux kernel feature that serves as a compressed write-back cache for swapped-out pages, storing compressed versions of these pages in a RAM-based memory pool to avoid or delay disk I/O operations; it was introduced in kernel version 3.11. It functions by intercepting pages during the swap-out process, attempting to compress them, and managing the compressed data within a dynamic pool allocated from available RAM. The core purpose of zswap is to reduce swap I/O overhead by keeping frequently accessed swapped pages in compressed form in memory, thereby trading CPU cycles for potential I/O savings. Zswap was originally built on the frontswap API, which provided a "transcendent memory" interface allowing swap pages to be handled by backend mechanisms before reaching the backing swap device; frontswap was later removed from the kernel once zswap remained its only user, with zswap hooked directly into the swap path. In the basic workflow, when a page is slated for swapping, zswap attempts to compress it; if compression succeeds and space is available in the pool, the compressed page is stored there along with metadata mapping it to the original swap entry. If compression fails or the pool is full, the page is written directly to the backing swap device as a fallback. Pages stored in the pool can later be decompressed and faulted back into memory when accessed, with eviction occurring on an LRU basis to manage pool size.

Benefits and Use Cases

Zswap provides significant benefits in memory-constrained environments by compressing swap pages in RAM, thereby minimizing the disk I/O operations that would otherwise degrade system responsiveness. In systems with limited RAM, this approach mitigates the impact of thrashing by keeping more inactive pages in compressed form in memory rather than evicting them to slower storage devices. For overcommitted virtual machines, zswap reduces I/O pressure on shared backing storage, leading to lower latency and improved throughput. Additionally, by decreasing the frequency of writes to swap devices, zswap extends the lifespan of SSD-based swap partitions, which are particularly sensitive to wear from repeated write cycles. The resource efficiency of zswap stems from its ability to compress inactive pages, allowing RAM to be used more effectively for active workloads while effectively expanding available memory capacity. Typical compression ratios range from 3.6:1 on x86 architectures to 4.3:1 on POWER systems, depending on data compressibility, which can increase effective memory by 2-4 times in practice. For instance, benchmarks using the SPECjbb2005 workload demonstrated up to 40% performance improvements on systems whose working set exceeded available memory, with even higher gains (up to 60%) when leveraging hardware-accelerated compression on POWER7+ processors. This compression enables systems to handle larger workloads without proportional increases in physical memory demands. Zswap is particularly well-suited for memory-limited setups, such as desktops and laptops with 4-8 GB of RAM under multitasking loads, where it prevents thrashing by prioritizing compressed caching over disk swaps. In environments with unpredictable workloads, it enhances stability by buffering bursts of memory pressure without immediate I/O spikes. Cloud virtual machines allocated minimal RAM benefit from reduced contention on shared storage, making zswap ideal for cost-optimized deployments. Systems relying on slow or expensive storage for swap, like remote or networked devices, also gain from deferred disk access, as zswap acts as an intermediary cache. A kernel build benchmark illustrated these advantages, showing a 53% reduction in runtime and 76% less swap I/O at high thread counts. While zswap offers these gains, it introduces trade-offs, primarily additional CPU overhead for compression and decompression operations, which can reduce performance in CPU-bound scenarios if the I/O savings do not outweigh the cycles spent. However, in most memory-pressure situations, the reduction in disk I/O latency (disk access being orders of magnitude slower than memory access) makes this trade-off favorable, especially on modern multi-core processors.
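
To make the memory-multiplier arithmetic concrete, the following shell sketch estimates the effective capacity of a default-sized pool; the 16 GiB system size and the 3:1 compression ratio are illustrative assumptions, not measured values.

    #!/bin/sh
    # Illustrative estimate only: assumes a 16 GiB system, the default
    # 20% pool cap, and a hypothetical 3:1 compression ratio.
    total_ram_mib=16384                            # 16 GiB of RAM (assumed)
    pool_pct=20                                    # default max_pool_percent
    ratio=3                                        # assumed compression ratio

    pool_mib=$(( total_ram_mib * pool_pct / 100 )) # RAM consumed by the pool
    held_mib=$(( pool_mib * ratio ))               # uncompressed pages it holds
    extra_mib=$(( held_mib - pool_mib ))           # net gain over raw RAM

    echo "pool cap:                ${pool_mib} MiB"
    echo "uncompressed pages held: ${held_mib} MiB"
    echo "net effective gain:      ${extra_mib} MiB"

At these assumed values, a 3,276 MiB pool holds roughly 9,828 MiB of uncompressed pages, a net gain of about 6.5 GiB.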

Functionality

Compression and Storage Mechanism

Zswap compresses anonymous swap pages prior to storage in a dedicated pool, aiming to reduce memory pressure by retaining frequently accessed pages in compressed form rather than writing them to backing swap devices. The compression process employs a user-selectable algorithm such as LZO, LZ4, or zstd, with the default determined by the kernel configuration option CONFIG_ZSWAP_COMPRESSOR_DEFAULT. This selection can be overridden at boot time via the zswap.compressor= parameter or at runtime through the /sys/module/zswap/parameters/compressor interface. During compression, each 4 KiB page is processed individually; pages are rejected if the resulting compressed size is greater than or equal to the original page size, deeming them incompressible. For storage, zswap utilizes the zsmalloc allocator, which manages a slab-based pool of compressed objects in RAM without requiring preallocation of fixed-size blocks. Zsmalloc supports variable-sized allocations efficiently by packing multiple smaller compressed pages into larger zspages, minimizing internal fragmentation while providing handles rather than direct pointers for referencing stored data. The pool's size is dynamically adjustable up to a percentage of total RAM, capped by the max_pool_percent parameter (default 20%), ensuring it does not consume excessive system memory. To facilitate quick lookups and management, zswap maintains per-swap-type XArrays that map swap entries (identified by their offset within the swap device) to the corresponding zsmalloc handles. These indexed array structures enable efficient insertion, deletion, and retrieval operations. When a compressed page is stored, its swap entry is inserted into the XArray; upon a subsequent swap-in fault, the handle is retrieved to decompress and restore the original page into the process's address space. Zswap includes optimizations for special page patterns through the same_filled_pages_enabled option (enabled by default), which detects pages filled with identical bytes, such as zero-filled pages common in sparse allocations, and stores only the fill value without invoking the full compression algorithm. In this case, the compressed length is recorded as zero, and the fill value (e.g., all zeros) is associated with the entry, significantly reducing storage overhead for such pages while simplifying retrieval. If the zsmalloc pool reaches its capacity limit, zswap rejects the store operation, allowing the page to proceed directly to the backing swap device via the standard swap subsystem. Rejection due to pool fullness is governed by an acceptance threshold (default 90% via accept_threshold_percent), introducing hysteresis to prevent thrashing between acceptance and rejection states as pool usage fluctuates.
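
The selection and rejection behavior described above can be observed directly; a minimal root-shell sketch, assuming a zswap-enabled kernel with lz4 available in its crypto API and debugfs mounted:

    #!/bin/sh
    # Inspect the active compressor and pool cap.
    cat /sys/module/zswap/parameters/compressor        # e.g. lzo
    cat /sys/module/zswap/parameters/max_pool_percent  # e.g. 20

    # Switch new stores to lz4; pages already in the pool keep their
    # original compressor until faulted back in or written back.
    echo lz4 > /sys/module/zswap/parameters/compressor

    # Rejections of poorly compressible pages are counted in debugfs.
    cat /sys/kernel/debug/zswap/reject_compress_poor 2>/dev/null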

Eviction and Integration with Swap

Zswap integrates with the Linux kernel's swap subsystem by intercepting pages during the swap-out process, attempting to compress them into its in-memory pool before they reach the backing swap device. This integration allows zswap to serve as a cache layer, reducing direct I/O to slower storage devices. When the compressed pool fills to its configured limit, zswap begins evicting pages to maintain space. The eviction policy is based on least recently used (LRU) order, targeting the oldest compressed pages in the pool for removal when the pool reaches its maximum size, defined by the max_pool_percent parameter. These pages are decompressed and then written back to the backing swap device, ensuring that the original uncompressed data is preserved on disk. This process helps manage memory pressure by proactively freeing pool space without immediately rejecting new incoming pages. To support fine-grained control, zswap allows disabling writeback on a per-cgroup basis, preventing evicted pages from being sent to the swap device for specific control groups. Administrators can achieve this by writing 0 to the memory.zswap.writeback file in a cgroup's directory, such as echo 0 > /sys/fs/cgroup/<cgroup-name>/memory.zswap.writeback. This feature is useful in environments where certain workloads should avoid swap I/O entirely, though it risks pool exhaustion if not monitored. Under broader system memory pressure, zswap can optionally employ a shrinker for proactive reclamation. When enabled via the shrinker_enabled parameter (e.g., echo Y > /sys/module/zswap/parameters/shrinker_enabled), the shrinker scans the pool for cold pages (those unlikely to be accessed soon) and evicts them to the swap device ahead of full pool limits. This is disabled by default but can be enabled by default at build time using the CONFIG_ZSWAP_SHRINKER_DEFAULT_ON kernel configuration option, aiding overall memory management by integrating with the kernel's global reclaim paths. To prevent thrashing during repeated pool overflows, zswap implements hysteresis through the accept_threshold_percent parameter. Once the pool exceeds its limit and begins rejecting new pages, zswap will not resume accepting them until pool usage drops below this threshold (default 90% of the maximum). For instance, echo 80 > /sys/module/zswap/parameters/accept_threshold_percent widens the margin before acceptance resumes, stabilizing behavior; a value of 100 disables the hysteresis entirely. This mechanism balances pool utilization and system responsiveness during fluctuating demands.
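
Combining these controls, a hedged example follows; the cgroup name batchjobs is hypothetical, and the memory.zswap.writeback file assumes cgroup v2 mounted at /sys/fs/cgroup on a kernel new enough to provide it:

    #!/bin/sh
    # Pin one workload's swapped pages in the compressed pool: disable
    # zswap writeback for a hypothetical cgroup "batchjobs".
    mkdir -p /sys/fs/cgroup/batchjobs
    echo 0 > /sys/fs/cgroup/batchjobs/memory.zswap.writeback

    # Globally, evict cold pages proactively under memory pressure...
    echo Y > /sys/module/zswap/parameters/shrinker_enabled
    # ...and widen the hysteresis window: after an overflow, accept new
    # stores only once pool usage falls below 80% of the cap.
    echo 80 > /sys/module/zswap/parameters/accept_threshold_percent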

Configuration

Boot-Time Parameters

Zswap supports kernel command-line parameters for configuration at boot time, providing its initial settings. These apply whether zswap is compiled into the kernel or loaded as a module. The zswap.enabled parameter toggles zswap's availability: 1 enables it, 0 disables it, and the default depends on the build option CONFIG_ZSWAP_DEFAULT_ON. The zswap.compressor parameter specifies the default compression algorithm, such as lzo, lz4, or zstd, overriding the build-time default from CONFIG_ZSWAP_COMPRESSOR_DEFAULT. The chosen algorithm must be available in the kernel's crypto API, with lz4 often preferred for its balance of speed and compression ratio. The zswap.zpool parameter selects the zpool implementation for the compressed pool, such as zsmalloc (the default in recent kernels) or zbud, if compiled in. For example, zswap.zpool=zsmalloc ensures use of the more space-efficient allocator. The shrinker can be enabled by default at boot if CONFIG_ZSWAP_SHRINKER_DEFAULT_ON=y is set in the kernel configuration.
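
As an illustrative example (the values are assumptions, not recommendations), these parameters can be set on a GRUB-based distribution in /etc/default/grub:

    # /etc/default/grub (excerpt): enable zswap at boot with lz4
    # compression, the zsmalloc allocator, and a 25% pool cap.
    GRUB_CMDLINE_LINUX_DEFAULT="quiet zswap.enabled=1 zswap.compressor=lz4 \
    zswap.zpool=zsmalloc zswap.max_pool_percent=25"

    # Regenerate the bootloader configuration afterwards, e.g.:
    #   update-grub                              (Debian/Ubuntu)
    #   grub2-mkconfig -o /boot/grub2/grub.cfg   (Fedora/RHEL)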

Runtime Tuning and Controls

Zswap supports dynamic adjustment after boot through sysfs interfaces in /sys/module/zswap/parameters/. These allow enabling or disabling the feature, changing algorithms, adjusting limits, and tuning options without a kernel reload. To enable zswap at runtime, write 1 to enabled (e.g., echo 1 > /sys/module/zswap/parameters/enabled). Writing 0 disables it; existing pages remain in the pool until invalidated, faulted in, or evicted. To force eviction, use swapoff on the swap devices. The compression algorithm can be changed by writing to compressor (e.g., echo lz4 > /sys/module/zswap/parameters/compressor). New pages use the new compressor; existing ones retain the original until evicted. Supported options depend on the kernel configuration and the available crypto algorithms. The max_pool_percent parameter (default: 20) sets the maximum pool size as a percentage of total RAM (e.g., echo 30 > /sys/module/zswap/parameters/max_pool_percent). This balances memory usage against caching benefits. The same_filled_pages_enabled parameter (default: Y) optimizes storage for identical-value pages like zero pages without full compression (e.g., echo 0 > /sys/module/zswap/parameters/same_filled_pages_enabled to disable); newer kernels perform this optimization unconditionally and no longer expose the parameter. The shrinker_enabled parameter (default: N, unless CONFIG_ZSWAP_SHRINKER_DEFAULT_ON=y) enables proactive cold-page reclamation under memory pressure (e.g., echo Y > /sys/module/zswap/parameters/shrinker_enabled). The accept_threshold_percent parameter (default: 90) sets the hysteresis threshold for accepting pages into a previously full pool (e.g., echo 80 > /sys/module/zswap/parameters/accept_threshold_percent). Values below 100 prevent thrashing by refusing pages until usage drops below the threshold; 100 disables the hysteresis. These options enable tuning for specific workloads.
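
A consolidated sketch of these adjustments, written defensively since the available parameter files differ across kernel versions; the chosen values are illustrative:

    #!/bin/sh
    # Apply a runtime zswap tuning profile (root required). Each write
    # is skipped if the parameter file is absent on this kernel.
    p=/sys/module/zswap/parameters
    set_param() { [ -e "$p/$2" ] && echo "$1" > "$p/$2"; }

    set_param 1    enabled                   # turn zswap on
    set_param zstd compressor                # new stores use zstd
    set_param 30   max_pool_percent          # pool may grow to 30% of RAM
    set_param Y    shrinker_enabled          # proactive cold-page eviction
    set_param 80   accept_threshold_percent  # resume stores below 80% usage

    grep . "$p"/*                            # show the resulting settings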

Monitoring

Sysfs Interfaces

The sysfs interfaces for zswap are provided under the /sys/module/zswap/parameters/ directory, allowing runtime configuration and querying of core zswap settings. These parameters control global behavior, as zswap operates system-wide without per-cgroup sysfs controls in its base implementation; however, writeback to backing swap can be disabled per cgroup through the cgroup v2 interface at /sys/fs/cgroup/<cgroup-name>/memory.zswap.writeback by writing 0 to it. The enabled parameter toggles zswap on or off by writing 1 or 0, respectively; for example, echo 1 > /sys/module/zswap/parameters/enabled activates it at runtime, assuming sysfs is mounted at /sys. When disabled, any pages already in the compressed pool remain until evicted or invalidated, but no new pages are accepted. The compressor parameter specifies the compression algorithm in use, such as lzo, lz4, or zstd, and supports runtime changes via writes to the file; the default is set by the kernel configuration CONFIG_ZSWAP_COMPRESSOR_DEFAULT, and changes do not recompress existing pages. The max_pool_percent parameter limits the compressed pool size as a percentage of total system RAM (default 20%), influencing when zswap begins evicting pages to backing swap; reading the file returns the current limit. Zswap always optimizes storage for same-filled pages, like zero pages, by using a compact representation with a compressed length of zero, avoiding unnecessary compression overhead. The accept_threshold_percent parameter sets a hysteresis threshold (default 90%) below the pool limit at which zswap resumes accepting new pages after reaching capacity; writing 100 disables this mechanism entirely. The shrinker_enabled parameter activates the pool shrinker (default off), which evicts cold pages under memory pressure to reclaim memory; it can be toggled at runtime with Y or N. These interfaces enable monitoring of basic zswap state during operation; for instance, repeatedly reading /sys/module/zswap/parameters/max_pool_percent and /sys/module/zswap/parameters/enabled in a script can track whether the pool limit is approached and zswap remains active under load. Detailed counters for actual pool usage, such as stored pages or total bytes, are available via debugfs.
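
A minimal sketch of that polling approach, assuming sysfs is mounted at /sys; the interval and output format are arbitrary choices:

    #!/bin/sh
    # Poll basic zswap state every 5 seconds (Ctrl-C to stop).
    p=/sys/module/zswap/parameters
    while true; do
        printf '%s enabled=%s pool_cap=%s%% compressor=%s\n' \
            "$(date +%T)" "$(cat $p/enabled)" \
            "$(cat $p/max_pool_percent)" "$(cat $p/compressor)"
        sleep 5
    done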

Debugfs Statistics

The debugfs interface for zswap provides detailed runtime statistics on pool usage, compression outcomes, and rejection events, accessible under /sys/kernel/debug/zswap/ once debugfs is mounted. To enable access, mount the debugfs filesystem with mount -t debugfs none /sys/kernel/debug, assuming the kernel was compiled with CONFIG_DEBUG_FS enabled. This interface exposes multiple read-only files, each containing a single 64-bit integer representing cumulative or current metrics for zswap operations. Key statistics include pool_total_size, which reports the total size of the compressed pool in bytes, calculated as the number of pool pages multiplied by the page size. The stored_pages file tracks the current number of compressed pages held in the pool, offering insight into storage utilization and compression efficiency. Similarly, written_back_pages counts the pages evicted from the pool and written to backing swap storage, typically because the pool reached the maximum capacity defined by max_pool_percent. Rejection counters provide diagnostics on failed store attempts: reject_alloc_fail increments when the underlying buddy allocator cannot provide sufficient memory for a compressed page; reject_kmemcache_fail records rare failures to allocate metadata for pool entries; and reject_reclaim_fail tallies stores rejected after unsuccessful reclaim attempts when the pool is full. Additional counters like reject_compress_fail (compression algorithm errors) and reject_compress_poor (pages that compressed to sizes unsuitable for efficient storage) highlight potential issues with the chosen compressor. The decompress_fail counter tracks failures to decompress pages during load or writeback operations. The pool_limit_hit metric counts instances where the pool limit was reached, triggering potential writebacks. These statistics aid in troubleshooting zswap performance; for example, elevated reject_alloc_fail values suggest memory pressure constraining the pool allocator, while frequent increments of written_back_pages indicate regular evictions and reliance on disk swap. Administrators can monitor these files periodically or via scripts to assess compression success rates, such as by comparing stored_pages against total swap activity, without altering runtime behavior. For broader control, the sysfs parameters described earlier offer complementary tuning options.
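
For example, a small script (assuming a mounted debugfs and 4 KiB pages) can dump the counters and estimate the pool's current compression ratio from stored_pages and pool_total_size:

    #!/bin/sh
    # Dump all zswap counters and estimate the current compression ratio.
    d=/sys/kernel/debug/zswap
    [ -d "$d" ] || mount -t debugfs none /sys/kernel/debug

    grep . "$d"/*                       # every counter, one per line

    stored=$(cat "$d/stored_pages")     # compressed pages held in the pool
    pool=$(cat "$d/pool_total_size")    # pool size in bytes
    if [ "$pool" -gt 0 ]; then
        # uncompressed bytes / compressed bytes, assuming 4 KiB pages
        awk -v s="$stored" -v p="$pool" \
            'BEGIN { printf "approx. ratio %.2f:1\n", s * 4096 / p }'
    fi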

History and Development

Introduction and Early Versions

zswap was developed as part of the Linux kernel's memory-management subsystem to provide a compressed cache for swapped-out pages, aiming to mitigate the performance penalties associated with traditional swapping to disk. It was first merged into the mainline kernel in version 3.11, released on September 2, 2013, with Seth Jennings of IBM as its primary developer. The implementation addressed the need for efficient memory management in environments with constrained RAM, where swap operations could become significant bottlenecks due to I/O on slow storage devices. The initial design was built upon the frontswap API, merged in kernel version 3.5 in 2012 to provide a "transcendent memory" interface for swap pages without requiring modifications to the core swap subsystem. Unlike device-based solutions that necessitate dedicated RAM-backed block devices, zswap provided a lightweight, in-kernel mechanism for compressing pages directly into a RAM pool, deferring or avoiding writes to the backing swap device. This approach was motivated by the desire to reduce swap I/O in RAM-limited systems, such as embedded devices or virtualized guests under memory pressure, where even modest compression ratios could yield substantial performance gains by trading CPU cycles for fewer disk accesses. At launch, zswap supported only the LZO compression algorithm, chosen for its balance of speed and efficiency suitable for swap operations. Pages destined for swap were compressed and stored in a dynamic pool managed by the zsmalloc allocator, which had entered the kernel in version 3.1 and was optimized for handling variably sized, compressed objects with minimal fragmentation. If compression succeeded, the page was stored in the pool; failures resulted in rejection, allowing the page to proceed to standard swap-out without further intervention. Eviction from the pool followed a least recently used (LRU) policy, writing compressed pages back to the swap device when the pool reached its configured size limit, ensuring the cache remained bounded and did not exacerbate memory pressure. These features established zswap as a simple yet effective compressed swap cache, with early benchmarks showing up to a 76% reduction in swap I/O under heavy load.

Recent Enhancements

Since the mid-2010s, zswap has seen several key enhancements aimed at improving compression efficiency, pool management, and integration with modern kernel features. LZ4 support in the kernel's crypto API arrived alongside zswap itself in version 3.11 (September 2013), providing a faster alternative to the initial LZO default for scenarios prioritizing speed over compression ratio. Later, zstd compressor support was added to the crypto API in 2018, offering superior ratios at reasonable speeds and further expanding zswap's configurability for diverse workloads. These compressors can be specified at boot time using the zswap.compressor parameter or adjusted via sysfs at runtime. Post-5.0 releases introduced hysteresis controls, notably the accept_threshold_percent parameter, which causes zswap to reject new pages once the pool overflows until it shrinks below a specified percentage of capacity; this prevents thrashing between acceptance and eviction, improving stability in high-pressure scenarios. Cgroup integration advanced in 5.19 (2022), when zswap usage became chargeable and limitable per cgroup via memory.zswap.max. Kernel 6.8 (March 2024) then brought two related additions: a shrinker mechanism that proactively reclaims cold (infrequently accessed) pages from the compressed pool under memory pressure, disabled by default but controllable via the zswap.shrinker_enabled parameter or the CONFIG_ZSWAP_SHRINKER_DEFAULT_ON build option, and the per-cgroup memory.zswap.writeback control, which lets administrators prevent specific workloads or containers from writing compressed pages back to disk, useful for keeping latency-sensitive tasks' swap data in RAM. In the 2020s, zswap also benefited from tighter integration with zsmalloc, now the default allocator, including optimizations for handling same-filled pages to reduce duplication and improve density. Ongoing developments through kernel 6.17 (September 2025) include a second-chance algorithm for the shrinker's LRU in 6.12, compression batching with hardware support such as Intel IAA for high-throughput systems, and node-preferred allocation policies in zsmalloc to enhance NUMA awareness and reduce remote memory access. These updates, along with reduced writeback for SSD longevity and better VM overcommit support, emphasize scalability and efficiency in cloud and containerized environments.

Comparisons

zswap vs. zram

Zswap and zram are both mechanisms for compressed in-memory swap, but they differ fundamentally in architecture. Zswap operates as a cache layered on top of an existing backing swap device, such as a disk partition or file, intercepting pages during the swap-out process and compressing them into a RAM-based pool using the zsmalloc allocator. If the pool reaches its limit, the least recently used compressed pages are written back to the underlying swap device under a write-back policy. In contrast, zram functions as a standalone compressed block device that resides entirely in RAM, acting as a self-contained swap space without requiring any backing storage; pages swapped to zram are compressed and stored directly in the allocated memory, with no eviction to disk. These architectural distinctions lead to notable trade-offs. Zswap is particularly effective for large or unpredictable workloads on systems with fast backing storage like SSDs, as its dynamic sizing allows the pool to grow or shrink based on memory pressure, reducing I/O by keeping hot pages compressed in RAM while offloading cold ones to disk. Zram, however, offers simpler operation for fixed-size, small-scale swap needs on systems with slow or absent backing storage, providing very fast in-memory swap I/O since all operations remain in RAM, though it may incur higher overall CPU overhead because every swapped page is compressed and decompressed, and it offers no disk fallback. In terms of resource utilization, zswap employs a dynamic pool limited by max_pool_percent (defaulting to 20% of total system memory), enabling flexible allocation that avoids overcommitting while tracking pages via an XArray (a red-black tree in older kernels). Zram, by comparison, requires preallocation of a fixed device size via the disksize attribute, which bounds a predictable but static amount of RAM for compressed storage, typically achieving around a 2:1 compression ratio but dedicating resources upfront without the adaptability of zswap's pool. Choosing between zswap and zram depends on system constraints: zswap suits environments with an existing swap device on disk (e.g., SSD-backed systems) to extend effective capacity without fully replacing traditional swap, while zram is preferable for diskless setups, embedded devices, or scenarios lacking persistent storage where a dedicated in-RAM swap is needed. While zswap and zram can be used together (e.g., with zram as the backing swap device), doing so is generally discouraged because the overlapping compression roles lead to inefficiency from double compression.
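
To make the configuration contrast concrete, a sketch of each setup follows; the device name /dev/sda2 and the 4 GiB zram size are illustrative assumptions, and zramctl ships with util-linux:

    #!/bin/sh
    # zswap: compressed cache in front of an existing swap partition
    # (/dev/sda2 is an assumed example device).
    echo 1  > /sys/module/zswap/parameters/enabled
    echo 20 > /sys/module/zswap/parameters/max_pool_percent   # dynamic cap
    swapon /dev/sda2                                          # backing swap

    # zram: standalone in-RAM swap device with a fixed, preallocated size.
    modprobe zram
    dev=$(zramctl --find --size 4G --algorithm lz4)   # e.g. /dev/zram0
    mkswap "$dev"
    swapon --priority 100 "$dev"                      # prefer zram for swap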

zswap vs. Other Techniques

Zswap differs from Transparent Huge Pages (THP) in its focus on inactive memory during swapping, whereas THP optimizes active memory access by automatically allocating larger page sizes, such as 2 MB instead of 4 KB, to reduce translation lookaside buffer (TLB) misses and page-table overhead. THP supports promotion and demotion of pages for anonymous memory and tmpfs/shmem, improving performance for workloads with contiguous access patterns, but it does not incorporate compression and relies on standard swapping mechanisms for eviction. In contrast, zswap compresses pages destined for swap into a RAM pool before any disk involvement, providing a memory-multiplier effect that THP lacks, making the two techniques complementary for overall efficiency. Compared to traditional swap files or swap partitions without compression, zswap acts as an intermediary compressed cache, storing pages in RAM after compression to avoid immediate disk writes, which significantly reduces the I/O operations and latency associated with slower storage devices. Uncompressed swap directly evicts pages to disk, incurring higher bandwidth demands and potential SSD wear without the space savings from compression ratios typically ranging from 2:1 to 3:1, depending on workload. This makes zswap particularly beneficial in memory-constrained environments where frequent swapping would otherwise degrade performance due to disk bottlenecks. Unlike user-space compression approaches, such as mounting compressed filesystems or compressing data stored on tmpfs, zswap operates entirely within the kernel for seamless, automatic handling of swap pages without requiring application-level changes or manual intervention. Tmpfs, an in-memory temporary filesystem, supports swapping of its pages to backing storage but lacks built-in compression, leading to full-size swap writes and no inherent memory amplification for compressible data. User-space methods demand explicit setup, such as compressing data before storage in tmpfs, which introduces overhead and lacks the kernel's memory-pressure integration, potentially disrupting automatic reclaim responses. Zswap relies on CPU-based software compression algorithms like LZ4 or zstd, which trade processing cycles for memory savings, whereas hardware-accelerated alternatives, such as Intel's In-Memory Analytics Accelerator (IAA), offload these operations to dedicated accelerators for higher throughput and lower latency. With IAA integration in zswap via the iaa_crypto driver (available since Linux 6.8) and ongoing patches for batching optimization (as of 2025), hardware compression has been reported to achieve up to 2.2x better memory savings than software equivalents on workloads like Redis while holding p99 latency to around 7% overhead under memory pressure, but this requires compatible hardware such as Intel Xeon 6 processors. Compute Express Link (CXL)-based systems can extend memory tiers but similarly depend on specialized devices for offloading, contrasting with zswap's broad software availability at no additional hardware cost. Despite its advantages, zswap introduces compression and decompression latencies that vary with CPU load and page content, rendering it less suitable for real-time systems where predictable, bounded response times are critical, as these operations may cause unpredictable delays during memory pressure. In comparison, the deprecated zcache offered a more general-purpose compressed caching framework via the transcendent memory (tmem) API, supporting features beyond swap such as cleancache, but its larger codebase and lack of maintenance led to its replacement by focused implementations like zswap.
