
bcache

Bcache is a block layer cache subsystem integrated into the Linux kernel, designed to accelerate I/O operations by using faster storage devices, such as solid-state drives (SSDs), to cache data from slower underlying block devices like hard disk drives (HDDs) or RAID arrays. Developed by Kent Overstreet, it was first merged into the mainline kernel in version 3.10 in 2013, providing a filesystem-agnostic solution that operates at the block level to enhance performance without requiring changes to upper-layer software. The primary purpose of bcache is to bridge the speed gap between inexpensive, high-capacity HDDs and fast but smaller SSDs, enabling systems ranging from desktops to enterprise storage arrays to achieve significantly higher throughput for frequently accessed data. It supports multiple caching modes, including writethrough (where writes are sent synchronously to both the cache and the backing device), writeback (for higher write performance by caching writes before flushing them to the backing device), and writearound (bypassing the cache for writes, which go directly to the backing device). Key features include dynamic attachment and detachment of cache devices at runtime, support for multiple backing devices per cache set, and sequential I/O detection that skips large sequential reads and writes to preserve SSD lifespan and keep cache space available for the random operations that benefit most from caching. Bcache employs a hybrid btree and journal structure for efficient metadata management, allocating data in erase-block-sized buckets to optimize SSD write behavior, and it aims to preserve data integrity across unclean shutdowns through barriers, flushes, and automatic recovery mechanisms. Performance benchmarks have demonstrated capabilities up to 1 million IOPS for random reads, with random write throughput reaching 18.5K IOPS in early tests, outperforming direct SSD access in certain workloads. Configuration and monitoring occur via sysfs interfaces, allowing fine-tuned control over cache behavior, error handling (such as disabling caching on unrecoverable errors), and statistics, making it suitable for production environments. While bcache has influenced subsequent developments like the bcachefs filesystem, it remains a standalone caching tool focused on block-level acceleration.

Introduction

Definition and Purpose

Bcache, short for block cache, is a caching mechanism integrated into the Linux kernel's block layer, enabling fast storage devices such as solid-state drives (SSDs) to serve as a read/write cache for slower secondary storage devices like hard disk drives (HDDs). This design operates at the block level, intercepting I/O requests to the backing storage and managing data placement transparently, without requiring modifications to the filesystem or applications. The primary purpose of bcache is to enhance input/output (I/O) performance, particularly for the random access patterns common in mixed workloads, by storing frequently accessed data on high-speed caching media while retaining the larger capacity of slower devices. It facilitates the creation of hybrid storage volumes that leverage the speed of SSDs for latency-sensitive operations and the cost-efficiency of HDDs for bulk storage, thereby reducing overall system latency and improving throughput for applications like databases or virtual machines. By prioritizing caching of hot data on faster tiers, bcache optimizes resource utilization in environments where full SSD replacement would be prohibitively expensive. Key benefits of bcache include its cost-effective approach to storage tiering, allowing organizations to augment existing HDD capacity with smaller, affordable SSDs rather than overprovisioning expensive all-flash arrays. Additionally, its block-level granularity avoids the overhead associated with filesystem-level caching, enabling efficient handling of diverse I/O patterns while supporting modes such as writethrough and writeback for flexible performance and safety trade-offs. This results in significant performance gains, such as several times higher IOPS for random reads compared to uncached HDDs alone.

Basic Operation

Bcache functions as a block-layer cache in the Linux kernel, intercepting I/O requests to a backing device and managing data flow between it and a faster cache device, such as an SSD, in a manner transparent to upper-layer filesystems. This setup allows hybrid storage configurations where the cache accelerates access to frequently used data blocks on slower, higher-capacity backing storage like HDDs or RAID arrays. For read operations, bcache first checks the cache for the requested blocks. On a cache hit, the data is served directly from the cache device, providing low-latency access. In the case of a cache miss, the requested blocks are retrieved from the backing device, and—unless the I/O is detected as sequential (with a default cutoff of 4 MB)—they are then populated into the cache for future use. This miss-driven population warms the cache over time while avoiding unnecessary caching of large sequential reads that would not benefit from the SSD's speed. Write operations in bcache support two primary modes: writethrough and writeback. In writethrough mode, incoming writes are synchronously applied to both the cache and the backing device, ensuring data consistency without buffering dirty data in the cache. Conversely, in writeback mode (which is disabled by default but can be enabled at runtime), writes are initially directed only to the cache for immediate acknowledgment, with the dirty data later flushed asynchronously and written out sequentially to the backing device. This writeback approach improves write latency and throughput by leveraging the cache device's speed, though it introduces a window of potential data loss in case of cache failure. Cache population and eviction are managed using least-recently-used (LRU)-like heuristics that track and prioritize cached extents in the btree index, ensuring efficient use of limited cache space. When the cache fills, less recently accessed buckets are evicted to make room for new data, while the btree persistently records which extents of the backing device are cached. Sequential I/O, both reads and writes, is typically bypassed so that cache capacity is reserved for the random access patterns that benefit most from caching.
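
The flow described above can be observed directly through bcache's statistics counters. The following is a minimal sketch, assuming an already-configured /dev/bcache0 device mounted at /mnt and a small test file (names are illustrative): it reads the file twice and compares hit/miss counters, since the first pass should populate the cache and the second should be served from it.

    # Snapshot hit/miss counters before the test.
    grep . /sys/block/bcache0/bcache/stats_total/cache_hits \
           /sys/block/bcache0/bcache/stats_total/cache_misses

    # First read: cold, expect misses (file kept small so sequential bypass does not kick in).
    dd if=/mnt/testfile of=/dev/null bs=4k iflag=direct

    # Second read: warm, expect hits served from the cache device.
    dd if=/mnt/testfile of=/dev/null bs=4k iflag=direct

    # Snapshot the counters again and compare with the first reading.
    grep . /sys/block/bcache0/bcache/stats_total/cache_hits \
           /sys/block/bcache0/bcache/stats_total/cache_misses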

History and Development

Origins and Initial Release

Bcache was primarily developed by Kent Overstreet, who announced the project on July 2, 2010, through an article on LWN.net detailing its design as a Linux kernel block layer cache for improving block device performance using solid-state drives (SSDs). The early development of bcache was driven by the growing availability of SSDs in the late 2000s and the need for an efficient, general-purpose caching solution within the Linux kernel that could accelerate slower storage devices without requiring modifications to existing filesystems. Initial prototypes emphasized integration at the block layer, ensuring independence from specific filesystem implementations to allow broad compatibility across Linux storage stacks. Pre-release discussions and patch sets were shared extensively on the Linux kernel mailing list (LKML) and Overstreet's personal website, bcache.evilpiepirate.org, where documentation, code repositories, and wikis facilitated community feedback and iterative improvements over the following years. Bcache reached what Overstreet declared production-ready status in May 2012 with the release of the version 13 patch set, marking it stable for real-world deployment as an out-of-tree module. Its first official inclusion in the mainline kernel occurred in version 3.10, released on June 30, 2013.

Kernel Integration and Evolution

Bcache was integrated into the mainline Linux kernel with version 3.10, released on June 30, 2013, enabling its use as a stable block caching mechanism without requiring external patches. This merge marked bcache's transition from an out-of-tree project to a core kernel component, and it has remained available in all subsequent kernel releases without major deprecations or removals. Following its integration, bcache underwent incremental enhancements focused on stability and performance across kernel versions up to the 6.x series. These updates included optimizations for better I/O handling and compatibility with emerging storage technologies, such as support for NVMe SSDs as caching devices once kernel NVMe drivers matured around version 3.13. In 2017, Coly Li was appointed as co-maintainer to support ongoing development and stability improvements. Such evolution ensured bcache's reliability in diverse environments, with ongoing refinements addressing edge cases in block layer interactions. Bcache's development trajectory is linked to the bcachefs filesystem, a successor project led by the same primary developer, Kent Overstreet, which aimed to extend bcache's caching concepts into a full filesystem. Bcachefs was merged during the 6.7 development cycle in October 2023, with that kernel released on January 7, 2024, but it faced significant challenges related to stability and maintenance disputes, leading to its designation as externally maintained in kernel 6.17 (released September 28, 2025) and its removal committed for kernel 6.18 (expected release late 2025). This outcome reinforced bcache's position as the primary, mature caching solution within the kernel. As of November 2025, bcache remains stable and actively maintained in versions 6.12 and later, with no major rewrites planned but continued fixes integrated through standard kernel development channels, including patches for issues such as a NULL pointer dereference in the cache-set flushing path. Its enduring presence underscores its role as a dependable tool for SSD caching in production systems.

Technical Architecture

Components

Bcache consists of several core hardware and software elements that form the foundation of its caching mechanism. The primary components include the backing device, the caching device, the superblock for metadata management, and the integration with the kernel's block I/O path. The backing device serves as the slower, high-capacity storage layer that holds the persistent data in a bcache setup. Typically implemented using hard disk drives (HDDs) or RAID arrays, it provides the bulk storage for the filesystem or data volumes, while an attached cache accelerates access to it. This device can function independently in passthrough mode without an attached cache, ensuring data availability even if caching is disabled. For example, a large HDD array might be designated as the backing device to store terabytes of data, with bcache transparently overlaying the cached portions. The caching device, in contrast, is a faster medium dedicated to holding frequently accessed data to improve performance. Common examples include solid-state drives (SSDs) or NVMe devices, which offer low-latency reads and writes compared to the backing device. The caching device is typically smaller in capacity than the backing device, as it acts solely as an accelerator rather than a full replacement. Caching devices are organized into a cache set, a grouping designed to distribute load across multiple cache devices and enhance reliability, and which supports modes like writethrough or writeback for write handling. Both the backing and caching devices rely on a superblock, a critical metadata structure that enables their registration and coordination within bcache. Located near the start of each device (with cached data beginning at a default 8 KiB offset on the backing device), the superblock stores essential information such as device UUIDs for identification, cache configuration parameters, and version details to ensure compatibility. This structure is vital for attaching devices to a cache set and recovering data integrity, as it allows the kernel to recognize and validate bcache-formatted devices during boot or reconfiguration. At the software level, bcache integrates into the kernel's block layer by registering the combined backing and caching setup as a single block device, such as /dev/bcache<N>. This allows it to intercept I/O requests through the kernel's request queues, transparently routing reads and writes to the appropriate component—cache for hits or backing device for misses—without altering the upper-layer filesystem's view. The integration is exposed via sysfs interfaces at /sys/block/bcache<N>/bcache and /sys/fs/bcache/<UUID>, enabling runtime control and monitoring of the I/O path.
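
A quick way to see these components on a live system is to inspect the superblocks and the sysfs control points. The commands below are a sketch assuming /dev/sda as the backing device, /dev/nvme0n1 as the caching device, and bcache-tools installed; device names are examples only.

    # Print the superblock of each member device (UUIDs, cache set membership, bucket size).
    bcache-super-show /dev/sda
    bcache-super-show /dev/nvme0n1

    # Per-device control files for the combined block device /dev/bcache0:
    ls /sys/block/bcache0/bcache/

    # Per-cache-set directories (one per cache set UUID), plus the register files:
    ls /sys/fs/bcache/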

Data Management

Bcache employs a B+ tree structure as its primary on-disk index to map cached extents from the backing device to their locations on the cache device. This structure efficiently tracks data ranging from single sectors up to full bucket sizes, with btree nodes indexing large regions of the cache. To support efficient updates, bcache uses a hybrid btree/log mechanism, where recent changes are first appended to a journal before being incorporated into the btree, minimizing random writes to the cache device. The cache device is divided into buckets sized to match SSD erase blocks, typically ranging from 128 KB to 1 MB, to align with flash characteristics and reduce write amplification. Buckets are allocated sequentially and filled before reuse; upon invalidation, entire buckets are discarded rather than partially overwritten, ensuring predictable wear and performance. This approach avoids the fragmentation and performance degradation associated with smaller, misaligned allocations on solid-state media. Metadata management in bcache relies on a journal to record recent modifications to the btree and allocation state, with journal writes delayed by up to 100 milliseconds by default (configurable via the journal_delay_ms sysfs parameter) to batch operations and improve efficiency. Cached data is ranked by access recency, with bucket priorities distinguishing hot (frequently accessed) from cold (infrequently used) data; these priorities influence eviction decisions, with hotter data retained longer in the cache. The priority_stats sysfs file provides metrics such as the percentage of unused buckets and the average bucket priority to monitor cache behavior. Garbage collection periodically frees invalidated buckets by scanning the btree and discarding obsolete keys, triggered manually via the trigger_gc sysfs entry or automatically under low free space conditions, ensuring sustained bucket availability. For error resilience, bcache replays the journal when a cache is registered to recover from crashes or unclean shutdowns, reconstructing the index state without requiring a formal clean shutdown. It handles I/O errors from the cache device by invalidating affected data and falling back to the backing device, with configurable error thresholds to disable caching if failures exceed limits. However, bcache lacks built-in checksumming for cached user data, instead relying on the error detection capabilities of the underlying storage devices.
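
These metadata structures can be inspected at runtime through sysfs. The snippet below is a sketch assuming a single registered cache set; the UUID glob and the exact location of trigger_gc (under the cache set's internal/ directory on current kernels) may vary between kernel versions.

    # Pick the first cache set directory under /sys/fs/bcache/ (assumes one cache set).
    CSET=$(ls -d /sys/fs/bcache/*-*-*-*-* | head -n 1)

    # Bucket usage breakdown: unused, clean, dirty, metadata percentages and priorities.
    cat "$CSET/cache0/priority_stats"

    # Journal write batching delay in milliseconds (defaults to 100).
    cat "$CSET/journal_delay_ms"

    # Force a garbage-collection pass to reclaim invalidated buckets.
    echo 1 > "$CSET/internal/trigger_gc"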

Features and Capabilities

Caching Policies

Bcache employs several configurable caching policies to manage data placement and I/O operations between the cache device (typically an SSD) and the backing device (such as an HDD), balancing performance, safety, and device wear. These policies determine whether writes are cached, how reads are handled on misses, and when to bypass caching for specific patterns like sequential I/O. The primary modes include writethrough, writeback, and writearound, with additional behaviors for read prefetching and sequential detection. In writethrough mode, writes are performed synchronously to both the cache and the backing device, ensuring that data reaches stable storage before the operation completes. This approach prioritizes data safety, as there is no risk of loss from uncommitted cache contents. Reads are served from the cache if the data is present (a cache hit), otherwise fetched from the backing device and potentially cached for future access. If a write to the cache fails, bcache invalidates the corresponding entry in the cache to maintain consistency. This mode is the default when writeback caching is disabled. Writeback mode buffers writes initially in the cache, deferring the transfer to the backing device until later via asynchronous flushes. Dirty data—changes pending write to the backing device—is written back by scanning the index from start to end, so that flushes to the backing device are largely sequential. Reads follow the standard path: served from the cache on a hit or from the backing device on a miss, with the missed data loaded into the cache. While this mode offers higher write throughput and lower latency, it introduces a risk of data loss if the cache device fails before dirty data is flushed to the backing device. Writeback is disabled by default and can be toggled at runtime. The writearound policy bypasses the cache entirely for writes, directing them straight to the backing device to avoid unnecessary SSD wear from patterns that do not benefit from caching, such as large sequential transfers. Reads in this mode are still served from the cache when possible, but writes do not populate or modify cache contents. Independently of the selected mode, bcache detects sequential I/O patterns—keeping a rolling average of I/O sizes per task—and bypasses the cache when that average exceeds a configurable cutoff (defaulting to 4 MB), skipping caching of streaming transfers to prioritize random workloads and protect the cache device. For read optimization, bcache supports a read-around policy, also known as readahead, which optionally prefetches adjacent blocks into the cache upon a read miss. On a cache miss, the system rounds up the read request to a specified size (default 0, meaning disabled) and loads the additional data from the backing device into the cache, anticipating nearby future accesses. This helps improve future hit rates for streaming or sequential reads without affecting write operations.
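
The policies above map onto a handful of sysfs attributes on the backing device. The following is a minimal sketch, assuming a running /dev/bcache0 device; values are written as plain byte counts since human-readable suffix handling can differ between kernel versions.

    # Show available modes; the active one is printed in brackets.
    cat /sys/block/bcache0/bcache/cache_mode

    # Switch the write policy at runtime (takes effect immediately).
    echo writearound > /sys/block/bcache0/bcache/cache_mode
    echo writeback   > /sys/block/bcache0/bcache/cache_mode

    # Raise the sequential bypass threshold from the 4 MB default to 16 MB.
    echo $((16 * 1024 * 1024)) > /sys/block/bcache0/bcache/sequential_cutoff

    # Enable read-around prefetching of 512 KiB on cache misses (0 disables it;
    # this attribute is absent on kernels that dropped bcache's own readahead).
    echo $((512 * 1024)) > /sys/block/bcache0/bcache/readahead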

Performance Enhancements

Bcache achieves high throughput through its efficient btree index for metadata management, enabling up to 1,000,000 IOPS on random reads when paired with sufficiently fast flash storage. The design minimizes lookup overhead by using large btree nodes that reduce tree depth, while low-overhead operations ensure quick access to cached extents without excessive CPU or I/O costs. This optimization is particularly beneficial for workloads dominated by random access patterns, where traditional hard disk drives (HDDs) struggle, allowing bcache to accelerate throughput significantly by serving requests directly from the SSD cache. To reduce wear on solid-state drives (SSDs), bcache employs sequential bucket writes, allocating data in erase block-sized units and filling them contiguously before issuing discards for reuse. Delayed flushes further minimize unnecessary erases by batching dirty data and writing it out sequentially to the backing device, scanning the index from start to end. Additionally, bcache avoids caching sequential I/O by default—using a rolling average with a 4 MB cutoff—to prevent cache pollution from large, streaming transfers that do not benefit from SSD acceleration, thereby preserving cache space for random workloads. Bcache supports multiple backing devices per cache set but currently only a single cache device per set, with multi-cache support planned for the future. Backing devices can also be attached or detached at runtime without rebooting or unmounting, using commands like echo <CSET-UUID> > /sys/block/bcache0/bcache/attach, which facilitates dynamic reconfiguration in production environments. This flexibility ensures continuous operation while scaling performance as hardware needs evolve. For error handling, bcache automatically degrades the setup upon detecting excessive I/O errors from the caching device, switching affected backing devices to passthrough mode to bypass the faulty cache and maintain data availability. It retries reads from the backing device on read failures and flushes dirty data before shutdown to prevent loss. In 2025, a fix addressed a potential NULL pointer dereference in cache-set flushing to improve stability (CVE-2025-38263). Manual garbage collection via the trigger_gc sysfs entry reclaims stale buckets and drops invalidated entries, helping sustain cache availability and reliability over time.
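
Runtime attachment and detachment, mentioned above, are driven entirely through sysfs writes. The following sketch assumes an existing /dev/bcache0 backing device and a known cache set UUID; the placeholder UUID is illustrative only.

    CSET=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee   # substitute the real cache set UUID

    # Check for pending writeback before touching the cache.
    cat /sys/block/bcache0/bcache/state        # "clean", "dirty", or "no cache"
    cat /sys/block/bcache0/bcache/dirty_data   # amount of data still awaiting writeback

    # Detach the cache set; dirty data is flushed to the backing device first.
    echo "$CSET" > /sys/block/bcache0/bcache/detach

    # Re-attach later (or attach a different cache set) without unmounting the filesystem.
    echo "$CSET" > /sys/block/bcache0/bcache/attach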

Configuration and Management

Setup Procedures

Setting up bcache involves preparing the kernel environment, formatting the backing and caching devices with superblocks, registering the devices, attaching the cache to the backing device, and then creating a filesystem on the resulting bcache device. Prerequisites include a Linux kernel version 3.10 or later, compiled with the bcache module enabled (CONFIG_BCACHE=y or as a loadable module via modprobe bcache). Additionally, the bcache-tools package must be installed to provide user-space utilities for device formatting and registration, available from the official kernel git repository or distribution packages. The backing device is typically a slower HDD (e.g., /dev/sda), while the caching device is a faster SSD (e.g., /dev/nvme0n1); both must be unused (whole disks or partitions) to avoid data loss during formatting. To format and prepare the backing device, run the command bcache make -B /dev/sda, which writes a bcache superblock to the device. With modern bcache-tools, udev rules may register the device automatically, making it available as /dev/bcache0 (or the next available index). Without automatic registration, manually register it with echo /dev/sda > /sys/fs/bcache/register. This step initializes the device for caching but does not yet enable caching functionality. For the caching device, execute bcache make -C /dev/nvme0n1 to format it and create its superblock. Similarly, register it if needed: echo /dev/nvme0n1 > /sys/fs/bcache/register. The command outputs the cache set UUID, which is required for attachment and later appears as a directory under /sys/fs/bcache/. Attach the cache to the backing device by writing the cache set UUID to the attach file: echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach. This links the devices, activates caching on /dev/bcache0, and makes the combined device available for use; the superblock on the backing device stores metadata about the attachment. By default, the cache mode is writethrough and can be adjusted later via sysfs. Finally, create a filesystem on the bcache device, such as mkfs.ext4 /dev/bcache0, and mount it (e.g., mount /dev/bcache0 /mnt) to begin using the cached storage.
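
The whole procedure can be condensed into a short script. This is a sketch of the steps above, assuming an unused /dev/sda (backing HDD) and /dev/nvme0n1 (caching SSD); both devices are wiped, and older bcache-tools releases spell the formatting command make-bcache rather than bcache make.

    modprobe bcache                              # ensure the kernel module is loaded

    bcache make -B /dev/sda                      # write a backing-device superblock
    bcache make -C /dev/nvme0n1                  # write a cache-device superblock (prints the cache set UUID)

    # Manual registration, only needed if udev did not do it automatically:
    echo /dev/sda     > /sys/fs/bcache/register
    echo /dev/nvme0n1 > /sys/fs/bcache/register

    # Attach the cache set to the backing device (substitute the printed UUID):
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

    # Put a filesystem on the combined device and mount it:
    mkfs.ext4 /dev/bcache0
    mount /dev/bcache0 /mnt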

Tools and Commands

Bcache management relies on a combination of user-space utilities from the bcache-tools package and the kernel's sysfs interface for post-setup operations such as tuning, monitoring, and teardown. The bcache-tools provide command-line utilities for examining bcache structures without altering runtime behavior. For instance, bcache-super-show displays the superblock contents of a cache or backing device, including fields like UUIDs and bucket sizes, which aids in troubleshooting or verification; the -f option forces continuation even if the superblock is invalid. Another utility, bcache-status, offers a formatted overview of bcache devices, including cache usage, hit rates, and recent performance metrics over intervals like the last five minutes, hour, or day. The primary interface for runtime management is sysfs, accessible under /sys/block/bcache<N>/bcache/ for individual bcache devices and /sys/fs/bcache/<cset-uuid>/ for cache sets. Key files include cache_mode, which can be adjusted to modes such as writeback for full caching, writethrough for synchronous writes, or writearound for write bypass; changes take effect immediately via commands like echo writeback > /sys/block/bcache0/bcache/cache_mode. The sequential_cutoff parameter sets the threshold for treating I/O as sequential, defaulting to 4 MB, and can be set to zero (echo 0 > /sys/block/bcache0/bcache/sequential_cutoff) to disable sequential bypass so that all I/O is considered for caching. Monitoring is facilitated through statistics directories, such as /sys/block/bcache0/bcache/stats_total/, which track metrics including cache hits, misses, bypassed I/O, and dirty percentages; these counters help reveal working set sizes and cache efficiency without additional tools. For safe detachment during maintenance, echo 1 > /sys/block/bcache0/bcache/stop initiates a graceful shutdown, flushing dirty data if in writeback mode before unregistering. Runtime modifications and teardown use sysfs echo commands for detachment and unregistration. To detach a specific cache set, echo <cset-uuid> > /sys/block/bcache0/bcache/detach removes the association while preserving the data on the backing device. Full unregistration, which closes cache devices and detaches all backing devices after flushing dirty data, is achieved with echo 1 > /sys/fs/bcache/<cset-uuid>/unregister. These operations support dynamic adjustments post-setup, such as switching cache modes or monitoring without rebooting.
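
For day-to-day monitoring, the statistics directories can be read directly without extra tooling. A small sketch, assuming /dev/bcache0 and a single cache set (the UUID in the unregister path is a placeholder):

    # Hit ratio over several time windows.
    for w in stats_five_minute stats_hour stats_day stats_total; do
        echo "$w: $(cat /sys/block/bcache0/bcache/$w/cache_hit_ratio)%"
    done

    # I/O that bypassed the cache, and data still awaiting writeback.
    cat /sys/block/bcache0/bcache/stats_total/bypassed
    cat /sys/block/bcache0/bcache/dirty_data

    # Graceful teardown: stop the bcache device, then unregister the cache set.
    echo 1 > /sys/block/bcache0/bcache/stop
    echo 1 > /sys/fs/bcache/<cset-uuid>/unregister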

Limitations and Alternatives

Known Issues

One significant risk associated with bcache's writeback caching mode is the potential for data loss during power failures or SSD cache device failures before dirty data is fully flushed to the backing device. In such scenarios, uncommitted writes in the cache may be lost, leading to filesystem inconsistencies or stale data being returned to applications if the cache becomes unavailable. To mitigate this in production environments, it is recommended to pair bcache with an uninterruptible power supply (UPS) for reliable shutdowns or to place the cache device on redundant storage, such as a RAID 1 mirror, for added durability. Bcache lacks native support for redundancy mechanisms such as RAID integration or data checksumming, placing the full burden of data durability on the underlying backing device. Without built-in checksumming for user data—checksums cover only internal metadata—bcache cannot detect or repair silent corruption in cached blocks, relying instead on the filesystem or layer below for integrity checks. This design choice simplifies the block layer cache but exposes users to higher risks in failure-prone setups without additional safeguards like external RAID arrays. Compatibility challenges arise when integrating bcache with certain storage stacks, including potential issues with Logical Volume Manager (LVM) configurations where volume resizing or snapshots may disrupt cache alignment. Similarly, while bcache functions with encrypted devices via dm-crypt/LUKS, performance degradation or boot-time complications can occur if encryption is layered beneath the cache rather than above it. Bcache is generally unsuitable for use with ZFS due to conflicts at the block layer, where ZFS's direct I/O and copy-on-write semantics interfere with bcache's caching operations, with reports of minor data corruptions or detection failures. Maintenance of bcache requires manual intervention for optimal performance under heavy workloads, particularly in tuning garbage collection to manage cache fragmentation and free space. Administrators must monitor and trigger garbage collection via sysfs interfaces like /sys/fs/bcache/<cset-uuid>/internal/trigger_gc to prevent cache exhaustion, as automatic thresholds may not suffice for high-I/O scenarios. Bcache remains actively maintained in the Linux kernel as of 2025, with recent fixes for issues such as a NULL pointer dereference in cache flushing (CVE-2025-38263, fixed in July 2025). The removal of the related bcachefs filesystem from the mainline kernel—marked as externally maintained in version 6.17 (September 28, 2025) and removed in version 6.18 (December 2025)—means advanced filesystem features developed in bcachefs, such as native multi-device redundancy and enhanced checksumming, are not available in the standalone bcache caching subsystem.
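
One commonly documented mitigation for the writeback risk is to drain dirty data before planned maintenance by temporarily switching to writethrough and waiting for the device to report a clean state. The sketch below assumes a /dev/bcache0 device currently in writeback mode.

    # Stop accepting new dirty data and let the existing backlog flush out.
    echo writethrough > /sys/block/bcache0/bcache/cache_mode

    # Wait until all dirty data has been written back to the backing device.
    until [ "$(cat /sys/block/bcache0/bcache/state)" = "clean" ]; do
        sleep 1
    done

    cat /sys/block/bcache0/bcache/dirty_data   # should now report zero pending data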

Comparisons with Other Solutions

Bcache, as a block-layer caching mechanism in the Linux kernel, differs from LVM's dm-cache in its native integration and simplicity for hybrid SSD/HDD setups. While bcache operates directly at the block device level without requiring additional volume management layers, dm-cache is built on the Device Mapper framework and leverages LVM for configuration, enabling online conversion of existing logical volumes but introducing greater setup complexity through commands such as lvcreate and lvconvert. This makes bcache preferable for straightforward caching of entire block devices in new installations, whereas dm-cache suits environments already using LVM for advanced storage management like striping or snapshots. In performance tests using random reads, bcache has demonstrated consistent throughput in optimized configurations, such as up to 68.8k total IOPS, compared to dm-cache's initial peaks around 92k IOPS when the cache is empty, which drop to around 1.5k IOPS under load once the cache is full. In contrast to the now-removed bcachefs, bcache functions solely as a pure caching layer atop existing filesystems and block devices, avoiding the complexities of a full copy-on-write (CoW) filesystem. Bcachefs, which integrated caching with multi-device support, RAID-like redundancy, compression, and checksumming, was marked as externally maintained in the mainline kernel in version 6.17 (September 28, 2025) due to ongoing stability issues and maintainer disputes, with full removal in version 6.18 (December 2025), rendering it unsuitable for production use on mainline kernels. As a result, bcache offers greater long-term stability for caching scenarios post-2025, particularly where users seek to enhance performance without overhauling their filesystem stack. Compared to ZFS's L2ARC, bcache provides block-level caching independent of any specific filesystem, supporting both read and write operations across arbitrary block devices, but it lacks the adaptive, ARC-extending intelligence of L2ARC that prioritizes hot data based on access patterns within ZFS pools. L2ARC serves as a secondary read cache on SSDs to offload the RAM-based ARC, excelling in dataset-heavy environments alongside ZFS features like compression and snapshots, yet it requires committing to the full ZFS ecosystem, including its licensing and resource demands. Benchmarks indicate L2ARC can double transaction rates in database workloads over bcache by caching more data effectively, though bcache's generality allows its use with filesystems like ext4 or XFS without ZFS overhead. Bcache is particularly suited for generic block device caching in SSD/HDD hybrid arrays, where it accelerates random I/O for general-purpose storage, but alternatives like dm-writeboost target specialized write buffering with log-structured designs that minimize overhead for bursty workloads. Dm-writeboost, derived from Solaris's Disk Caching Disk (DCD), focuses on reducing write overhead through sequential logging on SSDs before flushing to slower backing stores, achieving lower write latency in high-write scenarios compared to bcache's broader read/write balancing. Thus, while bcache supports versatile caching modes like writeback for sustained performance gains, dm-writeboost is ideal for applications with unpredictable, bursty write patterns, such as databases or virtual machines, without the broader caching overhead of bcache.
