bcache
Bcache is a block layer cache subsystem integrated into the Linux kernel, designed to accelerate input/output operations by using faster storage devices, such as solid-state drives (SSDs), to cache data from slower underlying block devices like hard disk drives (HDDs) or RAID arrays.[1] Developed by Kent Overstreet, it was first merged into the mainline Linux kernel in version 3.10 in 2013, providing a filesystem-agnostic solution that operates at the block level to enhance performance without requiring changes to upper-layer software.[1][2] The primary purpose of bcache is to bridge the speed gap between inexpensive, high-capacity HDDs and fast but costlier SSDs, enabling systems ranging from desktops to enterprise storage arrays to achieve significantly higher throughput for frequently accessed data.[2]

It supports multiple caching modes, including writethrough (where writes are sent synchronously to both the cache and backing devices), writeback (for higher performance by caching writes before flushing them to the backing device), and writearound (bypassing the cache for writes).[1] Key features include dynamic attachment and detachment of cache devices at runtime, support for multiple backing devices per cache set, and detection of sequential I/O, which is passed directly to the backing device to preserve SSD lifespan by reducing unnecessary cache writes and write amplification.[1][2] Bcache employs a hybrid B+ tree and journal structure for efficient metadata management, allocating data in erase-block-sized buckets to optimize SSD wear leveling, and it ensures data integrity during unclean shutdowns through barriers, flushes, and automatic recovery mechanisms.[1]

Performance benchmarks have demonstrated up to 1 million IOPS for random reads, with random write throughput reaching 18.5K IOPS in early tests, outperforming direct SSD access in certain workloads.[2] Configuration and monitoring occur via sysfs interfaces, allowing fine-tuned control over cache behavior, error handling (such as disabling caching on unrecoverable errors), and statistics, making it suitable for production environments.[1] While bcache has influenced subsequent developments like the bcachefs filesystem, it remains a standalone caching tool focused on block-level acceleration.[2]
Introduction
Definition and Purpose
Bcache, short for block cache, is a cache mechanism integrated into the Linux kernel's block layer, enabling the use of fast storage devices such as solid-state drives (SSDs) to serve as a read/write cache for slower secondary storage devices like hard disk drives (HDDs).[2][1] This design operates at the block level, intercepting I/O requests to the backing storage and managing data placement transparently without requiring modifications to the filesystem or applications.[1] The primary purpose of bcache is to enhance input/output (I/O) performance, particularly for random access patterns common in mixed workloads, by storing frequently accessed data on high-speed caching media while retaining the larger capacity of slower devices.[2] It facilitates the creation of hybrid storage volumes that leverage the speed of SSDs for latency-sensitive operations and the cost-efficiency of HDDs for bulk storage, thereby reducing overall system latency and improving throughput for applications like databases or virtual machines.[1] By prioritizing caching of hot data on faster tiers, bcache optimizes resource utilization in environments where full SSD replacement would be prohibitively expensive.[2]

Key benefits of bcache include its cost-effective approach to storage tiering, allowing organizations to augment existing HDD infrastructure with smaller, affordable SSDs rather than overprovisioning expensive all-flash arrays.[2] Additionally, its block-level granularity avoids the overhead associated with filesystem-level caching, enabling efficient handling of diverse I/O patterns while supporting modes such as writethrough and writeback for flexible data consistency trade-offs.[1] This results in significant performance gains, such as up to several times higher IOPS for random reads compared to uncached HDDs alone.[2]
Basic Operation
Bcache functions as a block-layer cache in the Linux kernel, intercepting I/O requests to a backing device and managing data flow between it and a faster cache device, such as an SSD, in a manner transparent to upper-layer filesystems. This setup allows hybrid storage configurations where the cache accelerates access to frequently used data blocks on slower, higher-capacity backing storage like HDDs or RAID arrays.[1]

For read operations, bcache first checks the cache for the requested data blocks. On a cache hit, the data is served directly from the cache device, providing low-latency access. On a cache miss, the requested blocks are retrieved from the backing device and, unless the I/O is detected as sequential (with a default cutoff of 4 MB), are also written into the cache for future use. Populating the cache on misses warms it gradually while avoiding the caching of large sequential reads, which would gain little from the cache's speed.[1]

Writes are governed by the configured cache mode, primarily writethrough or writeback (a writearound mode that bypasses the cache for writes is also available). In writethrough mode, incoming writes are applied synchronously to both the cache and the backing device, ensuring data consistency without buffering dirty data in the cache. In writeback mode (disabled by default but switchable at runtime), writes are initially directed only to the cache for immediate acknowledgment, with the dirty data later flushed asynchronously and sequentially to the backing device. The writeback approach improves write performance by leveraging the cache's speed, though it introduces a window of potential data loss if the cache device fails before the flush completes.[1]

Cache population and eviction are managed using least-recently-used (LRU)-like heuristics that track and prioritize cached extents in metadata structures, ensuring efficient use of limited cache space. When the cache fills, less recently accessed data is evicted to make room for new data, while metadata persistently records which extents of the backing device are cached. Sequential I/O, both reads and writes, is typically bypassed to reserve the cache for the random access patterns that benefit from it most.[1]
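The behavior described above can be observed at runtime through sysfs. The commands below are a minimal sketch assuming a single bcache device registered as /dev/bcache0; the attribute names follow the kernel's bcache documentation.

    # Active cache mode (the selected mode is shown in brackets)
    cat /sys/block/bcache0/bcache/cache_mode

    # State of the backing device: "no cache", "clean", "dirty" or "inconsistent"
    cat /sys/block/bcache0/bcache/state

    # Amount of dirty data held in the cache for this backing device (writeback mode)
    cat /sys/block/bcache0/bcache/dirty_data

    # Threshold above which I/O is treated as sequential and bypasses the cache
    cat /sys/block/bcache0/bcache/sequential_cutoff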
History and Development
Origins and Initial Release
Bcache was primarily developed by Kent Overstreet, who announced the project on July 2, 2010, through an article on LWN.net detailing its design as a Linux kernel block layer cache for improving block device performance using solid-state drives (SSDs).[3] The early development of bcache was driven by the growing availability of SSDs in the late 2000s and the need for an efficient, general-purpose caching solution within the Linux kernel that could accelerate slower storage devices without requiring modifications to existing filesystems. Initial prototypes emphasized integration at the block layer, ensuring independence from specific filesystem implementations to allow broad compatibility across Linux storage stacks.[3] Pre-release discussions and patch sets were shared extensively on the Linux Kernel Mailing List (LKML) and Overstreet's personal website, bcache.evilpiepirate.org, where documentation, code repositories, and wikis facilitated community feedback and iterative improvements over the following years.[3]

Bcache achieved production-ready status in May 2012 with the release of version 13, as declared by Overstreet, marking it stable for real-world deployment as an out-of-tree kernel module. Its first official inclusion in the mainline Linux kernel occurred in version 3.10, released on June 30, 2013.[4][5]
Kernel Integration and Evolution
Bcache was integrated into the mainline Linux kernel with version 3.10, released on June 30, 2013, enabling its use as a stable block caching mechanism without requiring external modules.[5] This merge marked bcache's transition from an out-of-tree project to a core kernel component, and it has remained available in all subsequent kernel releases without major deprecations or removals.[1]

Following its integration, bcache underwent incremental enhancements focused on stability and performance across kernel versions up to the 6.x series. These updates included optimizations for better I/O handling and compatibility with emerging storage technologies, such as support for NVMe SSDs as caching devices once kernel NVMe drivers matured around version 3.13. In 2017, Coly Li was appointed as co-maintainer to support ongoing development and stability improvements.[6][7] Such evolutions ensured bcache's reliability in diverse environments, with ongoing refinements addressing edge cases in block layer interactions.

Bcache's development trajectory is linked to the bcachefs filesystem, a successor project led by the same primary developer, Kent Overstreet, which aimed to extend bcache's caching concepts into a full copy-on-write filesystem. Bcachefs was merged into the Linux kernel 6.7 development cycle in October 2023, with the kernel released on January 7, 2024, but faced significant challenges related to stability and maintenance disputes, leading to its designation as externally maintained in kernel 6.17 (released September 28, 2025) and complete removal committed for kernel 6.18 (expected release late 2025).[8][9] This outcome reinforced bcache's position as the primary, mature caching solution within the kernel.

As of November 2025, bcache remains stable and actively maintained in Linux kernel versions 6.12 and later, with no major rewrites planned but continued bug fixes integrated through standard kernel development channels, including patches for issues like null pointer dereferences in cache flushing routines.[10] Its enduring presence underscores its role as a dependable tool for SSD caching in production systems.
Technical Architecture
Components
Bcache consists of several core hardware and software elements that form the foundation of its caching mechanism. The primary components are the backing device, the caching device, the superblock for metadata management, and the integration with the Linux kernel's I/O path.

The backing device serves as the slower, high-capacity storage layer that holds the persistent data in a bcache setup. Typically implemented using hard disk drives (HDDs) or RAID arrays, it provides the bulk storage for the filesystem or data volumes, while the faster cache accelerates access to it.[1] This device can function independently in passthrough mode without an attached cache, ensuring data availability even if caching is disabled.[1] For example, a large HDD array might be designated as the backing device to store terabytes of data, with bcache transparently overlaying the cached portions.[2]

The caching device, in contrast, is a faster storage medium dedicated to holding frequently accessed data to improve performance. Common examples include solid-state drives (SSDs) or NVMe devices, which offer low-latency reads and writes compared to the backing device.[1] The caching device is typically smaller in capacity than the backing device, as it acts solely as an accelerator rather than a full replacement.[2] Multiple caching devices can be combined into a cache set to distribute load and enhance reliability, supporting modes like writethrough or writeback for data handling.[1]

Both the backing and caching devices carry a superblock, a critical metadata structure that enables their registration and coordination within bcache. Written in a reserved area at the start of each device (user data on the backing device begins at an 8 KiB offset by default), the superblock stores essential information such as device UUIDs for identification, cache configuration parameters, and version details to ensure compatibility.[1] This structure is vital for attaching devices to a cache set and recovering data integrity, as it allows the kernel to recognize and validate bcache-formatted devices during boot or reconfiguration.[2]

At the software level, bcache integrates into the Linux kernel's block layer by registering the combined backing and caching setup as a single block device, such as /dev/bcache<N>. This allows it to intercept I/O requests through the kernel's request queues, transparently routing reads and writes to the appropriate component—cache for hits or backing device for misses—without altering the upper-layer filesystem's view.[1] The integration is exposed via sysfs interfaces at /sys/block/bcache<N>/bcache and /sys/fs/bcache/<UUID>, enabling runtime control and monitoring of the I/O path.[1]
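Once devices are registered, these components can be inspected from the shell. The listing below is a sketch in which /dev/bcache0 stands for the composite device, /dev/sda for the backing device, and <UUID> for the cache set identifier recorded in the superblock.

    # Per-device controls and statistics for the composite block device
    ls /sys/block/bcache0/bcache/

    # Cache-set-wide attributes, keyed by the cache set UUID
    ls /sys/fs/bcache/<UUID>/

    # Manually register a formatted device if udev has not already done so
    echo /dev/sda > /sys/fs/bcache/register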
Data Management
Bcache employs a B+ tree structure as its primary on-disk index to map cached extents from the backing device to their locations in the cache. This structure efficiently tracks data ranging from single sectors up to full bucket sizes, with btree nodes indexing large regions of the cache. To support efficient updates, bcache uses a hybrid btree/log mechanism, where recent changes are first appended to a log before being incorporated into the btree, minimizing random writes to the cache device.[1]

The cache device is divided into buckets sized to match SSD erase blocks, typically ranging from 128 KB to 1 MB, to align with flash storage characteristics and reduce write amplification. Buckets are allocated sequentially and filled before reuse; upon invalidation, entire buckets are discarded rather than partially overwritten, ensuring predictable wear leveling. This approach avoids the fragmentation and performance degradation associated with smaller, misaligned allocations on solid-state media.[1][3]

Metadata management in bcache relies on a journal to record recent modifications to the btree and cache state, with writes delayed up to 100 milliseconds by default (configurable via the journal_delay_ms parameter) to batch operations and improve efficiency. Data is classified into priority tiers based on access frequency, using recency statistics to distinguish hot data (frequently accessed) from cold data (infrequently used); these priorities influence eviction decisions, with hotter data retained longer in the cache. The priority_stats interface provides metrics such as unused percentage and average priority to monitor working set behavior. Garbage collection periodically frees invalidated buckets by scanning and discarding obsolete data, triggered manually via the trigger_gc sysfs entry or automatically under low free space conditions, ensuring sustained cache availability.[1]
For error resilience, bcache replays the journal when a cache set is registered after a crash or unclean shutdown, reconstructing the btree state without requiring a formal clean-shutdown protocol. It handles I/O errors from the cache device by invalidating affected data and falling back to the backing device, with configurable error thresholds to disable caching if failures exceed limits. However, bcache lacks built-in checksumming for cached user data (only metadata is checksummed), instead relying on the error detection capabilities of the underlying storage devices.[1][3]
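These structures are visible through the cache set's sysfs hierarchy. The commands below are a sketch assuming a cache set at /sys/fs/bcache/<UUID> with a single cache device appearing as cache0; exact locations can differ between kernel versions.

    # Bucket priority statistics: unused space and average priority of cached data
    cat /sys/fs/bcache/<UUID>/cache0/priority_stats

    # Maximum delay applied to journal writes, in milliseconds (default 100)
    cat /sys/fs/bcache/<UUID>/journal_delay_ms

    # Force a garbage-collection pass to reclaim invalidated buckets
    # (on some kernels this file sits directly under the cache set directory)
    echo 1 > /sys/fs/bcache/<UUID>/internal/trigger_gc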
Features and Capabilities
Caching Policies
Bcache employs several configurable caching policies to manage data placement and I/O operations between the cache device (typically an SSD) and the backing device (such as an HDD), balancing performance, safety, and device wear. These policies determine whether writes are cached, how reads are handled on misses, and when to bypass caching for specific patterns like sequential I/O. The primary modes are writethrough, writeback, and writearound, with additional behaviors for read prefetching and sequential detection.[1][11]

In writethrough mode, writes are performed synchronously to both the cache and the backing device, ensuring that data reaches stable storage before the operation completes. This approach prioritizes data integrity, as there is no risk of loss from uncommitted cache contents. Reads are served from the cache if the data is present (a hit), otherwise fetched from the backing device and potentially cached for future access. If a write to the cache fails, bcache invalidates the corresponding entry in the cache to maintain consistency. This mode is the default when writeback caching is disabled.[1][12]

Writeback mode buffers writes initially in the cache, deferring the transfer to the backing device until later via asynchronous flushes. Dirty data—changes pending write to the backing device—is flushed by scanning the index from start to end, allowing efficient, largely sequential background updates. Reads follow the standard cache hierarchy: served from the cache on a hit or from the backing device on a miss, with the missed data loaded into the cache. While this mode offers higher write throughput, it introduces a risk of data loss if the cache device fails before dirty data is flushed to the backing device. Writeback is disabled by default and can be toggled at runtime.[1][12]

The writearound policy bypasses the cache entirely for writes, directing them straight to the backing device to avoid unnecessary SSD wear from write patterns that do not benefit from caching; reads are still handled via the cache if applicable, but writes do not populate or modify cache contents. Independently of the selected mode, bcache detects sequential I/O patterns, using a rolling average of I/O sizes per task, and bypasses the cache when a stream exceeds a configurable cutoff (defaulting to 4 MB), prioritizing random access workloads and protecting the cache device.[1][11]

For read optimization, bcache supports an optional readahead mechanism that prefetches adjacent blocks into the cache on a read miss. The system rounds the missed read up to a configured size (default 0, meaning disabled) and loads the additional data from the backing device into the cache, anticipating nearby future accesses. This helps improve hit rates for streaming or sequential reads without affecting write operations.[1]
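Cache modes are selected per backing device through sysfs and take effect immediately. The sequence below is a sketch for a device exposed as /dev/bcache0; before relying on the backing device alone after leaving writeback mode, the remaining dirty data should be allowed to drain.

    # Show the available modes; the active one is marked with brackets
    cat /sys/block/bcache0/bcache/cache_mode

    # Enable writeback caching (higher write throughput, small data-loss window)
    echo writeback > /sys/block/bcache0/bcache/cache_mode

    # Check how much dirty data still needs to reach the backing device
    cat /sys/block/bcache0/bcache/dirty_data

    # Return to the writethrough default once the dirty data has drained
    echo writethrough > /sys/block/bcache0/bcache/cache_mode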
Performance Enhancements
Bcache achieves high input/output operations per second (IOPS) through its efficient B+ tree structure for metadata management, enabling up to 1,000,000 IOPS on random reads when paired with sufficiently fast hardware.[2] The B+ tree design minimizes lookup overhead by using large nodes that reduce tree depth, while low-overhead metadata operations ensure quick access to cached extents without excessive CPU or I/O costs.[1] This optimization is particularly beneficial for workloads dominated by random access patterns, where traditional hard disk drives (HDDs) struggle, allowing bcache to accelerate throughput significantly by serving requests directly from the SSD cache.

To reduce write amplification on solid-state drives (SSDs), bcache employs sequential bucket writes, allocating data in erase block-sized units and filling them contiguously before issuing discards for reuse.[1] Delayed flushes further minimize unnecessary erases by batching dirty data and writing it sequentially to the backing device, scanning the index from start to end.[1] Additionally, bcache avoids caching sequential I/O by default—using a rolling average with a 4 MB cutoff—to prevent cache pollution from large, streaming transfers that do not benefit from SSD acceleration, thereby preserving space for random workloads.[1]

Bcache supports multiple backing devices per cache set but only a single cache device, with plans for multi-cache support in future development. Backing devices can also be attached or detached at runtime without downtime or unmounting, using commands like echo <CSET-UUID> > /sys/block/bcache0/bcache/attach, which facilitates dynamic reconfiguration in production environments.[1] This flexibility ensures continuous operation while scaling performance as hardware needs evolve.
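For benchmarking, the bypass heuristics themselves can be relaxed through sysfs, as suggested in the kernel documentation; the snippet below is a sketch in which <UUID> stands for the cache set UUID.

    # Cache all I/O regardless of size by disabling the sequential cutoff
    echo 0 > /sys/block/bcache0/bcache/sequential_cutoff

    # Stop bypassing the cache when it appears congested (0 disables the heuristic)
    echo 0 > /sys/fs/bcache/<UUID>/congested_read_threshold_us
    echo 0 > /sys/fs/bcache/<UUID>/congested_write_threshold_us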
For error handling, bcache automatically degrades the cache upon detecting excessive I/O errors from the caching device, switching affected backing devices to passthrough mode to bypass the faulty cache and maintain data availability.[1] It retries reads from the backing device on cache read failures and flushes dirty data before shutdown to prevent loss. In 2025, a fix addressed a potential NULL pointer issue in cache flushing to enhance stability (CVE-2025-38263).[13] Manual garbage collection via the trigger_gc sysfs entry can additionally be used to reclaim invalidated buckets and keep cache metadata compact, enhancing reliability over time.[1]
Configuration and Management
Setup Procedures
Setting up bcache involves preparing the kernel environment, formatting the backing and caching devices with superblocks, registering the devices, attaching the cache to the backing device, and then creating a filesystem on the resulting bcache device.[1] Prerequisites include a Linux kernel version 3.10 or later, compiled with the bcache module enabled (CONFIG_BCACHE=y or as a loadable module via modprobe bcache).[1] Additionally, the bcache-tools package must be installed to provide user-space utilities for device registration, available from the official kernel git repository or distribution packages.[14] The backing device is typically a slower HDD (e.g., /dev/sda), while the caching device is a faster SSD (e.g., /dev/nvme0n1); both must be unused (whole disks or partitions) to avoid data loss during formatting.[1]

To format and prepare the backing device, run the command bcache make -B /dev/sda, which creates a bcache superblock. With modern bcache-tools, udev rules may register the device automatically, making it available as /dev/bcache0 (or the next available index). Without automatic registration, manually register it with echo /dev/sda > /sys/fs/bcache/register.[1] This step initializes the device for caching but does not yet enable caching functionality.
For the caching device, execute bcache make -C /dev/nvme0n1 to format it and create its superblock. Similarly, register it if needed: echo /dev/nvme0n1 > /sys/fs/bcache/register. The command outputs the cache set UUID, which is required for attachment and can be viewed later under /sys/fs/bcache/. The cache set is then attached to the backing device with echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach.[1] This links the devices, activates caching on /dev/bcache0, and makes the combined device available for use; the superblock on the backing device stores metadata about the attachment. By default, the cache mode is writethrough and can be adjusted later via sysfs.[1]
Finally, create a filesystem on the bcache device, such as mkfs.ext4 /dev/bcache0, and mount it (e.g., mount /dev/bcache0 /mnt) to begin using the cached storage.[1]
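The whole procedure can be collected into a short script. This is a sketch only: the device names (/dev/sda as the backing HDD, /dev/nvme0n1 as the caching SSD) are examples, the bcache make subcommand assumes a recent bcache-tools release (older versions ship the equivalent make-bcache binary), and the cache set UUID printed during formatting must be substituted by hand.

    modprobe bcache                         # ensure the kernel module is loaded

    bcache make -B /dev/sda                 # format the backing device
    bcache make -C /dev/nvme0n1             # format the caching device; note the printed cache set UUID

    # If udev did not register the devices automatically:
    echo /dev/sda > /sys/fs/bcache/register
    echo /dev/nvme0n1 > /sys/fs/bcache/register

    # Attach the cache set to the backing device (substitute the UUID noted above)
    echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

    mkfs.ext4 /dev/bcache0                  # create a filesystem on the composite device
    mount /dev/bcache0 /mnt                 # and mount it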
Tools and Commands
Bcache management relies on a combination of user-space utilities from the bcache-tools package and the kernel's sysfs interface for post-setup operations such as inspection, configuration, and monitoring. The bcache-tools provide command-line utilities for examining bcache structures without altering runtime behavior. For instance, bcache-super-show displays the superblock contents of a cache or backing device, including metadata like UUIDs and bucket sizes, which aids in debugging or verification; the -f option forces continuation even if the superblock checksum is invalid.[15] Another utility, bcache-status, offers a formatted overview of bcache devices, including cache usage, hit rates, and recent performance metrics over intervals like the last five minutes, hour, or day.[16]
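As an illustration, the superblock of either member device can be dumped after formatting; the device path is an example, and the exact fields shown vary with the bcache-tools version.

    # Print the superblock of the backing device (UUIDs, version, attachment state)
    bcache-super-show /dev/sda

    # Continue even if the superblock checksum does not verify
    bcache-super-show -f /dev/sda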
The primary interface for runtime management is sysfs, accessible under /sys/block/bcache<N>/bcache/ for individual bcache devices and /sys/fs/bcache/<cset-uuid>/ for cache sets. Key files include cache_mode, which can be set to writeback for full caching, writethrough for synchronous writes, or writearound to bypass the cache for writes; changes take effect immediately via commands like echo writeback > /sys/block/bcache0/bcache/cache_mode.[1] The sequential_cutoff parameter sets the threshold above which I/O is treated as sequential and bypasses the cache, defaulting to 4 MiB; it can be set to zero with echo 0 > /sys/block/bcache0/bcache/sequential_cutoff to disable the sequential bypass so that all I/O is eligible for caching.[1]
Monitoring is facilitated through sysfs statistics directories, such as /sys/block/bcache0/bcache/stats_total/, which track metrics including cache hits, misses, bypassed I/O, and dirty data percentages; these counters reveal working set sizes and cache efficiency without additional tools.[1] For maintenance, echo 1 > /sys/block/bcache0/bcache/stop shuts down the bcache device and closes the backing device; when writeback is in use, detaching the cache first ensures dirty data is flushed before the device goes away.[1]
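The statistics counters can be read directly; the short sketch below prints the lifetime figures for bcache0.

    # Lifetime hit and miss counters and the derived hit ratio (a percentage)
    cat /sys/block/bcache0/bcache/stats_total/cache_hits
    cat /sys/block/bcache0/bcache/stats_total/cache_misses
    cat /sys/block/bcache0/bcache/stats_total/cache_hit_ratio

    # I/O that skipped the cache entirely, for example large sequential requests
    cat /sys/block/bcache0/bcache/stats_total/bypassed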
Runtime modifications and teardown use sysfs echo commands for detachment and unregistration. To detach a specific cache set, echo <cset-uuid> > /sys/block/bcache0/bcache/detach removes the association while preserving data integrity.[1] Full unregistration, which closes cache devices and detaches all backing devices after flushing dirty data, is achieved with echo 1 > /sys/fs/bcache/<cset-uuid>/unregister.[17] These operations support dynamic adjustments post-setup, such as switching cache modes or monitoring without rebooting.
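Put together, a conservative teardown might look like the sketch below: detaching first flushes any dirty data, the device is then stopped, and finally the cache set is unregistered; the mount point and UUID are placeholders.

    umount /mnt                                           # stop using the composite device

    # Detach the cache set from the backing device; dirty data is flushed first
    echo <cset-uuid> > /sys/block/bcache0/bcache/detach

    # Shut down the bcache device and close the backing device
    echo 1 > /sys/block/bcache0/bcache/stop

    # Close the cache devices and release the cache set
    echo 1 > /sys/fs/bcache/<cset-uuid>/unregister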
Limitations and Alternatives
Known Issues
One significant risk associated with bcache's writeback caching mode is the potential for data loss during power failures or SSD cache device failures before dirty data is fully flushed to the backing device.[1] In such scenarios, uncommitted writes in the cache may be lost, leading to filesystem inconsistencies or stale data being returned to applications if the cache becomes unavailable.[18] To mitigate this in production environments, it is recommended to pair bcache with an uninterruptible power supply (UPS) for reliable shutdowns or configure the backing device using RAID for added durability.[1]

Bcache lacks native support for redundancy mechanisms such as RAID integration or data checksumming, placing the full burden of data durability on the underlying backing device.[1] Without built-in checksumming for user data—limited only to metadata—bcache cannot detect or repair silent corruption in cached blocks, relying instead on the filesystem or storage layer below for integrity checks.[19] This design choice simplifies the block layer cache but exposes users to higher risks in failure-prone setups without additional safeguards like external RAID arrays.

Compatibility challenges arise when integrating bcache with certain storage stacks, including potential issues with Logical Volume Manager (LVM) configurations where volume resizing or snapshots may disrupt cache alignment. Similarly, while bcache functions with encrypted devices via dm-crypt, performance degradation or boot-time complications can occur if encryption is layered beneath the cache rather than above it.[1] Bcache is generally unsuitable for use with ZFS due to conflicts at the block layer, where ZFS's direct I/O and CoW semantics interfere with bcache's caching operations, often resulting in minor data corruptions or detection failures.[20]

Maintenance of bcache requires manual intervention for optimal performance under heavy workloads, particularly in tuning garbage collection to manage cache fragmentation and free space.[1] Administrators must monitor and trigger garbage collection via sysfs interfaces like /sys/fs/bcache/<cset-uuid>/trigger_gc to prevent cache exhaustion, as automatic thresholds may not suffice for high-I/O scenarios.[1]

Bcache remains actively maintained in the Linux kernel as of 2025, with recent fixes for issues such as a NULL pointer dereference in cache flushing (CVE-2025-38263, fixed in July 2025).[13] The removal of the related bcachefs filesystem from the mainline kernel—marked as externally maintained in version 6.17 (September 28, 2025) and fully removed in version 6.18 (December 2025)—means advanced filesystem features developed in bcachefs, such as native multi-device redundancy and enhanced checksumming, are not available in the standalone bcache caching subsystem.[21]
Comparisons with Other Solutions
Bcache, as a block-layer caching mechanism in the Linux kernel, differs from LVM's dm-cache in its native integration and simplicity for hybrid SSD/HDD setups. While bcache operates directly at the block device level without requiring additional volume management layers, dm-cache is built on the Device Mapper framework and leverages LVM for configuration, enabling features like online volume conversion and thin provisioning but introducing greater setup complexity through commands such as lvcreate and lvconvert.[22][23] This makes bcache preferable for straightforward caching of entire block devices in new installations, whereas dm-cache suits environments already using LVM for advanced storage management like striping or mirroring.[24] In performance tests using random reads, bcache has demonstrated consistent IOPS in optimized configurations, such as up to 68.8k total IOPS, compared to dm-cache's initial peaks around 92k IOPS when the cache is empty but dropping significantly to around 1.5k IOPS under load when full.[23]
In contrast to the now-removed bcachefs, bcache functions solely as a pure caching layer atop existing filesystems and block devices, avoiding the complexities of a full copy-on-write (CoW) filesystem. Bcachefs, which integrated caching with multi-device support, RAID-like redundancy, compression, and checksumming, was marked as externally maintained in the Linux mainline kernel in version 6.17 (September 28, 2025) due to ongoing stability issues and maintainer disputes, with full removal in version 6.18 (December 2025), rendering it unsuitable for production use in mainline kernels.[25] As a result, bcache offers greater long-term stability for caching scenarios post-2025, particularly where users seek to enhance performance without overhauling their filesystem stack.
Compared to ZFS's L2ARC, bcache provides block-level caching independent of any specific filesystem, supporting both read and write operations across arbitrary block devices, but it lacks the adaptive, ARC-extending intelligence of L2ARC that prioritizes hot data based on access patterns within ZFS pools. L2ARC serves as a secondary read cache on SSDs to offload from RAM-based ARC, excelling in dataset-heavy environments with features like compression and snapshots, yet it requires committing to the full ZFS ecosystem, including its licensing and resource demands.[26][27] Benchmarks indicate L2ARC can double transaction rates in database workloads over bcache by caching more data effectively, though bcache's generality allows its use with filesystems like ext4 or XFS without ZFS overhead.[28]
Bcache is particularly suited for generic block device caching in SSD/HDD hybrid arrays, where it accelerates random I/O for general-purpose storage, but alternatives like dm-writeboost target specialized write buffering with log-structured designs that minimize overhead for bursty workloads. Dm-writeboost, derived from Solaris's Disk Caching Disk, focuses on efficient write amplification reduction through sequential logging on SSDs before flushing to slower backing stores, achieving lower latency in high-write scenarios compared to bcache's broader read/write balancing.[29][30] Thus, while bcache supports versatile caching modes like writeback for sustained performance gains, dm-writeboost is ideal for applications with unpredictable write patterns, such as databases or virtual machines, without the full caching overhead of bcache.[31]