OverlayFS

OverlayFS is a union mount filesystem in the Linux kernel that merges multiple underlying directories or filesystems into a single, unified view presented to users and applications. It operates by overlaying a writable upper directory on top of one or more read-only lower directories, ensuring that modifications, creations, or deletions affect only the upper layer while reads prioritize the upper layer and fall back to lower layers as needed. A dedicated work directory, located on the same filesystem as the upper, handles temporary files during operations like copy-up.

Originally developed as an out-of-tree project to provide efficient union filesystem functionality, OverlayFS was merged into the mainline Linux kernel with version 3.18, released on December 7, 2014. This integration enabled lightweight stacking of filesystems without the overhead of full copies, supporting copy-on-write semantics where unchanged files are referenced from lower layers. Over subsequent kernel releases, enhancements have included multiple lower layers (supported since kernel 4.0), data-only lower layers (introduced in kernel 6.8) for separating file data from metadata, and features like unique inode numbers via the "xino" option to improve compatibility with tools expecting stable identifiers.

Key mechanisms in OverlayFS include whiteouts, special files that hide corresponding entries in lower layers, and opaque directories, which prevent traversal into lower subdirectories to maintain isolation. These, along with extended attribute handling for overlay-specific metadata (prefixed with "trusted.overlay."), ensure consistent behavior across layers, including support for nesting and sharing. The filesystem also accommodates advanced scenarios like NFS exports and fs-verity integration for signed files, making it robust for production environments.

OverlayFS is notably employed in container platforms such as Docker, where the overlay2 storage driver leverages it to stack image layers efficiently, supporting up to 128 layers for optimized build and runtime performance. In live operating systems and bootable media, it allows non-destructive modifications over read-only base filesystems, preserving the original integrity. Additionally, in embedded systems, OverlayFS enhances data protection by isolating writes to volatile storage such as RAM, preventing corruption on flash media during power failures or updates.

Overview

Definition and Purpose

OverlayFS is a union filesystem implementation provided natively in the Linux kernel. It enables the merging of multiple directories or filesystems—referred to as layers—into a single, unified view presented to users and applications. In this setup, an upper layer, typically writable, is overlaid onto one or more lower layers, which are often read-only; objects in the upper layer take precedence, while those absent from the upper layer are transparently accessed from the lower layers. A dedicated work directory, on the same filesystem as the upper layer, is used for temporary files during operations such as copy-up. Union filesystems, the broader concept underlying OverlayFS, allow separate directory trees or filesystems to be combined transparently, creating a composite where reads prioritize the most recent or topmost layer, and writes are directed to a designated writable layer without modifying the originals. This approach facilitates copy-on-write semantics, ensuring that underlying data remains intact while changes are isolated. OverlayFS implements this paradigm natively within the kernel, offering a lightweight and performant alternative to earlier user-space or out-of-tree solutions. The primary purpose of OverlayFS is to support read-write operations on top of immutable lower layers, preserving the lower filesystem's state while directing all modifications to the upper layer for efficient, non-destructive updates. This design is ideal for scenarios demanding layered filesystem management without altering base storage, such as in live operating systems where temporary user changes overlay a read-only root filesystem. It originated to fulfill the need for robust, in-kernel overlay functionality in Linux, resolving challenges in environments like bootable media and emerging container workflows by providing a standardized, kernel-integrated union filesystem.
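A minimal sketch of this arrangement, using hypothetical directories /srv/lower, /srv/upper, /srv/work, and a mount point /mnt/merged (run as root on a filesystem that supports the required extended attributes):

    # Prepare the layers; the lower directory holds pre-existing, effectively read-only content
    mkdir -p /srv/lower /srv/upper /srv/work /mnt/merged
    echo "base" > /srv/lower/base.txt

    # Mount the overlay: reads fall through to the lower layer, writes land in the upper layer
    mount -t overlay overlay \
        -o lowerdir=/srv/lower,upperdir=/srv/upper,workdir=/srv/work /mnt/merged

    # A file created through the merged view appears only in the upper layer
    echo "change" > /mnt/merged/new.txt
    ls /srv/upper    # shows new.txt; /srv/lower remains untouched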

Key Features

OverlayFS supports stacking multiple read-only lower directories beneath a single writable upper layer, enabling the merging of hierarchical filesystem views. For instance, the mount option lowerdir=/lower1:/lower2 stacks /lower1 as the topmost lower layer, so its files override same-named files from /lower2 in the merged view, with the upper layer capturing all modifications. This multi-layer capability facilitates efficient composition of filesystem snapshots or distributions without duplicating data. The filesystem maintains POSIX compliance in its merged view, preserving standard file permissions, ownership, and semantics for operations like reading, writing, and renaming across layers. While it adheres to POSIX for most behaviors, such as consistent directory traversals and file locking, certain optimizations like read-only access to lower layers without updating access times represent deliberate trade-offs for efficiency. This compliance ensures seamless integration with existing applications and tools. Efficiency is achieved through lazy copy-up mechanisms, where read-only files from lower layers are not duplicated to the upper layer until a write occurs. This on-demand copying minimizes storage overhead and I/O during initial mounts or reads, with the upper layer only materializing changes as needed. Additionally, metacopy support allows initial copying of just file metadata during copy-up, deferring data blocks until accessed, further optimizing performance for large files. OverlayFS provides NFS export capability via the nfs_export=on mount option, allowing the merged filesystem to be shared over NFS while maintaining consistent views for clients. This requires underlying filesystems to support NFS exporting and uses inode indexing to handle cross-layer references reliably. Metadata handling in OverlayFS includes options for inode numbers and verification. The xino=on or xino=auto option assigns unique, persistent inode numbers to overlaid files by encoding filesystem identifiers into high bits, preventing collisions in the merged view. Furthermore, support for fs-verity ensures the integrity of lower-layer files during copy-up, verifying content against digests stored in extended attributes when the verity=on or verity=require option is enabled.
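The layer ordering and inode behavior described above can be observed directly; the following sketch assumes hypothetical directories /l1, /l2, /up, /wk, and a mount point /m:

    mkdir -p /l1 /l2 /up /wk /m
    echo "from l1" > /l1/conf
    echo "from l2" > /l2/conf

    # /l1 is listed first, so it is the topmost lower layer and wins on name conflicts
    mount -t overlay overlay \
        -o lowerdir=/l1:/l2,upperdir=/up,workdir=/wk,xino=on /m

    cat /m/conf      # prints "from l1"
    ls -i /m /l1 /l2 # with xino=on, inode numbers in the merged view remain unique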

History

Development Origins

The development of OverlayFS originated from early 2009 discussions within the Linux kernel community about the need for a robust union filesystem, driven by the limitations of existing out-of-tree implementations like AUFS that prevented their mainline inclusion due to excessive complexity and lack of clarity in design. These conversations, documented on LWN.net, emphasized the demand for a solution that could merge multiple filesystem namespaces into a unified view while minimizing modifications to the virtual filesystem (VFS) layer, addressing use cases such as writable overlays on read-only media for live distributions and embedded environments. Miklos Szeredi, a prominent kernel developer, initiated the OverlayFS project in response to these needs, submitting the initial Request for Comments (RFC) patchset in 2010 to propose a hybrid approach combining VFS-level directory handling with direct access to underlying filesystems for efficiency. This design prioritized simplicity and correctness, distinguishing it from more intricate predecessors like AUFS, with features such as "copy-up" operations for writable modifications and extended attributes for managing whiteouts and opaque directories. Experimental adoption began in 2011 when OpenWrt integrated OverlayFS into its embedded routing firmware to enable flexible overlays on resource-constrained devices. Prior to mainline acceptance, the project encountered pre-integration challenges, including stability concerns around preventing unintended modifications to lower-layer filesystems and unresolved questions about locking mechanisms, which required multiple patch revisions and community scrutiny to ensure reliability.

Kernel Integration and Releases

OverlayFS was integrated into the mainline Linux kernel version 3.18, released in December 2014, through a pull request submitted by Miklos Szeredi. This marked the filesystem's transition from out-of-tree development to official kernel support, enabling its use in production environments without custom patches. In Linux kernel 4.0, released in April 2015, OverlayFS received enhancements—most notably support for multiple lower layers—that facilitated its adoption as the backing storage driver for Docker's overlay2 implementation, which became the default for systems supporting OverlayFS. Subsequent releases in the 4.x series, such as 4.2 and 4.9, introduced stability improvements including better handling of metadata inconsistencies and support for additional filesystems as lower layers, addressing early reliability issues reported in container workloads. More recent developments have focused on advanced layering capabilities and security. Linux kernel 6.8, released in March 2024, added support for "data-only" lower layers, allowing such layers to contribute solely file contents without exposing their directory entries in the merged view, which enhances privacy and efficiency in multi-layer setups like container images. Linux kernel 6.15, released in May 2025, introduced the override_creds mount option, which records the calling task's credentials for accessing lower layers, mitigating security concerns in untrusted overlays, and support for specifying layers using O_PATH file descriptors rather than path strings, improving security by avoiding path traversal risks and enabling more flexible mounting in sandboxed environments. Adoption milestones include Slackware's integration of OverlayFS into its Live Edition in 2016, where it replaced older union filesystems for providing writable persistence on read-only media like CDs and USB sticks. Ongoing refinements, particularly in kernels 5.x and 6.x, have optimized OverlayFS for container ecosystems, with Docker and Podman leveraging it for layered image management and runtime efficiency.

Architecture

Layering and Merging

OverlayFS utilizes a layered architecture that combines a single writable upper filesystem with one or more read-only lower filesystems to form a unified view. The upper layer serves as the primary writable component, where modifications are stored, while the lower layers provide read-only content that can be overridden by the upper layer. This structure allows the merged filesystem to prioritize objects from the upper layer, ensuring that any equivalent items in the lower layers are hidden when present above. The core merging process in OverlayFS recursively unions directories from the upper and lower layers into a single coherent namespace. Directories are combined by aggregating their contents, with names from the upper directory taking precedence over those in the lower directories. For files and other non-directory objects, an item in the upper layer completely overrides its equivalent in the lower layer, while the absence of an item in the upper layer results in transparent access to the corresponding item from the lower layer. This mechanism creates a seamless view for userspace applications, abstracting away the individual layer boundaries. Support for multiple lower layers enables more flexible configurations, such as stacks where the upper layer overlays a series of lower layers (e.g., upper on lower1 on lower2). Layers are searched in order from the topmost lower layer downward to the bottommost, allowing hierarchical compositions. Notably, lower layers themselves can be instances of OverlayFS, facilitating nested overlays for complex scenarios; a sketch of such nesting follows below. The search order ensures that higher layers in the stack override entries from those below, maintaining consistent precedence across the entire hierarchy. To achieve view consistency, OverlayFS maintains merged directory caches that integrate name lists from all contributing layers, presenting a unified and consistent namespace to userspace processes. This caching approach hides the distinctions between layers, allowing applications to interact with the filesystem as if it were a single, monolithic entity without awareness of the underlying composition. Upper-layer metadata and attributes are consistently applied in the merged view, further reinforcing the coherent presentation.
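Because a lower layer may itself be an OverlayFS mount, overlays can be nested; a rough sketch with hypothetical paths, where the first overlay is reused as a lower layer of the second:

    # First overlay: /base merged with a writable /patch layer
    mount -t overlay overlay \
        -o lowerdir=/base,upperdir=/patch,workdir=/patch-work /stage

    # Second overlay: the first overlay and /extra serve as lower layers
    mount -t overlay overlay \
        -o lowerdir=/stage:/extra,upperdir=/top,workdir=/top-work /merged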

Special Files and Metadata Handling

OverlayFS employs special files known as whiteouts to handle deletions in its layered structure, allowing files from lower layers to be hidden without physically removing them from the underlying filesystems. A whiteout is created in the upper layer as a character device with a device number of 0/0, or alternatively as a zero-sized regular file bearing the extended attribute trusted.overlay.whiteout. When a file is deleted in the merged view, OverlayFS generates this whiteout, which masks the corresponding lower-layer file during directory enumeration, effectively simulating its removal while preserving the original data intact. To manage the deletion of entire directories, OverlayFS uses opaque directories, which prevent the exposure of subdirectories from lower layers. An opaque directory is marked in the upper layer with the extended attribute trusted.overlay.opaque="y", causing the directory to appear empty in the merged view and blocking any lower-layer contents from being visible or accessible. Non-directory files are inherently treated as opaque, ensuring they do not reveal underlying layers. For directories containing whiteouts, the attribute value trusted.overlay.opaque="x" may be used to indicate opacity while allowing internal whiteout files, optimizing operations like readdir without unnecessary overhead. Fully opaque ("y") directories should not themselves contain whiteouts, as this would conflict with the deletion semantics. Metadata handling in OverlayFS prioritizes the upper layer for attributes in the merged view. In the merged view, inodes for directories derive their attributes solely from the upper layer if present, while non-directories may inherit attributes from either the upper or lower layer depending on availability; the st_dev field for directories reports the overlay's device, but non-directories might reflect the underlying filesystem's device. To ensure unique and persistent inode numbers (st_ino), the mount option xino=on (or xino=auto) enables inode number composition, combining the real inode number with a filesystem identifier (fsid) for uniqueness across layers, provided the underlying filesystems support NFS file handles. Without this option, inode numbers may vary and are not guaranteed to be persistent. Additionally, OverlayFS does not update access times (atime) on lower-layer files during reads, deviating from strict POSIX semantics to avoid unnecessary copy-up operations and maintain read-only integrity for lower layers. Changes to metadata, such as permissions via chmod, trigger a copy-up operation in OverlayFS, where the affected file or directory is fully copied from the lower layer to the upper layer to allow the modification. This ensures that metadata alterations are isolated to the writable upper layer without impacting read-only lower components. When the metacopy=on option is enabled, initial metadata changes copy only the metadata to the upper layer—marked with the trusted.overlay.metacopy extended attribute—while deferring data copy until an actual write access occurs, optimizing for scenarios where only attributes are modified.
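These markers can be inspected from the upper layer with ordinary tools; a sketch assuming an overlay mounted at /merged whose upper layer is /upper and whose lower layer already contains lower-file and lower-dir:

    # Deleting a lower-layer file creates a whiteout in the upper layer
    rm /merged/lower-file
    ls -l /upper/lower-file    # character device with device number 0, 0 (the whiteout)

    # Removing and recreating a lower-layer directory marks the new upper directory opaque
    rm -r /merged/lower-dir && mkdir /merged/lower-dir
    getfattr -n trusted.overlay.opaque /upper/lower-dir    # value "y"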

Implementation

Mount Options and Configuration

OverlayFS is mounted using the standard Linux mount command with the filesystem type overlay. The basic syntax is mount -t overlay overlay -o lowerdir=/lower,upperdir=/upper,workdir=/work /merged, where /merged represents the mount point that presents the unified view of the layered filesystems. The lowerdir option specifies one or more read-only lower layers, which provide the base filesystem content; multiple directories can be stacked by separating paths with colons (e.g., lowerdir=/lower1:/lower2), with layers stacked from the rightmost path upward so that earlier paths take precedence in the merge. The upperdir option points to a writable upper layer where modifications, such as file creations or changes, are stored to preserve the read-only nature of lower layers. The workdir option designates a temporary directory used for internal OverlayFS operations, such as preparing copy-ups; it must be an empty directory on the same filesystem as upperdir and should reside on fast storage for optimal performance. Several key options allow customization of OverlayFS behavior. The redirect_dir=on option enables tracking of directory renames across layers by using extended attributes, which is disabled by default on many kernel configurations to maintain compatibility. The metacopy=on option optimizes copy-up operations by copying only file metadata (e.g., permissions and timestamps) initially, deferring actual data copy until necessary, also disabled by default. For environments requiring NFS exports, nfs_export=on ensures consistent file handles and attribute caching across NFS clients. The volatile option skips sync calls to the underlying filesystem, improving performance but risking data loss on crashes, making it unsuitable for persistent storage. Additionally, userxattr switches extended attribute storage from the trusted.overlay. prefix to user.overlay. for use in user namespaces where trusted attributes may not be accessible. Configuration prerequisites include ensuring the upper layer's filesystem supports trusted or user extended attributes and provides valid d_type in directory entries, as filesystems like NFS do not meet these requirements. Lower layers can be any mountable filesystem, including another OverlayFS instance, without needing write support. Recent kernels introduce advanced features such as data-only lower layers using double colons (e.g., lowerdir=/l1::/do1, since 6.8) and, since 6.13, specifying layers via file descriptors with the new mount API (fsconfig), but these build on the core options for read-only bases with a writable overlay. Additionally, Linux kernel 6.18 introduced support for case-folding, enabling case-insensitive handling of files and directories to improve compatibility with certain filesystems in container environments.
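For a persistent configuration, the same options can be expressed as an /etc/fstab entry; a sketch with hypothetical paths, where the first field is conventionally the literal name overlay:

    # /etc/fstab — mount an overlay at boot (one line)
    overlay  /merged  overlay  lowerdir=/lower1:/lower2,upperdir=/upper,workdir=/work,xino=on  0  0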

Core Operations and Behaviors

OverlayFS manages file operations by transparently merging the upper and lower layers, ensuring that the underlying filesystem structure remains unchanged while providing a unified view to applications. For read operations, OverlayFS provides transparent access to files and directories, presenting objects from the upper layer when they exist there, or falling back to the lower layer otherwise. This includes non-directory objects such as regular files, symbolic links, and device special files. Shared memory mappings are possible for read-only access to lower-layer files, but with the caveat that subsequent modifications to the file will not be reflected in a mapping created from a read-only open. Write operations in OverlayFS trigger a copy-up mechanism on the first modification of a lower-layer file that requires write access, such as opening for read-write or truncating. During copy-up, the file is copied from the lower to the upper layer, after which all subsequent writes occur on the upper copy. Direct writes to the lower layer are not permitted, preserving its read-only integrity even if the lower filesystem itself is writable. Renames and deletions handle layer interactions through specific mechanisms to avoid modifying the lower layer. Renaming a directory that originates in a lower or merged layer (rather than being created on the upper layer) fails with EXDEV by default; when the redirect_dir=on mount option is enabled, OverlayFS copies up the directory itself and records its new location with a redirect extended attribute rather than copying its contents. Deletions of lower-layer files or directories use whiteouts—special markers created on the upper layer as zero-sized regular files or character devices with specific attributes—without altering the lower layer. Additional behaviors include relaxed handling of executable files from the lower layer: opening such a file for writing or truncating it does not result in an ETXTBSY error, allowing modifications without denial due to active execution. In volatile mode, enabled via the volatile mount option, synchronization calls to the upper filesystem are omitted, so changes to the upper layer are not guaranteed to survive a crash, making it suitable for non-persistent scenarios but not crash-safe.
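The copy-up behavior can be observed by modifying a lower-layer file through the merged view; a sketch assuming an overlay at /merged with lower layer /lower, upper layer /upper, and an existing file big.dat in the lower layer:

    ls /upper                         # empty: big.dat has not been touched yet
    stat -c '%s %n' /lower/big.dat    # size of the original lower copy

    # Opening the file for writing through the overlay triggers copy-up
    echo "append" >> /merged/big.dat

    ls /upper                         # big.dat now exists as a full copy in the upper layer
    cat /lower/big.dat                # the lower original is unchanged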

Use Cases

Containerization and Virtualization

OverlayFS plays a central role in containerization technologies by enabling efficient, layered filesystem management for isolated environments. In Docker, the overlay2 storage driver, introduced in Docker 1.12 in July 2016, leverages OverlayFS to implement layered images where each container's layer serves as an upper filesystem atop shared lower layers. This integration allows multiple containers to share common base image layers without duplication, facilitating copy-on-write semantics for modifications. The benefits of this approach include efficient storage sharing through immutable lower layers, which reduces overall storage requirements by avoiding full filesystem copies for each instance. Additionally, OverlayFS supports snapshotting mechanisms that enable rapid deployments, as new layers can be created incrementally from existing ones, minimizing resource overhead in dynamic environments. In virtualization contexts, OverlayFS supports lightweight runtimes by providing writable overlays on top of read-only base operating system images, allowing for flexible and secure isolation without altering the underlying filesystem. This capability is particularly useful for creating ephemeral, modifiable views of persistent base images in lightweight setups. OverlayFS has seen widespread adoption in the container ecosystem, serving as the default storage driver across major Linux distributions for Docker and Podman deployments. In Kubernetes, its use via containerd's default overlayfs snapshotter enables efficient pod management, including support for incremental image updates where new image layers can be applied without disrupting running workloads.
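On a Docker host, the active driver and a container's layer directories can be confirmed with standard commands; the format fields shown below are assumptions based on current Docker releases:

    docker info --format '{{.Driver}}'    # expected output: overlay2

    # Show where a container's lower, upper, work, and merged directories live on disk
    docker inspect --format '{{json .GraphDriver.Data}}' <container-id>
    # typically reports paths under /var/lib/docker/overlay2/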

Embedded Systems and Read-Only Overlays

In resource-constrained environments such as embedded systems and Internet of Things (IoT) devices, OverlayFS enables non-destructive updates by layering writable modifications atop read-only base filesystems, preserving the integrity of the base image while allowing configuration changes. This approach is particularly valuable in devices with limited write cycles on NAND flash, where frequent updates could otherwise degrade hardware longevity. For LiveCD and LiveUSB systems, OverlayFS overlays writable changes onto a read-only base, facilitating persistent sessions without modifying the original media. The lower layer consists of the compressed, immutable image, while the upper layer—often a tmpfs held in RAM—captures all modifications, such as user files or temporary data, ensuring the boot medium remains unaltered after shutdown. This mechanism supports copy-up operations for persistence, where modified files from the read-only base are transparently elevated to the writable layer. In embedded and networking applications, OverlayFS is prominently used in distributions like OpenWrt for firmware overlays, where a read-only SquashFS root filesystem (/rom) is merged with a writable overlay (/overlay, typically a JFFS2 or similar flash partition) to form the unified root (/). This allows configuration changes, package installations, and customizations without requiring a full firmware flash, reducing downtime and wear on flash storage. OpenWrt's adoption of OverlayFS dates back to its early integration as a stable union filesystem option, enhancing its suitability for router and gateway devices. Read-only root filesystems protected by OverlayFS are common in appliances and dedicated devices, safeguarding the base filesystem from corruption while directing updates to an upper layer in RAM for volatile changes or to persistent flash partitions for durability. In such setups, the immutable lower layer ensures system reliability in unattended operations, with the overlay handling runtime modifications like logs or settings. Examples include Slackware Live editions, which have utilized OverlayFS since 2016 to stack writable layers over SquashFS modules for bootable media. Similarly, Android employs OverlayFS on A/B devices in development environments (userdebug/eng builds), where a writable upper layer on partitions like /mnt/scratch/overlay enables system modifications without disrupting the read-only base.
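A common pattern for such systems places the upper and work directories on a tmpfs so that all writes stay in RAM and vanish on reboot; a minimal sketch, assuming a read-only root image already mounted at /ro (in practice this is usually done from an initramfs before switching root):

    # Writable scratch space in RAM
    mount -t tmpfs tmpfs /overlay
    mkdir -p /overlay/upper /overlay/work

    # Present a writable view of the read-only root at /newroot
    mount -t overlay overlay \
        -o lowerdir=/ro,upperdir=/overlay/upper,workdir=/overlay/work /newroot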

Limitations

Functional Constraints

OverlayFS imposes several inherent functional constraints due to its design as a union filesystem that layers a writable upper directory over one or more read-only lower directories, ensuring that modifications are confined to the upper layer without affecting the underlying layers. A primary limitation is that changes cannot propagate back to the lower layers; the lower filesystems remain unmodified, and all write operations trigger a copy-up mechanism to replicate content into the upper layer before alterations can occur. This one-way writability prevents scenarios where updates to the overlay might need to synchronize or merge back to the original lower content, making OverlayFS unsuitable for bidirectional modification workflows. Rename operations present additional constraints, particularly for cross-layer renames involving directories. Renaming a directory from a lower or merged layer fails with the EXDEV error by default, as OverlayFS cannot atomically relocate content that resides in a read-only layer. Even with the optional redirect_dir feature enabled to support such renames, the operation depends on copy-up and extended-attribute redirects and can fail if space or permissions are insufficient. For files, renames across layers are handled via copy-up but inherit the same resource-intensive behavior. Filesystem compatibility further restricts usability, with strict requirements for the upper layer and work directory. The upper directory must reside on a filesystem that supports trusted.* and user.* extended attributes for metadata handling, such as whiteouts and opaque markers, and must return valid d_type entries in readdir responses; network filesystems like NFS are explicitly unsuitable for the upper layer due to these deficiencies. The work directory, used for temporary operations during copy-up, must be an empty directory on the exact same filesystem as the upper directory to ensure atomicity and consistency. While lower layers can use a variety of mountable Linux filesystems—including differing types or even other OverlayFS instances—they must collectively support the necessary features for OverlayFS to function, and remote filesystems like NFS are supported only as lowers with potential limitations in attribute handling. Other constraints include the inability to merge changes from the upper layer back into the lower layers, preserving the separation but limiting OverlayFS to overlay-only use cases. Access time updates (st_atime) mandated by POSIX for read operations are not performed on files residing in lower layers, potentially affecting applications reliant on accurate access-time tracking. Additionally, OverlayFS does not deny writes or truncations to executing files from lower layers with ETXTBSY errors, and modifications to the underlying filesystems while the overlay is mounted result in undefined behavior. These limitations are integral to the Linux implementation of OverlayFS and may not apply identically to union filesystems on other operating systems.
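The cross-layer rename restriction is easy to reproduce; a sketch assuming an overlay mounted at /merged without redirect_dir, where /merged/dir originates in a lower layer (python3 is used here only to call rename(2) directly, since GNU mv hides EXDEV by falling back to copy-and-delete):

    python3 -c "import os; os.rename('/merged/dir', '/merged/dir-renamed')"
    # OSError: [Errno 18] Invalid cross-device link: '/merged/dir' -> '/merged/dir-renamed'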

Performance and Compatibility Issues

One significant performance drawback of OverlayFS is the copy-up mechanism, which duplicates files from the read-only lower layer to the writable upper layer upon the first write operation. This process incurs substantial I/O and CPU overhead, particularly for large files or directories with many small files, as the entire file must be copied even if only minor modifications are made. In write-heavy workloads, such as those common in container environments, this can lead to noticeable slowdowns compared to native filesystems due to the duplication latency. Inode management in OverlayFS also contributes to efficiency challenges. Inodes from lower layers are shared across the merged view, which can result in aliasing where multiple paths refer to the same underlying inode, potentially confusing applications that assume unique inode numbers. Additionally, access times (atime) are not updated on lower-layer files since they remain read-only, affecting tools or scripts that rely on accurate timestamps for caching or optimization decisions. The st_dev and st_ino values may appear non-uniform for non-directory objects, further complicating interoperability with software expecting consistent device and inode identifiers. Compatibility issues arise primarily from filesystem-specific requirements. OverlayFS performs optimally with upper layers on filesystems such as ext4 or XFS, which support trusted extended attributes (xattrs) and reliable directory entry types (d_type); using networked filesystems like NFS for the upper layer is unsuitable due to lack of these features, leading to fallback behaviors and reduced efficiency. Lower layers over networked storage can exacerbate latency during copy-up operations. The volatile mount option improves performance by omitting sync calls to the upper filesystem, but it is not crash-safe and may result in data loss for recent changes, making it unsuitable for scenarios requiring full data durability. To mitigate these issues, OverlayFS provides the metacopy=on mount option, which defers full copy-up for metadata-only operations like chmod or chown, copying only attributes initially to reduce overhead for metadata-only modifications. Enabling xino=on helps normalize inode numbers across layers, alleviating collision concerns. These options, often tuned in container setups, can narrow the performance gap in targeted workloads, though they introduce security risks if lower layers are untrusted.
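Whether a copied-up file carries only metadata can be checked through the overlay's extended attribute; a sketch assuming an overlay mounted with metacopy=on, a large lower-layer file large.bin, and the upper layer at /upper:

    chmod 600 /merged/large.bin                        # metadata-only change
    getfattr -d -m 'trusted.overlay' /upper/large.bin  # trusted.overlay.metacopy present
    du -sh /upper/large.bin                            # little on-disk data until a real write occurs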

Comparisons

With AUFS

AUFS, or Another Union File System, employs a branch stacking model that allows for greater flexibility in layer management, including multiple read-write branches and dynamic addition or removal of branches at runtime. In contrast, OverlayFS adopts a simpler, fixed architecture centered on a single upper read-write layer overlaid atop one or more lower read-only layers, without support for branch modifications. This design choice in OverlayFS prioritizes ease of implementation and integration within the mainline Linux kernel, avoiding the complexity of AUFS's extensible branching capabilities. Regarding stability and adoption, AUFS has historically been maintained as an out-of-tree module, necessitating custom patches that can lead to compatibility issues with evolving mainline kernels. OverlayFS, merged into the mainline kernel with version 3.18 in late 2014, benefits from ongoing upstream development and testing, rendering it the more reliable option for production deployments. As a result, OverlayFS has seen widespread adoption in modern distributions and container runtimes, supplanting AUFS in scenarios requiring long-term maintainability. In terms of features, AUFS provides support for external data (ext) branches, enabling direct access to remote filesystems such as NFS without full unioning, which facilitates hybrid local-remote configurations. OverlayFS, while lacking this external branch flexibility, offers enhanced capabilities like native NFS export support through index entries with file handles and fs-verity integration for verifying file integrity in layered setups. These additions in OverlayFS improve security and network filesystem compatibility in constrained environments. The transition from AUFS to OverlayFS has been driven by maintenance burdens associated with AUFS's out-of-tree status and its deprecation in key projects like Docker, where overlay2 became the default storage driver to ensure consistent performance and support across distributions. Many distributions have since phased out AUFS in favor of OverlayFS to align with mainline kernel advancements and reduce patching overhead.

With UnionFS

UnionFS, first prototyped in 2004 by researchers at Stony Brook University, represents an early kernel-level implementation of a stackable unification file system for Linux, employing a fan-out design to merge multiple branches without requiring native Virtual File System (VFS) integration. This approach allowed UnionFS to layer directories dynamically, supporting features like branch insertion and removal at runtime, which enabled flexible namespace management. In contrast, OverlayFS, introduced in Linux kernel version 3.18 in 2014, adopts a native VFS-based architecture that directly hooks into the kernel's file system layer, simplifying the unification process by limiting it to a fixed upper (read-write) layer overlaid on one or more read-only lower layers. This evolution from UnionFS's stackable model to OverlayFS's integrated design addressed longstanding challenges in mainlining union filesystems, prioritizing stability and efficiency over extensive configurability. A key architectural difference lies in their handling of layer mutability and merging capabilities: UnionFS permits a mix of read-only and read-write branches across multiple layers, with the highest-priority branch designated as writable, and supports merging changes back to lower branches through explicit operations. OverlayFS, however, strictly enforces read-only semantics for all lower layers, directing all modifications exclusively to the upper layer via copy-up mechanisms, and explicitly avoids back-merging to prevent complexity in synchronization. This restriction in OverlayFS reduces edge cases around layer consistency, such as handling deletions across layers, but limits its flexibility compared to UnionFS's more permissive model, which could accommodate scenarios like multi-writable overlays but at the cost of increased implementation complexity. Performance-wise, OverlayFS benefits from its tight kernel integration, routing file operations directly through VFS paths with minimal indirection, resulting in lower overhead for common workloads—often negligible compared to native filesystems—due to its simpler, non-stackable design that avoids the inter-layer traversal inherent in UnionFS. UnionFS, while efficient for its era with reported overheads of 2–3% for typical user-like tasks and up to 27.5% in I/O-intensive benchmarks on its prototype, incurs higher costs from stacking, including branch traversal and duplicate elimination, making it less suitable for high-throughput environments. Although UnionFS's stackable design offered better portability across kernel versions than user-space alternatives like FUSE-based implementations, its maintenance burden contributed to its decline in Linux. As a foundational influence, UnionFS paved the way for subsequent union filesystem developments in Linux, inspiring the design principles behind OverlayFS while highlighting the trade-offs of feature richness versus maintainability; however, it was never merged into the mainline kernel and has largely faded from Linux use in favor of OverlayFS. UnionFS persists in BSD variants, where it remains a core feature for tasks like overlaying writable layers on read-only media, with ongoing efforts to enhance its stability in FreeBSD.
