UnionFS

UnionFS is a stackable unification filesystem for Linux that merges the contents of multiple directories, known as branches, into a single coherent view while preserving their physical separation on disk. This approach enables transparent overlaying of files and directories from separate filesystems, supporting both read-only and read-write branches with copy-on-write semantics to handle modifications efficiently. Developed as part of the FiST (Filesystem Independent Stackable Template) project at Stony Brook University, UnionFS originated in 2003 and was first released as an out-of-tree kernel module in November 2004. Its key features include dynamic insertion and deletion of branches at any position in the stack, maintenance of Unix semantics for file operations (such as handling duplicates and partial errors), and a priority-based resolution system where higher-priority branches take precedence in the unified namespace. UnionFS gained prominence through its adoption in live CD distributions, such as Knoppix and SLAX, where it allowed combining a read-only base filesystem (e.g., from a CD-ROM) with a writable overlay (e.g., in RAM) to enable persistent changes during sessions. By March 2006, the project had attracted over 6,700 unique users from 81 countries and contributions from more than 37 developers, reflecting strong community involvement via mailing lists and IRC. Although efforts were made to integrate it into the mainline Linux kernel around 2007, UnionFS remained an external module due to architectural differences from VFS-based approaches. It influenced subsequent union filesystem implementations, including AUFS (Another UnionFS), which aimed for improved performance and reliability, and ultimately OverlayFS, which was merged into the Linux kernel in 2014 as a lightweight, VFS-integrated successor. However, active development of the original kernel module has not occurred since around 2010, limiting its use to older kernel versions (primarily 2.6.x). It is still utilized in some legacy scenarios requiring flexible unification, such as certain software packaging and live environments, though its adoption has largely shifted to modern alternatives like OverlayFS.

Introduction

Definition and Core Concept

UnionFS is a stackable filesystem service implemented in the Linux kernel, enabling the overlay of multiple filesystems or directories, referred to as branches, into a single, unified view through a mechanism known as a union mount. This approach allows disparate storage locations to be presented as one coherent namespace, facilitating seamless access without altering the underlying branch structures. At its core, UnionFS operates on the principle of transparent overlay, where files and directories from various branches are merged virtually, appearing to users and applications as a single filesystem while preserving the integrity of each source branch. Read operations follow a priority-based lookup: the filesystem first searches the highest-priority branch for the requested file or directory; if not found, it falls back to successively lower-priority branches until the item is located or determined absent. Key terminology includes branches, which denote the individual underlying filesystems or directories; the union, describing the merged, logical presentation; and stacking, the process of layering multiple branches atop one another to form the composite view.
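
The lookup order can be observed directly from a shell. The following is a minimal sketch, assuming the out-of-tree unionfs module is loaded; all paths and file names are illustrative. The dirs= option lists branches from highest to lowest priority.

    # Two branches with one overlapping file name; paths are examples.
    mkdir -p /tmp/upper /tmp/lower /mnt/union
    echo "from lower" > /tmp/lower/a.txt
    echo "from lower" > /tmp/lower/b.txt
    echo "from upper" > /tmp/upper/a.txt
    # The left-most branch in dirs= has the highest priority.
    mount -t unionfs -o dirs=/tmp/upper=rw:/tmp/lower=ro none /mnt/union
    cat /mnt/union/a.txt   # "from upper": the top branch shadows the lower one
    cat /mnt/union/b.txt   # "from lower": lookup fell through to the lower branch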

Purpose and Advantages

UnionFS primarily serves to enable the creation of writable overlays on read-only media, allowing users to make isolated modifications without altering the underlying base filesystems. This is achieved by merging multiple directories, or branches, into a single unified view, where changes are directed to a higher-priority writable layer while preserving the integrity of lower read-only branches. Such functionality simplifies software updates and maintenance by facilitating the application of revisions or patches atop immutable storage, as seen in environments requiring persistent yet non-destructive alterations. Key advantages of UnionFS include significant space efficiency, as unchanged files are shared across layers rather than duplicated, minimizing storage requirements in multi-branch setups. For instance, this avoids the need to replicate entire base images when applying updates or user-specific changes, thereby reducing disk usage compared to traditional non-union filesystems that might require full copies for modifications. Additionally, it streamlines the management of versioned or stacked environments, such as those involving multiple software layers or client-specific customizations, by presenting a coherent view without the administrative overhead of synchronizing separate directories. UnionFS further supports Unix semantics through mechanisms like whiteout files, which hide content from lower branches, ensuring transparent resolution of conflicts and deletions in the merged view. In practical scenarios, UnionFS excels at overlaying temporary changes on immutable media, for example by adding files or session data to a read-only operating system without compromising the original media. This approach contrasts with non-union filesystems by eliminating data duplication and reducing the complexity of maintaining multi-layer configurations, as modifications remain isolated and reversible. Overall, these features make UnionFS particularly valuable for resource-constrained or distributed systems where efficiency and isolation are paramount.

History

Origins and Early Concepts

The foundational concepts of union filesystems trace back to the late 1980s in the Plan 9 operating system developed at Bell Labs. Plan 9 introduced union directories as a mechanism to overlay multiple directories into a single view, allowing files to be searched across stacked components in order, with the first match resolving the lookup. This approach, detailed in the system's design, enabled flexible per-process customization of the namespace without altering underlying storage, addressing the need for distributed resource aggregation in a networked environment. In 1993, Werner Almsberger developed the Inheriting File System (IFS) as one of the earliest explicit implementations of union mounting for Linux 0.99. IFS allowed multiple filesystem branches to be merged transparently, inheriting behaviors from lower layers while supporting read-write operations on upper layers through copy-on-write-like mechanisms. However, its complexity led Almsberger to abandon kernel-level development in favor of user-space alternatives by the mid-1990s, highlighting early challenges in performance and integration. Throughout the 1990s, research evolved these ideas through stackable filesystem architectures, which layered new functionality atop existing filesystems without deep modifications. Influences included prototypes in BSD systems, where union mounts were integrated starting with 4.4BSD-Lite around 1994, enabling seamless merging of read-only and writable branches for tasks like building software atop read-only source trees. Efforts like the stackable file system interface for Linux, proposed in 1999, further emphasized modularity by wrapping standard filesystems to add unification, reducing the need for new low-level filesystem code. These developments addressed pre-2000s limitations, such as the demand for transparent multi-branch merging that preserved Unix semantics while minimizing overhead and avoiding invasive changes to the kernel. A pivotal academic contribution came in 2004 with a technical report from Stony Brook University, which formalized fan-out unification in UnionFS prototypes. This work demonstrated versatile stacking of multiple branches on Linux, achieving near-native performance (2-3% overhead for typical workloads) while upholding strict Unix semantics, such as coherent deletion and rename behavior across layers. The report built on earlier precedents to resolve lingering issues in branch resolution and whiteout handling for deletions.

Development of UnionFS

UnionFS was initially developed between 2003 and 2004 at Stony Brook University by Erez Zadok and his colleagues in the Storage Systems Research Group as an open-source kernel module, building on the stackable FiST project. The project aimed to provide a versatile unification filesystem, with early work involving key contributors such as Charles P. Wright during his PhD studies from 2003 to 2006. This effort resulted in the first public descriptions, including a Linux Journal article titled "Unionfs: Bringing File Systems Together" published in December 2004, which outlined the system's architecture and applications. Key milestones in the project's evolution included the publication of a technical report in October 2004, "Versatility and Unix Semantics in a Unification File System" (FSL-04-01b), which detailed the fan-out stacking technique and Unix semantics preservation. The development also incorporated user-space management utilities to interface with the kernel module, enhancing administrative control over branch mounting and whiteout management, as later documented in project presentations. A significant highlight came in 2006 with a presentation at the Linux Symposium titled "UnionFS: User- and Community-Oriented Development of a Unification File System," which emphasized the project's open development practices since November 2004 and over 200,000 downloads by March 2006. The project expanded beyond Linux with a port to FreeBSD around 2006–2007, led by Daichi GOTO and collaborators, focusing on support for memory filesystem overlays to enable read-only base layers with writable modifications. This adaptation addressed BSD-specific needs, such as improved locking and vnode management, and was integrated into FreeBSD development discussions by late 2006. The port was subsequently merged into FreeBSD's -current and -stable branches by 2007, and included in the FreeBSD 7.0 release in February 2008. UnionFS remained an out-of-tree module, never fully merged into the mainline Linux kernel due to its architectural complexity and maintenance challenges, which prompted the creation of forks like AUFS. The last major updates occurred around 2010, with patches and commits supporting Linux 2.6.34 and earlier, after which development slowed. As of November 2025, the UnionFS project remains dormant with no commits since 2010, though the codebase is still available in the project's Git repository; the FreeBSD implementation continues to be maintained and used. Community efforts were coordinated through the Storage Systems Research Group at Stony Brook University, utilizing mailing lists (unionfs and unionfs-cvs), IRC channels, and a bug tracker for contributions and bug reports.

Design and Architecture

Branching and Layering Mechanism

UnionFS organizes its unified view as an ordered list of branches, each representing an underlying directory or filesystem, with priorities assigned from highest (top) to lowest (bottom). This structure allows for a combination of read-only base layers and writable overlays, enabling a unified view where the top branch typically handles modifications while lower branches provide persistent or shared content. The layering process in UnionFS is initiated through a mount command, which specifies the branches and their order. For example, the syntax mount -t unionfs -o dirs=/ramdisk=rw:/KNOPPIX=ro none /UNIONFS mounts a read-write branch (e.g., /ramdisk) on top of a read-only branch (e.g., /KNOPPIX from a CD-ROM), creating a single merged view at the mount point. This architecture directly accesses multiple branches without intermediate stacking layers, supporting dynamic adjustments like branch insertion or reordering at runtime. During read operations, UnionFS resolves lookups by traversing branches from top to bottom until the requested file or directory is found, prioritizing the highest-precedence branch to avoid ambiguity. Directory listings merge visible entries across all branches in priority order, using a hash table to eliminate duplicates while excluding any entries obscured by whiteouts from higher branches. This ensures a consistent, duplicate-free view of the unified namespace. The whiteout mechanism employs special zero-length files in upper branches to mask or simulate deletions of content in lower branches, preserving Unix semantics for operations like file removal. For instance, a whiteout named .wh.filename in the top branch hides the corresponding filename below it; these are created atomically via rename for files or through create-and-remove sequences for other objects, and they are invisible in normal listings but respected during lookup resolution.
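
Whiteout behavior can be demonstrated by continuing the illustrative union mount from the earlier sketch; the file names remain hypothetical.

    # Delete a file that exists only in the read-only lower branch.
    rm /mnt/union/b.txt
    ls /mnt/union          # b.txt is gone from the merged view
    ls -a /tmp/upper       # a zero-length .wh.b.txt marker now masks it
    ls /tmp/lower          # b.txt is still physically present in the lower branch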

Copy-on-Write and Resolution Strategies

UnionFS employs the copy-on-write (CoW) principle to enable modifications on read-only branches without altering the underlying filesystems, ensuring the integrity of lower-priority branches while presenting a unified writable view. When a write operation targets a file in a read-only lower branch, UnionFS performs a "copyup" process, transparently copying the file and any necessary parent directories to the nearest higher-priority writable branch, typically the topmost one. This mechanism preserves the original data in the read-only branch, allowing applications to treat the union as fully writable, as seen in scenarios like patching read-only images by directing changes to a temporary writable overlay. Write resolution in UnionFS directs all modifications (creations, updates, deletions, renames, and moves) to the highest-priority writable branch, maintaining Unix semantics across the layered namespace. New or modified files are stored exclusively in this top branch, while deletions are handled via whiteouts: special zero-length files prefixed with ".wh." (e.g., ".wh.filename") created in the writable branch to mask corresponding entries in lower branches, preventing them from appearing in the unified view. Renames and moves involve branch traversal to locate the source file's visible instance (prioritizing higher branches) and resolve the target path, copying files up to the writable branch if necessary to avoid cross-branch inconsistencies. Conflict resolution relies on priority-based shadowing, where higher-priority branches override lower ones for identical pathnames, ensuring a consistent namespace without merging file contents. In multi-layer setups with multiple read-write branches, UnionFS allows selective writability, directing operations to the highest writable branch for the path while preserving read-only branches below. Whiteout creation supports deletion modes like DELETE_WHITEOUT, which masks lower files atomically via rename operations, or DELETE_ALL, which attempts removal across branches before falling back to whiteouts if failures occur. Efficiency in CoW operations is enhanced by delaying full copies: metadata updates (e.g., permissions, timestamps) on read-only files may trigger minimal copyups without immediate data duplication, while actual writes prompt complete file copies to the upper branch. UnionFS handles hard links across branches using forward and reverse inode maps to assign unique, persistent inode numbers, enabling accurate link counting and detection without duplicating data unnecessarily, at a space overhead of approximately 0.212% of total disk usage. For sparse files, copyup preserves sparseness and attributes during promotion to the writable branch, as supported by the underlying filesystems. Benchmarks indicate CoW introduces 10-12% overhead for I/O-intensive workloads with 1-4 branches, establishing its viability for general use without excessive degradation.
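
A self-contained sketch of a copyup follows; module availability and all paths are assumptions for illustration.

    # Write through the union to a file backed only by the read-only branch.
    mkdir -p /ro /rw /union
    echo "original" > /ro/doc.txt
    mount -t unionfs -o dirs=/rw=rw:/ro=ro none /union
    echo "patched" >> /union/doc.txt
    cat /ro/doc.txt    # "original": the lower branch is untouched
    cat /rw/doc.txt    # a full copy plus the appended line landed in the rw branch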

Implementations

Linux UnionFS

UnionFS in Linux is implemented as an out-of-tree kernel module targeting the 2.6.x lineage, requiring separate compilation against the target kernel source and loading via commands such as insmod or modprobe. This approach allows it to function as a stackable filesystem without integration into the mainline kernel tree, enabling users to apply patches to older kernel versions for compatibility. The module integrates with the Virtual File System (VFS) through stackable hooks that intercept and manage operations on inodes and dentries across multiple branches, facilitating the unification of directory trees into a single view. It supports branches backed by various filesystems, including ext2/ext3 for local storage, NFS for network access, and tmpfs for in-memory operations, allowing flexible layering of read-only and read-write components. Key capabilities include dynamic branch insertion and removal, persistent whiteout handling for deletions, and efficient directory traversal with support for operations like lseek on readdir results. Configuration occurs primarily through mount options specified in the mount command, such as dirs=/branch1=rw:/branch2=ro to define branch paths and their permissions, enabling prioritized access from higher-precedence branches. The cow=1 option activates copy-on-write semantics for modifications to read-only branches, copying data upward to a writable layer while preserving the original. Deletion handling is controlled by options like delete_whiteout=1, which creates special whiteout files (e.g., .wh.filename) in the uppermost writable branch to mask underlying files without physical removal, preserving Unix deletion semantics. Additional modes, such as delete=first or delete=all, allow customization of how unlink operations propagate across branches. As of 2025, UnionFS receives sporadic maintenance through community forks and archived repositories, with official development largely inactive since around 2010. It has been superseded in practice by in-kernel alternatives like OverlayFS, though forks provide patches for compatibility with kernels up to the 6.x series.
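
Pulling these options together, a two-branch mount might be sketched as follows; exact option spellings vary across unionfs releases, so treat this as illustrative rather than authoritative.

    # Load the out-of-tree module, then mount with CoW and whiteout deletion.
    modprobe unionfs
    mount -t unionfs \
        -o dirs=/branch1=rw:/branch2=ro,cow=1,delete_whiteout=1 \
        none /mnt/union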

UnionFS in BSD Systems

In FreeBSD, UnionFS is implemented as a stackable filesystem layer, integrated since the reimplementation merged into the 6-STABLE branch in February 2007 and fully featured in FreeBSD 7.0-RELEASE in February 2008. To enable it, administrators include the option UNIONFS line in the kernel configuration file during compilation, or load it dynamically via unionfs_load="YES" in /boot/loader.conf. This implementation supports overlays with nullfs for null mounts and md for memory-based disks, allowing writable layers atop read-only bases such as CD-ROMs or shared system trees. It is particularly utilized in jail environments to enable multiple isolated jails to share a common read-only base filesystem while maintaining independent writable modifications, thereby improving storage efficiency and simplifying updates. A key BSD-specific feature of UnionFS in FreeBSD is its compatibility with ZFS, where read-only ZFS snapshots serve as lower layers beneath writable UnionFS overlays, facilitating layered filesystem structures for persistent and snapshot-aware deployments like jails. The mounting process uses the mount_unionfs command with syntax such as mount_unionfs [-o options] lower upper, where options like below designate layer ordering, copymode controls copy-up behavior (traditional, transparent, or masquerade), and whiteout manages deletion markers; for example, mount_unionfs -o below /base /overlay places the base as the lower layer. In NetBSD, UnionFS is provided as a userspace implementation via the pkgsrc package collection under the name fuse-unionfs, emphasizing lightweight union mounts suitable for resource-constrained embedded systems. This FUSE-based port, first added to pkgsrc in March 2007 with version 0.17, overlays directories transparently without requiring kernel modifications, supporting features like whiteout files in .unionfs subdirectories and compatibility with NetBSD's puffs framework for filesystem translation. Maintenance of UnionFS in BSD systems varies by variant. In FreeBSD, the implementation is included in the base system through version 14.x as of 2025, with ongoing enhancements funded by the FreeBSD Foundation to improve stability in multi-jail and container scenarios. In NetBSD, the fuse-unionfs package is functional but receives less frequent updates, with the latest stable release maintained through pkgsrc's quarterly branches, ensuring compatibility for embedded and general use without active kernel-level development.
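
Following the section's description, a minimal FreeBSD sketch for a shared jail base might look like this; the module and command names are from the base system, while the paths are hypothetical.

    # Load the kernel module, then overlay per-jail changes on a shared base.
    kldload unionfs
    mount_unionfs -o below /base /jails/j1    # /base becomes the lower layer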

Alternatives

AUFS

AUFS, or Another UnionFS, is a stackable unification filesystem for Linux that merges multiple directories into a single virtual filesystem, serving as an enhanced alternative to the original UnionFS. Developed primarily by Junjiro R. Okajima starting in late 2005 and publicly released in early summer 2006, AUFS was designed as an out-of-tree kernel module to address performance and reliability limitations in UnionFS, such as inconsistent inode numbering and limited branch management. By 2009, it had evolved through versions supporting kernels from 2.6.16 to 2.6.30, incorporating original ideas that diverged significantly from its UnionFS inspirations. Key enhancements in AUFS include an external inode cache, implemented via an "xino" (external inode number) table, which maintains consistent inode numbers across branches to improve behavior for applications and caching. It also supports a broader range of branch types, including loopback-mounted filesystems and FUSE-based ones, allowing greater flexibility in diverse storage backends. Additionally, AUFS introduces finer-grained policies for branch selection and writable operations, enabling multiple writable branches with customizable refresh and balancing mechanisms to distribute load and prevent overload on any single layer. AUFS features advanced whiteout handling, where whiteouts (markers for deleted files) are hardlinked for efficiency and managed through the xino system to avoid conflicts in multi-branch setups. The "del=0" policy further optimizes deletions by avoiding unnecessary copy-up operations or whiteout creations when files are absent from underlying branches, reducing overhead in read-only heavy workloads. These capabilities made AUFS particularly suitable for scenarios requiring non-destructive overlays, such as live media. Prior to the mainstream adoption of OverlayFS in 2014, AUFS was widely used in Linux distributions for live media and container needs, including support for root filesystems on read-only media until the mid-2010s. Although early versions like AUFS 1 and 2 ceased maintenance in 2009 and 2012 respectively, later iterations such as AUFS 6 continue to be maintained by Okajima via SourceForge for kernels up to 6.x, supporting ongoing use in specialized environments despite its out-of-tree status.
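
For comparison with the UnionFS syntax shown earlier, an illustrative aufs mount follows; it assumes the out-of-tree aufs module is installed, and the branch paths are examples.

    # br= lists branches left-to-right from highest to lowest priority.
    mount -t aufs -o br=/rw=rw:/ro=ro none /mnt/aufs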

OverlayFS

OverlayFS is a union filesystem integrated into the mainline Linux kernel since version 3.18, released in 2014, providing a lightweight mechanism for overlaying filesystems with an emphasis on simplicity and performance. Designed as a modern successor to earlier union filesystem technologies, it enables the merging of a writable upper layer with read-only lower layers into a single coherent view, facilitating efficient snapshotting and modification without altering the originals. This in-kernel implementation avoids the complexities of out-of-tree modules, making it suitable for production environments where stability and direct VFS integration are critical. The core architecture of OverlayFS revolves around a two-layer model: an upper directory serving as the writable layer for changes, and a lower directory (or directories) providing the read-only base. A dedicated workdir, located on the same filesystem as the upperdir, handles temporary files during operations like copy-up. Modifications to files in the lower layer trigger a copy-on-write (CoW) mechanism, where the file is duplicated to the upper layer before alteration, preserving the original intact. Support for multiple lower layers, enabling stacked read-only branches via colon-separated paths in the mount option (e.g., lowerdir=lower1:lower2), was introduced in kernel version 4.0 to enhance layering flexibility without compromising the single-writable-layer design. In contrast to UnionFS, which permits arbitrary stacking of unions including multiple writable branches and more intricate resolution policies, OverlayFS prioritizes a streamlined approach with no support for full recursive unions of overlay mounts. It leverages VFS inode redirection and caching for efficient name resolution and operations, bypassing the heavier interception layers used in UnionFS. The redirect_dir=on mount option, introduced in kernel 4.10, further optimizes CoW by enabling directory redirects via extended attributes, allowing renames across layers without full copy-up of directory contents, thus improving POSIX compliance and performance in rename-heavy workloads. OverlayFS has seen widespread adoption in Docker, which has offered OverlayFS-based storage drivers since 2015 and later made overlay2 the default, powering efficient image layering for containers, and it serves as a backend for container runtimes and orchestration systems in managing layered filesystems. It remains actively maintained in the mainline kernel through version 6.x as of 2025, with ongoing enhancements for compatibility and efficiency in containerized and embedded environments.
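
The in-kernel mount interface looks as follows; the option names are the documented overlayfs ones, while the directory paths are examples.

    # upperdir and workdir must reside on the same filesystem.
    mkdir -p /lower1 /lower2 /upper /work /merged
    mount -t overlay overlay \
        -o lowerdir=/lower1:/lower2,upperdir=/upper,workdir=/work \
        /merged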

Applications

Live Media and Distributions

UnionFS has been instrumental in enabling persistence on live CDs and DVDs by overlaying a RAM-based writable layer, typically using tmpfs, atop a read-only squashfs image. This setup allows users to make modifications during a session that can be saved without altering the original media, providing a writable illusion on inherently read-only bootable environments. One prominent example is Knoppix, which integrated UnionFS starting from version 3.8 in 2005 to support persistence features. In Knoppix, the read-only filesystem from the CD is merged with a writable ramdisk layer, enabling users to save configurations and files across reboots via an overlay mechanism. Similarly, Puppy Linux adopted UnionFS in its 1.x series starting in 2004, leveraging it for full root filesystem persistence. This allows the entire session, including installed applications and user data, to be saved to a designated file or partition, surviving reboots without requiring a complete remaster of the distribution. Slax, a modular Slackware-based distribution, employs UnionFS to unify read-only modules with a writable layer, facilitating session saving and customization directly from USB or CD media. In practice, the boot process for these UnionFS-based live systems involves mounting the base image's squashfs as the lower, read-only branch, followed by creating an upper writable branch in RAM via tmpfs. User modifications, such as file creations or edits, are directed to the tmpfs layer using copy-on-write semantics. Deletions are handled through whiteouts: special zero-length files prefixed with ".wh." that mask entries from lower branches without physically removing them, ensuring the read-only base remains intact. This layered approach keeps the system lightweight and reversible, with optional saving of the upper layer to persistent storage at shutdown. Over time, many distributions have transitioned from UnionFS to more stable alternatives like OverlayFS for live media. For instance, Ubuntu's live environments, via the Casper boot system, adopted OverlayFS around 2015 to handle advanced overlay formats and improve reliability in kernel versions after 3.18. This shift addressed limitations of UnionFS's out-of-tree status, favoring in-kernel solutions for better performance and maintenance in modern live sessions.
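
An initramfs-style sketch of this boot sequence follows; the squashfs file name and mount points are illustrative, as real live systems vary.

    # Assemble a live-session union: read-only squashfs base plus tmpfs overlay.
    mkdir -p /ro /rw /union
    mount -o loop,ro /cdrom/filesystem.squashfs /ro
    mount -t tmpfs tmpfs /rw
    mount -t unionfs -o dirs=/rw=rw:/ro=ro none /union
    # /union then becomes the root of the running live system.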

Containerization and Virtualization

UnionFS and its variants, such as AUFS, have played a significant role in containerization technologies by enabling efficient layering of filesystem images, which allows multiple container variants to share common base layers while maintaining isolation. In early versions of Docker, released between 2013 and 2015 (prior to version 1.10), AUFS served as the default storage driver on distributions like Debian and Ubuntu, facilitating the creation of hierarchical images where each layer represents incremental changes, thus optimizing storage for diverse container deployments. In virtualization environments, tools like LXC leverage UnionFS variants, including AUFS, to overlay guest filesystems onto the host, supporting features such as snapshotting and rollback for container management. For instance, LXC version 1.0 and later supported AUFS as a backing store option alongside others like OverlayFS, LVM, and Btrfs, enabling the creation of writable overlays on read-only base images to facilitate cloning and efficient resource use in multi-instance setups. This approach allows guest environments to inherit host resources while isolating modifications, which is particularly useful for development and testing scenarios requiring frequent resets. Over time, the container ecosystem has shifted away from AUFS toward more performant alternatives; Docker deprecated AUFS in version 19.03 and fully removed it in version 24.0, recommending migration to the overlay2 driver for production use, though UnionFS variants persist in legacy or custom configurations. The benefits in these contexts include substantial reductions in disk usage for multi-container deployments, through mechanisms that avoid duplicating unchanged layers, and isolated writes per container instance, ensuring that modifications in one environment do not affect shared bases.
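
Checking and switching Docker's storage driver can be sketched as below; the daemon.json key is Docker's documented setting, while the overwrite-and-restart steps assume a systemd host with no other daemon configuration.

    # Inspect the active storage driver, then pin overlay2 in daemon.json.
    docker info --format '{{.Driver}}'
    echo '{ "storage-driver": "overlay2" }' > /etc/docker/daemon.json
    systemctl restart docker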

Limitations

Performance Considerations

UnionFS introduces performance overhead primarily through branch traversal during file lookups and copy-on-write (CoW) mechanisms on writes. In the worst case, lookups require traversing all branches in priority order, resulting in O(n) time complexity, where n is the number of branches; operations such as LOOKUP and READDIR scale linearly with branch count in microbenchmarks. This traversal overhead is mitigated by caching of directory entries, which reduces repeated scans across branches for subsequent operations on the same paths. CoW operations, used when writing to files backed by read-only branches, amplify I/O by requiring a full copy-up to a writable branch before modification, leading to increased write latency and disk usage, particularly in scenarios where frequent CoW triggers can double or triple overhead. For instance, in benchmarks using the Am-utils compile workload, CoW introduced 19-20% overhead, while Postmark tests with 15-second intervals showed up to 275% slowdown due to repeated copying. Benchmark studies indicate UnionFS incurs 10-20% slowdown in read-heavy workloads compared to native filesystems such as ext2, for example 12.7-14.3% in Postmark's directory-append-load-delete configuration across 1-16 branches. Write-heavy scenarios exhibit higher penalties, with 23.5-27.5% overhead in Postmark's directory-write-heavy-test, and overall I/O-intensive tasks reaching 10-12% for up to four branches in Linux 2.4 environments. These effects intensify with deeper branch stacks, as linear scaling in traversal costs compounds; practical deployments thus limit branches to 2-4 to maintain efficiency. Additional optimizations include inode remapping and caching strategies that map union inodes to underlying inodes, avoiding redundant fetches and improving lookup speeds in fan-out designs. Performance degrades further when branches involve networked filesystems like NFS, due to added latency in remote traversals and CoW copies, as observed in layered configurations over NFS.

Compatibility and Adoption Challenges

UnionFS's out-of-tree status as a kernel module requires users to apply patches and rebuild the kernel, creating substantial barriers to inclusion in distribution kernels and complicating widespread deployment across diverse environments. As of 2025, the module supports only legacy kernels up to approximately 3.x, rendering it incompatible with modern kernels (6.x series) and distributions without significant additional effort. This approach, while enabling support for multiple kernel versions, demands significant effort from users and distributors, often resulting in reliance on specialized or custom-built kernels rather than stock ones. The maintenance of UnionFS outside the mainline kernel has fostered fragmentation, most notably with the development of AUFS as a fork to resolve perceived limitations in UnionFS's design and performance, splitting community resources and reducing cohesive advancement. Compatibility challenges manifest in inconsistent behavior across kernel versions, stemming from frequent changes in kernel APIs that necessitate ongoing adaptations to the module, potentially leading to regressions or unsupported configurations in newer releases. Additionally, UnionFS exhibits limited support for advanced filesystem features, such as Access Control Lists (ACLs) and encryption within overlaid branches, which restricts its applicability in environments requiring fine-grained permissions or data protection at the filesystem level. Adoption of UnionFS has notably declined since the integration of OverlayFS into the mainline kernel starting with version 3.18 in 2014, as the in-kernel alternative provides comparable union mounting capabilities with reduced setup complexity and better long-term support. As of November 2025, the kernel module remains unmaintained for kernels beyond 3.x, though user-space implementations like unionfs-fuse persist in niche use up to version 3.7 (2023), and BSD variants continue active development with updates in 2024-2025. In BSD systems, UnionFS variants offer greater stability due to the operating system's emphasis on integrated, conservative development, yet they remain confined to niche uses, such as enabling apparent modifications to read-only filesystems in jails, without achieving broad ecosystem penetration. Further hurdles include the GPLv2 licensing, which aligns with the Linux kernel but places the full maintenance burden on external developers without mainline resources, exacerbating challenges in keeping pace with kernel evolution. Older UnionFS modules for deprecated kernel versions carry risks of unpatched vulnerabilities, as community efforts have shifted focus away from legacy support.
