tmpfs
Tmpfs is a temporary file system in the Linux kernel that stores all files in virtual memory, utilizing kernel internal caches for rapid access while ensuring data is transient and lost upon unmounting or system reboot.[1] It operates by dynamically growing and shrinking as needed, with contents that can be swapped to disk for memory management, though this can be disabled.[1] Unlike persistent file systems, tmpfs does not write data to non-volatile storage, making it ideal for short-lived operations but unsuitable for long-term data retention.[1] Developed as an extension of the earlier ramfs file system, tmpfs was introduced on December 1, 2001, by Christoph Rohland to address limitations such as the lack of size controls and swap support in ramfs.[1] Subsequent enhancements, including updates by Hugh Dickins in 2007, KOSAKI Motohiro in 2010, Chris Down in 2020, and André Almeida in 2024, added features like Transparent Huge Pages (THP) support, POSIX access control lists (ACLs), extended attributes, user and group quotas, and NUMA memory policies.[1] By default, tmpfs allocates up to half of available RAM for its size and inode count, configurable via mount options such as size (e.g., size=50% or absolute values in k/m/g units) and nr_inodes.[1]
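For illustration, both limits can be set explicitly at mount time; a minimal sketch (the mount point and values are arbitrary examples):
mkdir -p /mnt/scratch
mount -t tmpfs -o size=2g,nr_inodes=100000 tmpfs /mnt/scratch   # cap at 2 GiB and 100,000 inodes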
Tmpfs is commonly mounted on directories like /tmp or /var/tmp for temporary file storage and on /dev/shm for POSIX shared memory operations, where it appears as "Shmem" in /proc/meminfo.[1] It supports advanced options including case-insensitive lookups with UTF-8 encoding via casefold, 32-bit or 64-bit inode numbers, and initial mount permissions through mode, uid, and gid.[1] While highly performant due to its in-memory nature, oversizing tmpfs can lead to system deadlocks, and quotas are not compatible with user namespaces.[1] Compared to alternatives like ramfs (which lacks swapping and resizing) or block RAM disks (fixed-size and slower), tmpfs balances flexibility and efficiency for modern workloads.[1]
Introduction
Definition and Purpose
Tmpfs is a temporary filesystem that stores all of its files in the system's virtual memory, utilizing a portion of RAM for file storage and appearing as a standard mounted directory to users and applications.[2] It operates as a virtual memory-based filesystem, where file contents reside primarily in RAM rather than on persistent storage devices.[3] Key characteristics include its volatility—files are discarded upon unmounting or system reboot—and size limitations tied to available RAM and swap space, with a default allocation of up to 50% of physical RAM.[2] Despite these constraints, tmpfs supports standard filesystem operations such as reading, writing, and directory management, enabling seamless integration with existing tools and APIs.[3]
The primary purpose of tmpfs is to provide high-speed, non-persistent storage for short-lived data, prioritizing performance over durability in contrast to traditional disk-based filesystems like ext4 or UFS.[2] By avoiding disk I/O, it facilitates rapid access for temporary files, overflow from swap space, and caching mechanisms that benefit from in-memory operations.[4] This design makes it ideal for scenarios where data persistence is unnecessary, such as session-specific files or intermediate results in computational tasks, while ensuring that under memory pressure, contents can spill over into swap without immediate failure.[3]
Tmpfs emerged to address performance bottlenecks associated with disk access in common use cases, particularly the /tmp directory for temporary files and inter-process communication via shared memory.[5] In environments like /tmp, disk I/O for small, ephemeral files can introduce significant latency—often orders of magnitude slower than RAM access—leading to slowdowns in applications reliant on quick file creation and deletion.[4] Similarly, for inter-process communication, tmpfs enables efficient shared memory segments, such as Linux's /dev/shm mount point, reducing overhead in data exchange between processes.[2] These motivations underscore tmpfs's role in optimizing system responsiveness for volatile workloads without compromising the integrity of persistent data stores.[5]
Historical Development
tmpfs originated as a memory-based filesystem in SunOS, developed by Sun Microsystems to provide efficient temporary storage using the operating system's virtual memory resources. First implemented in SunOS 4.0 in late 1987, it was refined and documented in SunOS 4.1 by 1990, drawing from earlier concepts like RAM disks but introducing dynamic memory allocation without fixed disk partitions.[6][7] This addressed limitations in prior Unix variants, such as SVR4, which lacked integrated volatile filesystems for high-performance, short-lived data.[6]
In Linux, tmpfs evolved from the shmfs (shared memory filesystem) and ramfs precursors, with full integration introduced on December 1, 2001, by Christoph Rohland as part of the 2.4 kernel series. Unlike ramfs, which did not support size limits or swap usage, tmpfs incorporated swap-backed storage from its inception, allowing it to expand beyond physical RAM while maintaining volatility.[3][2] This adoption marked a key milestone, enabling widespread use for temporary files in distributions and addressing the need for scalable in-memory storage. Subsequent enhancements included updates by Hugh Dickins in 2007 for better swap and resizing support, KOSAKI Motohiro in 2010 for Transparent Huge Pages (THP) compatibility, Chris Down in 2020 for POSIX ACLs and extended attributes, and André Almeida in 2024 for user and group quotas along with NUMA memory policies.[1]
BSD systems adopted tmpfs in the mid-2000s, building on earlier memory filesystems like MFS introduced in 4.2BSD in 1983. The modern tmpfs implementation was developed for NetBSD during Google Summer of Code 2005 by Julio M. Merino Vidal and first appeared in NetBSD 4.0 in 2007.[8] It was subsequently ported to FreeBSD 7.0 in 2007 and OpenBSD 5.5 in 2014, providing a unified, efficient alternative to memory disks with support for dynamic sizing.[9][10]
Notable developments in the 2010s included security enhancements, such as the noexec mount option to prevent executable files on tmpfs mounts, reducing risks from temporary scripts or binaries.[3] In the same decade, adaptations for containerization emerged, with Docker introducing tmpfs volume mounts in Engine version 1.10.0 (February 2016) for ephemeral, memory-resident storage in containers, enhancing performance in cloud-native environments.[11]
Core Concepts
Semantics and Operations
Tmpfs provides a filesystem interface that manifests as a hierarchical directory tree residing entirely in virtual memory, enabling rapid access to files without involving persistent storage devices. It adheres to POSIX standards for core file operations, including creation (creat or open with O_CREAT), reading (read), writing (write), deletion (unlink), and renaming (rename). Unlike traditional disk-based filesystems, all data in tmpfs is volatile and does not persist across system reboots or unmounts, ensuring that the contents are automatically cleared upon filesystem detachment.[1][3]
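These semantics can be exercised with ordinary tools; a minimal sketch on a freshly mounted instance (paths and size are illustrative):
mount -t tmpfs -o size=64m tmpfs /mnt/demo   # mount a small tmpfs instance
echo hello > /mnt/demo/a.txt                 # create and write (open with O_CREAT, then write)
cat /mnt/demo/a.txt                          # read
mv /mnt/demo/a.txt /mnt/demo/b.txt           # rename
rm /mnt/demo/b.txt                           # unlink
umount /mnt/demo                             # remaining contents are discarded on unmount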
File sizes in tmpfs are handled dynamically, allowing them to expand or contract based on write and truncate operations, subject to an overall size limit configurable at mount time (defaulting to half of available RAM). Directory listings (readdir) and metadata management, such as permissions and timestamps, operate on inodes maintained in RAM, providing low-latency responses typical of memory-resident structures. When memory resources are exhausted and the size limit is approached, operations fail with the ENOSPC error, preventing further allocation and alerting applications to resource constraints.[1][3]
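The size cap and the resulting ENOSPC behavior can be observed directly; for example, with a deliberately small instance (values illustrative), a write that exceeds the limit fails with "No space left on device":
mount -t tmpfs -o size=16m tmpfs /mnt/small
dd if=/dev/zero of=/mnt/small/fill bs=1M count=32   # fails partway with ENOSPC
df -h /mnt/small                                    # shows the mount at 100% of its 16M limit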
Several unique behaviors distinguish tmpfs from persistent filesystems. Under memory pressure, individual pages can be swapped to disk if swapping is enabled (default behavior), maintaining functionality while deferring non-critical data to secondary storage. Operations such as file writes and directory modifications are atomic, ensuring consistency in concurrent access scenarios as per POSIX semantics. Due to its fully volatile nature, tmpfs requires no journaling mechanisms or filesystem consistency checks like fsck, simplifying recovery and eliminating overhead associated with durability guarantees.[1][3]
The interface for tmpfs is accessed through standard mounting tools, such as mount -t tmpfs tmpfs /dev/shm for POSIX shared memory or mount -t tmpfs tmpfs /tmp for temporary file storage, where it integrates seamlessly into the directory hierarchy without special user-space drivers.[1][3]
Memory Management
Tmpfs stores all files within the system's virtual memory subsystem, utilizing the kernel's page cache to hold file data and metadata. This integration allows tmpfs pages to be treated as shared memory (shmem), visible as "Shmem" in /proc/meminfo and contributing to the "Shared" category in tools like free(1). Pages are allocated on demand when data is written, using the kernel's page allocator rather than pre-allocating contiguous blocks, enabling non-contiguous physical page distribution while maintaining virtual address continuity for file mappings.[2][3]
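This accounting can be inspected with standard tools (actual values vary by system):
grep Shmem /proc/meminfo   # total shmem/tmpfs-backed memory, e.g. "Shmem: 123456 kB"
free -h                    # tmpfs data is included in the "shared" column
df -h -t tmpfs             # per-mount usage for every mounted tmpfs instance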
The filesystem exhibits dynamic growth, expanding as files are created or modified and contracting upon deletion, without fixed upfront reservations. This behavior is capped by mount-time limits, such as the size option (e.g., size=1G or size=50% of physical RAM by default), which enforces an upper bound on total consumption including both RAM and swap usage. Inode limits can also be set via nr_inodes, defaulting to half the number of physical RAM pages or lowmem pages, preventing excessive metadata overhead. These limits interact with broader virtual memory configurations, such as those tunable via /proc/sys/vm/ sysctls (e.g., vm.overcommit_memory for allocation policy), ensuring tmpfs adheres to system-wide memory heuristics.[2][3]
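Limits can also be changed on a live mount via remount, which is the usual way to resize an instance in place (mount point and values are illustrative):
mount -t tmpfs -o size=512m,nr_inodes=50000 tmpfs /mnt/cache
df -i /mnt/cache                        # the nr_inodes limit appears as the total inode count
mount -o remount,size=1g /mnt/cache     # grow the size cap without unmounting; existing data is kept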
Swapping is integrated seamlessly, with tmpfs pages eligible for eviction to swap space under memory pressure, similar to anonymous memory mappings. This enables tmpfs instances to exceed available physical RAM, but introduces performance trade-offs due to the latency of swap I/O. The noswap mount option (available since Linux 6.4) disables this behavior, keeping all data in RAM exclusively. In high-pressure scenarios, overcommitment of tmpfs can trigger the Out-Of-Memory (OOM) killer; however, since tmpfs memory is not easily reclaimable by the OOM handler, excessive sizing may lead to system deadlock.[2][3]
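On kernels that support it (6.4 and later, as noted above), per-mount swapping can be turned off; a minimal sketch:
mount -t tmpfs -o size=256m,noswap tmpfs /mnt/pinned   # pages stay in RAM and are never written to swap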
For performance, tmpfs eliminates physical disk seeks entirely for read and write operations when data resides in RAM, relying on the page cache for buffering and direct memory access. This results in near-native memory speeds for I/O, though swap usage degrades throughput. To optimize large allocations, tmpfs supports Transparent Huge Pages (THP) when enabled via kernel configuration (CONFIG_TRANSPARENT_HUGEPAGE), with mount options like huge=always or huge=within_size controlling usage; this reduces TLB overhead for workloads with big files, improving efficiency on systems with sufficient huge page reserves. System-wide THP for shmem/tmpfs is tunable via /sys/kernel/mm/transparent_hugepage/shmem_enabled.[2][12]
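Huge-page behavior can be requested per mount and checked system-wide; a sketch assuming CONFIG_TRANSPARENT_HUGEPAGE is enabled:
cat /sys/kernel/mm/transparent_hugepage/shmem_enabled       # current system-wide policy for shmem/tmpfs
mount -t tmpfs -o size=2g,huge=within_size tmpfs /mnt/big   # use huge pages only for files large enough to fill them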
Implementations
Linux
Tmpfs was introduced in the Linux kernel 2.4 series in December 2001, implemented atop the kernel's shared memory code in mm/shmem.c to provide a temporary, memory-resident filesystem. It serves as the backing for shared anonymous memory mappings and System V shared memory segments internally, even when the user-visible tmpfs is disabled via kernel configuration.[1] This integration leverages the shared memory filesystem (shmem) subsystem, where operations like shmem_file_setup create unlinked tmpfs files to back shared memory without requiring an explicit mount.
A key feature of the Linux tmpfs is its support for POSIX shared memory, typically mounted at /dev/shm to enable System V and POSIX inter-process communication mechanisms, such as those used by glibc since version 2.2.[1] Mount options include nr_blocks to cap the total number of file blocks and nr_inodes to limit inode allocation, alongside mode for setting default file permissions on the root directory.[1] Additionally, tmpfs integrates with control groups (cgroups) through the memory controller, which accounts for shmem/tmpfs usage in cgroup memory limits, enabling containerized environments to restrict shared memory consumption.[13] In Android, derived from the Linux kernel, tmpfs is employed for temporary app data storage, such as caches and shared memory segments, to optimize performance on resource-constrained devices.[14]
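In practice, /dev/shm is itself a tmpfs mount whose limits can be tightened, and cgroup memory accounting covers the shmem pages behind it; an illustrative sketch using the cgroup v2 interface (paths and values are examples):
mount -o remount,size=256m /dev/shm               # cap POSIX shared memory at 256 MiB
mkdir /sys/fs/cgroup/shmdemo
echo 128M > /sys/fs/cgroup/shmdemo/memory.max     # tmpfs/shmem pages charged to this cgroup count toward the limit
echo $$ > /sys/fs/cgroup/shmdemo/cgroup.procs     # move the current shell (and its children) into the cgroup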
The implementation has evolved significantly since its inception. Post-2015 developments include enhanced fanotify event reporting for tmpfs, with kernel patches from around 2018 adding support for additional event types like create and delete, and later improvements in 2021 enabling file handle reporting for better monitoring in containerized setups.[15] In the 2020s, tmpfs gained efficient asynchronous I/O support via io_uring, introduced in kernel 5.1, allowing high-performance operations on memory-backed files without traditional syscalls. In 2024, Linux kernel 6.13 introduced support for arbitrary-sized large folios in tmpfs, improving read performance by up to 20% for memory-intensive operations.[16] Tmpfs also interoperates with overlayfs, serving as an upper layer over slower lower filesystems (including FUSE-backed ones) to hold volatile, in-memory modifications.
At the code level, tmpfs files are backed by shmem structures, with pages indexed via a radix tree for efficient sparse addressing and retrieval in the page cache.[17] Large page support is provided through transparent huge pages (THP) via the huge mount option, which promotes 2MB or larger pages for reduced TLB overhead, though explicit hugetlbfs integration requires additional configuration for reserved huge pages.[1][12]
BSD and Derivatives
tmpfs was introduced in FreeBSD 7.0 in 2007, ported from NetBSD as part of a Google Summer of Code project led by Julio M. Merino Vidal, with further adaptations by Rohit Jalan and others.[9][8] This implementation provides an efficient in-memory filesystem that stores both file data and metadata in virtual memory, primarily using physical RAM but spilling file data to swap space under memory pressure to prevent system instability; metadata, however, remains non-swappable to ensure filesystem integrity.[9] Unlike earlier memory-backed approaches such as the md(4) virtual disk driver, tmpfs offers a dedicated filesystem interface without requiring a backing block device, emphasizing low-latency operations for temporary storage needs like /tmp.[9][18]
Mounting a tmpfs filesystem in FreeBSD is performed via the mount(8) command with the -t tmpfs option, for example: mount -t tmpfs -o size=1g tmpfs /mnt/tmp. Key mount options include size to limit total capacity (by default limited only by available memory and swap), inodes for maximum file nodes, uid and gid for root ownership, and mode for permissions, allowing fine-tuned control over resource usage.[9] In environments with ZFS as the root filesystem, tmpfs is frequently mounted on directories like /tmp or /var/tmp to combine the speed of in-memory storage with ZFS's persistent, snapshot-capable datasets for other system areas.[19] The design leverages FreeBSD's virtual memory subsystem to handle allocation, mitigating fragmentation risks in low-RAM scenarios by paging out file contents while keeping structural elements in core memory.[9]
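A hedged FreeBSD example combining a one-off mount with a persistent /etc/fstab entry (values are illustrative):
mount -t tmpfs -o size=1g,mode=1777 tmpfs /tmp
# equivalent /etc/fstab entry applied at boot:
# tmpfs   /tmp   tmpfs   rw,size=1g,mode=1777   0   0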
In BSD derivatives, implementations vary while sharing conceptual roots with the original SunOS tmpfs, a pioneering memory-based filesystem later carried into Solaris 2.1 (1992) that used virtual memory structures for efficient temporary storage.[20][21] NetBSD introduced tmpfs in version 4.0 (2007), mounting it via the dedicated mount_tmpfs(8) utility with options for size, nodes, and ownership similar to FreeBSD, and it supports swap-backed expansion for data beyond physical RAM. OpenBSD added tmpfs support in version 5.5 (2014) through mount_tmpfs(8), but disabled the kernel module in 6.0 (2016) due to insufficient maintenance; users instead rely on the mfs(8) command, which creates a memory-backed UFS instance via md(4)-like virtual devices for comparable in-RAM temporary filesystems.[10][22]
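On NetBSD, for instance, a mount might look like the following; a minimal sketch, assuming the standard mount_tmpfs(8) flags for size and mode:
mount_tmpfs -s 256m -m 1777 tmpfs /tmp   # 256 MB instance, world-writable with the sticky bit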
macOS, as a Darwin-based BSD derivative, added native support for tmpfs via the mount_tmpfs command in macOS 11 Big Sur (2020), requiring administrative privileges and offering options like size and mode for creating RAM-resident mounts, though system paths such as /private/tmp remain on persistent APFS volumes rather than tmpfs by default.[23] In macOS 13 Ventura (2022), /private/tmp received privacy enhancements through reduced world-writability (mode 1775 instead of 1777), limiting inter-process access to temporary files without shifting to an in-memory backend.[24] These adaptations highlight BSD's focus on secure, efficient temporary storage, with quota-like controls achieved via the size option rather than per-user soft or hard limits.[9]
Other Systems
The original implementation of tmpfs was developed for SunOS and documented for SunOS 4.1, where it served as a virtual memory-based file system leveraging the operating system's paging mechanisms to store files directly in RAM for improved performance over disk-based temporary storage.[6] This design allowed tmpfs to function as a temporary file system without dedicated disk resources, with files paged in and out as needed based on virtual memory availability.[25] In later Solaris releases, tmpfs evolved to integrate more deeply with the kernel's virtual memory subsystem, enabling mounts like /tmp to use swap space as an extension when physical RAM was exhausted, while maintaining its core volatility semantics.[26] For scenarios requiring block-device-like RAM disks, Solaris provided the Loopback File Interface (lofi), which could back devices with memory or null files to simulate high-speed, non-persistent storage. Although tmpfs remains supported in Solaris 11, some deployments have shifted toward mounting temporary directories on ZFS file systems for added features like snapshots, though this does not replace tmpfs's memory-centric approach.[27]
Microsoft Windows lacks a native tmpfs equivalent, as its file system architecture does not include a built-in virtual memory-backed temporary file system. Instead, third-party tools like ImDisk provide RAM disk functionality by creating virtual block devices entirely in physical memory, allowing users to mount high-speed, volatile volumes for temporary data storage.[28] ImDisk supports dynamic allocation of RAM for these disks, with performance benefits for I/O-intensive tasks, though data is lost on reboot or power failure similar to tmpfs. For Linux compatibility on Windows, Windows Subsystem for Linux 2 (WSL2), introduced in 2019, emulates a full Linux kernel environment where tmpfs mounts operate as expected, often using the host's memory for /tmp and other temporary mounts to enhance performance in cross-platform workflows.[29]
In IBM AIX, the /tmp directory is typically implemented as a disk-backed Journaled File System (JFS or JFS2) rather than a memory-based one, though administrators can configure RAM disks using loopback devices for temporary needs in performance-critical scenarios.[30] For real-time embedded systems, QNX Neutrino RTOS offers a RAM-based "filesystem" under /dev/shmem, which functions analogously to tmpfs by storing read/write files directly in shared memory, ensuring low-latency access suitable for deterministic operations in automotive and industrial applications.[31] This approach prioritizes volatility and speed, with /tmp often symlinked to this memory area during builds or runtime.[32]
Details on the early SunOS tmpfs implementation are sparse, with little archival documentation beyond the seminal papers, complicating precise replication in modern contexts. In cloud virtual machines, tmpfs-like memory-backed storage has gained traction; for instance, Amazon Linux 2023 on AWS EC2 defaults to mounting /tmp as tmpfs, limited to 50% of available RAM to balance performance gains against memory constraints in ephemeral instances.[33]
Benefits and Limitations
Advantages
Tmpfs provides significant performance advantages over traditional disk-based filesystems by operating entirely in virtual memory, eliminating disk latency and seek times for read and write operations. This results in faster I/O for temporary files, making it particularly suitable for directories like /tmp, browser caches, and build artifacts where quick access is critical. For instance, writing small files to tmpfs avoids the overhead of disk flushes and journaling, leading to speedups of up to 20 times in workloads such as data generation scripts compared to ext3 or ext4. Additionally, by keeping operations in RAM, tmpfs reduces wear on solid-state drives (SSDs), as temporary data does not contribute to flash memory write cycles.[5]
In terms of resource efficiency, tmpfs integrates seamlessly with the kernel's page cache, allowing it to share memory allocations with running processes and dynamically grow or shrink based on usage without requiring dedicated disk space. This enables handling of large temporary datasets—such as intermediate results in computations—entirely in memory until swap is needed as a fallback, optimizing overall system resource utilization. Unlike fixed-size ramdisks, tmpfs only consumes RAM proportional to the actual data stored, promoting efficient memory management.[2]
Tmpfs boosts performance in various applications, including compilers where faster linking and compilation steps occur due to reduced I/O, as seen in GCC builds that benefit from options like -pipe. In databases, it supports efficient temporary tables and shared memory segments via /dev/shm, accelerating inter-process communication. On mobile devices, the minimized disk activity contributes to energy savings by avoiding power-intensive I/O operations on storage hardware.[5][2]
Disadvantages
Tmpfs exhibits significant volatility, as all data stored within it resides in virtual memory and is lost upon unmounting, system reboot, or power failure, rendering it unsuitable for any applications requiring persistent or critical data storage.[1] This inherent temporality stems from tmpfs's design, where no files are written to permanent storage like hard drives, ensuring that abrupt interruptions—such as out-of-memory (OOM) conditions or hardware failures—result in complete data erasure without recovery options.[1] Resource contention poses another key limitation, as tmpfs consumes physical RAM and, if enabled, swap space, potentially starving other system processes of memory and leading to OOM killer invocations or deadlocks in memory-constrained environments.[1] Oversizing tmpfs instances exacerbates this risk, as the default limit of half the physical RAM can be exceeded, preventing the OOM handler from freeing sufficient resources and causing system instability.[1] In systems under heavy load, this competition for memory can degrade overall performance, particularly when tmpfs usage approaches available limits and triggers ENOSPC errors for further writes. Swapping can be disabled using the noswap mount option to avoid related overhead.[1][2]
Security concerns arise primarily from tmpfs's potential for denial-of-service (DoS) attacks, where users with write permissions can overfill the filesystem—especially if mounted without size or inode limits—exhausting all RAM and swap, thereby rendering the system unresponsive.[1] Mounting tmpfs with unlimited parameters (e.g., size=0 or nr_inodes=0) amplifies this vulnerability by allowing unbounded growth, creating a larger attack surface in writable locations like /tmp, where malicious actors could place or exploit setuid binaries to escalate privileges before data volatility intervenes.[1] Additionally, the lack of user namespace support for quotas in tmpfs further heightens these risks in multi-user or containerized setups.[1]
Scalability limits become evident with very large files or prolonged usage, as tmpfs relies on paging mechanisms that introduce swapping overhead when RAM is insufficient, leading to increased I/O latency and reduced efficiency compared to disk-based filesystems for bulk data operations.[2]
Usage and Configuration
Mounting Options
Tmpfs filesystems are mounted using the standard mount command on both Linux and BSD systems, specifying the filesystem type and desired options to control size, permissions, and other behaviors. On Linux, a basic mount can be performed with mount -t tmpfs -o size=512m tmpfs /mnt/tmp, where the -o size=512m option limits the filesystem to 512 megabytes of memory.[1] Similarly, on FreeBSD and derivatives, the command is mount -t tmpfs -o size=512m tmpfs /mnt/tmp, using the same syntax for size specification.[9]
Key mounting options include size to set a quota in bytes, supporting suffixes like k, m, g, or percentages of available RAM (e.g., size=50%); without it, Linux defaults to half of physical RAM, while BSD defaults to all available memory and swap space.[1][9] The mode option sets permissions for the root directory in octal notation (e.g., mode=1777 for world-writable with sticky bit), uid and gid assign ownership (e.g., uid=1000,gid=1000), and nr_inodes (Linux) or inodes (BSD) limits the number of files (e.g., nr_inodes=10000).[1][9] For security, noexec prevents execution of binaries or scripts, and nosuid disables setuid and setgid bits on the mount.[1][9]
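Combining these options, a locked-down general-purpose mount on Linux might look as follows (mount point and values are illustrative):
mount -t tmpfs -o size=1g,nr_inodes=100000,mode=1777,uid=1000,gid=1000,noexec,nosuid tmpfs /mnt/tmp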
To automate mounts at boot, add entries to /etc/fstab; for example, on Linux: tmpfs /tmp tmpfs defaults,size=50% 0 0, which mounts /tmp with default options and a size limit of 50% of RAM.[1] On BSD systems, a similar entry is tmpfs /tmp tmpfs rw,size=512m 0 0.[9] Under systemd on Linux, /tmp is automatically mounted as tmpfs if not already defined in /etc/fstab, using defaults like size=50% and mode=1777.[34] Cross-platform variations include Linux's nr_inodes for inode quotas versus BSD's inodes flag, and differing defaults for maximum size allocation.[9]
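Under systemd, the same policy can also be expressed as a mount unit rather than an fstab entry; a minimal sketch modeled on the stock tmp.mount unit (options are illustrative):
# /etc/systemd/system/tmp.mount
[Unit]
Description=Temporary Directory (tmpfs)

[Mount]
What=tmpfs
Where=/tmp
Type=tmpfs
Options=mode=1777,strictatime,nosuid,nodev,size=50%

[Install]
WantedBy=local-fs.target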
Security and Best Practices
To enhance security when using tmpfs, administrators should apply restrictive mount options such as noexec to prevent the execution of binaries stored on the filesystem, nosuid to disable setuid and setgid bits that could elevate privileges, and nodev to block the interpretation of device files.[3][1] These options are standard for Linux filesystems and help mitigate risks like code injection or unauthorized privilege escalation on volatile storage.[36] In containerized environments, tmpfs mounts are inherently private and cannot be shared between containers, providing isolation for temporary data without persistence to the host disk, though data may still swap to disk if memory pressure occurs.[37]
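As a concrete illustration, a hardened host mount and a per-container tmpfs mount in Docker might look like this (paths and sizes are examples):
mount -t tmpfs -o size=256m,mode=1777,noexec,nosuid,nodev tmpfs /var/tmp/scratch
docker run --rm --tmpfs /run:rw,noexec,nosuid,size=64m alpine df -h /run   # tmpfs private to the container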
Best practices include explicitly limiting tmpfs size to a fraction of available RAM—typically 10-50% depending on workload—to prevent out-of-memory (OOM) deadlocks where the system cannot reclaim resources.[1] The default size is half of physical RAM, but oversizing risks system instability; use the size parameter (e.g., size=1G or size=20%) during mounting.[3] Monitoring usage is essential via tools like df for filesystem stats or /proc/meminfo for shared memory (Shmem) entries, ensuring early detection of excessive consumption.[1] Reserve tmpfs for non-sensitive, temporary data only, as its volatility means contents are discarded on unmount or reboot, reducing exposure but requiring backups for critical information.[37][36]
For risk mitigation, combine tmpfs with overlayfs to provide semi-persistent behavior, such as overlaying a writable tmpfs layer on persistent storage for logs, allowing volatile operation while selectively committing changes to disk for auditing.[38] Enable auditing with tools like auditd to log access and modifications on tmpfs mounts, detecting potential abuse such as unauthorized writes or denial-of-service attempts through resource exhaustion.[39] Adjust OOM scoring by setting oom_score_adj values (e.g., -500 for critical processes) to prioritize system stability over tmpfs-heavy workloads during memory pressure.[40]
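A sketch of the tmpfs-over-persistent-storage pattern described above, using overlayfs (directory names are illustrative):
mount -t tmpfs tmpfs /mnt/volatile
mkdir -p /mnt/volatile/upper /mnt/volatile/work
mount -t overlay overlay -o lowerdir=/srv/logs,upperdir=/mnt/volatile/upper,workdir=/mnt/volatile/work /var/log/app
# Writes land in the tmpfs upper layer; selected files can later be copied back to /srv/logs to persist them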
In modern deployments as of 2025, integrate tmpfs with mandatory access control (MAC) modules like SELinux or AppArmor for fine-grained labeling and confinement; SELinux supports extended attributes on tmpfs files via the fscontext mount option to enforce context-based policies, while AppArmor profiles can restrict container access to tmpfs paths.[41][42] In multi-tenant cloud environments, avoid unrestricted tmpfs usage by enforcing quotas through cgroups, which limit shared memory (including tmpfs/shmem) per tenant via memory.limit_in_bytes to prevent one user from starving others.[40][43]
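For instance, an SELinux file context can be pinned at mount time and a per-tenant cap applied through the cgroup v1 memory controller named above (label, paths, and limits are illustrative):
mount -t tmpfs -o size=512m,fscontext=system_u:object_r:tmp_t:s0 tmpfs /mnt/tenant1
mkdir /sys/fs/cgroup/memory/tenant1
echo 1G > /sys/fs/cgroup/memory/tenant1/memory.limit_in_bytes   # includes the tenant's tmpfs/shmem pages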