tmpfs

Tmpfs is a temporary file system in the Linux kernel that stores all files in virtual memory, utilizing kernel internal caches for rapid access while ensuring data is transient and lost upon unmounting or system reboot. It operates by dynamically growing and shrinking as needed, with contents that can be swapped to disk for memory management, though this can be disabled. Unlike persistent file systems, tmpfs does not write data to non-volatile storage, making it ideal for short-lived operations but unsuitable for long-term data retention. Developed as an extension of the earlier ramfs, tmpfs was introduced on December 1, 2001, by Christoph Rohland to address limitations such as the lack of size controls and swap support in ramfs. Subsequent enhancements, including updates by Hugh Dickins in 2007, KOSAKI Motohiro in 2010, Chris Down in 2020, and André Almeida in 2024, added features like Transparent Huge Pages (THP) support, access control lists (ACLs), extended attributes, user and group quotas, and NUMA memory policies. By default, tmpfs allocates up to half of available RAM for its size and inode count, configurable via mount options such as size (e.g., size=50% or absolute values in k/m/g units) and nr_inodes. Tmpfs is commonly mounted on directories like /tmp or /var/tmp for temporary file storage and on /dev/shm for POSIX shared memory operations, where it appears as "Shmem" in /proc/meminfo. It supports advanced options including case-insensitive lookups with UTF-8 encoding via casefold, 32-bit or 64-bit inode numbers, and initial mount permissions through mode, uid, and gid. While highly performant due to its in-memory nature, oversizing tmpfs can lead to system deadlocks, and quotas are not compatible with user namespaces. Compared to alternatives like ramfs (which lacks swapping and resizing) or block RAM disks (fixed-size and slower), tmpfs balances flexibility and efficiency for modern workloads.

Introduction

Definition and Purpose

Tmpfs is a temporary filesystem that stores all of its files in the system's virtual memory, utilizing a portion of RAM for file storage and appearing as a standard mounted directory to users and applications. It operates as a virtual memory-based filesystem, where file contents reside primarily in RAM rather than on persistent storage devices. Key characteristics include its volatility—files are discarded upon unmounting or system reboot—and size limitations tied to available RAM and swap space, with a default allocation of up to 50% of physical RAM. Despite these constraints, tmpfs supports standard filesystem operations such as reading, writing, and directory management, enabling seamless integration with existing tools and APIs. The primary purpose of tmpfs is to provide high-speed, non-persistent storage for short-lived data, prioritizing performance over durability in contrast to traditional disk-based filesystems like ext4 or UFS. By avoiding disk I/O, it facilitates rapid access for temporary files, overflow from swap space, and caching mechanisms that benefit from in-memory operations. This design makes it ideal for scenarios where data persistence is unnecessary, such as session-specific files or intermediate results in computational tasks, while ensuring that under memory pressure, contents can spill over into swap without immediate failure. Tmpfs emerged to address performance bottlenecks associated with disk access in common use cases, particularly /tmp for temporary files and shared memory via /dev/shm. In environments like /tmp, disk I/O for small, ephemeral files can introduce significant latency—often orders of magnitude slower than RAM access—leading to slowdowns in applications reliant on quick file creation and deletion. Similarly, for inter-process communication, tmpfs enables efficient shared memory segments, such as Linux's /dev/shm mount point, reducing overhead in data exchange between processes. These motivations underscore tmpfs's role in optimizing system responsiveness for volatile workloads without compromising the integrity of persistent data stores.
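The shared memory connection can be observed directly: files written under /dev/shm are tmpfs-backed and accounted as Shmem by the kernel. A minimal sketch, assuming a Linux system with /dev/shm mounted (file name and size are illustrative):

    grep Shmem /proc/meminfo                           # baseline shmem/tmpfs usage
    dd if=/dev/zero of=/dev/shm/demo bs=1M count=64    # create a 64 MB file in memory
    grep Shmem /proc/meminfo                           # Shmem rises by roughly 64 MB
    rm /dev/shm/demo                                   # deleting the file releases the memory at once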

Historical Development

tmpfs originated as a memory-based filesystem in SunOS, developed by Sun Microsystems to provide efficient temporary storage using the operating system's virtual memory resources. First implemented in SunOS 4.0 in late 1987, it was refined and documented in SunOS 4.1 by 1990, drawing from earlier concepts like RAM disks but introducing dynamic memory allocation without fixed disk partitions. This addressed limitations in prior Unix variants, such as SVR4, which lacked integrated volatile filesystems for high-performance, short-lived data. In Linux, tmpfs evolved from the shmfs (shared memory filesystem) and ramfs precursors, with full integration introduced on December 1, 2001, by Christoph Rohland as part of the kernel 2.4 series. Unlike ramfs, which did not support size limits or swap usage, tmpfs incorporated swap-backed storage from its inception, allowing it to expand beyond physical RAM while maintaining volatility. This adoption marked a key milestone, enabling widespread use for temporary files in Linux distributions and addressing the need for scalable in-memory storage. Subsequent enhancements included updates by Hugh Dickins in 2007 for better swap and resizing support, KOSAKI Motohiro in 2010 for Transparent Huge Pages (THP) compatibility, Chris Down in 2020 for ACLs and extended attributes, and André Almeida in 2024 for user and group quotas along with NUMA memory policies. BSD systems adopted tmpfs in the mid-2000s, building on earlier memory filesystems like MFS introduced in 4.2BSD in 1983. The modern tmpfs implementation was developed for NetBSD during the Summer of Code 2005 program by Julio M. Merino Vidal and first appeared in NetBSD 4.0 in 2007. It was subsequently ported to FreeBSD 7.0 in 2007 and OpenBSD 5.5 in 2014, providing a unified, efficient alternative to memory disks with support for dynamic sizing. Notable later developments included security enhancements, such as the noexec mount option to prevent execution of files on tmpfs mounts, reducing risks from temporary scripts or binaries. In the 2010s, adaptations for containerization emerged, with Docker introducing tmpfs volume mounts in version 1.10.0 (February 2016) for ephemeral, memory-resident storage in containers, enhancing isolation in cloud-native environments.

Core Concepts

Semantics and Operations

Tmpfs provides a filesystem interface that manifests as a hierarchical directory tree residing entirely in virtual memory, enabling rapid access to files without involving persistent storage devices. It adheres to POSIX standards for core file operations, including creation (creat or open with O_CREAT), reading (read), writing (write), deletion (unlink), and renaming (rename). Unlike traditional disk-based filesystems, all data in tmpfs is volatile and does not persist across system reboots or unmounts, ensuring that the contents are automatically cleared upon filesystem detachment. File sizes in tmpfs are handled dynamically, allowing them to expand or contract based on write and truncate operations, subject to an overall size limit configurable at mount time (defaulting to half of available RAM). Directory listings (readdir) and metadata management, such as permissions and timestamps, operate on inodes maintained in memory, providing low-latency responses typical of RAM-resident structures. When memory resources are exhausted and the size limit is approached, operations fail with the ENOSPC error, preventing further allocation and alerting applications to resource constraints. Several unique behaviors distinguish tmpfs from persistent filesystems. Under memory pressure, individual pages can be swapped to disk if swapping is enabled (the default behavior), maintaining functionality while deferring non-critical data to secondary storage. Operations such as file writes and directory modifications are atomic, ensuring consistency in concurrent access scenarios as per POSIX semantics. Due to its fully volatile nature, tmpfs requires no journaling mechanisms or filesystem consistency checks like fsck, simplifying recovery and eliminating overhead associated with durability guarantees. The interface for tmpfs is accessed through standard mounting tools, such as mount -t tmpfs tmpfs /dev/shm for shared memory or mount -t tmpfs tmpfs /tmp for temporary storage, where it integrates seamlessly into the directory hierarchy without special user-space drivers.
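The ENOSPC behavior described above is easy to observe on a small bounded instance. A hedged sketch, assuming root privileges on Linux (mount point and sizes are illustrative):

    mkdir -p /mnt/demo
    mount -t tmpfs -o size=16m tmpfs /mnt/demo
    dd if=/dev/zero of=/mnt/demo/fill bs=1M count=32   # fails partway: "No space left on device"
    df -h /mnt/demo                                    # shows the mount at 100% of its 16 MB cap
    umount /mnt/demo                                   # unmounting discards the contents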

Memory Management

Tmpfs stores all files within the system's virtual memory subsystem, utilizing the kernel's page cache to hold file data and metadata. This integration allows tmpfs pages to be treated as shared memory (shmem), visible as "Shmem" in /proc/meminfo and contributing to the "Shared" category in tools like free(1). Pages are allocated on demand when data is written, using the kernel's page allocator rather than pre-allocating contiguous blocks, enabling non-contiguous physical page distribution while maintaining virtual address continuity for file mappings. The filesystem exhibits dynamic growth, expanding as files are created or modified and contracting upon deletion, without fixed upfront reservations. This is capped by mount-time limits, such as the size option (e.g., size=1G, or size=50% of physical RAM by default), which enforces an upper bound on total consumption including both RAM and swap usage. Inode limits can also be set via nr_inodes, defaulting to half the number of physical RAM pages or lowmem pages, preventing excessive metadata overhead. These limits interact with broader configurations, such as those tunable via /proc/sys/vm/ sysctls (e.g., vm.overcommit_memory for allocation policy), ensuring tmpfs adheres to system-wide memory heuristics. Swapping is integrated seamlessly, with tmpfs pages eligible for eviction to swap space under memory pressure, similar to anonymous memory mappings. This enables tmpfs instances to exceed available physical RAM, but introduces performance trade-offs due to the latency of swap I/O. The noswap mount option (available since Linux 6.4) disables this behavior, keeping all data in RAM exclusively. In high-pressure scenarios, overcommitment of tmpfs can trigger the Out-Of-Memory (OOM) killer; however, since tmpfs memory is not easily reclaimable by the OOM handler, excessive sizing may lead to system deadlock. For performance, tmpfs eliminates physical disk seeks entirely for read and write operations when data resides in RAM, relying on the page cache for buffering and caching. This results in near-native memory speeds for I/O, though swap usage degrades throughput. To optimize large allocations, tmpfs supports Transparent Huge Pages (THP) when the kernel is built with CONFIG_TRANSPARENT_HUGEPAGE, with options like huge=always or huge=within_size controlling usage; this reduces TLB overhead for workloads with big files, improving efficiency on systems with sufficient huge page reserves. System-wide THP behavior for shmem/tmpfs is tunable via /sys/kernel/mm/transparent_hugepage/shmem_enabled.
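These accounting and tuning knobs can be inspected from user space. A short sketch, assuming Linux 6.4 or later for the noswap option and a THP-enabled kernel (mount point and size are illustrative):

    grep Shmem /proc/meminfo                                # system-wide shmem/tmpfs pages
    cat /sys/kernel/mm/transparent_hugepage/shmem_enabled   # system-wide THP policy for shmem
    # Mount an instance that never swaps and uses huge pages for large files:
    mount -t tmpfs -o size=2g,noswap,huge=within_size tmpfs /mnt/fast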

Implementations

Linux

Tmpfs was introduced in the Linux kernel 2.4 series in December 2001, implemented within the shared memory (shmem) code in mm/shmem.c to provide a temporary, memory-resident filesystem. It serves as the backing for shared anonymous memory mappings and System V shared memory segments internally, even when the user-visible tmpfs is disabled via kernel configuration. This integration leverages the shmem subsystem, where operations like shmem_file_setup create unlinked tmpfs files to back shared memory without requiring an explicit mount. A key feature of the Linux tmpfs is its support for shared memory, typically mounted at /dev/shm to enable System V and POSIX inter-process communication mechanisms, such as those used by glibc since version 2.2. Mount options include nr_blocks to cap the total number of file blocks and nr_inodes to limit inode allocation, alongside mode for setting default permissions on the root of the mount. Additionally, tmpfs integrates with control groups (cgroups) through the memory controller, which accounts for shmem/tmpfs usage in cgroup memory limits, enabling containerized environments to restrict shared memory consumption. In Android, derived from the Linux kernel, tmpfs is employed for temporary app data storage, such as caches and shared memory segments, to optimize performance on resource-constrained devices. The implementation has evolved significantly since its inception. Post-2015 developments include enhanced fanotify event reporting for tmpfs, with kernel patches from around 2018 adding support for additional event types like create and delete, and later improvements in 2021 enabling file handle reporting for better monitoring in containerized setups. Tmpfs also gained efficient asynchronous I/O support via io_uring, introduced in kernel 5.1, allowing high-performance operations on memory-backed files without traditional syscalls. In 2024, kernel 6.13 development introduced support for arbitrary-sized large folios in tmpfs, improving read performance by up to 20% for memory-intensive operations. Tmpfs also works with overlay filesystems, permitting tmpfs as an upper layer in overlayfs configurations for volatile, in-memory modifications atop slower backends. At the code level, tmpfs files are backed by shmem structures, with pages indexed via a radix tree (an xarray in current kernels) for efficient sparse addressing and retrieval in the page cache. Large page support is provided through transparent huge pages (THP) via the huge mount option, which promotes 2MB or larger pages for reduced TLB overhead, though explicit hugetlbfs integration requires additional configuration for reserved huge pages.
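The overlay arrangement mentioned above can be sketched as follows, with tmpfs supplying the upper and work directories (which overlayfs requires to live on the same filesystem); all paths are illustrative assumptions:

    mkdir -p /mnt/ovl /mnt/merged
    mount -t tmpfs tmpfs /mnt/ovl              # volatile storage for the writable layer
    mkdir /mnt/ovl/upper /mnt/ovl/work
    mount -t overlay overlay \
        -o lowerdir=/usr,upperdir=/mnt/ovl/upper,workdir=/mnt/ovl/work \
        /mnt/merged
    # Writes to /mnt/merged land in the in-memory upper layer and vanish on unmount.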

BSD and Derivatives

tmpfs was introduced in FreeBSD 7.0 in 2007, ported from NetBSD as part of a project led by Julio M. Merino Vidal, with further adaptations by Rohit Jalan and others. This implementation provides an efficient in-memory filesystem that stores both file data and metadata in virtual memory, primarily using physical RAM but spilling file data to swap space under memory pressure to prevent system instability; metadata, however, remains non-swappable to ensure filesystem integrity. Unlike earlier memory-backed approaches such as the md(4) virtual disk driver, tmpfs offers a dedicated filesystem interface without requiring a backing block device, emphasizing low-latency operations for temporary storage needs like /tmp. Mounting a tmpfs filesystem in FreeBSD is performed via the mount(8) command with the -t tmpfs option, for example: mount -t tmpfs -o size=1g tmpfs /mnt/tmp. Key mount options include size to limit total capacity (defaulting to half of available RAM plus swap), inodes for maximum file nodes, uid and gid for root ownership, and mode for permissions, allowing fine-tuned control over resource usage. In environments with ZFS as the root filesystem, tmpfs is frequently mounted on directories like /tmp or /var/tmp to combine the speed of in-memory storage with ZFS's persistent, snapshot-capable datasets for other system areas. The design leverages FreeBSD's virtual memory subsystem to handle allocation, mitigating fragmentation risks in low-RAM scenarios by paging out file contents while keeping structural elements in core memory. In BSD derivatives, implementations vary while sharing conceptual roots with the original tmpfs from Sun Microsystems, which pioneered memory-based filesystems in Solaris 2.1 (1992) using virtual memory structures for efficient temporary storage. NetBSD introduced tmpfs in version 4.0 (2007), mounting it via the dedicated mount_tmpfs(8) utility with options for size, nodes, and ownership similar to FreeBSD's, and it supports swap-backed expansion for data beyond physical RAM. OpenBSD added tmpfs support in version 5.5 (2014) through mount_tmpfs(8), but disabled the kernel module in 6.0 (2016) due to insufficient maintenance; users instead rely on the mfs(8) facility, which creates a memory-backed UFS instance via md(4)-like virtual devices for comparable in-RAM temporary filesystems. macOS, as a Darwin-based BSD derivative, added native support for tmpfs via the mount_tmpfs command in macOS 11 (2020), requiring administrative privileges and offering options like size and mode for creating RAM-resident mounts, though system paths such as /private/tmp remain on persistent APFS volumes rather than tmpfs by default. In macOS 13 Ventura (2022), /private/tmp received privacy enhancements through reduced world-writability (mode 1775 instead of 1777), limiting inter-process access to temporary files without shifting to an in-memory backend. These adaptations highlight BSD's focus on secure, efficient temporary storage, with quota-like controls achieved via the size option rather than per-user soft or hard limits.
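For comparison, equivalent mounts on the two main BSD implementations might look like the following sketch (sizes are examples; see tmpfs(5) on FreeBSD and mount_tmpfs(8) on NetBSD for the authoritative option lists):

    mount -t tmpfs -o size=1g,mode=1777 tmpfs /tmp   # FreeBSD
    mount_tmpfs -s 256m tmpfs /tmp                   # NetBSD, via the dedicated utility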

Other Systems

The original implementation of tmpfs appeared in SunOS 4.1, where it served as a virtual memory-based filesystem leveraging the operating system's paging mechanisms to store files directly in RAM for improved performance over disk-based temporary storage. This design allowed tmpfs to function as a temporary filesystem without dedicated disk resources, with files paged in and out as needed based on memory availability. In later Solaris releases, tmpfs evolved to integrate more deeply with the kernel's virtual memory subsystem, enabling mounts like /tmp to use swap space as an extension when physical memory was exhausted, while maintaining its core semantics. For scenarios requiring block-device-like RAM disks, Solaris provided the loopback file interface (lofi), which could back devices with memory or null files to simulate high-speed, non-persistent storage. Although tmpfs remains supported in Solaris 11, some deployments have shifted toward mounting temporary directories on ZFS file systems for added features like snapshots, though this does not replace tmpfs's memory-centric approach. Microsoft Windows lacks a native tmpfs equivalent, as its file system architecture does not include a built-in virtual memory-backed temporary file system. Instead, third-party tools like ImDisk provide RAM disk functionality by creating virtual block devices entirely in physical memory, allowing users to mount high-speed, volatile volumes for temporary data storage. ImDisk supports dynamic allocation of RAM for these disks, with performance benefits for I/O-intensive tasks, though data is lost on reboot or power failure similar to tmpfs. For Linux compatibility on Windows, Windows Subsystem for Linux 2 (WSL2), introduced in 2019, provides a full Linux environment where tmpfs mounts operate as expected, often using the host's memory for /tmp and other temporary mounts to enhance performance in cross-platform workflows. In AIX, the /tmp directory is typically implemented as a disk-backed Journaled File System (JFS or JFS2) rather than a memory-based one, though administrators can configure RAM disks using ramdisk devices for temporary needs in performance-critical scenarios. For embedded systems, the QNX RTOS offers a RAM-based "filesystem" under /dev/shmem, which functions analogously to tmpfs by storing read/write files directly in memory, ensuring low-latency access suitable for deterministic operations in automotive and industrial applications. This approach prioritizes volatility and speed, with /tmp often symlinked to this memory area during builds or runtime. Documentation of early SunOS tmpfs internals remains limited beyond the seminal design papers, complicating precise replication of those implementations in modern contexts. In cloud virtual machines, tmpfs-like memory-backed storage has gained traction; for instance, Amazon Linux 2023 on AWS EC2 defaults to mounting /tmp as tmpfs, limited to 50% of available RAM to balance performance gains against memory constraints in ephemeral instances.
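On Solaris, the boot-time activation mentioned above is driven by /etc/vfstab; the stock entry for /tmp typically takes the following single-line form (fields: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options):

    swap  -  /tmp  tmpfs  -  yes  -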

Benefits and Limitations

Advantages

Tmpfs provides significant performance advantages over traditional disk-based filesystems by operating entirely in memory, eliminating disk latency and seek times for read and write operations. This results in faster I/O for temporary files, making it particularly suitable for directories like /tmp, browser caches, and build artifacts where quick access is critical. For instance, writing small files to tmpfs avoids the overhead of disk flushes and journaling, leading to speedups of up to 20 times in workloads such as data generation scripts compared to disk-backed filesystems like ext4. Additionally, by keeping operations in RAM, tmpfs reduces wear on solid-state drives (SSDs), as temporary data does not contribute to write cycles. In terms of resource efficiency, tmpfs integrates seamlessly with the kernel's virtual memory subsystem, allowing it to share allocations with running processes and dynamically grow or shrink based on usage without requiring dedicated disk space. This enables handling of large temporary datasets—such as intermediate results in computations—entirely in RAM until swap is needed as a fallback, optimizing overall resource utilization. Unlike fixed-size ramdisks, tmpfs only consumes RAM proportional to the actual data stored, promoting efficient memory use. Tmpfs boosts performance in various applications, including compilers, where faster linking and compilation steps occur due to reduced I/O, as seen in builds that benefit from options like GCC's -pipe. In databases, it supports efficient temporary tables and shared memory segments via /dev/shm, accelerating query processing. On mobile devices, the minimized disk activity contributes to energy savings by avoiding power-intensive I/O operations on storage hardware.
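A rough way to see the I/O gap is to time identical writes against a tmpfs mount and a disk-backed path; a hedged micro-benchmark sketch (results vary widely by hardware, and oflag=direct keeps the disk run from being flattered by the page cache):

    mount -t tmpfs -o size=512m tmpfs /mnt/ramtest
    time dd if=/dev/zero of=/mnt/ramtest/file bs=1M count=256               # memory-speed writes
    time dd if=/dev/zero of=/var/tmp/diskfile bs=1M count=256 oflag=direct  # disk-bound writes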

Disadvantages

Tmpfs exhibits significant volatility, as all data stored within it resides in virtual memory and is lost upon unmounting, system reboot, or power failure, rendering it unsuitable for any applications requiring persistent or critical data storage. This inherent temporality stems from tmpfs's design, where no files are written to permanent storage like hard drives, ensuring that abrupt interruptions—such as out-of-memory (OOM) conditions or hardware failures—result in complete data loss without recovery options. Resource contention poses another key limitation, as tmpfs consumes physical RAM and, if enabled, swap space, potentially starving other system processes of memory and leading to OOM-killer invocations or deadlocks in memory-constrained environments. Oversizing tmpfs instances exacerbates this risk, as the default limit of half the physical RAM can be exceeded, preventing the OOM handler from freeing sufficient resources and causing system instability. In systems under heavy load, this competition for memory can degrade overall performance, particularly when tmpfs usage approaches available limits and triggers ENOSPC errors for further writes. Swapping can be disabled using the noswap mount option to avoid swap-related overhead. Security concerns arise primarily from tmpfs's potential for denial-of-service (DoS) attacks, where users with write permissions can overfill the filesystem—especially if mounted without size or inode limits—exhausting all RAM and swap, thereby rendering the system unresponsive. Mounting tmpfs with unlimited parameters (e.g., size=0 or nr_inodes=0) amplifies this vulnerability by allowing unbounded growth, creating a larger attack surface in writable locations like /tmp, where malicious actors could place or exploit setuid binaries to escalate privileges before data volatility intervenes. Additionally, the lack of user namespace support for quotas in tmpfs further heightens these risks in multi-user or containerized setups. Scalability limits become evident with very large files or prolonged usage, as tmpfs relies on paging mechanisms that introduce overhead when RAM is insufficient, leading to increased latency and reduced throughput compared to disk-based filesystems for bulk operations.
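The difference between an unbounded and a bounded instance is the crux of the DoS risk above. A cautionary sketch, not for production hosts (mount points are illustrative):

    mount -t tmpfs -o size=0 tmpfs /mnt/unbounded                    # size=0 removes the cap: any writer can exhaust RAM and swap
    mount -t tmpfs -o size=256m,nr_inodes=10000 tmpfs /mnt/bounded   # overfilling fails safely with ENOSPC instead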

Usage and Configuration

Mounting Options

Tmpfs filesystems are mounted using the standard mount command on both Linux and BSD systems, specifying the filesystem type and desired options to control size, permissions, and other behaviors. On Linux, a basic mount can be performed with mount -t tmpfs -o size=512m tmpfs /mnt/tmp, where the -o size=512m option limits the filesystem to 512 megabytes of memory. Similarly, on FreeBSD and derivatives, the command is mount -t tmpfs -o size=512m tmpfs /mnt/tmp, using the same syntax for size specification. Key mounting options include size to set a quota in bytes, supporting suffixes like k, m, g, or percentages of available RAM (e.g., size=50%); without it, Linux defaults to half of physical RAM, while BSD defaults to all available memory and swap space. The mode option sets permissions for the root directory in octal notation (e.g., mode=1777 for world-writable with the sticky bit), uid and gid assign ownership (e.g., uid=1000,gid=1000), and nr_inodes (Linux) or inodes (BSD) limits the number of files (e.g., nr_inodes=10000). For security, noexec prevents execution of binaries or scripts, and nosuid disables setuid and setgid bits on the mount. To automate mounts at boot, add entries to /etc/fstab; for example, on Linux: tmpfs /tmp tmpfs defaults,size=50% 0 0, which mounts /tmp with default options and a size limit of 50% of RAM. On BSD systems, a similar entry is tmpfs /tmp tmpfs rw,size=512m 0 0. Under systemd on Linux, /tmp is automatically mounted as tmpfs if not already defined in /etc/fstab, using defaults like size=50% and mode=1777. Cross-platform variations include Linux's nr_inodes for inode quotas versus BSD's inodes flag, and differing defaults for maximum size allocation.
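Putting these options together, representative /etc/fstab entries on Linux might look like the following sketch (sizes and modes are examples):

    tmpfs  /tmp      tmpfs  defaults,size=50%,mode=1777,nosuid,nodev  0  0
    tmpfs  /dev/shm  tmpfs  defaults,size=2g                          0  0

After a reboot or mount -a, running findmnt -t tmpfs lists all active tmpfs instances with their options for verification.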

Security and Best Practices

To enhance security when using tmpfs, administrators should apply restrictive mount options such as noexec to prevent the execution of binaries stored on the filesystem, nosuid to disable setuid and setgid bits that could elevate privileges, and nodev to block the interpretation of device files. These options are standard hardening for world-writable filesystems and help mitigate risks like privilege escalation or unauthorized code execution on volatile storage. In containerized environments, tmpfs mounts are inherently private and cannot be shared between containers, providing isolation for temporary data without persistence to the host disk, though data may still swap to disk if memory pressure occurs. Best practices include explicitly limiting tmpfs size to a fraction of available RAM—typically 10-50% depending on workload—to prevent out-of-memory (OOM) deadlocks where the kernel cannot reclaim resources. The default size is half of physical RAM, but oversizing risks instability; use the size parameter (e.g., size=1G or size=20%) during mounting. Monitoring usage is essential via tools like df for filesystem stats or /proc/meminfo for shared memory (Shmem) entries, ensuring early detection of excessive consumption. Reserve tmpfs for non-sensitive, temporary data only, as its volatility means contents are discarded on unmount or reboot, reducing exposure but requiring backups for critical information. For risk mitigation, combine tmpfs with overlayfs to provide semi-persistent behavior, such as overlaying a writable tmpfs layer on persistent storage for logs, allowing volatile operation while selectively committing changes to disk for auditing. Enable auditing with tools like auditd to log access and modifications on tmpfs mounts, detecting potential abuse such as unauthorized writes or denial-of-service attempts through resource exhaustion. Adjust OOM scoring by setting oom_score_adj values (e.g., -500 for critical processes) to prioritize system stability over tmpfs-heavy workloads during memory pressure. In modern deployments as of 2025, integrate tmpfs with mandatory access control (MAC) modules like SELinux or AppArmor for fine-grained labeling and confinement; SELinux supports extended attributes on tmpfs files via the fscontext mount option to enforce context-based policies, while AppArmor profiles can restrict container access to tmpfs paths. In multi-tenant cloud environments, avoid unrestricted tmpfs usage by enforcing quotas through cgroups, which limit memory usage (including tmpfs/shmem) per tenant via memory.limit_in_bytes (cgroup v1) or memory.max (cgroup v2) to prevent one user from starving others.
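A combined hardening example, assuming Linux and illustrative values, applies the restrictive options and a conservative size cap, then monitors usage and lowers the OOM priority of a critical process:

    mount -t tmpfs -o size=20%,mode=1777,noexec,nosuid,nodev tmpfs /tmp
    df -h /tmp                               # capacity and current usage
    grep Shmem /proc/meminfo                 # kernel-wide shmem/tmpfs consumption
    echo -500 > /proc/1234/oom_score_adj     # PID 1234 is a placeholder for a critical service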

    shmem: Add user and group quota support for tmpfs ; Subject: [PATCH V2 0/6] shmem: Add user and group quota support for tmpfs ; Date: Thu, 20 Apr ...