QFS

The Quick File System (QFS) is a high-performance, scalable file system originally developed in the early 1990s by Large Storage Configurations (LSC), a company acquired by Sun Microsystems in 2001. It enables efficient storage and access of large-scale data volumes across multiple hosts in shared environments. It integrates closely with archival and storage management tools to support virtually unlimited capacity, direct I/O operations, and high throughput for applications handling text, images, audio, video, and other data types. Key features include metadata separation for faster access, support for disk quotas, and compatibility with Storage Area Networks (SANs) and Sun Cluster for fault-tolerant configurations. Following Sun's acquisition by Oracle in 2010, QFS became part of Oracle Hierarchical Storage Manager (OHSM). In 2019, Oracle announced the end of sale for OHSM, though QFS remains notable for its role in legacy and archival storage solutions, with open-source derivatives such as Versity Storage Manager extending its capabilities to Linux and cloud integrations.

Overview

Definition and Purpose

QFS, or Quick File System, is a high-performance file system originally developed by Large Storage Configurations (LSC), acquired by Sun Microsystems in 2001, and later maintained by Oracle for managing large-scale data in UNIX environments. It presents a standard UNIX file system interface while enabling efficient handling of diverse file types, including text, images, audio, and video, all within a single logical file system. As a standalone or shared distributed file system, QFS supports mounting on multiple hosts, making it suitable for storage area networks (SANs).

The primary purpose of QFS is to deliver device-rated data access speeds for multiple concurrent users in data-intensive applications, such as scientific computing, media processing, and enterprise workloads. It addresses the challenges of massive datasets by offering virtually unlimited capacity in file system and file sizes, up to 2^63 bytes per file, without compromising performance. QFS supports parallel I/O through direct I/O capabilities and shared reader/writer functionality, ensuring high throughput in environments requiring rapid data retrieval and manipulation.

At its core, QFS achieves these objectives through striped data distribution across multiple disks, which balances I/O load and maximizes throughput using configurable stripe widths. Metadata striping further accelerates lookups by spreading metadata operations across dedicated devices, reducing latency in directory navigation. Additionally, QFS facilitates seamless tiered storage management, transitioning data from high-speed disk caches to archival media such as tape for cost-effective long-term retention.

QFS was introduced as an integral component of the Storage and Archive Manager (SAM), a package for hierarchical storage management, and is commonly referred to as SAM-QFS to reflect this tight integration, which extends its disk-based performance with automated archiving capabilities. This combination positions QFS as a foundational tool for environments balancing immediate access needs with archival scalability.

Key Components

Sun QFS is composed of several core software elements and storage configurations that enable its high-performance operation in shared environments. At the heart of the system is the metadata server, which manages the file namespace, attributes, locking, and block allocation. This server maintains a centralized view of the file system's directory structure and directs clients to the appropriate data locations, ensuring consistency across multiple hosts via network communication, typically over TCP on port 7105. Metadata operations, such as file creation and permission checks, are routed through the metadata server, which uses in-memory structures backed by on-disk logs for durability.

Key daemons support the system's functionality, including the sam-fsd master daemon, which starts the other processes and ensures automatic restarts on failure, and the sam-sharefsd daemon, which handles client-server communication for each mounted shared file system. The sam-archiverd daemon automates file archiving to tape or other archival media, while sam-stagerd manages staging of archived files back to disk and sam-releaser frees disk cache space based on configurable water marks. Together, these daemons provide hierarchical storage management within SAM-QFS.

On the storage side, QFS relies on disk devices organized through an integrated volume management layer that abstracts hardware into logical units defined in the master configuration file (mcf). These include metadata devices (mm) for storing file system metadata and data devices (md for striped or round-robin data with dual allocation, mr for single allocation), supporting up to 252 partitions per file system for scalable capacity. Striped groups (gXXX) allow distribution across multiple disks or devices for parallelism, with configurable disk allocation units (DAUs) starting from 16 KB. Integration with archival media, such as tape libraries, is facilitated through automated catalogs and queues managed by daemons that track volume serial numbers (VSNs) and handle recall operations, enabling seamless tiering between disk and archival media in a unified namespace.
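As a sketch of how these device types fit together, a minimal mcf for a hypothetical ma-type file system might pair one metadata (mm) device with two data (mr) devices; all device paths, ordinals, and names below are invented for illustration, and real configurations vary by release:

```
# Equipment              Eq  Eq   Family  Device  Additional
# Identifier             Ord Type Set     State   Parameters
qfs1                     10  ma   qfs1    on
/dev/dsk/c1t0d0s0        11  mm   qfs1    on      # metadata device
/dev/dsk/c1t1d0s0        12  mr   qfs1    on      # data device
/dev/dsk/c1t2d0s0        13  mr   qfs1    on      # data device
```

After editing the mcf, such a file system would be created with sammkfs and mounted like any other Solaris file system.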
Network infrastructure, such as Fibre Channel for data paths and Ethernet for metadata traffic, is essential for multi-host coordination. QFS supports multi-host access with options for shared mounting, including multi-reader capabilities and integration with Sun Cluster for high availability. Overall, QFS presents a single, unified namespace to all clients, achieved through centralized metadata management and client-side coordination.

Architecture

Metadata Management

In Sun QFS, metadata management is handled by a dedicated metadata server (mdserver), which stores inode information, directory structures, and file extents in a centralized manner to ensure efficient organization and access. The mdserver operates on specialized metadata devices, separate from the data devices, allowing metadata to be hosted on dedicated hardware for optimized performance and reliability. This separation enables the system to support up to 2³²-1 files per file system, approximately 4.29 billion, by dynamically allocating inode space on these devices.

The inode structure in QFS is extended to accommodate large-scale needs, with each inode measuring 512 bytes and supporting files up to 8 EiB (2⁶³ bytes). Inodes include attributes for archival policies, such as flags controlling release and staging behavior, which integrate with the Storage and Archive Manager (SAM) to manage data lifecycle transitions across storage tiers. Metadata can be striped across multiple disks on the mdserver for enhanced redundancy and speed, using techniques such as mirroring to prevent single points of failure and accelerate access.

Clients access metadata through network queries to the mdserver for path resolution and file operations, with the sam-sharefsd daemon facilitating shared access in multi-host environments over TCP. To minimize latency, QFS implements client-side caching of inode and directory attributes, governed by configurable timeouts (e.g., the meta_timeo parameter, default 3 seconds), which balance freshness with performance. Leases issued by the mdserver (read, write, append) ensure consistency and are renewable at intervals of up to 600 seconds. This architecture also integrates with SAM for archiving metadata alongside data, preserving inode details in archive sets.
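The effect of the client-side attribute cache can be sketched in Python; this is a conceptual model of the meta_timeo behavior described above, not QFS source code, and the class and callback names are invented for illustration:

```python
import time

class AttrCache:
    """Conceptual model of a client-side attribute cache with a
    meta_timeo-style expiry (QFS default: 3 seconds).  Attributes
    fetched from the metadata server are reused until they go stale."""

    def __init__(self, fetch, meta_timeo=3.0, clock=time.monotonic):
        self.fetch = fetch          # callback that queries the metadata server
        self.meta_timeo = meta_timeo
        self.clock = clock
        self._cache = {}            # path -> (attrs, fetch_time)

    def getattr(self, path):
        now = self.clock()
        hit = self._cache.get(path)
        if hit and now - hit[1] < self.meta_timeo:
            return hit[0]           # still fresh: no server round-trip
        attrs = self.fetch(path)    # stale or missing: ask the server
        self._cache[path] = (attrs, now)
        return attrs
```

A longer meta_timeo reduces metadata-server traffic at the cost of clients possibly seeing attributes up to that many seconds out of date, which is the freshness/performance trade-off the parameter controls.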

Storage Integration

QFS integrates primary disk storage with secondary archival systems through its hierarchical storage management layer, leveraging the SAM-QFS framework to automate data movement across tiers. This approach combines high-performance disk pools for active data with cost-effective media such as magnetic tapes and optical platters for long-term retention, enabling seamless transitions based on file access frequency and capacity needs. The system supports up to four archival copies per file, which can be distributed across local disk, tape libraries, optical jukeboxes, or remote locations to enhance redundancy and disaster recovery.

Volume management in QFS facilitates dynamic allocation of disk volumes, allowing administrators to expand storage pools on the fly without downtime by adding devices through the master configuration file (/etc/opt/SUNWsamfs/mcf) and running the samgrowfs command. Data striping across multiple disks is achieved via configurable disk allocation units (DAUs), ranging from 8 KB to 65,528 KB, which enable parallel I/O operations for improved throughput in shared environments. This striping supports up to 128 devices per striped group, optimizing performance for large-scale data access.

Archival policies are defined in the archiver.cmd file, where user-specified rules govern the staging and unstaging of files according to criteria such as file age, size, path, and access patterns, with a default archive age of four minutes triggering initial copies. For large files, QFS employs partial file recall, segmenting them into minimum stage-in units for selective retrieval, which reduces latency by fetching only the requested portions while queuing the remainder. These policies integrate with the releaser process, which monitors high water marks on disk space to automate tiering decisions.

QFS supports deployment on a storage area network (SAN) over Fibre Channel for shared access, enabling multiple hosts to concurrently read and write the same file system through distributed I/O configurations set in /etc/opt/SUNWsamfs/defaults.conf.
This setup supports petabyte-scale storage environments by automating tiering across disk caches and archival media, ensuring efficient data placement without manual intervention.
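An archiver.cmd policy of the kind described above might look like the following sketch. The file system name, archive set name, media type code, and VSN ranges are all hypothetical, and the exact directive syntax varies across SAM-QFS releases:

```
# archiver.cmd -- illustrative sketch only
fs = qfs1
logfile = /var/opt/SUNWsamfs/archiver.log

userdata .
    1 4m        # copy 1 four minutes after the file stops changing
    2 1h        # copy 2 one hour later, e.g. to a second tape pool

vsns
userdata.1 tp VSN00[0-4]
userdata.2 tp VSN00[5-9]
endvsns
```

The indented lines under the archive set define the copy number and archive age; the vsns section maps each copy to a media type and a range of volume serial numbers.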

Features

Performance Capabilities

Sun QFS provides high-performance data access through its integrated volume management and parallel I/O capabilities, allowing files to be striped across multiple disks without requiring a separate volume manager. It supports both paged I/O, which uses the operating system's virtual memory caching, and direct I/O for efficient transfer of large sequential blocks directly between user memory and storage devices, reducing overhead for high-throughput applications.

The disk allocation unit (DAU) is tunable from 16 KB to 65,528 KB in multiples of 8 KB (default 64 KB), enabling optimization for specific workloads, such as larger DAUs for sequential access to minimize metadata operations. Striped allocation distributes data across devices in round-robin or explicit stripe patterns, supporting up to 128 striped groups for parallel reads and writes at device-rated speeds. Additional performance features include configurable limits on concurrent I/O threads (default 16, up to 256), write throttling via the wr_throttle parameter, and readahead/writebehind buffering with a tunable flush-behind setting.

In shared environments, the qwrite mount option enables simultaneous reads and writes by multiple clients, while metadata operations are optimized by separating them onto dedicated devices to reduce seek times. These mechanisms support high-speed handling of large files up to 2^63 bytes, making QFS suitable for applications requiring rapid access to terabyte-scale datasets, such as media processing and scientific computing on Solaris platforms.
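The interaction of DAU size and stripe width can be made concrete with a small Python sketch. This is a conceptual model of round-robin striping, not the actual QFS allocator; the function name and defaults are invented for illustration:

```python
def device_for_offset(offset, dau_kb=64, stripe=1, ndevices=4):
    """Map a byte offset within a file to the device (by index) that
    holds it under simple round-robin striping: `stripe` DAUs are
    written to one device before rotating to the next.  Conceptual
    model only -- not QFS source code."""
    dau = dau_kb * 1024
    chunk = dau * stripe            # bytes placed on a device per pass
    return (offset // chunk) % ndevices
```

With the defaults above, each consecutive 64 KB of a file lands on the next device in the group, which is how striping lets a single large sequential read or write proceed in parallel across all members.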

Scalability and Reliability

Sun QFS scales to virtually unlimited storage capacities by supporting up to 252 device partitions per file system, each up to 16 TB, with no practical limit on the number of file systems or files (up to 2^32-1 files per file system, in practice limited to around 10^8 by device size). It uses 64-bit addressing for files and integrates with storage area networks (SANs) for shared access across multiple hosts, with dynamic addition of devices without downtime. The global namespace is maintained via a single active metadata server (with support for designating backup metadata servers in shared setups), providing a unified view of all data and enabling horizontal scaling through striped groups and incremental disk additions.

Reliability is enhanced by separating metadata onto dedicated disks (e.g., solid-state or mirrored mm devices) from file data (md, mr, or striped gXXX devices), improving fault isolation and allowing metadata mirroring. The system supports Sun Cluster for high availability, with automatic failover between metadata servers using configurable lease times (15-600 seconds, default 30 seconds) to ensure consistency in multireader/multiwriter configurations. Integrity is maintained through validation records on inodes, directories, and blocks, enabling fast recovery without full scans: multiterabyte file systems can remount immediately after failure thanks to serial writes and identification records. The samfsck utility repairs inconsistencies, and integration with SAM-QFS provides archival copies for long-term data protection. These features ensure robust operation in environments subject to disk failures or node outages.
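The water-mark-driven releaser mentioned in the Features sections can be sketched as a simple policy loop. This is a conceptual model, not sam-releaser itself; the function signature, percentages, and candidate format are invented for illustration:

```python
def run_releaser(used_pct, high=80, low=70, candidates=()):
    """Conceptual model of a high/low water-mark releaser: when disk
    cache utilization crosses the high water mark, release (truncate
    on disk) already-archived files, in policy-priority order, until
    utilization falls to the low water mark.  Illustrative only.

    candidates: iterable of (name, pct_of_cache, is_archived)."""
    if used_pct < high:
        return used_pct, []         # below the high water mark: nothing to do
    released = []
    for name, pct, is_archived in candidates:
        if used_pct <= low:
            break                   # reached the low water mark
        if is_archived:             # only archived files may be released
            used_pct -= pct
            released.append(name)
    return used_pct, released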

History and Development

Origins and Acquisitions

QFS originated from the work of Large Storage Configurations (LSC), a company founded in the early 1990s to address the growing demands of large-scale data storage in high-performance computing. LSC developed QFS as a scalable, high-performance file system tailored for supercomputing applications, emphasizing efficient data handling in environments requiring massive parallelism. The system's design focused on supporting parallel access across multiple nodes, making it particularly suited to the massively parallel processing (MPP) architectures common in scientific and engineering workloads.

The initial commercial release of QFS marked LSC's entry into the market for advanced storage solutions that could handle terabyte-scale datasets with low latency and high throughput. Early deployments targeted supercomputing sites where traditional file systems struggled with the volume of data generated by such applications. LSC's innovation lay in QFS's extent-based allocation and striping mechanisms, which enabled concurrent I/O from multiple clients without significant bottlenecks.

A pivotal milestone came in 2001 when Sun Microsystems acquired LSC for approximately $74 million in stock, integrating QFS into Sun's broader storage portfolio. The acquisition, announced in February and completed in May, provided Sun with a robust file system that enhanced its offerings for enterprise and high-performance computing. Post-acquisition, QFS was tightly integrated with Sun's Storage and Archive Manager (SAM), forming the SAM-QFS suite to support hierarchical storage management, in which disk caches fed tape archives for long-term retention. The combination allowed seamless data tiering and improved scalability in clustered environments by distributing metadata and I/O loads across nodes. Sun's ownership further advanced QFS's clustering capabilities, enabling shared access from multiple Solaris hosts in storage area networks (SANs), which bolstered its adoption in data-intensive supercomputing clusters.
In 2010, Oracle Corporation completed its acquisition of Sun Microsystems for $7.4 billion on January 27, bringing QFS under Oracle's control as part of its expanded hardware and storage ecosystem. This corporate shift eventually led to the rebranding of the SAM-QFS product as Oracle Hierarchical Storage Manager (HSM) in subsequent releases, aligning it with Oracle's enterprise storage strategy.

Open Source Releases and End-of-Life

In March 2008, Sun Microsystems released the source code for SAM-QFS, which encompasses the QFS file system, under the Common Development and Distribution License (CDDL) as part of the OpenSolaris project. This open-source contribution gave the community broader access to the codebase, enabling community-driven enhancements and adaptations, including initial client support for Linux environments derived from the shared code. Following Oracle's acquisition of Sun in 2010 and the discontinuation of OpenSolaris, the SAM-QFS codebase was carried forward by the illumos project, an open-source fork aimed at continuing development of Solaris-derived components. Community efforts through illumos and its distributions, such as OpenIndiana, have maintained the software for Solaris-compatible systems, with ongoing packaging and updates available.

A notable derivative emerged in 2014 with Versity Storage Manager (VSM), a product built on the open-source SAM-QFS foundation and ported to Linux. VSM incorporates modern features such as cloud integration while preserving compatibility with existing SAM-QFS archives.

Oracle announced the end of Premier Support for Oracle Hierarchical Storage Manager (HSM, the rebranded SAM-QFS) version 6.1 in April 2021, marking the cessation of new development and enhancements. Extended Support concluded in April 2024, after which only Sustaining Support (limited to documentation and existing patches) remains available indefinitely for legacy deployments.

Implementations and Usage

Supported Platforms

Prior to its end of life in April 2024, QFS offered full server support on Oracle Solaris up to version 11, on both SPARC and x86-64 architectures, with Solaris 11 requiring Support Repository Update (SRU) 9 or later for version 11.4 installations. Earlier releases such as Solaris 10 were supported by legacy versions like SAM-QFS 5.3 but not by later iterations such as Oracle HSM 6.1. As of November 2025, Oracle provides only sustaining support for QFS, which includes access to existing releases but no new features, fixes, or patches.

For shared file systems, legacy client access was available on Solaris 10 (10/08 or later) and Solaris 11, as well as on outdated Linux distributions including Oracle Enterprise Linux 5.4 and 5.6 (x64), Red Hat Enterprise Linux 4.5, 5.4, and 5.6 (x64, compatible via Oracle Enterprise Linux), and SUSE Linux Enterprise Server 9 SP4, 10 SP2/SP3, and 11 SP1 (x64). These versions reached their own end of life over a decade ago and are not recommended for new or ongoing deployments. QFS does not natively support modern Linux distributions.

Hardware requirements for QFS deployment centered on Sun server architectures, including x64 systems with AMD Opteron processors and legacy SPARC-based systems such as UltraSPARC. QFS was compatible with storage area network (SAN) fabrics, including Fibre Channel, enabling direct data transfer from shared disks to hosts in clustered environments. In later versions, QFS integrated with ZFS for underlying volume management, allowing raw ZFS volumes to serve as storage devices for QFS file systems.

Key limitations include the absence of native support for Windows operating systems, restricting deployment to UNIX-like environments. Clustered setups required homogeneous configurations, with all metadata servers using the same hardware platform type (no mixing of SPARC and x86) and software releases within one version of each other across servers and clients.
Following the end of life of QFS, deployments seeking comparable shared and archival file system capabilities may consider alternatives such as Versity Storage Manager, a modern Linux-based hierarchical storage manager supporting current enterprise Linux distributions.

Integration with Other Systems

QFS, as part of the SAM-QFS suite, maintained tight integration with the Storage Archive Manager (SAM) component to enable automated archiving of file system data to secondary storage media such as tape or disk volumes. This coupling allowed user-defined policies to automatically copy files from primary disk to archival storage, supporting up to four copies per file while operating at device-rated speeds, without requiring separate backup applications. The integration was facilitated by processes such as sam-archiverd and sam-arfind, which identify and group files based on criteria such as age, size, or access patterns for efficient migration.

For client access in heterogeneous environments, QFS file systems were compatible with standard network protocols including NFS and CIFS/SMB, presenting a POSIX-compliant UNIX file system interface that enabled seamless mounting and access from UNIX, Linux, and Windows clients. Shared QFS file systems supported NFS version 4 access control lists (ACLs) to ensure consistent permissions, allowing exports via these protocols for broad interoperability in enterprise networks. This compatibility extended to high-performance computing (HPC) settings, where QFS integrated with grid middleware for secure data transfers, leveraging its high-throughput striping and metadata handling to support large-scale scientific workflows.

In the broader ecosystem, QFS worked with established backup solutions such as Legato NetWorker and Veritas NetBackup, which could interface with QFS volumes for incremental backups and restores, alongside the native samfsdump and samfsrestore tools for dumping and restoring file system metadata and archived files. Custom archival policies were configurable through directive files such as archiver.cmd, providing an API-like mechanism via command-line utilities and configuration parameters to tailor retention, staging, and release behaviors without custom coding.
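As one illustration of the NFS pathway, a mounted QFS file system could be exported from a Solaris metadata server through the standard /etc/dfs/dfstab configuration; the mount point and options below are hypothetical:

```
# /etc/dfs/dfstab -- share the mounted QFS file system over NFS
share -F nfs -o rw /qfs1
```

Entries in dfstab are ordinary share commands applied at boot, so the same line could be run interactively to export the file system immediately.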
For modern adaptations, derivatives such as Versity Storage Manager (VSM) extend QFS functionality by incorporating support for cloud gateways, including AWS S3 compatibility, and integration with container orchestration platforms for hybrid cloud deployments.
