
GFS2

GFS2, or Global File System 2, is a 64-bit symmetric cluster file system designed for clustered environments, enabling multiple nodes in a cluster to simultaneously access and share a common block storage device—such as one connected via Fibre Channel, iSCSI, or network block devices—while ensuring strict data consistency and coherency across all participants without requiring a dedicated metadata server.

Development of GFS2 traces its roots to the original GFS project, initiated in 1995 by Matt O'Keefe's team at the University of Minnesota as a scalable cluster file system for supercomputing applications; it was later ported to Linux for better code accessibility and initially relied on hardware-based SCSI locking for coordination. Commercialization efforts by Sistina Software in the early 2000s introduced network-based locking mechanisms such as GULM, which later gave way to the more robust distributed lock manager (DLM) developed by Patrick Caulfield and Dave Teigland; Red Hat acquired Sistina in 2003, integrating GFS into its enterprise offerings. GFS2 itself was architected starting in 2005 by Ken Preslan and others at Red Hat, with its core code merged into Linux kernel 2.6.19 and first appearing in Fedora Core 6 in 2006, marking a significant redesign from GFS1 to support 64-bit addressing and enhanced scalability. While not fully on-disk compatible with the original GFS, GFS2 includes tools such as gfs2_convert to facilitate in-place upgrades from GFS1 volumes.

At its core, GFS2 employs a symmetric architecture in which all cluster nodes are peers, using the DLM—typically coordinated by user-space cluster components such as Corosync and Pacemaker—for distributed locking to coordinate I/O operations and prevent conflicts on shared resources. Key features include full cluster coherency, where file system changes on one node are immediately visible to others; support for POSIX ACLs, SELinux integration, and hashed directories for efficient lookups; and a metadata file system (metafs) that handles journals, quotas, and resource allocation via resource groups to enable parallel data placement and reduce contention. One journal is required per mounting node to manage local transaction logging, and the upstream kernel supports both clustered and local modes—the latter via the lock_nolock protocol for single-node use without cluster overhead, although enterprise distributions such as Red Hat Enterprise Linux support it only in limited scenarios, for example when mounting snapshots. Performance optimizations, such as equal-height metadata trees, fuzzy statfs for quick space reporting, and glock (global lock) enhancements, make it suitable for high-throughput environments such as storage area networks (SANs).

GFS2 was integral to enterprise Linux distributions up to Red Hat Enterprise Linux 9 (via the Resilient Storage Add-On) and remains available in the SUSE Linux Enterprise High Availability Extension, where it is typically deployed alongside clustering software for applications requiring shared data access, such as databases or virtualization pools; support for GFS2 was removed starting with Red Hat Enterprise Linux 10 in 2025. Tools from the gfs2-utils package, such as mkfs.gfs2 for formatting, gfs2_grow for online expansion, and fsck.gfs2 for integrity checks, are essential for administration, reflecting its focus on reliability in multi-node setups.
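
As a minimal illustration of the single-node lock_nolock mode mentioned above, the following sketch formats and mounts a GFS2 volume without any cluster infrastructure; the device path and mount point are placeholders, and the commands assume the gfs2-utils package and kernel module are installed.

  # Hypothetical device and mount point; single-node use only (no cluster locking).
  mkfs.gfs2 -p lock_nolock -j 1 /dev/vdb1
  mkdir -p /mnt/gfs2-local
  mount -t gfs2 /dev/vdb1 /mnt/gfs2-local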

Overview and History

Overview

GFS2, or Global File System 2, is a 64-bit symmetric cluster file system designed for clustered environments, enabling multiple nodes to share a common block device while providing a unified namespace and ensuring data coherency across the cluster. In this symmetric architecture, all nodes have equal access to the shared storage without relying on a dedicated metadata server, allowing concurrent read and write operations that mimic the behavior of a local file system. The primary use cases for GFS2 include high-availability clusters and shared storage environments in enterprise settings, such as deployments on Red Hat Enterprise Linux (RHEL) for databases or collaborative services that require simultaneous access from multiple servers. It supports clusters of up to 16 nodes on x86 architectures, facilitating reliable data sharing in clustered setups without compromising consistency. Key benefits include cluster-wide file locking to prevent conflicts and seamless integration with cluster managers such as Pacemaker for resource orchestration and failover handling. In its operational model, GFS2 nodes access the shared storage through block-level protocols, with no single point of failure for file system operations due to its distributed design. It employs journaling for crash recovery and distributed locking mechanisms to uphold coherency, ensuring robust data integrity in multi-node scenarios.

Development History

The Global File System (GFS), the predecessor to GFS2, originated in the 1990s and was commercialized by Sistina Software after initially being developed for supercomputing environments. GFS2, a complete rewrite of GFS to address limitations in scalability, performance, and support for 64-bit architectures, began development in early 2005 under the leadership of Ken Preslan at Red Hat, following its acquisition of Sistina Software in December 2003 for $31 million. This acquisition integrated GFS into Red Hat's product portfolio, shifting its focus toward enterprise clustering solutions.

Key development milestones for GFS2 included its integration into the Linux kernel in version 2.6.19, released in November 2006, enabling broader adoption beyond Red Hat's own distributions. Subsequent enhancements arrived in later kernel versions, such as improved quota support introduced in kernel 2.6.33 (released in early 2010) and refined further in the 3.x series for better cluster-wide enforcement, alongside other scalability improvements such as better handling of large file systems. More recent advancements include non-blocking lookups added in Linux 6.8 in March 2024, which reduce latency in directory operations for clustered environments. These updates were driven primarily by Red Hat engineers, including Steven Whitehouse, who served as the primary maintainer and led much of the core implementation.

GFS2's evolution marked a transition from a proprietary system under Sistina to an open-source project licensed under the GPL, with Red Hat releasing the source code in June 2004 to foster community contributions and integration with the Linux ecosystem. It played a central role in Red Hat Enterprise Linux (RHEL) starting with version 5 in 2007, where it became part of the Resilient Storage Add-On for high-availability clustering. Management tools in the gfs2-utils package (including commands such as mkfs.gfs2 and fsck.gfs2) were developed alongside the kernel code to handle creation, maintenance, and repair tasks in cluster setups. However, in a significant shift, Red Hat announced the deprecation of GFS2 support in RHEL 10, released in 2025, discontinuing the Resilient Storage Add-On. Despite its deprecation in RHEL, GFS2 remains part of the upstream Linux kernel and continues to receive maintenance updates, including security fixes in 2025.

Architecture and Requirements

Hardware Requirements

GFS2 requires a shared block device that is simultaneously accessible by all nodes in the cluster to enable concurrent file system operations while maintaining data consistency. This shared storage must be provided through a storage area network (SAN) or equivalent infrastructure, with Fibre Channel interconnects recommended for optimal performance and reliability, as they have been extensively tested by Red Hat. While GFS2 can also function with lower-cost options such as iSCSI or Fibre Channel over Ethernet (FCoE), these may result in reduced performance compared to dedicated SAN solutions. Local, non-shared disks are not supported for GFS2 deployments, as the file system is designed exclusively for clustered environments with shared access; non-shared configurations would violate its coherency guarantees.

The cluster interconnect for GFS2 relies on a low-latency, high-bandwidth network to carry traffic for the distributed lock manager (DLM), which coordinates locking across nodes. Ethernet is fully supported for this purpose, with gigabit or faster links recommended to ensure reliable packet delivery and minimize contention in cluster traffic. InfiniBand can be used in certain configurations for enhanced performance, though it is not supported with the redundant ring protocol (RRP) due to limitations of IP over InfiniBand (IPoIB). Higher-quality network equipment, including dedicated interfaces for inter-node communication, is advised to improve overall GFS2 reliability and speed.

GFS2 is compatible with standard block device interfaces such as Fibre Channel and iSCSI, as well as NVMe over Fabrics (NVMe-oF) for high-performance shared storage scenarios. Representative examples of supported storage arrays include high-end systems such as EMC (now Dell EMC) Symmetrix and comparable enterprise arrays from other vendors, which integrate well with multipath configurations for path failover. On the compute side, each node should have at least 1 GB of memory, with 1 GB per logical CPU recommended for balanced performance; larger clusters benefit from multi-core processors (e.g., four or more cores per node) to handle DLM overhead and I/O scaling. Cluster scale is limited to a maximum of 16 nodes for x86 architectures in supported Red Hat Enterprise Linux (RHEL) configurations, though fewer nodes are typical in production to maintain performance; extensions beyond this limit are possible with custom tuning but are not officially supported by Red Hat. For safety, all GFS2 clusters mandate hardware fencing, such as STONITH ("Shoot The Other Node In The Head") devices, to isolate failed nodes and prevent data corruption in split-brain partition scenarios.
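
Because fencing is mandatory, a cluster is typically given at least one STONITH device before any GFS2 file system is mounted. The sketch below uses the pcs command with the fence_ipmilan agent; the node name, IP address, and credentials are illustrative placeholders, and the exact agent and parameters depend on the hardware in use.

  # Placeholder values; adapt the fence agent and its options to the actual hardware.
  pcs stonith create fence-node1 fence_ipmilan \
      pcmk_host_list=node1 ip=192.0.2.11 username=admin password=changeme lanplus=1
  pcs stonith status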

Core Components and Design

GFS2 employs a sophisticated metadata structure to manage file system operations in a clustered environment. Central to this are dinodes, the on-disk representations of file inodes; each spans a single block and contains fields for file attributes, pointers to data blocks, and metadata such as access times and permissions. Dinodes support "stuffed" data, storing small files directly within the dinode to save space, while larger files use pointers organized in height-balanced metadata trees for efficient access. Resource groups (RGs) handle block allocation by dividing the disk into fixed-size slices, each with a header and a bitmap that tracks block states—free (00), allocated non-inode (01), unlinked inode (10), or allocated inode (11)—enabling localized allocation that reduces contention across nodes. Journals are special files within the metadata structure, one per node, that log transactions and ensure consistency during concurrent access. This architecture uses 64-bit addressing, theoretically supporting file systems up to 8 exabytes (EB), though practical limits are lower and depend on hardware constraints.

The design principles of GFS2 emphasize symmetric clustering, in which all nodes operate as peers with identical software stacks and direct access to shared storage, eliminating master-slave hierarchies for improved scalability and fault tolerance. This peer model relies on a distributed lock manager (DLM) for coordination, but the on-disk format prioritizes locality and efficiency through elements such as the superblock—located 64 KB from the start of the device—and the resource index (rindex) table. The rindex, stored in the metadata file system, maps the physical locations of RGs, allowing quick metadata retrieval without scanning the entire disk and improving scalability in multi-node scenarios. Overall, this format keeps metadata operations localized, minimizing inter-node communication for common tasks.

Key subsystems in GFS2 include block allocation managed via RGs, where per-RG locking allows parallel allocations from different nodes, minimizing hotspots and supporting high-throughput workloads. Quota enforcement tracks usage at the user and group levels through a dedicated system-wide quota file, which is periodically synchronized across the cluster; when mounted with quota=on, GFS2 maintains accurate accounting even without active limits, enabling enforcement via tools such as quotacheck and setquota. Integration with the Linux Virtual File System (VFS) layer ensures POSIX compliance, providing standard semantics for I/O, permissions, and access control lists (ACLs), while hashed directories and extended attributes match local file system expectations. For scalability, GFS2 uses height-balanced metadata trees so that all data blocks of a file are reached at the same depth, and multi-level indirect blocks that extend dynamically—adding layers as needed rather than using fixed pointer layouts—to accommodate large files without the limitations of its predecessor. Per-node journaling of metadata updates allows transactions to be replayed during recovery, ensuring cluster-wide consistency.
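
These on-disk structures can be examined directly with the gfs2_edit tool from gfs2-utils; the sketch below prints the superblock, resource group index, and journal index of a hypothetical volume (the device path is a placeholder, and the file system should be idle or unmounted while inspecting it).

  # /dev/clustervg/gfs2lv is a placeholder device path.
  gfs2_edit -p sb /dev/clustervg/gfs2lv        # superblock, located 64 KB into the device
  gfs2_edit -p rindex /dev/clustervg/gfs2lv    # resource group index (locations and sizes of RGs)
  gfs2_edit -p jindex /dev/clustervg/gfs2lv    # per-node journal index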

Core Functionality

Journaling Mechanism

GFS2 employs a per-node journaling model to maintain consistency in a clustered environment: each mounting node has its own dedicated journal, stored as a regular file within the file system's metadata structure. This design allows journals to be added dynamically as new nodes join the cluster, without requiring file system expansion or downtime. Journals primarily log metadata changes. In ordered mode, file data is written to disk before the corresponding metadata is committed, providing crash-consistency guarantees similar to local file systems such as ext4. Spectator mounts, which provide read-only access without modifications, do not require or use a journal, minimizing overhead for non-participating nodes.

Journaling in GFS2 involves asynchronous commits of dirty data to the journal, occurring at regular intervals or when triggered by explicit sync operations, balancing performance and durability. By default, commits happen every 60 seconds if dirty data is present, though this interval can be tuned via mount options such as commit= for specific workloads. Journals support revoke entries, allowing blocks to be withdrawn from the log before finalization, which prevents unnecessary replay during clean unmounts by ensuring the journal is flushed and marked consistent on proper shutdown. GFS2 offers data ordering modes analogous to ext4, including the default ordered mode (data written before metadata) and writeback mode (higher performance with potential data loss on crash). In addition, data journaling—where both data and metadata are journaled—can be enabled on specific files or directories for maximum consistency, at the cost of extra I/O. These modes enable tailored trade-offs between reliability and throughput in clustered scenarios.

In the event of a node failure, GFS2's recovery process restores file system consistency through journal scanning and replay performed by surviving or remounting nodes. Upon detecting a failure, other cluster nodes coordinate to scan the affected journal, identifying uncommitted transactions via journal descriptor blocks that track entry states and transaction boundaries. Replay then applies these changes atomically, restoring metadata (and data, in journaled mode) to a consistent state; this multi-node recovery relies on lock grants to serialize access and prevent conflicts during the process. The procedure typically completes quickly for small journals but can take longer for larger ones, with kernel logs reporting details such as blocks scanned, transactions replayed, and revoke tags processed. This mechanism preserves overall coherency by isolating recovery to the failed node's changes without impacting active nodes.

Performance tuning of GFS2 journaling focuses on sizing journals to match the file system size and workload, since larger journals reduce commit frequency and improve I/O utilization. The default journal size is 128 MB when creating the file system, while journals added later default to 32 MB, with a minimum of 8 MB and a maximum configurable up to 1 GB to accommodate high-activity nodes. Capacity planning involves provisioning one journal per active node, with growth planned for expanding clusters to avoid bottlenecks; insufficient journal space, for instance, can stall operations during write bursts. Data journaling, while providing stronger guarantees, increases write amplification and I/O consumption, particularly for medium-to-large files, so it is enabled selectively via file attributes such as +j for critical directories.
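
A hedged sketch of the tuning knobs discussed above is shown below; the device, cluster name, journal sizes, and paths are illustrative, and production values should be chosen to match node count and workload.

  # Create the file system with three 256 MB journals (one per expected node).
  mkfs.gfs2 -p lock_dlm -t mycluster:data -j 3 -J 256 /dev/clustervg/gfs2lv
  # Mount with a shorter 30-second commit interval instead of the 60-second default.
  mount -t gfs2 -o commit=30 /dev/clustervg/gfs2lv /mnt/data
  # Enable data journaling on a directory holding critical files, then verify the flag.
  chattr +j /mnt/data/critical
  lsattr -d /mnt/data/critical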

Locking and Coherency Management

GFS2 relies on the distributed lock manager (DLM) to coordinate access to shared resources across the nodes of a cluster. The DLM, typically integrated with user-space cluster infrastructure such as corosync or Pacemaker, maintains a distributed lock database replicated on each node, enabling lock operations over the network. Lock requests specify modes including Protected Read (PR) for concurrent read access, allowing multiple nodes to cache data; Concurrent Write (CW) for operations such as direct I/O, where multiple nodes can write without exclusive ownership; and Exclusive (EX) for modifications requiring sole control to prevent conflicts. These modes map directly to GFS2's glocks, per-object locks (e.g., on inodes or resource groups) that cache DLM lock states locally to minimize communication overhead.

Cache coherency in GFS2 is enforced through a lock-based protocol centered on glocks, which manage invalidations so that all nodes see a consistent view of the data. When a node acquires an EX lock for writing, it invalidates cached pages on other nodes holding PR locks, forcing them to re-read from disk on their next access; similarly, demoting from EX to PR triggers write-back and invalidation. Directory and inode glocks provide namespace consistency by locking entire objects during operations such as renames or attribute updates, serializing metadata access cluster-wide. Unlike GPFS, which employs cache fusion to transfer dirty pages directly between nodes, GFS2 avoids this mechanism, opting for simpler invalidation and disk fetches to maintain coherency without complex inter-node data movement.

Lock granularity balances efficiency and consistency: byte-range locks handle fine-grained file data access via the VFS layer, coordinated under the broader inode glock for cluster synchronization, while metadata operations use whole-file or whole-object locks to protect structures such as journals and allocation bitmaps. Promotion and demotion of glock states—for example, from PR to EX—optimize lock traffic by retaining compatible modes when possible, reducing DLM invocations. To guard against scenarios in which nodes lose communication but continue accessing storage, GFS2 mandates integration with node fencing mechanisms, such as STONITH ("Shoot The Other Node In The Head") in Pacemaker clusters, ensuring that failed or partitioned nodes are isolated before their locks are recovered; this prevents concurrent writes that could corrupt data. Cluster quorum, enforced via the votequorum service, requires a majority of nodes to agree on membership before lock recovery proceeds, blocking operations in minority partitions until the situation is resolved. These features, built atop the DLM's reliable messaging, ensure robust consistency even during failures.
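
For troubleshooting, glock and DLM state can be observed from user space; the sketch below assumes debugfs is available and that the file system was created with the lock table mycluster:data, which is a placeholder name.

  # Mount debugfs if it is not already mounted.
  mount -t debugfs none /sys/kernel/debug 2>/dev/null || true
  # Dump current glock states (EX, PR, and so on) for the placeholder lock table.
  head -n 40 /sys/kernel/debug/gfs2/mycluster:data/glocks
  # List the DLM lockspaces active on this node.
  dlm_tool ls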

Comparisons and Differences

Differences from Local Filesystems

GFS2 enables concurrent access and writes from multiple nodes to the same shared block storage, in contrast to local file systems such as ext4, which enforce exclusive single-node access to prevent corruption, since they lack distributed coordination. This clustered access model relies on a distributed lock manager (DLM) communicating over TCP/IP for coherency, introducing network round-trips for lock acquisition that increase latency compared to the direct, low-latency operations of local file systems. Performance in GFS2 is affected by this locking overhead, resulting in lower throughput for random I/O workloads—typically due to contention when acquiring exclusive locks—while it excels at parallel access across nodes, where multiple readers and writers can operate without frequent synchronization. Unlike local file systems, which assume unchecked local caching for optimal speed, GFS2 maintains cache coherency via glock states, preventing stale data but adding validation costs that reduce single-node efficiency. For instance, operations involving high contention, such as frequent metadata updates, can degrade performance significantly compared to the streamlined local handling in ext4.

Metadata management in GFS2 is distributed across nodes using cluster-wide glocks for each inode, differing from the centralized, node-local approach in file systems such as ext4 that avoids inter-node coordination entirely. This design supports shared namespace consistency but can create lock contention hotspots, particularly in directories with heavy concurrent inserts or deletes from multiple nodes. For reliability, GFS2 integrates fencing and quorum mechanisms within the cluster infrastructure to isolate failed nodes and maintain data integrity, enabling continued operation despite individual node failures without requiring full cluster downtime. Local file systems, by contrast, depend on hardware-level redundancy such as RAID for fault tolerance and offer no inherent support for multi-node failure scenarios, as they operate in isolation.

Improvements over GFS1

GFS2 introduced significant architectural enhancements over its predecessor, GFS1, chiefly through its fully 64-bit on-disk design, which enables support for much larger file systems than GFS1 allowed. While GFS1 was constrained to a maximum file system size of 8 TB, GFS2 supports up to 16 TB on 32-bit hardware and theoretically up to 8 EB on 64-bit systems, with practical limits of 100 TB for files and file systems in supported configurations. This shift allows GFS2 to scale toward exabyte-class environments, addressing bottlenecks inherent in GFS1's design. Additionally, GFS2 simplifies locking by eliminating the separate lock module abstraction (lm_lockspace) used in GFS1, relying instead on the distributed lock manager (DLM) with finer-grained glocks for resource groups, which reduces complexity and improves concurrency across cluster nodes.

Performance improvements in GFS2 stem from reduced metadata overhead and more efficient journaling, leading to faster operations such as mounts, which can be up to twice as quick owing to the absence of metadata generation numbers and a streamlined log manager that no longer tracks unlinked inodes or quota changes. Unlike GFS1, GFS2 provides robust online growth via gfs2_grow, a capability GFS1 lacked and had to handle through offline operations. These changes yield better overall I/O performance, including faster synchronous writes, cached reads without locking overhead, and reduced memory usage, making GFS2 more suitable for high-throughput workloads.

GFS2 adds several key features not present in GFS1, including native quota enforcement (enabled via mount options), POSIX ACL inheritance for streamlined access control, and support for multi-device configurations built on underlying volume-management layers such as LVM or mdadm for greater flexibility in storage pooling. It also removes the separate "meta" file system mount required in GFS1 for metadata management, enabling direct compatibility with standard Linux tools and a unified namespace. For backward compatibility, GFS2 provides the gfs2_convert utility to perform in-place upgrades from GFS1 file systems, allowing existing data to be migrated without loss after first checking the volume with the GFS fsck tool, though GFS1 volumes cannot be mounted directly and require this conversion step.

Features and Compatibility

Advanced Features

GFS2 provides robust quota management to control resource usage in clustered environments, supporting per-user and per-group limits enforced at the file system level. Quota enforcement is disabled by default but can be enabled via the quota=on mount option, allowing administrators to set soft and hard limits with the edquota command without interrupting operations. The quotacheck utility examines quota-enabled file systems and builds a table of current disk usage, updating the quota files; it requires the file system to be unmounted or mounted read-only and is typically run manually after enabling quotas or when inaccuracies are suspected (for example, after a crash). Ongoing quota changes are synchronized to disk periodically, every 60 seconds by default and adjustable via the quota_quantum mount option, ensuring accurate accounting across multiple nodes.

Dynamic operations in GFS2 allow maintenance and optimization while the file system remains mounted and accessible by all nodes. File system growth is supported using the gfs2_grow command, which extends the file system to use additional space on the underlying block device, such as after expanding a logical volume, accommodating growing storage needs without downtime. For optimization, GFS2 lacks a dedicated defragmentation tool, but fragmentation can be addressed manually by identifying affected files with filefrag, copying them so they are rewritten contiguously, and renaming the copies to replace the originals, which helps mitigate I/O inefficiencies in heavily accessed directories.

Security features in GFS2 extend access control lists (ACLs) for granular permissions beyond traditional UNIX modes, mountable with the acl option to enable support, including inheritance rules that propagate to newly created files and subdirectories. Integration with SELinux is available through the context mount option, allowing a security context (e.g., system_u:object_r:httpd_sys_content_t:s0) to be specified and enforced consistently across nodes, though careful planning is required to avoid conflicts in multi-host scenarios. Additional protections include multi-host fencing mechanisms integrated with the distributed lock manager (DLM), which isolate faulty nodes via STONITH actions (e.g., pcs stonith fence node) to maintain consistency and prevent corruption during failures. The withdraw function serves as a protective mode, halting I/O upon detecting metadata corruption or inconsistencies—such as inode errors—and logging details for diagnosis; recovery involves unmounting, running fsck.gfs2 on the device, and remounting to restore functionality. GFS2 also supports striped data layouts through resource groups, enabling efficient parallel I/O by distributing blocks across multiple allocation units, which improves throughput for concurrent workloads. A recent enhancement, introduced in Linux kernel 6.8 (March 2024), enables non-blocking lookups by using a non-blocking glock flag, improving scalability and reducing latency in lookup-heavy operations.
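
The quota and online-growth workflows described above can be combined roughly as follows; device, volume, user, and mount-point names are placeholders, and quotacheck is omitted here because it must be run separately with the file system unmounted or read-only, as noted above.

  # Mount with quota enforcement and POSIX ACL support enabled.
  mount -t gfs2 -o quota=on,acl /dev/clustervg/gfs2lv /mnt/data
  edquota -u alice                      # set soft/hard block and inode limits for a user
  quota -u alice                        # report current usage against those limits
  # Grow the file system online: extend the logical volume, then extend GFS2.
  lvextend -L +50G /dev/clustervg/gfs2lv
  gfs2_grow /mnt/data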

Compatibility Mechanisms

GFS2 provides backward compatibility with legacy GFS1 file systems through an in-place conversion process using the gfs2_convert utility, which transforms the on-disk format from GFS1 to GFS2 without requiring additional disk space. The conversion must be performed offline, with the file system unmounted on all nodes and first verified using gfs_fsck to ensure integrity, as the process is irreversible and any interruption could lead to data loss. Unlike GFS1, which relied on a separate metadata structure for certain operations, GFS2 integrates all metadata directly into the primary file system, eliminating the need for auxiliary components during access or maintenance. The gfs2-utils package includes tools such as gfs2_convert for the upgrade process and gfs2_tool for subsequent editing, querying, and maintenance of converted GFS2 file systems, including tunable and attribute adjustments. For legacy clusters transitioning to GFS2, a mixed-operation mode is not natively supported in the kernel, but the conversion process allows a staged migration by enabling GFS2 nodes to access the upgraded file systems while GFS1-only environments are phased out.

GFS2 integrates with cluster management software such as CMAN in Red Hat Enterprise Linux 6 and earlier, and with Corosync and Pacemaker in RHEL 7 through 9, for high-availability configurations; support for GFS2 was removed in RHEL 10, released in 2025. For non-Red Hat distributions, GFS2 is available through the upstream Linux kernel, allowing deployment with compatible lock managers such as DLM without proprietary extensions. However, GFS2 does not support read-only mounting of unconverted GFS1 file systems, as the on-disk formats are incompatible, necessitating full conversion before access. Mixed-mode operation involving both GFS1 and GFS2 nodes on the same file system is not possible after conversion, and improper handling during the upgrade—such as failing to back up the volume or encountering interruptions—can result in permanent data loss.
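
A hedged outline of the offline GFS1-to-GFS2 upgrade is given below; the device and mount point are placeholders, and a full backup should exist before the irreversible conversion step.

  # Unmount the GFS1 file system on every cluster node before proceeding.
  umount /mnt/legacy
  # Check and repair the GFS1 volume first (documented as gfs_fsck in older releases).
  fsck.gfs2 /dev/clustervg/legacylv
  # Convert in place; this step cannot be undone.
  gfs2_convert /dev/clustervg/legacylv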

Deployment and Recent Developments

Mounting and Configuration

Before mounting a GFS2 file system, the cluster must be initialized using tools such as corosync for messaging and Pacemaker for resource management, with all nodes keeping synchronized clocks via NTP or PTP and having access to shared storage, such as a logical volume managed by LVM. The file system itself is created on the shared block device with the mkfs.gfs2 command, which requires specifying the locking protocol and lock table name; for example, mkfs.gfs2 -p lock_dlm -t clustername:fsname -j number_of_journals /dev/blockdevice, where the -j option sets the number of journals equal to the expected number of mounting nodes (one per node), with a default journal size of 128 MB.

The mounting step uses the mount.gfs2 command on each node, specifying options for cluster locking; a typical command is mount.gfs2 /dev/blockdevice /mountpoint -o lockproto=lock_dlm,locktable=clustername:fsname, where lockproto=lock_dlm enables the distributed lock manager for coherency across nodes and locktable identifies the cluster and file system to the DLM. Journals can be added dynamically after initial creation and mounting, using gfs2_jadd -j number /mountpoint to accommodate additional nodes without recreating the file system; for instance, gfs2_jadd -j 1 /mygfs2 adds one journal. For persistent mounting, add an entry to /etc/fstab on each node, such as /dev/vgname/lvname /mountpoint gfs2 defaults,_netdev,lockproto=lock_dlm,locktable=clustername:fsname 0 0, where the _netdev option delays mounting until the network is available for cluster communication. The lock table for the DLM is set during file system creation with the -t option of mkfs.gfs2 (e.g., -t alpha:mydata) and referenced at mount time to scope locks to the specific cluster and file system, preventing interference with other resources. Basic tuning can be applied via options such as commit=seconds to adjust the journal commit interval (default 60 seconds) or quota_quantum=seconds for quota synchronization (default 60 seconds), optimizing for workload-specific behavior.

GFS2 management tools include fsck.gfs2 for file system integrity checks and repairs, invoked as fsck.gfs2 -y /dev/blockdevice to fix issues automatically while the file system is unmounted. The gfs2_edit utility allows inspection of on-disk metadata, such as viewing the journal index with gfs2_edit -p jindex /dev/blockdevice for debugging purposes. Unmounting is performed with umount.gfs2 /mountpoint or the standard umount command on each node, ideally managed through Pacemaker resources to ensure a clean shutdown and avoid hangs during node fencing.
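
Putting these pieces together, a hedged end-to-end sequence might look like the following; cluster, volume, and mount-point names are placeholders, the pcs syntax shown is the RHEL 8/9 style, and in practice the dlm (and, with shared LVM, lvmlockd) daemons are usually managed as Pacemaker resources before the file system is mounted.

  # Form and start a two-node cluster (placeholder names).
  pcs cluster setup mycluster node1 node2 --start --enable
  # Create the file system with one journal per node and the matching lock table.
  mkfs.gfs2 -p lock_dlm -t mycluster:data -j 2 /dev/clustervg/gfs2lv
  # Mount on each node using DLM locking.
  mount -t gfs2 -o lockproto=lock_dlm,locktable=mycluster:data /dev/clustervg/gfs2lv /mnt/data
  # Later, add a journal before a third node mounts the file system.
  gfs2_jadd -j 1 /mnt/data
  # Persist the mount across reboots, deferring it until the network is up.
  echo '/dev/clustervg/gfs2lv /mnt/data gfs2 defaults,_netdev 0 0' >> /etc/fstab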

Limitations and Updates

GFS2 incurs significant CPU overhead in large clusters due to its distributed lock manager (DLM), which must coordinate operations across nodes; this can manifest as heavy workloads from kernel threads such as the kworker gfs2-delete workers during intensive operations. The overhead is exacerbated by lock contention, where frequent locking for cache coherency can drive excessive CPU utilization in the glock_workqueue process. Unlike file systems with native replication, GFS2 has no built-in data replication and relies entirely on the underlying shared storage, such as a SAN or replicated block devices, for redundancy. Consequently, a failure in the shared storage—such as a SAN issue or a withdraw event—can cause the GFS2 file system to hang or freeze across all nodes, often requiring a reboot of all nodes to recover and maintain data integrity.

Performance bottlenecks in GFS2 are particularly evident in metadata-intensive workloads, where operations such as directory traversals or file creation and deletion are slower than on local file systems because of the inter-node communication required for locking. To mitigate this, administrators are advised to deploy GFS2 on SSD-backed storage rather than HDDs, as solid-state drives provide the low latency needed to handle metadata operations efficiently without amplifying cluster-wide lock contention.

Recent updates to GFS2 have focused on performance enhancements and security fixes. Linux kernel 6.8, released in March 2024, introduced support for non-blocking directory lookups, allowing faster revalidation of directory entries without stalling on lock acquisition, which improves scalability for read-heavy cluster workloads. Kernel patches in 2025 addressed evict and remote delete processing, including documentation updates and cleanups intended to reduce potential hangs during inode eviction in clustered environments. Security improvements included a fix for CVE-2025-38710, which added validation of the directory depth (i_depth) of exhash directories, preventing potential corruption or denial of service from malformed structures.

Regarding deprecation, Red Hat announced the end of GFS2 support in RHEL 10, starting in 2025, with the Resilient Storage Add-On discontinued and the gfs2 and dlm kernel modules removed from future releases; the company recommends migrating to alternatives such as Ceph for clustered storage needs. Despite this, GFS2 continues to receive upstream maintenance in the Linux kernel, including in version 6.17, released in September 2025, ensuring continued availability for non-Red Hat distributions.
