GFS2
GFS2, or Global File System 2, is a 64-bit symmetric cluster file system designed for Linux environments, enabling multiple nodes in a high-availability cluster to simultaneously access and share a common block storage device—such as those connected via Fibre Channel, iSCSI, or network block devices—while ensuring strict data consistency and coherency across all participants without requiring a dedicated metadata server.[1][2][3] Development of GFS2 traces its roots to the original GFS project, initiated in 1995 by Matt O'Keefe's team at the University of Minnesota as a scalable file system for supercomputing applications, which was later ported to Linux for better code accessibility and initially relied on SCSI reservations for coordination.[3] Commercialization efforts by Sistina Software in the early 2000s introduced network-based locking mechanisms like GULM, but these evolved into the more robust Distributed Lock Manager (DLM) developed by Patrick Caulfield and Dave Teigland; Red Hat acquired Sistina in 2003, integrating GFS into its enterprise offerings.[3] GFS2 itself was architected starting in 2005 by Ken Preslan and others at Red Hat, with its core code accepted into the Linux kernel in version 2.6.19 and first appearing in Fedora Core 6 in 2006, marking a significant redesign from GFS1 to support 64-bit addressing and enhanced scalability.[3] While not fully on-disk compatible with the original GFS, GFS2 includes tools like gfs2_convert to facilitate in-place upgrades from GFS1 volumes.[1]
At its core, GFS2 employs a symmetric architecture where all cluster nodes are peers, using the DLM—typically managed by user-space components like Corosync or Pacemaker—for distributed locking to coordinate I/O operations and prevent conflicts on shared resources.[1][3] Key features include perfect consistency, where file system changes on one node are immediately visible to others; support for POSIX ACLs, SELinux integration, and hashed directories for efficient lookups; and a metadata file system (metafs) that handles journals, quotas, and resource allocation via resource groups to enable parallel data placement and reduce contention.[1][3] One journal is required per mounting node to manage local transaction logging, and the upstream kernel supports both clustered and local modes—the latter via a lock_nolock parameter for single-node use without cluster overhead, though enterprise distributions like Red Hat Enterprise Linux support it only in limited scenarios such as mounting snapshots.[1] Performance optimizations, such as equal-height metadata trees, fuzzy statfs for quick space reporting, and glock (global lock) enhancements, make it suitable for high-throughput environments like storage area networks (SANs).[3]
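To illustrate the single-node mode mentioned above, a minimal sketch using the lock_nolock protocol might look like the following (the device path and mount point are hypothetical):
# Format a standalone GFS2 filesystem with a single journal and no cluster locking
mkfs.gfs2 -p lock_nolock -j 1 /dev/vg_data/lv_gfs2
# Mount it locally; no DLM or cluster manager is required in this mode
mount -t gfs2 /dev/vg_data/lv_gfs2 /mnt/gfs2
In a clustered deployment, the lock_dlm protocol would be used instead, as shown in the configuration examples later in this article.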
GFS2 was integral to enterprise Linux distributions up to Red Hat Enterprise Linux 9 (via the Resilient Storage Add-On) and remains available in SUSE Linux Enterprise High Availability Extension, where it is often deployed in conjunction with clustering software for applications requiring shared data access, such as databases or virtualization pools; however, support for GFS2 was removed starting with Red Hat Enterprise Linux 10 in 2025.[2][4][5] Tools from the gfs2-utils package, such as mkfs.gfs2 for formatting, gfs2_grow for online expansion, and fsck.gfs2 for integrity checks, are essential for administration, emphasizing its focus on reliability in multi-node setups.[1]
Overview and History
Overview
GFS2, or Global File System 2, is a 64-bit symmetric cluster file system designed for Linux environments, enabling multiple nodes to share a common block device while providing a unified namespace and ensuring data coherency across the cluster.[6] In this symmetric architecture, all nodes have equal access to the file system without relying on a dedicated metadata server, allowing concurrent read and write operations that mimic the behavior of a local file system.[7] The primary use cases for GFS2 include high-availability clusters and shared storage environments in enterprise settings, such as those utilizing Red Hat Enterprise Linux (RHEL) for applications like databases or collaborative services that require simultaneous access from multiple servers.[6] It supports scalability for up to 16 nodes on x86 architectures, facilitating reliable data sharing in clustered setups without compromising consistency.[6] Key benefits encompass cluster-wide file locking to prevent conflicts and seamless integration with cluster managers like Pacemaker for resource orchestration and failover handling.[6] In its operational model, GFS2 nodes access the shared storage through block-level protocols, maintaining no single point of failure for metadata operations due to its distributed design. It employs journaling for crash recovery and distributed locking mechanisms to uphold coherency, ensuring robust performance in multi-node scenarios.[7]
Development History
The Global File System (GFS), the predecessor to GFS2, originated in the late 1990s when Sistina Software commercialized a clustered file system initially developed for high-performance computing environments.[3] GFS2, a complete rewrite of GFS to address limitations in performance, scalability, and support for 64-bit architectures, began development in early 2005 under the leadership of Ken Preslan at Red Hat following their acquisition of Sistina Software in December 2003 for $31 million.[3][8][9] This acquisition integrated GFS into Red Hat's product portfolio, shifting its focus toward enterprise clustering solutions.[3]
Key development milestones for GFS2 included its integration into the Linux kernel version 2.6.19, released in November 2006, enabling broader adoption beyond Red Hat's own distributions.[10] Subsequent enhancements arrived in later kernel versions, such as quota support introduced in kernel 2.6.33 (early 2010) and further refined in the 3.x series for improved cluster-wide enforcement, alongside other scalability improvements like better handling of large file systems.[11] More recent advancements include non-blocking lookups added in Linux kernel 6.8 in March 2024, which reduce latency in directory operations for clustered environments.[12] These updates were primarily driven by Red Hat engineers, including Steven Whitehouse, who served as the primary maintainer and led much of the core implementation.[3]
GFS2's evolution marked a transition from a proprietary system under Sistina to an open-source project licensed under the GPL, with Red Hat releasing the source code in June 2004 to foster community contributions and integration with the Linux ecosystem.[13] It played a central role in Red Hat Enterprise Linux (RHEL) starting with version 5 in 2007, where it became part of the Resilient Storage Add-On for high-availability clustering.[14] Management tools from the gfs2-utils package (including commands such as mkfs.gfs2 and fsck.gfs2) were developed alongside the file system to handle creation, maintenance, and repair tasks in cluster setups.[15] However, in a significant shift, Red Hat announced the deprecation of GFS2 support in RHEL 10, released in 2025, discontinuing the Resilient Storage Add-On. Despite its deprecation in RHEL, GFS2 remains part of the upstream Linux kernel and continues to receive maintenance updates, including security fixes in 2025.[16][1][17]
Architecture and Requirements
Hardware Requirements
GFS2 requires a shared block device that is simultaneously accessible by all nodes in the cluster to enable concurrent file system operations while maintaining data consistency. This shared storage must be provided through a storage area network (SAN) or equivalent infrastructure, with Fibre Channel interconnects recommended for optimal performance and reliability, as they have been extensively tested by Red Hat.[18] While GFS2 can function with lower-cost options such as iSCSI or Fibre Channel over Ethernet (FCoE), these may result in reduced performance compared to dedicated SAN solutions.[18] Local disks are not supported for GFS2 deployments, as the file system is designed exclusively for clustered environments with shared access; non-shared configurations would violate coherency guarantees.[7]
The cluster interconnect for GFS2 relies on a low-latency, high-bandwidth network to facilitate communication via the Distributed Lock Manager (DLM), which coordinates locking across nodes. Ethernet is fully supported for this purpose, with Gigabit Ethernet or faster links recommended to ensure reliable multicast packet delivery and minimize contention in cluster traffic.[19] InfiniBand can be used in certain configurations for enhanced performance, though it is not supported with redundant ring protocols (RRP) due to limitations with IP over InfiniBand (IPoIB).[19] Higher-quality network equipment, including dedicated interfaces for inter-node communication, is advised to improve overall GFS2 reliability and speed.[18]
GFS2 is compatible with standard block device interfaces such as SCSI and SAS, as well as NVMe over Fabrics (NVMe/oF) for high-performance shared storage scenarios.[20] Representative examples of supported storage arrays include high-end systems from vendors like EMC (now Dell EMC) Symmetrix, NetApp, IBM, and Hitachi, which integrate well with multipath configurations for redundancy.[21]
On the compute side, each node should have at least 1 GB of RAM, with 1 GB per logical CPU recommended for balanced performance; larger clusters benefit from multi-core processors (e.g., 4 or more cores per node) to handle DLM overhead and I/O scaling.[22] Cluster scale is limited to a maximum of 16 nodes for x86 architectures in supported Red Hat Enterprise Linux (RHEL) configurations, though fewer nodes are typical for production to maintain performance.[7] Extensions beyond this limit are possible with custom tuning but are not officially supported by Red Hat. For safety, all GFS2 clusters mandate fencing hardware, such as STONITH (Shoot The Other Node In The Head) devices, to isolate failed nodes and prevent data corruption from partition scenarios.[23]
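In practice, administrators typically confirm that every node sees the same shared device before creating the filesystem; a brief sketch using standard multipath tooling (the device name is hypothetical):
# List multipath devices and confirm the same WWID is reported on every node
multipath -ll
# Check the size and world-wide name of the shared logical unit
lsblk -o NAME,SIZE,TYPE,WWN /dev/mapper/mpatha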
Core Components and Design
GFS2 employs a sophisticated metadata structure to manage file system operations in a clustered environment. Central to this are dinodes, which serve as on-disk representations of file inodes, each spanning a single block and containing fields for file attributes, pointers to data blocks, and metadata such as access times and permissions. These dinodes support "stuffed" data for small files directly within the dinode to optimize space, while larger files use pointers organized in height-balanced metadata trees for efficient access. Resource groups (RGs) handle block allocation by dividing the disk into fixed-size slices, each featuring a header and a bitmap that tracks block states—such as free (00), allocated non-inode (01), unlinked inode (10), or allocated inode (11)—enabling localized allocation to reduce contention across nodes. Journals function as special files within the metadata structure, one per node, to log transactions and ensure consistency during concurrent access. This architecture leverages 64-bit addressing, theoretically supporting file systems up to 8 exabytes (EB), though practical limits are lower based on hardware constraints.[3][24]
The design principles of GFS2 emphasize symmetric clustering, where all nodes operate as peers with identical software stacks and direct access to shared storage, eliminating master-slave hierarchies for improved scalability and fault tolerance. This peer model relies on a distributed lock manager (DLM) for coordination, but the on-disk format prioritizes locality and efficiency through elements like the superblock—positioned 64 KB from the disk start for compatibility—and the resource index (RI) tables. The RI tables, stored in the metadata file system, map the physical locations of RGs, allowing quick metadata retrieval without scanning the entire disk and enhancing performance in multi-node scenarios. Overall, this format ensures that metadata operations remain localized, minimizing inter-node communication for common tasks.[3]
Key subsystems in GFS2 include block allocation managed via RGs, where each group uses per-RG locking to allow parallel allocations from different nodes, thereby minimizing hotspots and supporting high-throughput workloads. Quota enforcement tracks usage at the user and group levels through a dedicated system-wide quota file, which is periodically synchronized across the cluster; when mounted with quota=on, GFS2 maintains accurate accounting even without active limits, enabling enforcement via tools like quotacheck and setquota. Integration with the Linux Virtual File System (VFS) layer ensures POSIX compliance, providing standard semantics for operations like file I/O, permissions, and access control lists (ACLs), while hashed directories and extended attributes align with local file system expectations. For scalability, GFS2 uses height-balanced trees in its metadata structures to maintain constant access depth regardless of file system growth, and multi-level indirect blocks that extend dynamically—adding layers as needed rather than fixed power-of-two pointers—to accommodate large files without compatibility issues from predecessors. Journaling for metadata updates is handled per-node to replay transactions during recovery, ensuring cluster-wide consistency.[3][25][1][26]
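These on-disk structures can be inspected for debugging with the gfs2_edit utility from gfs2-utils; a sketch against a hypothetical, quiesced device:
# Print the superblock, which sits 64 KB from the start of the device
gfs2_edit -p sb /dev/vg_shared/lv_shared
# List the resource index, mapping the on-disk locations of the resource groups
gfs2_edit -p rindex /dev/vg_shared/lv_shared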
Core Functionality
Journaling Mechanism
GFS2 employs a per-node journaling model to maintain data integrity in a clustered environment, where each mounting node maintains its own dedicated journal stored as a regular file within the filesystem's metadata structure. This design allows dynamic addition of journals as new nodes join the cluster without requiring filesystem expansion or downtime. Journals primarily log metadata changes. In ordered mode, file data is written to disk before the metadata transaction is committed, ensuring consistency similar to local filesystems like ext4. Spectator mounts, which provide read-only access without modifications, do not require or use a journal, minimizing overhead for non-participating nodes.[1][26][3]
The journaling operation in GFS2 involves asynchronous commits of dirty data to the journal, occurring at regular intervals or triggered by explicit sync operations to balance performance and durability. By default, commits happen every 60 seconds if dirty data is present, though this interval can be tuned via mount options like commit= for specific workloads. Journals support revocable entries, allowing blocks to be withdrawn from the log before finalization, which prevents unnecessary replay during clean unmounts by ensuring the journal is flushed and marked consistent upon proper shutdown. GFS2 offers data ordering modes analogous to ext4, including the default ordered mode (where data is written before metadata) and writeback mode (for higher performance with potential data loss on crash). Additionally, data journaling—where both data and metadata are journaled—can be enabled on specific files or directories for maximum consistency, albeit at higher I/O cost. These modes enable tailored trade-offs between reliability and throughput in clustered scenarios.[27][26][28]
In the event of a node failure, GFS2's recovery process ensures filesystem consistency through journal scanning and replay performed by surviving or remounting nodes. Upon detection of a failure, other cluster nodes coordinate to scan the affected journal, identifying uncommitted transactions via journal descriptor blocks that track entry states and transaction boundaries. Replay then applies these changes atomically, restoring metadata (and data in journaled mode) to a consistent state; this multi-node recovery relies on lock grants to serialize access and prevent conflicts during the process. The procedure typically completes quickly for small journals but can extend for larger ones, with logs reporting details such as blocks scanned, transactions replayed, and revoke tags processed. This mechanism contributes to overall coherency by isolating recovery to the failed node's changes without impacting active nodes.[1][29][27]
Performance tuning of GFS2 journaling focuses on journal sizing to scale with cluster size and workload demands, as larger journals reduce commit frequency and improve I/O bandwidth utilization. The default journal size is 128 MB when creating the filesystem, though added journals default to 32 MB with a minimum of 8 MB and a maximum configurable up to 1 GB to accommodate high-activity nodes. Scaling involves provisioning one journal per active node, with dynamic growth recommended for expanding clusters to avoid bottlenecks; for instance, insufficient journal space can lead to stalled operations during bursts.
Data journaling, while ensuring stronger guarantees, increases write amplification and I/O bandwidth consumption, particularly for medium-to-large files, so it is selectively enabled via attributes like chattr +j for critical directories.[30][27][31]
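The tuning points above can be sketched with a few commands; the device, mount point, and sizes below are hypothetical:
# Create a filesystem with four 256 MB journals instead of the 128 MB default
mkfs.gfs2 -p lock_dlm -t mycluster:shared -j 4 -J 256 /dev/vg_shared/lv_shared
# Mount with a 30-second journal commit interval instead of the 60-second default
mount -t gfs2 -o commit=30 /dev/vg_shared/lv_shared /mnt/shared
# Enable data journaling on a directory so that newly created files inherit it
chattr +j /mnt/shared/critical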
Locking and Coherency Management
GFS2 relies on the Distributed Lock Manager (DLM) to coordinate access to shared resources across multiple nodes in a cluster. The DLM, typically integrated with user-space cluster infrastructure such as corosync or Pacemaker, maintains a distributed lock database replicated on each node, enabling atomic lock operations over the network. Lock requests specify modes including Protected Read (PR) for concurrent read access allowing multiple nodes to cache data, Concurrent Write (CW) for operations like direct I/O where multiple nodes can write without exclusive ownership, and Exclusive (EX) for modifications requiring sole control to prevent conflicts. These modes map directly to GFS2's glocks, which are per-object locks (e.g., inodes or resource groups) that cache DLM states locally to minimize communication overhead.[32][33]
Cache coherency in GFS2 is enforced through a lock-based protocol centered on glocks, which manage invalidations to ensure all nodes see consistent data views. When a node acquires an EX lock for writing, it invalidates cached pages on other nodes holding PR locks, forcing them to re-read from disk upon next access; similarly, demoting from EX to PR triggers write-back and invalidation. Directory and inode glocks provide namespace consistency by locking entire objects during operations like renames or attribute updates, serializing metadata access cluster-wide. Unlike GPFS, which employs cache fusion to transfer dirty pages directly between nodes, GFS2 avoids this mechanism, opting for simpler invalidation and disk fetches to maintain coherency without complex inter-node data movement. Lock granularity balances efficiency and consistency: byte-range locks handle fine-grained file data access via the VFS layer, coordinated under the broader inode glock for cluster synchronization, while metadata operations use whole-file or whole-object locks to protect structures like journals and allocation bitmaps. Promotion and demotion of glock states—e.g., from PR to EX—optimize traffic by retaining compatible modes when possible, reducing DLM invocations.[34]
To safeguard against split-brain scenarios where nodes lose communication but continue accessing storage, GFS2 mandates integration with node fencing mechanisms, such as STONITH (Shoot The Other Node In The Head) in Pacemaker clusters, ensuring failed or partitioned nodes are isolated before locks are recovered. This prevents concurrent writes that could corrupt data. Cluster quorum, enforced via the votequorum service, requires a majority of nodes to form consensus on membership and lock recovery, blocking operations in minority partitions until resolution. These features, built atop the DLM's reliable messaging, ensure robust consistency even during failures.[35]
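For diagnosing lock contention, glock state is exposed through the kernel's debugfs interface; a sketch, assuming a lock table named mycluster:shared:
# Make sure debugfs is mounted
mount -t debugfs none /sys/kernel/debug
# Dump the glock table for the filesystem, showing lock modes, holders, and waiters
cat /sys/kernel/debug/gfs2/mycluster:shared/glocks
The glocktop utility shipped with gfs2-utils presents a summarized, top-like view of the same data.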
Comparisons and Differences
Differences from Local Filesystems
GFS2 enables concurrent access and writes from multiple nodes to the same shared block storage, in contrast to local filesystems like ext4, which enforce exclusive single-node access to prevent data corruption without distributed coordination. This clustered access model relies on a distributed lock manager (DLM) over TCP/IP for coherency, introducing network round-trips for lock acquisition that increase latency compared to the direct, low-latency operations of local filesystems.[20][34]
Performance in GFS2 is impacted by this locking overhead, resulting in lower throughput for random I/O workloads—typically due to contention in acquiring exclusive locks—while excelling in parallel sequential access across nodes where multiple streams can operate without frequent synchronization. Unlike local filesystems, which can cache pages freely without cross-node validation, GFS2 maintains cache coherency via glock states, preventing stale data but adding validation costs that reduce single-node efficiency. For instance, operations involving high contention, such as frequent metadata updates, can degrade performance significantly compared to the streamlined local handling in ext4.[36][3]
Metadata management in GFS2 is distributed across nodes using cluster-wide glocks for each inode, differing from the centralized, node-local approach in filesystems like ext4 that avoids inter-node coordination. This design supports shared namespace consistency but can create lock contention hotspots, particularly in directories with heavy concurrent inserts or deletes from multiple nodes.[34][3]
For reliability, GFS2 integrates fencing and quorum mechanisms within the cluster infrastructure to isolate failed nodes and maintain consistency, enabling continued operation despite individual node failures without requiring full cluster downtime. Local filesystems, by contrast, depend on hardware-level redundancy like RAID for fault tolerance but offer no inherent support for multi-node failure scenarios, as they operate in isolation.[7][37]
Improvements over GFS1
GFS2 introduced significant architectural enhancements over its predecessor, GFS1, primarily through its adoption of a 64-bit architecture, which enables support for much larger filesystems compared to GFS1's 32-bit limitations. While GFS1 was constrained to a maximum filesystem size of 8 TB, GFS2 supports up to 16 TB on 32-bit hardware and theoretically up to 8 EB on 64-bit systems, with practical limits reaching 100 TB for files and filesystems in supported configurations.[27][38][3] This shift allows GFS2 to handle much larger storage environments, addressing the scalability bottlenecks inherent in GFS1's design. Additionally, GFS2 simplifies locking mechanisms by eliminating the lm-lockspace used in GFS1, instead relying on a distributed lock manager (DLM) with finer-grained glocks for resource groups, which reduces complexity and improves concurrency across cluster nodes.[3][39]
Performance improvements in GFS2 stem from reduced metadata overhead and more efficient resource management, leading to faster operations such as mounts, which can be up to twice as quick due to the absence of metadata generation numbers and a streamlined log manager that no longer tracks unlinked inodes or quota changes. GFS2 also provides robust online growth via gfs2_grow, a capability absent in GFS1, which required offline operations. These changes result in better overall I/O performance, including faster synchronous writes, cached reads without locking overhead, and reduced kernel memory usage, making GFS2 more suitable for high-throughput cluster workloads.[40]
GFS2 adds several key features not present in GFS1, including native quota enforcement (enabled via mount options), POSIX ACL inheritance for streamlined access control, and support for multi-device filesystems using underlying layers like LVM or mdadm for greater flexibility in storage pooling. It also removes the separate "meta" filesystem required in GFS1 for metadata management, enabling direct compatibility with standard Linux tools and a unified namespace. For backward compatibility, GFS2 provides the gfs2_convert utility to perform in-place upgrades from GFS1 filesystems, allowing existing data to be migrated without data loss after ensuring integrity with gfs_fsck, though direct mounting of GFS1 volumes requires this conversion step.[24][41][42]
Features and Compatibility
Advanced Features
GFS2 provides robust quota management to control resource usage in clustered environments, supporting both per-user and per-group limits enforced at the kernel level. Quota enforcement is disabled by default but can be enabled via the quota=on mount option, allowing administrators to set soft and hard limits using the edquota command without interrupting cluster operations. The quotacheck utility is used to examine quota-enabled file systems and build a table of current disk usage, updating the quota files; it requires the file system to be unmounted or mounted read-only and is typically run manually after enabling quotas or if inaccuracies are suspected (e.g., after a crash). Ongoing quota changes are synchronized to disk periodically, defaulting to every 60 seconds and adjustable via the quota_quantum mount option, ensuring accurate accounting across multiple nodes.[11]
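A brief sketch of the workflow described above, using the standard Linux quota tools and hypothetical device, mount point, and user names:
# Mount with quota enforcement enabled (quotas are off by default)
mount -t gfs2 -o quota=on /dev/vg_shared/lv_shared /mnt/shared
# Set soft and hard block limits for a user; edquota opens the limits in an editor
edquota -u alice
# Report current usage against the configured limits
repquota /mnt/shared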
Dynamic operations in GFS2 enable maintenance and optimization while the filesystem remains mounted and accessible by all cluster nodes. Filesystem growth is supported online using the gfs2_grow command, which extends the filesystem to utilize additional space on the underlying device, such as after expanding a logical volume, thereby accommodating increasing storage needs without downtime. For performance optimization, GFS2 lacks a dedicated defragmentation tool, but fragmentation can be addressed manually by identifying affected files with filefrag, copying them to rewrite contiguously, and renaming to replace the originals, which helps mitigate I/O inefficiencies in heavily accessed directories.[43]
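For example, growing the filesystem after extending the underlying logical volume could look like this sketch (volume and mount point names are hypothetical):
# Extend the shared logical volume by 100 GiB
lvextend -L +100G /dev/vg_shared/lv_shared
# Grow the mounted GFS2 filesystem into the new space; run on one node only
gfs2_grow /mnt/shared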
Security features in GFS2 extend POSIX access control lists (ACLs) for granular permissions beyond traditional UNIX modes; ACL support is enabled with the acl mount option and includes inheritance rules that propagate default ACLs to newly created files and subdirectories. Integration with SELinux is available through the context mount option, allowing specification of security contexts (e.g., system_u:object_r:httpd_sys_content_t:s0) to enforce mandatory access controls consistently across cluster nodes, though careful policy configuration is required to avoid conflicts in multi-host scenarios.[44]
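A sketch combining both options (device, mount point, directory, and group names are hypothetical):
# Mount with POSIX ACL support and a fixed SELinux context for all files
mount -t gfs2 -o acl,context="system_u:object_r:httpd_sys_content_t:s0" /dev/vg_shared/lv_shared /mnt/shared
# Grant a group read/execute access and set a matching default ACL for inheritance
setfacl -m g:webteam:rx /mnt/shared/www
setfacl -d -m g:webteam:rx /mnt/shared/www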
Additional enhancements include multi-host fencing mechanisms integrated with the Distributed Lock Manager (DLM), which isolates faulty nodes via STONITH actions (e.g., pcs stonith fence <node>) to maintain data consistency and prevent corruption during failures. The withdraw function serves as a protective mode, halting I/O operations upon detecting kernel or hardware inconsistencies—such as inode errors—and logging details for diagnosis, with recovery achieved by unmounting, running fsck.gfs2 on the device, and remounting to restore functionality. GFS2 also supports striped layouts through resource groups, enabling efficient parallel I/O by distributing data blocks across multiple allocation units, which optimizes throughput in high-performance computing workloads. A recent enhancement, introduced in Linux kernel 6.8 (March 2024), enables non-blocking directory lookups by using a non-blocking global lock flag, improving scalability and reducing latency in lookup-heavy operations.[45][46][12]
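The recovery path after a withdraw event, described above, amounts to roughly the following sketch (device and mount point are hypothetical):
# Unmount the withdrawn filesystem; fsck.gfs2 requires it unmounted on all nodes
umount /mnt/shared
# Check and repair the filesystem, answering yes to all prompts
fsck.gfs2 -y /dev/vg_shared/lv_shared
# Remount once the check completes cleanly
mount -t gfs2 /dev/vg_shared/lv_shared /mnt/shared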
Compatibility Mechanisms
GFS2 provides interoperability with legacy GFS1 file systems through an in-place conversion process using the gfs2_convert utility, which transforms the on-disk metadata from the GFS1 format to the GFS2 format without requiring additional disk space.[47] This conversion must be performed offline, with the file system unmounted on all nodes and first verified using gfs_fsck to ensure integrity, as the process is irreversible and any interruptions could lead to data corruption.[48] Unlike GFS1, which relied on a separate metadata structure for certain operations, GFS2 integrates all metadata directly into the primary file system, eliminating the need for auxiliary components during access or conversion.[3] The gfs2-utils package includes tools such as gfs2_convert for the upgrade process and gfs2_tool for subsequent editing, querying, and maintenance of converted GFS2 file systems, including journal management and superblock adjustments.[49] For legacy clusters transitioning to GFS2, a backward compatibility mode is not natively supported in the kernel, but the conversion allows seamless migration by enabling GFS2 nodes to access the upgraded file systems while phasing out GFS1-only environments.[1]
GFS2 integrates with cluster management software like CMAN in Red Hat Enterprise Linux 6 and earlier, and Pacemaker with Corosync in RHEL 7 through 9, for high-availability configurations; support for GFS2 was removed in RHEL 10 (2025).[24][16] For non-Red Hat distributions, GFS2 is available through the upstream Linux kernel, allowing deployment with compatible distributed lock managers like DLM without proprietary extensions.[1] However, GFS2 does not support read-only mounting of unconverted GFS1 file systems, as the on-disk formats are incompatible, necessitating full conversion before access.[1] Mixed-mode operations involving both GFS1 and GFS2 nodes on the same file system are not possible post-conversion, and improper handling during the upgrade—such as failing to back up data or encountering hardware issues—can result in permanent data loss.[48]
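The offline upgrade path amounts to roughly the following sketch, assuming a hypothetical legacy volume and current backups:
# With the filesystem unmounted on every node, verify GFS1 integrity first
gfs_fsck -y /dev/vg_legacy/lv_gfs
# Convert the on-disk metadata in place; this step cannot be undone
gfs2_convert /dev/vg_legacy/lv_gfs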
Deployment and Recent Developments
Mounting and Configuration
Before mounting a GFS2 filesystem, the cluster must be initialized using tools like corosync for messaging and Pacemaker for resource management, ensuring all nodes have synchronized clocks via NTP or PTP and access to shared storage such as a logical volume managed by LVM.[6][1] The filesystem itself is created on the shared block device using the mkfs.gfs2 command, which requires specifying the locking protocol and resource ID; for example, mkfs.gfs2 -p lock_dlm -t clustername:fsname -j number_of_journals /dev/blockdevice, where the -j option sets the number of journals equal to the expected number of mounting nodes (one per node), with a default journal size of 128 MB.[6][1]
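As a concrete instance of that template, a hypothetical three-node cluster named alpha sharing a volume for a filesystem called mydata might be formatted as follows:
# One journal per expected mounting node; the lock table must match the cluster name
mkfs.gfs2 -p lock_dlm -t alpha:mydata -j 3 /dev/vg_shared/lv_mydata
# Optionally confirm the superblock settings, such as the lock table and protocol
tunegfs2 -l /dev/vg_shared/lv_mydata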
The mounting process uses the mount.gfs2 command on each node, specifying options for cluster locking; a typical command is mount.gfs2 /dev/blockdevice /mountpoint -o lockproto=lock_dlm,locktable=clustername:fsname, where lockproto=lock_dlm enables the Distributed Lock Manager for coherency across nodes, and locktable identifies the cluster and filesystem to the DLM.[6][50] Journals can be added dynamically after initial creation and mounting, using gfs2_jadd -j number /mountpoint to accommodate additional nodes without recreating the filesystem; for instance, gfs2_jadd -j 1 /mygfs2 adds one journal.[6][1]
For persistent mounting, add an entry to /etc/fstab on each node, such as /dev/vgname/lvname /mountpoint gfs2 defaults,_netdev,lockproto=lock_dlm,locktable=clustername:fsname 0 0, ensuring the _netdev option delays mounting until the network is available for cluster communication.[6] The locktable for DLM is configured during filesystem creation with the -t option in mkfs.gfs2 (e.g., -t alpha:mydata) and referenced at mount time to scope locks to the specific cluster and filesystem, preventing interference with other resources.[6][50] Basic tuning can be applied via mount options like commit=seconds to adjust the journal commit interval (default 60 seconds) or quota_quantum=seconds for quota checks (default 60 seconds), optimizing for workload-specific behavior.[50]
GFS2 management tools include fsck.gfs2 for filesystem integrity checks and repairs, invoked as fsck.gfs2 -y /dev/blockdevice to automatically fix issues when the filesystem is unmounted.[6][1] The gfs2_edit utility allows inspection of metadata, such as viewing the journal index with gfs2_edit -p jindex /dev/blockdevice for debugging purposes.[6] Unmounting is performed with umount.gfs2 /mountpoint or standard umount on each node, ideally managed through Pacemaker resources to ensure clean shutdown and avoid hangs during node fencing.[6][1]
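Rather than relying solely on fstab, production clusters usually hand the mount to Pacemaker so that mounting, unmounting, and fencing are coordinated; a sketch using the Filesystem resource agent (resource, device, and mount point names are hypothetical, and exact pcs syntax varies by version):
# Freeze rather than stop resources if quorum is lost, as recommended for GFS2
pcs property set no-quorum-policy=freeze
# Clone a Filesystem resource so the GFS2 volume is mounted on every node
pcs resource create sharedfs Filesystem device="/dev/vg_shared/lv_shared" directory="/mnt/shared" fstype="gfs2" options="noatime" op monitor interval=10s on-fail=fence clone interleave=true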