Logical volume management
Logical volume management (LVM) is a storage device management technology in Linux that uses the device mapper framework in the Linux kernel to enable the creation of a layer of abstraction over physical storage devices, allowing administrators to manage logical volumes more flexibly than with conventional disk partitioning.[1] This abstraction pools storage from multiple physical devices into logical units, supporting features like dynamic resizing, snapshots, and data relocation without downtime.[2]

At its core, LVM operates through three primary components: physical volumes (PVs), which are the underlying storage devices or partitions initialized for LVM use; volume groups (VGs), which aggregate one or more PVs into a unified pool of storage; and logical volumes (LVs), which are virtual partitions carved from a VG and presented to the operating system as block devices for formatting and mounting.[1] These elements enable advanced capabilities, such as spanning volumes across disks, striping for performance, mirroring for redundancy, thin provisioning to allocate space on demand, and caching to accelerate access to slower storage.[1]

The benefits of LVM include greater adaptability to changing storage needs, easier backup and recovery via snapshots, and simplified administration in environments like servers and data centers, where traditional fixed partitions can limit scalability.[2] For instance, LVs can be resized—expanded online without reformatting or unmounting, or reduced with appropriate filesystem handling—and data can be moved between physical devices using tools like pvmove while applications remain active.[1]
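A minimal sketch of this layering, assuming two spare disks with hypothetical device names /dev/sdb and /dev/sdc and an illustrative volume group named vg0:

# Initialize the disks as physical volumes
pvcreate /dev/sdb /dev/sdc
# Pool both PVs into a single volume group
vgcreate vg0 /dev/sdb /dev/sdc
# Carve a 100 GiB logical volume from the pooled space
lvcreate -L 100G -n data vg0
# The LV is an ordinary block device: format and mount it
mkfs.ext4 /dev/vg0/data
mount /dev/vg0/data /mnt/data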
Historically, LVM in Linux originated from IBM's Logical Volume Manager, which was adopted by the Open Software Foundation (OSF) for OSF/1 and later influenced implementations in HP-UX and Digital UNIX; the Linux version was developed starting in the late 1990s by Heinz Mauelshagen and the team at Sistina Software, with initial releases integrated into the kernel around 1998–2001 to provide enterprise-grade storage management.[3] Ongoing evolution has incorporated clustering support, RAID integration, and enhanced metadata handling, making it a standard in distributions like Red Hat Enterprise Linux and Ubuntu.[3]
History and Development
Origins in Unix-like Systems
The development of logical volume management (LVM) in Unix-like systems drew early influences from proprietary storage solutions in the late 1980s and early 1990s, which introduced concepts for managing volumes across multiple disks to overcome single-disk limitations in file systems.[4] IBM's Logical Volume Manager served as a foundational implementation, adopted by the Open Software Foundation (OSF) for its OSF/1 operating system in the early 1990s. This adoption influenced subsequent LVM-like systems in HP-UX, starting with release 9.0 in 1992, and Digital UNIX (later Tru64 UNIX).[3] A pivotal advancement came with the introduction of LVM in IBM's AIX operating system, marking the first commercial implementation in 1990 as part of AIX Version 3.1.[5] This system mandated LVM usage to address partitioning constraints in earlier Unix variants, allowing administrators to create volume groups from physical volumes and allocate logical volumes dynamically for improved scalability and management.[6] AIX LVM's design emphasized integration with the operating system's object data manager, providing a robust framework that influenced subsequent Unix storage tools.[5]

During the 1990s, LVM concepts further evolved by incorporating principles from RAID and disk mirroring techniques prevalent in BSD and System V Unix variants. BSD implementations, such as those in 4.4BSD, integrated mirroring for fault tolerance, while System V extensions supported striping and redundancy to enhance data availability across disks. These borrowings enabled LVM to support software-based redundancy without hardware dependencies, adapting RAID levels like mirroring (RAID 1) into volume management operations.

The open-source movement brought LVM to Linux with version 1 in 1998, developed by Heinz Mauelshagen at Sistina Software and integrated via kernel patches, with support for the 2.2 series released in January 1999.[7][3] Linux LVM1 focused on basic volume spanning, allowing multiple physical devices to be combined into a single logical volume for extended storage capacity.[7] This implementation provided an accessible alternative to proprietary systems, emphasizing simplicity in user-space tools for volume creation and resizing.

Evolution and Key Milestones
Logical volume management (LVM) in Linux saw a significant advancement with the introduction of LVM2 in 2003, coinciding with the release of Linux kernel 2.6. This version replaced the original LVM1 by leveraging the new device-mapper kernel framework, which enabled more flexible mapping of physical storage to logical volumes.[7][8] LVM2 introduced read-write snapshots, allowing point-in-time copies of volumes for backups or testing without disrupting the original data, a feature limited to read-only in prior implementations.[7] These enhancements provided greater abstraction from underlying hardware, facilitating dynamic resizing and mirroring of volumes.

Subsequent milestones in Linux LVM development focused on user-space integration and compatibility with emerging storage technologies. In 2018, Stratis emerged as a user-space daemon for simplified storage management, building on device-mapper and XFS to offer thin provisioning, snapshots, and caching while integrating as a systemd service for seamless operation in modern distributions.[9][10] Around 2020, the Linux kernel 5.x series incorporated improvements to the device-mapper subsystem, enhancing LVM's performance and reliability on NVMe solid-state drives through optimized I/O handling and multipath support.

Beyond Linux, LVM concepts influenced storage frameworks in other operating systems, promoting cross-platform adoption. FreeBSD's GEOM modular disk transformation framework, which supports logical volume-like operations such as striping and mirroring, reached key maturity milestones by 2005, enabling robust software RAID and volume management.[11] Microsoft introduced Storage Spaces in Windows 8 and Windows Server 2012, providing a resilient, scalable storage pool abstraction akin to LVM for pooling disks into virtual spaces with parity and mirroring. In cloud environments, Azure Disk Storage received 2023 updates, including general availability of incremental snapshots for Premium SSD v2 and Ultra Disks, facilitating efficient volume management and data protection in virtualized setups.[12]

Fundamentals
Basic Principles
Logical volume management (LVM) is a storage virtualization technology that operates as a device mapper layer, abstracting physical storage devices into flexible logical units to enable dynamic management of disk space. This abstraction allows administrators to perform operations such as expanding logical volumes online where supported by the filesystem, without reformatting, though shrinking often requires unmounting to ensure safety, providing a higher level of flexibility compared to rigid physical layouts.[1][13]

At its core, LVM employs a principle of indirection through metadata that maps logical block addresses to underlying physical storage extents, decoupling the logical view from the physical hardware configuration. This mapping mechanism supports features like spanning logical volumes across multiple physical disks, treating disparate storage devices as a unified resource pool rather than isolated components. The metadata, typically stored on the physical volumes themselves, ensures that data relocation and access remain transparent to applications and the operating system.[14][13][1]

In contrast to traditional partitioning schemes, which assign fixed boundaries to physical disks, LVM conceptualizes storage as pooled resources within volume groups, from which logical volumes can be dynamically allocated, extended, or removed online. This pool-based approach facilitates seamless additions or removals of physical volumes without disrupting existing logical structures, enhancing scalability in environments with evolving storage needs.[1][13]

Comparison to Traditional Partitioning
Traditional partitioning methods, such as those using tools like fdisk with Master Boot Record (MBR) or GUID Partition Table (GPT), create fixed divisions directly on physical disks that are inherently static and bound to the underlying hardware geometry.[15] These partitions cannot be easily resized or reallocated without significant intervention, often requiring the system to be taken offline and the use of specialized tools for adjustment, with a high risk of data loss if the process fails during reformatting or boundary shifts. In contrast, Logical Volume Management (LVM) introduces an abstraction layer through volume groups that pool storage from multiple physical volumes, enabling dynamic allocation of logical volumes independent of individual disk boundaries.[15] This allows for non-disruptive expansion or contraction of volumes—such as extending a logical volume with commands like lvextend—without the need to repartition entire drives, though unmounting and filesystem-specific support (e.g., for ext4) are typically required to minimize risks. As a result, LVM reduces downtime and enhances manageability in dynamic environments.
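A sketch of such a non-disruptive expansion, assuming a volume group vg0 with free extents and an ext4-formatted logical volume named data that remains mounted and in use:

# Grow the LV by 50 GiB; -r (--resizefs) grows the filesystem in the same step
lvextend -L +50G -r /dev/vg0/data
# Equivalently, extend the LV first, then grow ext4 online
# lvextend -L +50G /dev/vg0/data && resize2fs /dev/vg0/data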
Particularly in server settings, LVM's ability to aggregate resources across drives facilitates seamless growth, such as combining multiple terabyte disks into a unified pool for scalable storage, whereas traditional partitioning confines volumes to single-disk limits and complicates multi-device setups.[16] For instance, three 1 TB disks can form a 3 TB volume group in LVM, supporting volumes up to petabyte scales through extensive pooling, far exceeding the constraints of conventional methods tied to individual drive capacities.[15]
Architecture and Components
Physical and Logical Elements
Physical volumes (PVs) serve as the foundational storage units in logical volume management (LVM), consisting of initialized block devices such as entire disks or disk partitions designated for LVM use.[17] To create a PV, the pvcreate command is employed, which writes an LVM identifier label—typically in the second 512-byte sector—and a small metadata area shortly thereafter, marking the device for LVM operations.[18] This metadata, stored in ASCII format, is compact, with a default size of approximately 1 MB per volume, and by default includes one copy at the beginning of the device, though up to two copies can be configured for redundancy.[19] The metadata records essential details like the device's UUID and size, enabling LVM to recognize and manage the PV across system reboots.[17]
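The label and metadata written by pvcreate can be inspected afterwards; a brief sketch, assuming a hypothetical spare device /dev/sdb:

# Write the LVM label and metadata area to the device
pvcreate /dev/sdb
# Summarize all PVs: device, VG membership, size, and free space
pvs
# Show one PV in detail, including its UUID and extent counts
pvdisplay /dev/sdb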
Logical volumes (LVs) represent the user-accessible abstractions in LVM, functioning as virtual block devices that appear to applications and file systems much like traditional disk partitions.[19] These LVs are formed from aggregated storage resources, providing a contiguous address space that can be mounted, formatted, and utilized similarly to physical partitions, but with the key advantage of dynamic resizing without downtime.[15] For instance, an LV can be extended using the lvextend command to increase its capacity by appending additional storage, followed by resizing the associated file system if needed.[19] Shrinking an LV is also possible but requires careful handling, as it may result in data loss in the reduced portion, necessitating backups and file system adjustments beforehand.[19]
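Because the filesystem must be shrunk before the LV beneath it, reductions are typically performed offline; a cautious sketch for ext4, assuming a volume /dev/vg0/data being reduced to 40 GiB after a verified backup:

# Order matters: shrink the filesystem first, then the LV
umount /mnt/data
e2fsck -f /dev/vg0/data
resize2fs /dev/vg0/data 40G
lvreduce -L 40G /dev/vg0/data
mount /dev/vg0/data /mnt/data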
The mapping process in LVM translates physical storage into logical space by dividing PVs into fixed-size chunks and combining them to form a unified, contiguous area for LVs.[15] This abstraction allows LVs to span multiple PVs seamlessly, presenting a linear block interface to users while hiding the underlying physical distribution.[17]
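This mapping can be examined directly; for example, assuming a volume group vg0, lvdisplay's --maps option lists which physical extents on which PVs back each segment of an LV:

# Show segment-by-segment logical-to-physical extent mappings
lvdisplay -m vg0
# A report-style view of the same segment ranges
lvs -o +seg_pe_ranges vg0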
Volume Groups and Extents
In logical volume management, volume groups serve as the central organizational layer, aggregating one or more physical volumes into a unified pool of storage capacity that can be dynamically allocated to logical volumes. This pooling mechanism allows administrators to treat disparate physical storage devices as a single, cohesive resource, enabling flexible resizing and redistribution of space without regard to individual device boundaries. Volume groups can be activated to make their contained logical volumes accessible to the operating system or deactivated to isolate them for maintenance, such as during system migrations or backups, thereby enhancing manageability in complex storage environments.[20]

The fundamental units of allocation within a volume group are extents, which divide the storage into manageable chunks. Physical extents represent the smallest contiguous areas on physical volumes that can be assigned, with a default size of 4 MiB in Linux LVM implementations, though this can be configured during volume group creation to balance granularity and overhead. Logical extents, in turn, are the corresponding units allocated to logical volumes, mapping directly one-to-one with physical extents to ensure efficient space utilization across the volume group. This 1:1 correspondence maintains data integrity while allowing logical volumes to span multiple physical volumes seamlessly.[20]

Allocation of logical extents from physical extents follows policies designed to optimize performance and space efficiency. The contiguous policy is preferred, as it assigns extents in adjacent blocks to minimize seek times and improve I/O throughput, but falls back to fragmented allocation—distributing extents non-contiguously across available physical volumes—when free space constraints prevent contiguous placement. For enhanced performance in scenarios like striped volumes, extents can be allocated using round-robin distribution across multiple physical volumes, interleaving data stripes to leverage parallelism and reduce bottlenecks.[21]

Core Features
Snapshots and Copy-on-Write
In logical volume management (LVM), snapshots enable the creation of point-in-time copies of logical volumes (LVs) without interrupting ongoing operations, primarily through a copy-on-write (COW) mechanism. This approach preserves the original data by redirecting any subsequent writes on the source LV to new storage extents, while the snapshot LV retains references to the unchanged original extents via the underlying extent mapping system. The COW process ensures that the snapshot initially shares all data blocks with the origin LV, consuming minimal additional space at creation, and only allocates new space for modified blocks as changes occur on the origin.[22][23] The creation of an LVM snapshot is instantaneous, facilitated by the device-mapper framework in the Linux kernel, which sets up the necessary metadata mappings without copying data upfront. Administrators typically use the lvcreate --snapshot command, specifying the desired snapshot size and naming it relative to the origin LV within the same volume group; for example, lvcreate --snapshot --size 10G --name snap1 /dev/vg0/origin. This results in a new LV that appears as a fully populated block device, ready for immediate use, though it relies on the volume group's free extents for the COW storage. Over time, however, the snapshot's performance can degrade due to fragmentation, as the scattered allocation of COW extents leads to increased I/O overhead during reads and writes.[22][24]
Space requirements for the snapshot LV are determined by the anticipated data modifications on the origin, estimated as the product of the original LV size, the expected change rate (as a fraction of the volume), and the duration the snapshot will be maintained. For instance, a low-change volume like /usr might require only 3-5% of its size for short-term snapshots, while high-change volumes such as /home could demand 30% or more to avoid exhaustion. Although LVM2 snapshots are writable by default, they can be created read-only to guarantee point-in-time integrity; if the allocated COW space fills completely, the snapshot becomes invalid, potentially suspending I/O on the origin LV until resolved by extension or removal. Proper management, such as monitoring via lvs and periodic resizing with lvextend, is essential to prevent such failures.[25][24]
Snapshots find practical application in scenarios requiring data preservation during potentially disruptive activities, such as creating backups of active filesystems or testing system upgrades in isolated environments. For backups, the snapshot provides a consistent, quiesced view of the LV that can be mounted and archived without affecting the live system. In upgrade testing, it allows rollback by merging changes or simply discarding the snapshot if issues arise, minimizing downtime. These use cases leverage the efficiency of COW to avoid full data duplication, though careful sizing based on workload patterns remains critical to ensure reliability.[23][25]
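A sketch of the backup workflow, reusing the snap1 snapshot from the earlier example together with hypothetical mount and archive paths:

# Mount the snapshot read-only to obtain a frozen, consistent view
mkdir -p /mnt/snap
mount -o ro /dev/vg0/snap1 /mnt/snap
# Archive the quiesced filesystem while the origin stays in service
tar -czf /backup/origin-backup.tar.gz -C /mnt/snap .
# Discard the snapshot once the archive completes
umount /mnt/snap
lvremove /dev/vg0/snap1
# For upgrade rollback instead, merge the snapshot back into the origin:
# lvconvert --merge /dev/vg0/snap1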
Hybrid and Thin Volumes
Hybrid volumes in logical volume management combine faster storage media, such as solid-state drives (SSDs), with slower but higher-capacity hard disk drives (HDDs) to optimize performance and cost. This tiering approach dynamically promotes frequently accessed "hot" data blocks to the SSD tier for low-latency operations while demoting less-used "cold" data to the HDD tier, based on access patterns monitored by the system. In Linux LVM implementations, the device-mapper cache (dm-cache) facilitates this by creating a cache logical volume that overlays an origin volume on slower storage, automatically migrating blocks between tiers to improve overall I/O throughput.[26]

Thin volumes extend logical volume management by enabling over-allocation of storage space, where logical volumes can exceed the physical capacity of the underlying pool, such as provisioning 10 TB logically from 5 TB physically. Allocation occurs on-demand as data is written, with a thin pool—composed of physical extents from a volume group—tracking usage through dedicated metadata that maps virtual to physical blocks. This metadata, typically allocated at a small percentage of the pool (e.g., via a 1000:1 data-to-metadata ratio), ensures efficient space utilization but requires monitoring to prevent exhaustion.[23][26]

Overcommitment in thin provisioning introduces risks, including potential storage pool depletion if demand exceeds physical limits, leading to write failures or system outages; administrators must actively monitor pool usage and expand capacity proactively.[23]

Implementations
Linux LVM
Logical Volume Management (LVM) in Linux is primarily implemented through the LVM2 suite, an open-source userspace toolset that enables flexible storage management by abstracting physical storage devices into logical volumes.[27] LVM2 builds on the kernel's device-mapper framework, which has been integrated since Linux kernel version 2.6, allowing for dynamic mapping of physical extents to logical ones without requiring kernel patches.[8] This implementation supports advanced features like resizable volumes and snapshots, making it a standard component in major Linux distributions such as Red Hat Enterprise Linux and Ubuntu.[19]

The core tools in the LVM2 suite facilitate the creation and manipulation of LVM components. The pvcreate command initializes physical volumes (PVs) on block devices, marking them for use in LVM. Once PVs are prepared, vgcreate combines them into a volume group (VG), pooling their storage capacity. Logical volumes (LVs) are then allocated from the VG using lvcreate, which specifies size, type, and other parameters, while lvresize allows online resizing of existing LVs to expand or shrink storage as needed. These tools operate via the lvm command wrapper, ensuring consistent metadata handling across operations.[28]
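A brief sketch of lvresize, which both grows and shrinks; the -r flag adjusts the filesystem in the same step (illustrative volume vg0/data):

# Set vg0/data to exactly 80 GiB, resizing its filesystem as well
lvresize -L 80G -r /dev/vg0/data
# Relative sizing also works: add 10 GiB to the current size
lvresize -L +10G -r /dev/vg0/data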
By default, LVM2 configures physical extents (PEs) at 4 MiB, a size that balances allocation granularity against metadata overhead. Unlike LVM1, whose extent-count cap limited volume size at default settings, LVM2 supports logical volumes up to 8 EiB, though larger extents can still be specified if needed.[13] For redundancy, LVM supports RAID levels 0, 1, and 10 through hybrid configurations with the mdadm tool, where mdadm arrays serve as underlying PVs for LVM, or via LVM's built-in RAID targets in device-mapper that leverage the kernel's Multiple Devices (MD) drivers for striping and mirroring.[29] This hybrid approach enables fault-tolerant setups without proprietary hardware.
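As a sketch of the built-in RAID targets, assuming a volume group vg0 spanning at least two PVs:

# RAID 1: one additional mirror image (two copies of the data)
lvcreate --type raid1 -m 1 -L 100G -n lv_mirror vg0
# RAID 0: stripe across two PVs for throughput
lvcreate --type raid0 -i 2 -L 100G -n lv_stripe vg0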
Management of LVM in Linux occurs primarily through command-line interfaces provided by the lvm2 package, which includes utilities for scanning, displaying, and activating components.[30] Volume groups are activated using vgchange, which makes their LVs available to the system, often during boot via initramfs hooks. For graphical management, tools like system-config-lvm offer a user-friendly interface for creating and resizing volumes, though it is considered legacy in newer distributions and may not support all advanced features.[31] Overall, these elements integrate seamlessly with the general LVM architecture of physical and logical extents as outlined in core documentation.[27]
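A sketch of the scan-and-activate sequence these utilities perform, whether run manually or from initramfs hooks at boot:

# Discover PVs and VGs on attached block devices
pvscan
vgscan
# Activate all LVs in vg0, creating their device nodes under /dev/vg0/
vgchange -ay vg0
# Confirm the volumes are active
lvs vg0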
Non-Linux Systems
In Windows operating systems, logical volume management has evolved from the legacy Dynamic Disks feature to the more advanced Storage Spaces. Dynamic Disks, introduced in Windows 2000, allowed for the creation of spanned, striped, mirrored, and RAID-5 volumes using a Logical Disk Manager database to handle noncontiguous extents, providing flexibility beyond basic partitioning.[32] However, Dynamic Disks have been deprecated for all uses except mirror boot volumes since Windows 8 and Server 2012, due to issues like irreversible conversions and performance limitations, with Microsoft recommending alternatives for resilient storage.[32] Storage Spaces, introduced in Windows 8 and Windows Server 2012, serves as the primary modern equivalent, aggregating physical disks into storage pools to create virtualized volumes with resiliency options such as two-way or three-way mirroring (tolerating one or two drive failures) and single or dual parity (for efficient capacity with fault tolerance).[33] It supports storage tiers, including SSD caching layers for improved performance on HDDs, thin provisioning, and dynamic resizing, enabling scalable logical volume management without hardware-specific dependencies.[33]

In BSD variants like FreeBSD, the GEOM framework provides a modular approach to logical volume management through disk transformation layers. GEOM, integrated into FreeBSD since version 5.0 in 2003, acts as a stackable framework that transforms block device access, allowing providers (such as physical disks) to be layered with classes for operations like striping, mirroring, and parity-based RAID configurations.[11] This modular stacking enables flexible logical volumes by composing transformations—for example, using gmirror for RAID-1 mirroring or gstripe for RAID-0 striping—without a rigid hierarchy, supporting auto-discovery and directed configuration for dynamic storage adjustments.[11]

macOS employs the Apple File System (APFS) Container as its logical volume management structure, succeeding the earlier Core Storage. Introduced in macOS High Sierra (version 10.13) in 2017, APFS Containers function as shared storage pools that house multiple volumes, dynamically allocating space on demand across them for efficient utilization similar to volume groups.[34] APFS supports native snapshots since its debut, creating read-only point-in-time copies of volumes within the container using copy-on-write mechanisms, which facilitate backups and data recovery without full duplication.[34] This design allows seamless addition, deletion, or resizing of volumes inside the container, enhancing flexibility for system and user data management.[35]

Beyond these, proprietary and open-source solutions offer LVM-like capabilities in other environments. Veritas Volume Manager (VxVM), a commercial tool available for Solaris and AIX since the late 1990s, provides a logical layer over physical disks and LUNs, enabling volume groups, dynamic resizing, mirroring, and striping to overcome hardware partitioning limits.[36] In illumos-based systems (a successor to OpenSolaris), ZFS volumes (zvols) deliver block-device logical volumes atop ZFS pools, supporting features like snapshots, cloning, and thin provisioning for use as swap or raw devices.
In cloud contexts, Amazon Web Services (AWS) Elastic Block Store (EBS) volumes emulate logical volumes with API-driven snapshot capabilities, creating incremental point-in-time backups stored in S3 for replication and restoration across availability zones.[37]

Advantages and Limitations
Benefits for Storage Management
Logical Volume Management (LVM) provides significant flexibility in storage administration by enabling online resizing of logical volumes without requiring system downtime or reformatting of underlying devices. Administrators can extend or reduce volume sizes using commands such as lvextend or lvreduce, allowing seamless adaptation to changing storage needs in dynamic environments like virtualized infrastructures. This capability supports live data migration via tools like pvmove, which relocates extents between physical volumes while the system remains operational, facilitating hardware upgrades or maintenance without interrupting services.
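A sketch of such a live migration off an aging disk, assuming vg0 currently includes /dev/sdb and a replacement disk at the hypothetical path /dev/sdd:

# Add the replacement disk to the volume group
pvcreate /dev/sdd
vgextend vg0 /dev/sdd
# Relocate all allocated extents off the old disk while LVs stay online
pvmove /dev/sdb
# Retire the emptied disk from the volume group
vgreduce vg0 /dev/sdb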
LVM enhances scalability by pooling multiple physical storage devices into volume groups, from which large logical volumes can be created to manage expansive datasets efficiently and reduce administrative overhead in data centers. This abstraction layer simplifies the handling of petabyte-scale storage by treating disparate disks as a unified resource, enabling easier expansion as capacity demands grow.[19] Additionally, thin provisioning allows for the allocation of virtual storage space on demand, optimizing utilization and lowering initial acquisition costs by avoiding the need to pre-allocate full physical capacity.[23] As detailed in core features, thin volumes defer physical allocation until data is written, further promoting efficient resource use across workloads.
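A sketch of this on-demand allocation, assuming a volume group vg0 with 100 GiB free; the thin volume's virtual size deliberately exceeds the pool's physical capacity:

# Create a 100 GiB thin pool inside vg0
lvcreate --type thin-pool -L 100G -n tpool vg0
# Provision a 250 GiB virtual volume backed by the 100 GiB pool
lvcreate -V 250G --thin -n thinvol vg0/tpool
# Watch data and metadata usage to avoid pool exhaustion
lvs -o lv_name,data_percent,metadata_percent vg0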
For redundancy, LVM incorporates built-in mirroring to ensure fault tolerance, where data is duplicated across multiple physical volumes to protect against disk failures, offering a software-based alternative to hardware RAID configurations. Commands like lvconvert --type raid1 --mirrors 2 convert existing volumes into mirrored logical volumes that maintain data integrity by synchronously replicating writes, simplifying setup and management compared to manual RAID assembly. This approach enhances system reliability in production environments by enabling quick recovery from hardware issues without complex external tools.[19]
Potential Drawbacks and Challenges
Logical volume management systems, while offering flexibility in storage configuration, can introduce external fragmentation issues, particularly through repeated use of snapshots and volume resizes. Snapshots rely on copy-on-write mechanisms that allocate space in chunks, leading to scattered data placement over time and degrading I/O performance, especially on hard disk drives (HDDs) where seek times are significant. For instance, in benchmark tests involving small file operations under snapshot mode, random read performance exhibited up to a 20% regression after substantial write activity, attributed to increased fragmentation and metadata overhead.[38][39]

The inherent complexity of LVM metadata management presents a notable challenge, requiring administrators to navigate a steep learning curve to effectively handle volume groups, physical extents, and logical volumes. This abstraction layer, while powerful, demands precise command-line operations for tasks like extending or mirroring volumes, increasing the risk of misconfiguration for less experienced users. Recovery from volume group (VG) metadata corruption is particularly arduous; such corruption can render entire storage pools inaccessible, necessitating manual restoration from backups stored in locations like /etc/lvm/backup or /etc/lvm/archive. Although tools like vgcfgrestore facilitate recovery, the process is not foolproof and often requires downtime, underscoring the critical need for regular, verified metadata backups.[40][41][42]

LVM introduces additional resource overhead due to the indirection in block mapping, consuming extra CPU cycles and memory for translating logical addresses to physical ones during I/O operations. This overhead can be significant for workloads involving small files. Memory usage is also impacted, as LVM maintains metadata structures in RAM, potentially straining systems with limited resources during intensive operations like snapshot creation. Furthermore, LVM's reliance on kernel modules for activation can lead to boot incompatibilities with certain configurations; for example, GRUB requires an initramfs to load LVM support and activate volumes before mounting the root filesystem, as the bootloader alone cannot fully handle LVM without this intermediate step.[43][38][44][45]

Integration and Advanced Topics
Compatibility with Filesystems
Logical Volume Management (LVM) serves as an abstraction layer between physical storage and filesystems, enabling seamless integration with various filesystem types for dynamic volume operations. This compatibility allows administrators to create, resize, and manage logical volumes (LVs) while leveraging filesystem-specific features for data storage.[46] LVM integrates effectively with Linux-native filesystems such as ext4, XFS, and Btrfs, particularly through support for online resizing that minimizes downtime. For ext4, after extending an LV using the lvextend command, the filesystem can be grown online with resize2fs to utilize the additional space without unmounting.[47][48] XFS similarly supports online expansion via xfs_growfs following LV growth, making it suitable for large-scale deployments where volumes may need frequent adjustments.[46] Btrfs enhances this compatibility by allowing both online growing and shrinking with the btrfs filesystem resize command, and its native subvolumes provide filesystem-level logical partitioning atop an LV, enabling efficient snapshotting and quota management without additional LVM overhead.[13][49]
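A sketch of the grow sequence for XFS and Btrfs, assuming LVs with hypothetical names mounted at /srv/xfs and /srv/btrfs:

# XFS: extend the LV, then grow the mounted filesystem
lvextend -L +20G /dev/vg0/xfsvol
xfs_growfs /srv/xfs
# Btrfs: resize online; 'max' expands to fill the enlarged LV
lvextend -L +20G /dev/vg0/btrfsvol
btrfs filesystem resize max /srv/btrfs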
In contrast, filesystems like FAT and NTFS exhibit limitations when used on LVM due to their rigid structures and lack of native online resize support in Linux. FAT volumes require unmounting and tools such as fatresize to adjust size before or after LV operations, as online changes risk data corruption from the filesystem's fixed cluster allocation.[50] NTFS resizing on LVM demands the ntfsresize utility, which also necessitates unmounting the volume, restricting flexibility compared to ext4 or XFS and complicating cross-platform scenarios.[51] Furthermore, layering ZFS over LVM is discouraged owing to the resulting double abstraction, which incurs unnecessary management overhead and performance penalties, as ZFS natively incorporates volume management features that overlap with LVM's role.[52][53]
For optimal performance in enterprise environments, aligning LVM physical extents (PEs) with the underlying filesystem's block size is a recommended best practice to avoid read/write inefficiencies. The default PE size of 4 MiB aligns naturally with common 4 KiB filesystem blocks, since 4 MiB equals 1,024 × 4 KiB, ensuring contiguous allocation and reducing fragmentation on advanced storage hardware.[20][54] This alignment is particularly beneficial when creating LVs for high-throughput workloads, promoting efficient I/O patterns across the stack.[55]