
Logical volume management

Logical volume management (LVM) is a storage management technology in Linux that uses the device mapper framework in the Linux kernel to create a layer of abstraction over physical devices, allowing administrators to manage logical volumes more flexibly than with conventional disk partitioning. This abstraction pools storage from multiple physical devices into logical units, supporting features like dynamic resizing, snapshots, and data relocation without downtime. At its core, LVM operates through three primary components: physical volumes (PVs), which are the underlying storage devices or partitions initialized for LVM use; volume groups (VGs), which aggregate one or more PVs into a unified pool of storage; and logical volumes (LVs), which are virtual partitions carved from a VG and presented to the operating system as block devices for formatting and mounting. These elements enable advanced capabilities, such as spanning volumes across disks, striping for performance, mirroring for redundancy, thin provisioning to allocate space on demand, and caching to accelerate access to slower storage. The benefits of LVM include greater adaptability to changing storage needs, easier backup and recovery via snapshots, and simplified administration in environments like servers and data centers, where traditional fixed partitions can limit scalability. For instance, LVs can be expanded online without reformatting or unmounting, and data can be moved between physical devices using tools like pvmove while applications remain active. Historically, LVM in Unix-like systems originated from IBM's AIX Logical Volume Manager, which was adopted by the Open Software Foundation (OSF) for OSF/1 and later influenced implementations in HP-UX and Digital UNIX; the Linux version was developed starting in the late 1990s by Heinz Mauelshagen and the team at Sistina Software, with initial releases integrated into the kernel around 1998–2001 to provide enterprise-grade storage management. Ongoing development has incorporated clustering support, RAID integration, and enhanced snapshot handling, making it a standard in distributions like Red Hat Enterprise Linux and Ubuntu.
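The three-layer model described above can be illustrated with a minimal command sequence. This is a sketch only: the device names (/dev/sdb, /dev/sdc), the volume group name vg0, and the sizes are hypothetical, and the commands require root privileges and dedicated block devices.

```shell
# Initialize two disks as physical volumes (PVs)
pvcreate /dev/sdb /dev/sdc

# Aggregate the PVs into a volume group (VG) named vg0
vgcreate vg0 /dev/sdb /dev/sdc

# Carve a 100 GiB logical volume (LV) named "data" from the pool
lvcreate --size 100G --name data vg0

# The LV appears as an ordinary block device, ready for a filesystem
mkfs.ext4 /dev/vg0/data
mount /dev/vg0/data /mnt/data
```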

History and Development

Origins in Unix-like Systems

The development of logical volume management (LVM) in Unix-like systems drew early influences from proprietary storage solutions of the late 1980s and early 1990s, which introduced concepts for managing volumes across multiple disks to overcome single-disk limitations in file systems. IBM's Logical Volume Manager served as a foundational implementation, adopted by the Open Software Foundation (OSF) for its OSF/1 operating system in the early 1990s. This adoption influenced subsequent LVM-like systems in HP-UX, starting with release 9.0 in 1992, and in Digital UNIX (later Tru64 UNIX). A pivotal advancement came with the introduction of LVM in IBM's AIX operating system, marking the first commercial implementation in 1990 as part of AIX Version 3.1. This system mandated LVM usage to address partitioning constraints in earlier Unix variants, allowing administrators to create volume groups from physical volumes and allocate logical volumes dynamically for improved storage flexibility and management. AIX LVM's design emphasized integration with the operating system's object data manager, providing a robust framework that influenced subsequent Unix storage tools. During the 1990s, LVM concepts further evolved by incorporating principles from RAID and disk concatenation techniques prevalent in BSD and System V Unix variants. BSD implementations, such as those in 4.4BSD, integrated disk concatenation for capacity expansion, while System V extensions supported striping and redundancy to enhance data availability across disks. These borrowings enabled LVM to support software-based redundancy without hardware dependencies, adapting RAID levels like mirroring (RAID 1) into volume management operations. The open-source movement brought LVM to Linux with version 1 in 1998, developed by Heinz Mauelshagen at Sistina Software and integrated via kernel patches, with support for the 2.2 series released in January 1999. LVM1 focused on basic volume spanning, allowing multiple physical devices to be combined into a single logical volume for extended storage capacity.
This implementation provided an accessible alternative to proprietary systems, emphasizing simplicity in user-space tools for volume creation and resizing.

Evolution and Key Milestones

Logical volume management (LVM) in Linux saw a significant advancement with the introduction of LVM2 in 2003, coinciding with the release of Linux kernel 2.6. This version replaced the original LVM1 by leveraging the new device-mapper kernel framework, which enabled more flexible mapping of physical storage to logical volumes. LVM2 introduced read-write snapshots, allowing point-in-time copies of volumes for backups or testing without disrupting the original data, a feature limited to read-only access in prior implementations. These enhancements provided greater abstraction from underlying hardware, facilitating dynamic resizing and migration of volumes. Subsequent milestones in Linux LVM development focused on user-space integration and compatibility with emerging storage technologies. In 2018, Stratis emerged as a user-space daemon for simplified storage management, building on device-mapper and XFS to offer thin provisioning, snapshots, and caching while integrating with systemd as a service for seamless operation in modern distributions. Around 2020, the Linux kernel 5.x series incorporated improvements to the device-mapper subsystem, enhancing LVM's performance and reliability on NVMe solid-state drives through optimized I/O handling and multipath support. Beyond Linux, LVM concepts influenced storage frameworks in other operating systems, promoting cross-platform adoption. FreeBSD's GEOM modular disk transformation framework, which supports logical volume-like operations such as striping and mirroring, reached key maturity milestones by 2005, enabling robust software RAID and volume management. Microsoft introduced Storage Spaces in Windows 8 and Windows Server 2012, providing a resilient, scalable storage pool abstraction akin to LVM for pooling disks into virtual spaces with parity and mirroring. In cloud environments, Azure Disk Storage received 2023 updates, including general availability of incremental snapshots for Premium SSD v2 and Ultra Disks, facilitating efficient volume management and data protection in virtualized setups.

Fundamentals

Basic Principles

Logical volume management (LVM) is a storage virtualization technology that operates as an abstraction layer, turning physical storage devices into flexible logical units to enable dynamic management of disk space. This abstraction allows administrators to perform operations such as expanding logical volumes online where supported by the filesystem, without reformatting, though shrinking often requires unmounting to ensure safety, providing a higher level of flexibility compared to rigid physical layouts. At its core, LVM employs a principle of indirection through metadata that maps logical block addresses to underlying physical extents, decoupling the logical view from the physical layout. This mechanism supports features like spanning logical volumes across multiple physical disks, treating disparate devices as a unified resource pool rather than isolated components. The metadata, typically stored on the physical volumes themselves, ensures that data relocation and access remain transparent to applications and the operating system. In contrast to traditional partitioning schemes, which assign fixed boundaries to physical disks, LVM conceptualizes storage as pooled resources within volume groups, from which logical volumes can be dynamically allocated, extended, or removed online. This pool-based approach facilitates seamless additions or removals of physical volumes without disrupting existing logical structures, enhancing scalability in environments with evolving storage needs.
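The extent-level indirection described above can be sketched as a toy model: each logical extent indexes into a table naming a physical volume and an extent offset, which is how a linear LV spans disks transparently. The extent layout and device names here are arbitrary illustrations, not real LVM metadata.

```python
# Toy model of LVM's logical-to-physical extent mapping (illustrative only).
EXTENT_SIZE = 4 * 1024 * 1024  # 4 MiB, LVM's default physical extent size

# A linear LV spanning two PVs: logical extents 0-1 live on sdb, 2-3 on sdc.
mapping = [("sdb", 0), ("sdb", 1), ("sdc", 0), ("sdc", 1)]

def resolve(logical_byte):
    """Translate a logical byte offset into (device, physical byte offset)."""
    le = logical_byte // EXTENT_SIZE            # which logical extent
    pv, pe = mapping[le]                        # its physical home
    return pv, pe * EXTENT_SIZE + logical_byte % EXTENT_SIZE

# A read 9 MiB into the LV lands 1 MiB into sdc's first extent.
print(resolve(9 * 1024 * 1024))  # ('sdc', 1048576)
```

Because applications only ever see the logical side of this table, LVM can rewrite the physical side (as pvmove does) without the filesystem noticing.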

Comparison to Traditional Partitioning

Traditional partitioning methods, such as those using tools like fdisk with the master boot record (MBR) or GUID Partition Table (GPT) schemes, create fixed divisions directly on physical disks that are inherently static and bound to the underlying hardware geometry. These partitions cannot be easily resized or reallocated without significant intervention, often requiring the system to be taken offline, the use of specialized tools for adjustment, and a high risk of data loss if the process fails during reformatting or boundary shifts. In contrast, Logical Volume Management (LVM) introduces an abstraction layer through volume groups that pool storage from multiple physical volumes, enabling dynamic allocation of logical volumes independent of individual disk boundaries. This allows for non-disruptive expansion or contraction of volumes, such as extending a logical volume with commands like lvextend, without the need to repartition entire drives, though shrinking typically requires unmounting and filesystem-specific support (e.g., resize2fs for ext4) to minimize risks. As a result, LVM reduces downtime and enhances manageability in dynamic environments. Particularly in server settings, LVM's ability to aggregate resources across drives facilitates seamless growth, such as combining multiple terabyte disks into a unified pool for scalable storage, whereas traditional partitioning confines volumes to single-disk limits and complicates multi-device setups. For instance, three 1 TB disks can form a 3 TB volume group in LVM, supporting volumes up to petabyte scales through extensive pooling, far exceeding the constraints of conventional methods tied to individual drive capacities.
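The non-disruptive growth described above typically looks like the following sketch; the device name /dev/sdd, the group vg0, the LV "data", and the sizes are hypothetical, and the commands require root on a real LVM setup.

```shell
# Add a new disk to the existing pool
pvcreate /dev/sdd
vgextend vg0 /dev/sdd

# Grow the logical volume by 500 GiB and resize the filesystem in one step
# (-r invokes fsadm, which handles ext4/XFS growth online)
lvextend -r -L +500G /dev/vg0/data
```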

Architecture and Components

Physical and Logical Elements

Physical volumes (PVs) serve as the foundational storage units in logical volume management (LVM), consisting of initialized block devices such as entire disks or disk partitions designated for LVM use. To create a PV, the pvcreate command is employed, which writes an LVM identifier label, typically in the second 512-byte sector, and a small metadata area shortly thereafter, marking the device for LVM operations. This metadata, stored in ASCII format, is compact, with a default size of approximately 1 MB per volume, and by default includes one copy at the beginning of the device, though up to two copies can be configured for redundancy. The metadata records essential details like the device's UUID and size, enabling LVM to recognize and manage the PV across system reboots. Logical volumes (LVs) represent the user-accessible abstractions in LVM, functioning as virtual block devices that appear to applications and file systems much like traditional disk partitions. These LVs are formed from aggregated storage resources, providing a contiguous block address space that can be mounted, formatted, and utilized similarly to physical partitions, but with the key advantage of dynamic resizing without repartitioning. For instance, an LV can be extended using the lvextend command to increase its capacity by appending additional storage, followed by resizing the associated file system if needed. Shrinking an LV is also possible but requires careful handling, as it may result in data loss in the reduced portion, necessitating backups and file system adjustments beforehand. The mapping process in LVM translates physical storage into logical space by dividing PVs into fixed-size chunks and combining them to form a unified, contiguous area for LVs. This allows LVs to span multiple PVs seamlessly, presenting a linear block interface to users while hiding the underlying physical distribution.
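A PV's on-disk label and metadata can be inspected with the standard LVM reporting tools; the device name here is hypothetical and the commands require root.

```shell
# Initialize a partition as a PV, keeping two metadata copies for redundancy
pvcreate --metadatacopies 2 /dev/sdb1

# Show the PV's UUID, size, and other details recorded in its metadata
pvdisplay /dev/sdb1

# Tabular report: name, UUID, size, and metadata-area count per PV
pvs -o pv_name,pv_uuid,pv_size,pv_mda_count
```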

Volume Groups and Extents

In logical volume management, volume groups serve as the central organizational layer, aggregating one or more physical volumes into a unified pool of storage capacity that can be dynamically allocated to logical volumes. This pooling mechanism allows administrators to treat disparate physical storage devices as a single, cohesive resource, enabling flexible resizing and redistribution of space without regard to individual device boundaries. Volume groups can be activated to make their contained logical volumes accessible to the operating system or deactivated to isolate them for maintenance, such as during system migrations or backups, thereby enhancing manageability in complex environments. The fundamental units of allocation within a volume group are extents, which divide the storage into manageable chunks. Physical extents represent the smallest contiguous areas on physical volumes that can be assigned, with a default size of 4 MiB in LVM implementations, though this can be configured during volume group creation to balance granularity and metadata overhead. Logical extents, in turn, are the corresponding units allocated to logical volumes, mapping directly one-to-one with physical extents to ensure efficient space utilization across the volume group. This 1:1 correspondence keeps the mapping simple while allowing logical volumes to span multiple physical volumes seamlessly. Allocation of logical extents from physical extents follows policies designed to optimize performance and space utilization. The contiguous policy is preferred, as it assigns extents in adjacent blocks to minimize seek times and improve I/O throughput, but allocation falls back to a fragmented layout, distributing extents non-contiguously across available physical volumes, when free-space constraints prevent contiguous placement. For enhanced throughput in scenarios like striped volumes, extents can be allocated using interleaved distribution across multiple physical volumes, spreading data stripes to leverage parallelism and reduce bottlenecks.
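Extent size is fixed when the volume group is created, while striped or contiguous allocation is requested per logical volume. The following sketch uses hypothetical devices, names, and sizes and requires root on real hardware.

```shell
# Create a VG with 8 MiB physical extents instead of the 4 MiB default
vgcreate -s 8M vg1 /dev/sdb /dev/sdc

# Allocate a striped LV across both PVs (2 stripes, 64 KiB stripe size)
lvcreate --size 20G --stripes 2 --stripesize 64k --name fastlv vg1

# Request contiguous extent allocation explicitly for a latency-sensitive LV
lvcreate --size 5G --alloc contiguous --name dblv vg1
```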

Core Features

Snapshots and Copy-on-Write

In logical volume management (LVM), snapshots enable the creation of point-in-time copies of logical volumes (LVs) without interrupting ongoing operations, primarily through a copy-on-write (COW) mechanism. This approach preserves the original data: before a block on the origin LV is first modified, its original contents are copied into the snapshot's dedicated COW area, while unchanged blocks remain shared with the origin via the underlying extent mapping system. The COW process ensures that the snapshot initially shares all data blocks with the origin LV, consuming minimal additional space at creation, and only allocates new space for blocks as they are modified on the origin. The creation of an LVM snapshot is instantaneous, facilitated by the device-mapper framework in the Linux kernel, which sets up the necessary mappings without copying data upfront. Administrators typically use the lvcreate --snapshot command, specifying the desired snapshot size and naming it relative to the origin LV within the same volume group; for example, lvcreate --snapshot --size 10G --name snap1 /dev/vg0/origin. This results in a new LV that appears as a fully populated block device, ready for immediate use, though it relies on the volume group's free extents for the COW storage. Over time, however, the snapshot's performance can degrade due to fragmentation, as the scattered allocation of COW extents leads to increased I/O overhead during reads and writes. Space requirements for the snapshot LV are determined by the anticipated data modifications on the origin, estimated as the product of the original LV size, the expected change rate (as a fraction of the volume), and the duration the snapshot will be maintained. For instance, a low-change volume like /usr might require only 3-5% of its size for short-term snapshots, while high-change volumes such as /home could demand 30% or more to avoid space exhaustion.
Snapshots in LVM1 were read-only; LVM2 snapshots are writable by default, though read-only snapshots can still be requested to guarantee point-in-time integrity. If the allocated COW space fills completely, the snapshot becomes invalid, potentially suspending I/O on the origin LV until resolved by extension or removal. Proper management, such as monitoring via lvs and periodic resizing with lvextend, is essential to prevent such failures. Snapshots find practical application in scenarios requiring data preservation during potentially disruptive activities, such as creating backups of active filesystems or testing system upgrades in isolated environments. For backups, the snapshot provides a consistent, quiesced view of the LV that can be mounted and archived without affecting the live system. In upgrade testing, it allows rollback by merging the snapshot back into the origin (lvconvert --merge) or simply discarding the snapshot if issues arise, minimizing downtime. These use cases leverage the efficiency of COW to avoid full data duplication, though careful sizing based on workload patterns remains critical to ensure reliability.
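The sizing rule of thumb above (origin size times expected change rate times lifetime) can be made concrete with a small calculator. The 20% headroom factor is an assumption for illustration, not an LVM default.

```python
# Rough COW snapshot sizing, following the rule of thumb in the text:
# reserve (LV size) x (change fraction per day) x (days kept), plus
# headroom so an underestimate does not invalidate the snapshot.
def snapshot_size_gib(lv_size_gib, change_rate_per_day, days, headroom=0.20):
    expected_writes = lv_size_gib * change_rate_per_day * days
    return round(expected_writes * (1 + headroom), 2)

# A 500 GiB /home volume changing 10% per day, kept for 2 days:
print(snapshot_size_gib(500, 0.10, 2))   # 120.0 (GiB)

# A quiet 50 GiB /usr volume changing 1% per day, kept for 3 days:
print(snapshot_size_gib(50, 0.01, 3))    # 1.8 (GiB)
```

The resulting figure would then be passed to lvcreate --snapshot --size; monitoring with lvs remains necessary regardless of the estimate.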

Hybrid and Thin Volumes

Hybrid volumes in logical volume management combine faster storage media, such as solid-state drives (SSDs), with slower but higher-capacity hard disk drives (HDDs) to optimize performance and cost. This tiering approach dynamically promotes frequently accessed "hot" data blocks to the SSD tier for low-latency operations while demoting less-used "cold" data to the HDD tier, based on access patterns monitored by the system. In LVM implementations, the device-mapper cache target (dm-cache) facilitates this by creating a cache logical volume that overlays an origin volume on slower storage, automatically migrating blocks between tiers to improve overall I/O throughput. Thin volumes extend logical volume management by enabling over-allocation of storage space, where logical volumes can exceed the physical capacity of the underlying pool, such as provisioning 10 TB logically from 5 TB physically. Allocation occurs on-demand as data is written, with a thin pool, composed of physical extents from a volume group, tracking usage through a dedicated metadata volume that maps virtual to physical blocks. This metadata volume, typically allocated at a small fraction of the pool size (e.g., via a 1000:1 data-to-metadata ratio), ensures efficient space utilization but requires monitoring to prevent exhaustion. Overcommitment in thin provisioning introduces risks, including potential storage pool depletion if demand exceeds physical limits, leading to write failures or system outages; administrators must actively monitor pool usage and expand capacity proactively.
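A thin-provisioning setup along the lines described above might look like the following sketch; the pool name, LV name, group vg0, and sizes are hypothetical, and the commands require root.

```shell
# Create a 5 TiB thin pool inside vg0; LVM also allocates a small
# metadata LV for the pool automatically
lvcreate --type thin-pool --size 5T --name pool0 vg0

# Provision a 10 TiB virtual LV from the 5 TiB pool (overcommitted 2:1)
lvcreate --thin --virtualsize 10T --name bigdata vg0/pool0

# Watch actual pool consumption to avoid exhaustion
lvs -o lv_name,data_percent,metadata_percent vg0
```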

Implementations

Linux LVM

Logical Volume Management (LVM) in Linux is primarily implemented through the LVM2 suite, an open-source userspace toolset that enables flexible storage management by abstracting physical storage devices into logical volumes. LVM2 builds on the kernel's device-mapper framework, which has been integrated since Linux kernel version 2.6, allowing for dynamic mapping of physical extents to logical ones without requiring kernel patches. This implementation supports advanced features like resizable volumes and snapshots, making it a standard component in major Linux distributions such as Red Hat Enterprise Linux and Ubuntu. The core tools in the LVM2 suite facilitate the creation and manipulation of LVM components. The pvcreate command initializes physical volumes (PVs) on block devices, marking them for use in LVM. Once PVs are prepared, vgcreate combines them into a volume group (VG), pooling their storage capacity. Logical volumes (LVs) are then allocated from the VG using lvcreate, which specifies size, type, and other parameters, while lvresize allows online resizing of existing LVs to expand or shrink storage as needed. These tools operate via the lvm command wrapper, ensuring consistent handling across operations. By default, LVM2 configures physical extents (PEs) at 4 MiB, which balances granularity and manageability. Unlike LVM1, LVM2 supports logical volumes up to 8 EiB on 64-bit systems, without the former extent-count cap limiting size at default settings, though larger extents can be specified if needed. For redundancy, LVM supports RAID levels 0, 1, and 10 through hybrid configurations with the mdadm tool, where mdadm arrays serve as underlying PVs for LVM, or via LVM's built-in RAID targets in device-mapper that leverage the kernel's Multiple Devices (MD) drivers for striping and mirroring. This hybrid approach enables fault-tolerant setups without proprietary hardware.
Management of LVM in Linux occurs primarily through command-line interfaces provided by the lvm2 package, which includes utilities for scanning, displaying, and activating components. Volume groups are activated using vgchange, which makes their LVs available to the operating system, often during boot via initramfs hooks. For graphical administration, tools like system-config-lvm offer a user-friendly interface for creating and resizing volumes, though system-config-lvm is considered legacy in newer distributions and may not support all advanced features. Overall, these elements integrate seamlessly with the general LVM architecture of physical and logical extents as outlined in core documentation.
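A typical inspection and activation sequence with the lvm2 utilities looks like the following; the group name vg0 is hypothetical and the commands require root.

```shell
# Report LVM objects at each layer of the stack
pvs     # physical volumes
vgs     # volume groups
lvs     # logical volumes

# Activate every LV in vg0, making /dev/vg0/* device nodes available
vgchange --activate y vg0

# Deactivate the VG for maintenance or migration
vgchange --activate n vg0
```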

Non-Linux Systems

In Windows operating systems, logical volume management has evolved from the legacy Dynamic Disks feature to the more advanced Storage Spaces. Dynamic Disks, introduced in Windows 2000, allowed for the creation of spanned, striped, mirrored, and RAID-5 volumes using a Logical Disk Manager database to handle noncontiguous extents, providing flexibility beyond basic partitioning. However, Dynamic Disks have been deprecated for all uses except mirror boot volumes since Windows 8 and Server 2012, due to issues like irreversible conversions and performance limitations, with Microsoft recommending alternatives for resilient storage. Storage Spaces, introduced in Windows 8 and Windows Server 2012, serves as the primary modern equivalent, aggregating physical disks into storage pools to create virtualized volumes with resiliency options such as two-way or three-way mirroring (tolerating one or two drive failures) and single or dual parity (for efficient capacity with fault tolerance). It supports storage tiers, including SSD caching layers for improved performance on HDDs, thin provisioning, and dynamic resizing, enabling scalable logical volume management without hardware-specific dependencies. In BSD variants like FreeBSD, the GEOM framework provides a modular approach to logical volume management through disk transformation layers. GEOM, integrated into FreeBSD since version 5.0 in 2003, acts as a stackable framework that transforms block device access, allowing providers (such as physical disks) to be layered with classes for operations like striping, mirroring, and parity-based configurations. This modular stacking enables flexible logical volumes by composing transformations, for example using gmirror for RAID-1 mirroring or gstripe for striping, without a rigid hierarchy, supporting auto-discovery and directed configuration for dynamic storage adjustments. macOS employs the Apple File System (APFS) Container as its logical volume management structure, succeeding the earlier Core Storage.
Introduced in macOS High Sierra (version 10.13) in 2017, APFS Containers function as shared storage pools that house multiple volumes, dynamically allocating space on demand across them for efficient utilization similar to volume groups. APFS supports native snapshots since its debut, creating read-only point-in-time copies of volumes within the container using copy-on-write mechanisms, which facilitate backups and data recovery without full duplication. This design allows seamless addition, deletion, or resizing of volumes inside the container, enhancing flexibility for system and user data. Beyond these, proprietary and open-source solutions offer LVM-like capabilities in other environments. Veritas Volume Manager (VxVM), a commercial tool available for Solaris, HP-UX, and AIX since the late 1990s, provides a logical layer over physical disks and LUNs, enabling volume groups, dynamic resizing, mirroring, and striping to overcome hardware partitioning limits. In illumos-based systems (a successor to OpenSolaris), ZFS volumes (zvols) deliver block-device logical volumes atop ZFS pools, supporting features like snapshots, cloning, and thin provisioning for use as swap or raw devices. In cloud contexts, Amazon Web Services (AWS) Elastic Block Store (EBS) volumes emulate logical volumes with API-driven snapshot capabilities, creating incremental point-in-time backups stored in S3 for replication and restoration across availability zones.

Advantages and Limitations

Benefits for Storage Management

Logical Volume Management (LVM) provides significant flexibility in storage administration by enabling online resizing of logical volumes without requiring system downtime or reformatting of underlying devices. Administrators can extend or reduce volume sizes using commands such as lvextend or lvreduce, allowing seamless adaptation to changing storage needs in dynamic environments like virtualized infrastructures. This capability supports live data migration via tools like pvmove, which relocates extents between physical volumes while the system remains operational, facilitating hardware upgrades or maintenance without interrupting services. LVM enhances scalability by pooling multiple physical devices into volume groups, from which large logical volumes can be created to manage expansive datasets efficiently and reduce administrative overhead in data centers. This simplifies the handling of petabyte-scale storage by treating disparate disks as a unified resource, enabling easier expansion as capacity demands grow. Additionally, thin provisioning allows for the allocation of virtual space on demand, optimizing utilization and lowering initial acquisition costs by avoiding the need to pre-allocate full physical capacity. As detailed in core features, thin volumes defer physical allocation until data is written, further promoting efficient resource use across workloads. For redundancy, LVM incorporates built-in mirroring to ensure data availability, where data is duplicated across multiple physical volumes to protect against disk failures, offering a software-based alternative to hardware RAID configurations. Commands like lvconvert --type raid1 -m 1 create mirrored logical volumes that maintain integrity by synchronously replicating writes, simplifying setup and management compared to manual RAID assembly. This approach enhances system reliability in production environments by enabling quick recovery from hardware issues without complex external tools.
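The mirroring and migration operations described above can be sketched as follows, assuming an existing group vg0 with an LV named "data" (hypothetical names; root required).

```shell
# Convert an existing linear LV to a two-way RAID1 mirror
# (-m 1 adds one additional copy of the data)
lvconvert --type raid1 -m 1 /dev/vg0/data

# Evacuate a failing or retiring disk while the LV stays online;
# extents are moved from /dev/sdb onto other PVs in the group
pvmove /dev/sdb
vgreduce vg0 /dev/sdb
```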

Potential Drawbacks and Challenges

Logical volume management systems, while offering flexibility in storage configuration, can introduce external fragmentation issues, particularly through repeated use of snapshots and volume resizes. Snapshots rely on copy-on-write mechanisms that allocate space in chunks, leading to scattered data placement over time and degrading I/O performance, especially on hard disk drives (HDDs) where seek times are significant. For instance, in tests involving small file operations under snapshot mode, random read throughput exhibited up to a 20% degradation after substantial write activity, attributed to increased fragmentation and overhead. The inherent complexity of LVM metadata management presents a notable challenge, requiring administrators to navigate a steep learning curve to effectively handle volume groups, physical extents, and logical volumes. This tooling, while powerful, demands precise command-line operations for tasks like extending or shrinking volumes, increasing the risk of misconfiguration for less experienced users. Recovery from volume group (VG) corruption is particularly arduous; such corruption can render entire storage pools inaccessible, necessitating restoration from metadata backups stored in locations like /etc/lvm/backup or /etc/lvm/archive. Although tools like vgcfgrestore facilitate recovery, the process is not foolproof and often requires manual intervention, underscoring the critical need for regular, verified metadata backups. LVM introduces additional resource overhead due to the indirection in block mapping, consuming extra CPU cycles and memory for translating logical addresses to physical ones during I/O operations. This overhead can be significant for workloads involving small files. Memory usage is also impacted, as LVM maintains mapping structures in RAM, potentially straining systems with limited resources during intensive operations like snapshot creation. Furthermore, LVM's reliance on kernel modules for activation can lead to boot incompatibilities with certain configurations; for example, a root filesystem on LVM requires an initramfs to load LVM support and activate volumes before mounting the root filesystem, as the bootloader alone cannot fully handle LVM without this intermediate step.
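A metadata recovery session with vgcfgrestore might look like the following sketch; the archive filename shown is hypothetical (the tools generate their own names), and the commands require root.

```shell
# List archived metadata versions for vg0
# (written automatically by the LVM tools before changes)
vgcfgrestore --list vg0

# Restore the volume group configuration from a chosen archive file
# (hypothetical filename for illustration)
vgcfgrestore --file /etc/lvm/archive/vg0_00042.vg vg0

# Re-scan and reactivate after the restore
vgscan
vgchange --activate y vg0
```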

Integration and Advanced Topics

Compatibility with Filesystems

Logical Volume Management (LVM) serves as an abstraction layer between physical storage and filesystems, enabling seamless cooperation with various filesystem types for dynamic volume operations. This compatibility allows administrators to create, resize, and manage logical volumes (LVs) while leveraging filesystem-specific features for growth and maintenance. LVM integrates effectively with Linux-native filesystems such as ext4, XFS, and Btrfs, particularly through support for online resizing that minimizes downtime. For ext4, after extending an LV using the lvextend command, the filesystem can be grown online with resize2fs to utilize the additional space without unmounting. XFS similarly supports online expansion via xfs_growfs following LV growth, making it suitable for large-scale deployments where volumes may need frequent adjustments. Btrfs enhances this compatibility by allowing both online growing and shrinking with the btrfs filesystem resize command, and its native subvolumes provide filesystem-level logical partitioning atop an LV, enabling efficient snapshotting and quota management without additional LVM overhead. In contrast, filesystems like FAT32 and NTFS exhibit limitations when used on LVM due to their rigid structures and lack of native online resize support in Linux. FAT32 volumes require unmounting and tools such as fatresize to adjust size before or after LV operations, as online changes risk corruption from the filesystem's fixed cluster allocation. NTFS resizing on LVM demands the ntfsresize utility, which also necessitates unmounting the volume, restricting flexibility compared to ext4 or XFS and complicating cross-platform scenarios. Furthermore, layering ZFS over LVM is discouraged owing to the resulting double abstraction, which incurs unnecessary management overhead and performance penalties, as ZFS natively incorporates volume management features that overlap with LVM's role.
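The per-filesystem grow commands described above follow the same two-step pattern: grow the LV, then grow the filesystem on top of it. Device and mount-point names are hypothetical, and the commands require root.

```shell
# Grow the LV first
lvextend -L +50G /dev/vg0/data

# ext4: online grow, given the device path
resize2fs /dev/vg0/data

# XFS: online grow (takes the mount point, not the device)
xfs_growfs /mnt/data

# Btrfs: online grow or shrink (also takes the mount point)
btrfs filesystem resize +50G /mnt/data
```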
For optimal performance in enterprise environments, aligning LVM physical extents (PEs) with the underlying filesystem's block size is a recommended practice to avoid read/write inefficiencies. The default PE size of 4 MiB aligns naturally with common 4 KiB filesystem blocks, since 4 MiB equals 1,024 × 4 KiB, ensuring contiguous allocation and reducing fragmentation on advanced storage hardware. This alignment is particularly beneficial when creating LVs for high-throughput workloads, promoting efficient I/O patterns across the stack.

Security and Reliability Aspects

Logical volume management (LVM) integrates with the device mapper crypt target (dm-crypt) to enable encryption of logical volumes, providing full-disk encryption capabilities for data at rest. This setup typically involves formatting a logical volume with LUKS (Linux Unified Key Setup) using the cryptsetup luksFormat command, which initializes the volume as an encrypted container and prompts for a passphrase to derive the master key. Once initialized, the encrypted logical volume can be opened with cryptsetup luksOpen to map it as a decrypted block device, allowing filesystem creation and mounting on top. LUKS supports multiple key slots for passphrase management, storing encrypted headers directly on the underlying physical volumes (PVs) or logical volumes (LVs) to facilitate secure key derivation without exposing plaintext data. This approach ensures that encryption occurs transparently at the block level, with minimal performance overhead on modern hardware, as dm-crypt leverages kernel-level AES encryption. For reliability, LVM enhances metadata resilience through duplication across physical volumes within a volume group, configurable to maintain multiple identical copies to mitigate corruption risks from hardware failures. By default, LVM stores one metadata copy per PV, but administrators can specify additional copies using the --metadatacopies option when initializing physical volumes (e.g., pvcreate --metadatacopies 2), distributing replicas across separate PVs for resilience without impacting routine I/O operations. In clustered environments, LVM supports high-availability failover via integration with Pacemaker, enabling active/passive configurations where logical volumes are managed as cluster resources that automatically migrate to healthy nodes upon failure detection. High-availability LVM (HA-LVM) or clustered LVM (CLVM) ensures exclusive access to shared storage, preventing metadata conflicts through distributed lock management.
Recovery from metadata loss relies on tools like vgcfgrestore, which restores volume group configurations from archived text backups generated by vgcfgbackup, allowing reconstruction of PVs, volume groups, and LVs even if primary metadata areas are damaged. Despite these features, LVM exhibits limitations in native cloud environments as of 2025, particularly with services like Amazon Elastic Block Store (EBS), where encryption integration requires manual configuration of LUKS on encrypted EBS volumes without automated LVM-specific handling. While EBS supports server-side encryption at rest using AWS Key Management Service (KMS), LVM operations on these volumes demand explicit setup for LUKS containers, and multi-region mirroring remains a manual process involving EBS snapshot replication rather than built-in LVM volume group synchronization. This gap necessitates additional scripting or third-party tools for resilient, distributed deployments across regions, as LVM's metadata and locking mechanisms are optimized for on-premises or single-region shared storage rather than cloud-native elasticity.
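The dm-crypt/LUKS workflow described above can be sketched as follows; the LV name "secure", its mapping name, and the mount point are hypothetical, and the commands require root and will prompt for a passphrase.

```shell
# Initialize a LUKS container on an existing logical volume
cryptsetup luksFormat /dev/vg0/secure

# Open it, creating a decrypted mapping at /dev/mapper/secure
cryptsetup luksOpen /dev/vg0/secure secure

# Build a filesystem on the decrypted device and mount it
mkfs.ext4 /dev/mapper/secure
mount /dev/mapper/secure /mnt/secure

# Close the mapping when finished
umount /mnt/secure
cryptsetup luksClose secure
```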
