mdadm

mdadm is a command-line utility for creating, managing, and monitoring software RAID devices in Linux, utilizing the Multiple Devices (MD) driver within the Linux kernel to aggregate multiple physical block devices into a single logical device with optional redundancy for fault tolerance. It supports a range of RAID levels, including LINEAR, RAID0 (striping for performance), RAID1 (mirroring for redundancy), RAID4, RAID5, RAID6 (with parity for fault tolerance), RAID10 (combining mirroring and striping), as well as specialized modes like MULTIPATH (deprecated for new installations), FAULTY, and CONTAINER. Originally developed by Neil Brown in 2001 as the primary tool for Linux software RAID, mdadm replaced older utilities like raidtools and has been maintained as an open-source project under the GNU General Public License version 2, with its source code originally hosted on kernel.org and now primarily on GitHub. It enables key operations such as assembling arrays from existing components, building new arrays with metadata superblocks, growing or reshaping arrays, and managing device addition, removal, or failure states. mdadm also includes monitoring capabilities via a daemon mode to detect and report array degradation or failures, supporting hot-plug environments through incremental assembly. The utility handles various metadata formats for array configuration, with version 1.2 as the default, alongside support for older formats like 0.90 and 1.0, as well as external formats such as Intel Matrix Storage Manager (IMSM) and the SNIA Common RAID Disk Data Format (DDF) in maintenance mode. At boot time, mdadm facilitates automatic array assembly by scanning for partitions with type 0xfd or using kernel parameters for non-persistent setups, ensuring seamless integration with the kernel's MD subsystem. It requires a minimum kernel version of 3.10 and is included in major Linux distributions, typically packaged as mdadm or mdraid.

Introduction

Overview

mdadm is a command-line utility for administering and monitoring software RAID arrays in Linux, serving as the primary tool for the md (multiple devices) driver stack in the Linux kernel. It enables the creation, assembly, management, and monitoring of RAID devices composed from multiple physical or logical components, supporting redundant configurations for data protection and performance enhancement. Introduced in the early 2000s, mdadm replaced legacy tools such as raidtools, which were used for earlier Linux software RAID implementations but lacked support for modern features like flexible metadata formats. By the mid-2000s, it had become the standard in major distributions, providing a unified interface for array operations that addressed limitations in the older utilities. mdadm constructs arrays from whole disks, disk partitions, or other block devices, allowing flexible integration with existing storage setups. It is licensed under the GNU General Public License version 2 or later. The current stable version is 4.4, released on August 19, 2025.

History

mdadm was first released in 2001 by Neil Brown, a developer at SUSE Labs, as a modern replacement for the older raidtools utility, providing enhanced management capabilities for Linux software RAID arrays. This initial version addressed limitations in prior tools by offering a unified interface for creating, assembling, and monitoring MD devices, quickly gaining adoption within the Linux community. Following its inception, mdadm transitioned to community-driven maintenance after Brown's time at SUSE, becoming integrated into major Linux distributions, where it replaced legacy RAID management software. Key milestones in mdadm's development include the addition of support for partitionable arrays in the Linux kernel 2.6 series (2003–2004), enabling RAID devices to be partitioned like regular block devices for greater flexibility in storage configurations. In 2008, with kernel 2.6.27, external metadata formats were introduced, allowing mdadm to interoperate with hardware RAID controllers by storing RAID information outside the array data area. TRIM support for SSDs arrived in kernel 3.7 (2012), permitting discard commands to propagate through RAID layers to optimize solid-state drive performance and longevity. Neil Brown, who had maintained mdadm for over two decades, stepped back from active involvement, leading to a transition in leadership. On December 14, 2023, Mariusz Tkaczyk was announced as the new lead contributor and maintainer, ensuring continued development and bug fixes. Deprecations marked shifts in focus: linear mode, used for simple concatenation of devices, was marked for deprecation in the kernel due to low usage and redundancy with alternatives like dm-linear, and the md-linear module was fully removed in Linux 6.8 (March 2024). To facilitate ongoing collaborative development, the project shifted its primary repository to the md-raid-utilities organization on GitHub in late 2023, incorporating Intel and broader community contributions. Version 4.4 (August 2025) introduced features like custom device policies and improved self-encrypting drive (SED) support for IMSM metadata.

Configurations

RAID Levels

mdadm supports several standard RAID levels through the Linux md driver, enabling software-based redundancy and performance optimization across multiple block devices. These levels include RAID 0 for striping, RAID 1 for mirroring, RAID 4 with dedicated parity, RAID 5 and RAID 6 for distributed parity, and RAID 10 combining striping and mirroring. Each level balances capacity, performance, and fault tolerance differently, with mdadm handling array creation, management, and monitoring. The following table summarizes the key characteristics of these RAID levels as implemented in mdadm:
| RAID Level | Description | Minimum Devices | Capacity | Fault Tolerance |
|---|---|---|---|---|
| RAID 0 | Striping without redundancy for maximum throughput | 2 | Sum of all device capacities | None |
| RAID 1 | Mirroring for data duplication across devices | 2 | Capacity of the smallest device | Up to N-1 device failures, as long as one mirror survives |
| RAID 4 | Striping across data devices with a dedicated parity disk | 3 | (N-1) × capacity of smallest device, where N is the number of devices | 1 failure |
| RAID 5 | Striping with distributed parity across all devices | 3 | (N-1) × capacity of smallest device, where N is the number of devices | 1 failure |
| RAID 6 | Striping with double distributed parity for enhanced protection | 4 | (N-2) × capacity of smallest device, where N is the number of devices | 2 failures |
| RAID 10 | Striping of mirrored pairs for combined performance and redundancy | 4 | (N/2) × capacity of smallest device, where N is the number of devices | Up to 1 failure per mirror pair |
In RAID 0, data is distributed evenly across devices to achieve high throughput but offers no protection against failures, making it suitable for temporary or non-critical storage. RAID 1 provides full redundancy by duplicating data, ensuring continued availability if one device fails, though at the cost of half the total capacity in a two-device setup. RAID 4 dedicates one device to parity calculations, allowing striping on the remaining devices while tolerating a single failure, but the parity disk can become a bottleneck during writes. RAID 5 distributes parity information across all devices, improving write performance over RAID 4 and supporting a minimum of three devices for efficient overhead distribution. RAID 6 extends this by using dual parity, enabling tolerance of two concurrent failures and requiring at least four devices, which is valuable for large-capacity drives prone to multiple errors. RAID 10 stripes data across mirrored pairs, delivering RAID 0-like performance with RAID 1 redundancy, though it requires even numbers of devices starting from four.

Chunk size, a configurable parameter in mdadm for levels involving striping (RAID 0, 4, 5, 6, 10), influences performance and must be a power of 2, with a default of 512 KiB. To create an array, mdadm uses the --create option with parameters specifying the level, number of devices, and components; for example, a RAID 5 array on three devices can be created as mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc], optionally including --chunk=512 for stripe size. Similar syntax applies to other levels, adjusting --level and --raid-devices accordingly, such as --level=10 --raid-devices=4 for RAID 10. mdadm RAID arrays integrate seamlessly with modern filesystems like ext4 or XFS, allowing RAID-aware setups where the filesystem is created directly on the assembled /dev/mdX device for optimized storage management.
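
The following shell sketch illustrates the creation syntax described above; the device names and mount-free workflow are illustrative placeholders rather than a prescribed layout.

```sh
# Create a 3-device RAID 5 array with a 512 KiB chunk size
# (component device names are placeholders).
mdadm --create /dev/md0 --level=5 --raid-devices=3 --chunk=512 \
    /dev/sda1 /dev/sdb1 /dev/sdc1

# Create a 4-device RAID 10 array.
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[defg]1

# Put a filesystem directly on the assembled MD device.
mkfs.ext4 /dev/md0

# Watch the initial synchronization progress.
cat /proc/mdstat
```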

Non-RAID Modes

mdadm supports non-RAID modes that provide basic device aggregation without redundancy or striping, contrasting with traditional RAID levels focused on performance or fault tolerance. These modes include linear concatenation, multipath aggregation, and the faulty mode for testing, which are useful for simple storage extension, path redundancy, or simulating failures in specific scenarios.

Linear Mode

Linear mode in mdadm concatenates multiple block devices into a single logical volume, presenting them as one contiguous device where data is written sequentially from the first device to the last. The total capacity equals the sum of the individual device sizes, with no overhead for parity or mirroring, making it suitable for extending storage volume without redundancy, such as combining drives of varying sizes to create a larger filesystem. For example, it can span data across disks in a JBOD-like setup for archival purposes where data loss on failure is acceptable. To create a linear array, the command mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sda /dev/sdb assembles the devices, writing superblocks to enable reassembly across reboots. This mode requires at least one device but supports multiple, and it operates without striping, so read/write performance remains limited to the speed of the active device. Linear mode has been deprecated due to lack of active development and maintenance, with the md-linear kernel module removed in Linux 6.8, released in 2024. As a result, new linear arrays cannot be created or assembled on kernels 6.8 and later, rendering the mode fully phased out for modern systems; alternatives like device-mapper linear targets are recommended for concatenation needs.
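
A minimal sketch of the recommended device-mapper alternative for concatenation, assuming two placeholder devices /dev/sda and /dev/sdb and a hypothetical target name concat0:

```sh
# Determine each device's size in 512-byte sectors.
SIZE_A=$(blockdev --getsz /dev/sda)
SIZE_B=$(blockdev --getsz /dev/sdb)

# Build a dm-linear table that maps the two devices back to back.
# Table line format: <start_sector> <num_sectors> linear <device> <offset>.
dmsetup create concat0 <<EOF
0 $SIZE_A linear /dev/sda 0
$SIZE_A $SIZE_B linear /dev/sdb 0
EOF

# The concatenated volume appears as /dev/mapper/concat0.
```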

Multipath Mode

Multipath mode aggregates multiple identical paths to the same underlying physical storage device, providing path failover by routing I/O through available paths and failing over on individual path errors. It requires a minimum of two devices representing redundant paths, such as in Fibre Channel or iSCSI setups, and ensures continuous access by marking failed paths as spare while using active ones. This mode is particularly useful for high-availability environments where path redundancy prevents downtime from cable or controller failures, without providing data-level protection. Creation follows a similar syntax to other modes, for instance, mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sda /dev/sdb, which initializes the array with superblocks identifying the paths. The kernel handles path selection and failover transparently, but the mode does not support striping or redundancy beyond path aggregation. Multipath mode was deprecated in mdadm, with the md-multipath kernel module removed in Linux 6.8 (March 2024). New installations should use the more robust Device Mapper multipath (DM-Multipath) subsystem, which offers advanced path management and broader hardware compatibility.

Faulty Mode

Faulty mode in mdadm is a special personality designed for testing and development, which overlays a single device with configurable fault injection to simulate failures in RAID configurations. It requires at least one device and does not provide any striping or redundancy; instead, it allows users to test failure handling, such as how arrays respond to degraded states or spare activation. For example, it can be used to verify resync processes or alert mechanisms without risking real data. To create a faulty array, the command mdadm --create /dev/md0 --level=faulty --raid-devices=1 /dev/sda initializes the device with the faulty personality, writing appropriate metadata. This mode supports configurable behaviors, such as transient or persistent read/write errors, but offers no practical storage utility beyond testing. Faulty mode has been deprecated due to limited use and lack of maintenance, with the kernel module md-faulty removed in Linux 6.8 (March 2024). As a result, faulty arrays cannot be created or assembled on kernels 6.8 and later; for testing scenarios, alternatives like manual device marking via mdadm or device-mapper tools are recommended.
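
As a sketch of the manual-marking alternative mentioned above, the following simulates a member failure on a throwaway RAID 1 array; /dev/loop0 and /dev/loop1 are assumed test loop devices, not real data disks.

```sh
# Build a small test mirror from two loop devices.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1

mdadm /dev/md0 --fail /dev/loop1     # mark one member as faulty
cat /proc/mdstat                     # array now reports a degraded state
mdadm /dev/md0 --remove /dev/loop1   # detach the failed member
mdadm /dev/md0 --re-add /dev/loop1   # re-add it and trigger recovery
```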

Features

Metadata Management

mdadm manages metadata for RAID arrays through superblocks that store critical configuration data, enabling array identification, assembly, and reconstruction after device failures or system reboots. This metadata includes details such as the array's UUID, RAID level, device roles, and synchronization states, which allow the kernel's MD driver to recognize and activate arrays without relying on external configuration files. Internal metadata formats are the default, using superblock versions ranging from 0.90 to 1.2, while external formats provide compatibility with firmware and hardware RAID controllers.

The original metadata version 0.90, introduced with early implementations, places the superblock at the end of each component device, requiring 64 KiB of reserved space at the end of the device to accommodate the 4 KiB superblock in a 64 KiB aligned block, and supports up to 28 devices per array with a 2 TiB limit per device. Version 1.0 improves upon this by also locating the superblock at the device end but adds support for checkpointing during resynchronization and removes some legacy restrictions, making it suitable for larger arrays. Version 1.1 shifts the superblock to the start of the device for better compatibility with partitioned disks, while version 1.2, the current default, positions it 4 KiB from the start, offering enhanced flexibility with fewer size limits and explicit support for write-intent bitmaps. These internal superblocks are typically compact, with version 1.x formats using a fixed 4 KiB size to minimize data displacement.

External metadata formats, supported since kernel 2.6.27, store configuration data separately from the array data area, often in a reserved region managed as a container. This approach enhances interoperability with firmware-based "Fake RAID" systems, such as Intel's Matrix Storage Manager (IMSM), and conforms to the SNIA Common RAID Disk Data Format (DDF) standard for enterprise environments, allowing mdadm to manage arrays created by hardware controllers without proprietary tools. DDF, though deprecated in favor of IMSM on Intel platforms, enables container-based management where metadata resides externally, facilitating migration between software and hardware RAID setups.

To inspect and decode metadata, mdadm provides the --examine option, which parses superblocks on component devices to display details like UUID, RAID level, and member states, aiding in verification and troubleshooting. For arrays with write-intent bitmaps, the --examine-bitmap flag extracts bitmap information, such as dirty block locations. Write-intent bitmap support, available with version 1.x metadata, tracks modified blocks during unclean shutdowns using an internal bitmap stored alongside the superblock and replicated across devices; this optimizes resynchronization by resuming only from the last checkpoint, with a default bitmap chunk size of 64 MiB that can be adjusted for performance. Bitmaps can be added or removed post-creation and are essential for reducing recovery times in large arrays.
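
A short sketch of the inspection commands described above; the device and array names are placeholders.

```sh
# Decode the superblock on a component device: metadata version, UUID,
# RAID level, and this member's role in the array.
mdadm --examine /dev/sdb1

# Decode the write-intent bitmap stored with the superblock.
mdadm --examine-bitmap /dev/sdb1

# Show details of the assembled array, including its metadata version.
mdadm --detail /dev/md0
```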

Booting and Initialization

mdadm enables booting from software RAID arrays by supporting the assembly of arrays early in the boot process, primarily through integration with the initramfs and bootloader configurations. For BIOS-based systems, the /boot partition is typically configured on a RAID 1 array to ensure compatibility, as bootloaders like GRUB can read individual member partitions as standard filesystems without needing full RAID awareness. This setup allows the kernel and initramfs to be loaded from mirrored devices, providing redundancy for critical boot files.

Integration with the initramfs is essential for automatic array assembly during boot. The /etc/mdadm.conf file specifies array configurations for auto-assembly, and tools like dracut (on Fedora and RHEL derivatives) or initramfs-tools (on Debian and Ubuntu) include hooks to scan devices, include the mdadm binary, and activate arrays using commands like mdadm --assemble --scan before mounting the root filesystem. For example, in initramfs-tools, hook scripts in /usr/share/initramfs-tools/hooks/ copy mdadm and its dependencies into the initramfs image, ensuring arrays are assembled prior to proceeding with the boot sequence. Regenerating the initramfs after changes to mdadm.conf is required to incorporate updated configurations.

In legacy systems with kernels prior to version 2.6, special handling was necessary for booting from RAID arrays, often requiring kernel command-line parameters like md= to manually specify devices and RAID levels without relying on superblock metadata. Modern kernels since 2.6.9 automatically detect and assemble arrays via embedded metadata during boot, provided the md driver is compiled into the kernel or loaded as a module, simplifying initialization without explicit parameters.

For UEFI systems, mdadm supports booting from RAID arrays using GPT partitioning, where the EFI System Partition (ESP) can be mirrored in RAID 1 with internal metadata formats such as version 0.90 or 1.0 placed at the end of devices, so the firmware still sees an ordinary FAT filesystem. External metadata formats also facilitate handoff from Fake RAID (firmware-assisted) configurations, allowing mdadm to take over array management post-bootloader. Bootloaders like GRUB or systemd-boot are configured similarly to non-RAID setups, referencing the assembled md device (e.g., /dev/md0) or UUIDs for the root and boot partitions.

Troubleshooting boot issues with degraded arrays often involves forcing assembly in the initramfs rescue shell using mdadm --assemble --scan --run, which starts the array despite missing or failed devices, enabling the system to boot in a reduced state for subsequent repairs. This option overrides default safety checks that prevent starting degraded arrays, but it should be used cautiously to avoid data inconsistency. During scanning, mdadm relies on metadata formats like those detailed in the Metadata Management section to identify and validate array members.
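
A sketch of capturing array definitions and regenerating the initramfs so arrays assemble at boot; configuration paths and the choice of tool vary by distribution.

```sh
# Record the currently running arrays in the configuration file
# (on Debian/Ubuntu the file is /etc/mdadm/mdadm.conf instead).
mdadm --detail --scan >> /etc/mdadm.conf

# Debian/Ubuntu (initramfs-tools): rebuild the current initramfs.
update-initramfs -u

# Fedora/RHEL derivatives (dracut): rebuild the initramfs image.
dracut --force
```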

Monitoring Capabilities

mdadm provides robust tools for runtime monitoring of software RAID arrays, enabling administrators to assess health, detect issues, and receive timely alerts. The mdadm --detail command displays comprehensive status information for an active array, including its operational state (such as active or degraded), the count of active devices, failed disks, and available spares, as well as progress metrics for ongoing processes like resynchronization or rebuilding. For examining individual components without an assembled array, mdadm --examine retrieves metadata from device superblocks, revealing array UUID, RAID level, and role assignments to verify consistency and potential faults. These commands are essential for manual status checks and scripting periodic health verifications.

Kernel-level insights are accessible via the /proc/mdstat interface, which exposes real-time statistics for all MD arrays, including device membership, activity levels, and detailed progress (e.g., percentage complete and speed). Executing cat /proc/mdstat yields output like active disks, recovery status, and bitmap usage, making it a lightweight method for integration into monitoring scripts or dashboards without invoking user-space tools.

Continuous event-based monitoring is handled by mdadm --monitor, which operates as a background daemon to poll specified arrays (supporting levels 1, 4, 5, 6, and 10) for state changes such as device failures, spare activation, or degradation. Alerts can be configured for delivery via email by setting the MAILADDR option in /etc/mdadm.conf (e.g., MAILADDR admin@example.com), triggering notifications for events like Fail, DegradedArray, RebuildStarted, or DeviceDisappeared. For advanced alerting, the --program or --alert options invoke custom scripts, which may integrate with systems like SNMP by generating traps upon detection of issues. Syslog output is also supported via the --syslog flag for logging events to system logs.

The mdmpd daemon, once distributed alongside mdadm for monitoring multipath device failures and path recovery, has been deprecated since kernel 2.6.10-rc1 in 2004 and has been superseded by Device Mapper Multipath (DM-Multipath) for such functionality. mdadm complements disk-level monitoring tools like smartmontools, where the latter's smartd daemon performs predictive failure analysis on individual drives (e.g., via S.M.A.R.T. attributes), allowing early detection of issues that could impact array integrity before mdadm reports degradation. Write-intent bitmaps, enabled with mdadm --grow --bitmap=internal or external bitmap files, track modified regions during unclean shutdowns to accelerate resyncs, with their status and effectiveness observable in /proc/mdstat during recovery operations.
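
A brief sketch of the monitoring workflow described above; the array name, polling interval, and reliance on MAILADDR from /etc/mdadm.conf are illustrative assumptions.

```sh
# Quick health checks.
cat /proc/mdstat
mdadm --detail /dev/md0

# Run the monitor as a daemon, logging events to syslog and polling
# every 300 seconds; mail goes to the MAILADDR set in /etc/mdadm.conf.
mdadm --monitor --scan --daemonise --syslog --delay=300

# One-shot run that emits a TestMessage event to verify alert delivery.
mdadm --monitor --scan --oneshot --test
```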

Usage and Management

Command-Line Interface

The mdadm command-line interface provides a flexible syntax for managing software RAID arrays, following the general structure mdadm [mode] [options] <devices>, where the mode specifies the primary operation, options modify behavior, and devices list the relevant block devices or array identifiers. Common modes include --create for initializing a new array with metadata superblocks and activating it, --assemble for scanning and activating existing arrays from component devices, and --manage for runtime operations such as adding or removing devices from an active array. This structure allows users to perform array creation, assembly, and maintenance in a single tool, reducing the need for multiple utilities.

Key global options enhance usability across modes, such as --verbose (or -v), which increases output detail and can be specified multiple times for greater verbosity; --config (or -c), which points to a configuration file like the default /etc/mdadm.conf for array definitions and scanning rules; and --help (or -h), which displays general usage information or mode-specific details when invoked with a mode. These options apply regardless of the selected mode, enabling consistent control over output verbosity, configuration sourcing, and quick reference access.

The configuration file /etc/mdadm.conf uses a simple, keyword-based format to define arrays and settings for automatic detection and monitoring, with lines starting with keywords like ARRAY or MAILADDR. An ARRAY line specifies an array's device path and identity tags, such as ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371, allowing mdadm to identify and assemble components during scans. The MAILADDR line configures email notifications for monitoring and is limited to a single address, which mdadm uses in --monitor mode with --scan for auto-detection of issues. Comments begin with #, and lines can be continued with leading whitespace, ensuring readability and flexibility.

Installation of mdadm is straightforward on most distributions via package managers; for example, on Debian-based systems, it is available as apt install mdadm, while on Red Hat-based systems, yum install mdadm or dnf install mdadm suffices. For custom builds, the source code can be compiled from the official repository using make commands, including make install-bin for binaries and make install-systemd for service integration.

Error handling in mdadm includes standardized exit codes to indicate operation outcomes, varying by mode; for instance, in miscellaneous modes, 0 denotes normal success, 1 indicates a failed device, 2 signals an unusable array, and 4 represents an error such as invalid arguments or an inability to read device information. These codes allow scripts and administrators to detect and respond to issues programmatically, with verbose output providing additional diagnostic details when enabled.
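
A minimal sketch of an /etc/mdadm.conf written from the shell; the UUID is the example value from the text, and the DEVICE and MAILADDR values are placeholder assumptions.

```sh
cat > /etc/mdadm.conf <<'EOF'
# Consider all partitions listed in /proc/partitions as potential members.
DEVICE partitions

# Arrays to assemble, identified by UUID.
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371

# Single address that mdadm --monitor mails alerts to.
MAILADDR root
EOF

# Append ARRAY lines for any arrays that are currently running.
mdadm --detail --scan >> /etc/mdadm.conf
```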

Assembly and Disassembly

Assembly of an MD array activates an existing array from its component devices, making it available as a block device for use. The primary command for manual assembly is mdadm --assemble followed by the target MD device name and the component devices, such as mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1. This mode verifies that the components match the array's metadata before activating it. To identify and assemble arrays using unique identifiers, the --uuid option can be specified, allowing assembly even when device paths have changed; for example, mdadm --assemble /dev/md0 --uuid=12345678-1234-1234-1234-1234567890ab /dev/sda1 /dev/sdb1 assembles the array by matching the provided UUID from the superblocks. In cases of missing disks, degraded assembly is possible by listing only the available components (typically together with --run), provided the RAID level permits it; the missing keyword plays the analogous role when an array is first created.

Automatic assembly occurs at runtime through the --assemble --scan option, which scans for arrays defined in /etc/mdadm.conf or detects them via device metadata and udev rules for hotplug events. This mode assembles all eligible arrays without explicit device listing, relying on the configuration file's ARRAY lines that specify device names, UUIDs, or other identifiers. For degraded arrays where fewer components are available than required for full redundancy, the --run option forces activation if data accessibility is possible, such as starting a RAID 1 with only one disk or a RAID 5 with one member missing; for instance, mdadm --assemble --run /dev/md0 /dev/sda1 overrides safety checks to begin operation in degraded mode, potentially using spare devices if configured. The --force option can also be used alongside --run to assemble despite mismatched or outdated metadata.

Disassembly deactivates an active array, stopping its operation and releasing the underlying devices. The command mdadm --stop /dev/md0 safely stops the specified array if it is not in use by the system. For arrays that are stuck or mounted, the --force option can compel the stop, as in mdadm --stop --force /dev/md0, though this risks data inconsistency if filesystems remain active. To stop all active arrays, mdadm --stop --scan can be employed, mirroring the scan-based assembly process. Boot-time assembly procedures build on these runtime mechanisms but incorporate initramfs integration for early activation.
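
A short sketch of the assembly and disassembly commands described above; all device names are placeholders.

```sh
# Manual assembly from explicit components.
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1

# Assemble everything defined in mdadm.conf or discoverable from metadata.
mdadm --assemble --scan

# Force a degraded RAID 1 to start with a single member.
mdadm --assemble --run /dev/md0 /dev/sda1

# Deactivate one array, or all active arrays.
mdadm --stop /dev/md0
mdadm --stop --scan
```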

Maintenance Operations

mdadm provides several commands for maintaining RAID arrays after initial setup, allowing administrators to modify array composition, repair issues, and optimize performance without downtime in many cases. These operations leverage the kernel's MD driver capabilities and require the array to be active unless specified otherwise. For instance, adding or removing devices can be performed hot-plug style on redundant arrays like RAID 1 or RAID 5.

To add a device to an active array, the --add option is used, which integrates the new device either as a spare or by re-adding a previously removed one, triggering a resync if necessary. The command mdadm /dev/md0 --add /dev/sdd adds the device /dev/sdd to the array /dev/md0, provided the array has redundancy to tolerate potential inconsistencies during rebuilding. Similarly, removing a failed or spare device uses --remove, as in mdadm /dev/md0 --remove /dev/sdd, which detaches the device from the array metadata (a device is normally marked faulty first, either automatically or with --fail) without affecting data availability in redundant configurations.

Reshaping arrays enables changes to the RAID level or capacity expansion, using the --grow option, which requires kernel support for the desired transformation. For example, converting a RAID 5 array to RAID 6 involves mdadm --grow /dev/md0 --level=6 --backup-file=/backup, where the backup file stores critical stripe data to ensure safe operation during the reshape. Size expansion, such as growing the array by adding devices and then extending the filesystem, also uses --grow with --size= or --raid-devices=, but this demands kernel versions supporting the feature, such as 2.6.17 or later for RAID 5 growth. These operations proceed incrementally, allowing continued array access, though performance may degrade temporarily.

Repair and resynchronization maintain data integrity by reintegrating devices or synchronizing contents across members. The --re-add option reintegrates a previously removed device, leveraging write-intent bitmaps if enabled to accelerate recovery: mdadm /dev/md0 --re-add /dev/sdd. Automatic resync occurs upon assembly if discrepancies are detected, ensuring all devices reflect the current data state. Bitmap management enhances resync efficiency by tracking unsynchronized regions and can be enabled or modified after creation. To add an internal bitmap, the command mdadm --grow /dev/md0 --bitmap=internal is issued, which stores the bitmap within the array metadata to minimize resync times after unclean shutdowns. Disabling it uses --bitmap=none, useful for arrays without frequent interruptions. Scrubbing verifies array consistency by checking for data corruption; a check can be triggered through the md sysfs interface (as shown below), and mdadm --wait /dev/md0 blocks until a full array check or an ongoing resync completes. This operation, often scheduled periodically, integrates with monitoring tools to report resync progress.
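
A sketch of common maintenance steps under the assumption of a placeholder array /dev/md0 and placeholder member devices:

```sh
mdadm /dev/md0 --add /dev/sdd1            # add a new member or hot spare
mdadm /dev/md0 --fail /dev/sdc1           # mark a suspect member faulty
mdadm /dev/md0 --remove /dev/sdc1         # detach the failed member
mdadm /dev/md0 --re-add /dev/sdc1         # reintegrate it (fast with a bitmap)

mdadm --grow /dev/md0 --bitmap=internal   # enable a write-intent bitmap

# Start a consistency check (scrub) via sysfs and wait for completion.
echo check > /sys/block/md0/md/sync_action
mdadm --wait /dev/md0
```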

Technical Details

Device Naming Conventions

mdadm employs specific naming conventions for RAID devices to ensure consistent identification across system operations and reboots. By default, non-partitioned RAID arrays are named using the format /dev/md<n>, where <n> is a decimal number ranging from 0 to 255, corresponding to the minor device number assigned by the kernel. For persistent superblock-based naming, particularly with version 1.0 or later metadata that supports UUIDs and names, arrays can use /dev/md_d<n>, where <n> matches the minor number to provide a stable reference independent of the order of device discovery.

For partitionable arrays, which require metadata version 1.0 or higher, partitions are denoted by appending p<m> to the device name, resulting in formats such as /dev/md_d<n>p<m> or /dev/md<n>p<m>, where <m> indicates the partition number. This allows standard partitioning tools like fdisk or parted to operate on the array as if it were a single disk. Additionally, custom persistent names can be assigned during array creation with the --name= option, leading to device paths like /dev/md/home under the /dev/md/ directory.

Each RAID array is assigned a unique 128-bit UUID upon creation, randomly generated unless specified with --uuid=, and an optional name stored in the superblock for version-1 metadata. These identifiers facilitate reliable assembly and configuration; for instance, the command mdadm --detail --scan outputs ARRAY lines in a format suitable for /etc/mdadm.conf, such as ARRAY /dev/md0 UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx name=home, enabling persistent references by UUID or name rather than volatile paths.

To mitigate the volatility of numeric device names like /dev/md0, which can shift based on assembly order, mdadm integrates with udev to create symbolic links. The udev rules generate symlinks such as /dev/disk/by-id/md-uuid-<UUID> pointing to the actual device node, allowing applications to reference arrays by stable identifiers. Similarly, symlinks such as /dev/disk/by-id/md-name-<name> and named entries under /dev/md/ provide access by array name. This mapping ensures robustness in dynamic environments. mdadm's naming scheme is compatible with layered storage tools, such as LVM, where MD devices can serve as physical volumes (PVs) using their persistent names or UUID symlinks for volume group creation. Device-mapper can also stack on mdadm arrays, treating them as underlying block devices in multipath or snapshot configurations.
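
A sketch of persistent naming in practice; the array name "home" and member devices are placeholders.

```sh
# Create a mirror with a persistent name stored in the version-1 superblock;
# it surfaces as /dev/md/home in addition to its numeric /dev/mdX node.
mdadm --create /dev/md/home --level=1 --raid-devices=2 --name=home \
    /dev/sdb1 /dev/sdc1

# Emit ARRAY lines keyed by UUID and name for /etc/mdadm.conf.
mdadm --detail --scan

# Inspect the stable symlinks created by the udev rules.
ls -l /dev/md/
ls -l /dev/disk/by-id/ | grep -i md
```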

RAID 1 Implementation Specifics

mdadm implements RAID 1 as a mirroring scheme in which data is duplicated across all member devices in the array, ensuring redundancy by maintaining identical copies on each. This supports configurations with 2 to N devices, where N is practically limited by hardware and I/O constraints rather than a hard cap. The mirroring occurs synchronously during writes, with the kernel's MD driver handling replication to all active legs of the array.

The synchronization process in RAID 1 begins with an initial resync upon array assembly if the devices are not already in sync, copying data sector-by-sector from a reference device to the others starting from sector 0. This can be manually initiated or forced, for example using the command mdadm --assemble --update=resync /dev/md0, which marks the array as dirty to trigger the operation. For ongoing maintenance, incremental resyncs leverage write-intent bitmaps, stored on the member devices or in a separate file, to track modified regions, allowing recovery to focus only on discrepancies after interruptions like power failures and significantly reducing resync time compared to full scans.

Failure handling in mdadm's RAID 1 operates automatically: upon detecting errors via I/O timeouts or write failures, the kernel marks the affected member as faulty (visible in /sys/block/mdX/md/dev-YYY/state) and redirects operations to the surviving mirrors, maintaining availability in degraded mode. Recovered or replacement devices can be reintegrated using mdadm --re-add /dev/md0 /dev/sdZ, which initiates a targeted resync to rebuild the mirror without full duplication, provided a bitmap is enabled for efficiency.

Performance characteristics of RAID 1 in mdadm include a write penalty: each write is replicated to every mirror, typically doubling the backend I/O load for a two-device array, so sustained write throughput is bounded by the slowest member rather than scaling with the number of devices. Reads, however, benefit from balancing across all in-sync devices, distributing requests to maximize aggregate bandwidth and reduce latency, with the kernel selecting legs based on heuristics such as request locality and queue depth.

Edge cases in the RAID 1 implementation address recovery and throttling, such as using bitmaps (configured via mdadm --grow /dev/md0 --bitmap=internal) to enable crash-safe incremental resyncs, limiting full scans to rare scenarios. For multi-device arrays, resync rates are throttled to prevent overwhelming the system, adjustable through /sys/block/mdX/md/sync_speed_min and /sys/block/mdX/md/sync_speed_max (in KiB/s), with defaults balancing speed and stability; for instance, the minimum rate ensures resync progress even under load.
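
A sketch of inspecting and tuning RAID 1 resync behaviour through the sysfs attributes mentioned above; md0 and the speed values are placeholder assumptions.

```sh
cat /sys/block/md0/md/sync_action          # idle, resync, recover, or check
cat /sys/block/md0/md/degraded             # non-zero if the mirror is degraded

# Per-array resync throttling in KiB/s; system-wide equivalents live in
# /proc/sys/dev/raid/speed_limit_min and speed_limit_max.
echo 50000  > /sys/block/md0/md/sync_speed_min
echo 200000 > /sys/block/md0/md/sync_speed_max

cat /proc/mdstat                           # shows resync progress and speed
```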
