
Shingled magnetic recording

Shingled magnetic recording (SMR) is a hard disk drive (HDD) technology designed to increase areal density by writing data tracks that partially overlap adjacent tracks, resembling the overlapping arrangement of shingles on a roof. This method employs a wider write head to overlap track edges while using a narrower read head to retrieve data precisely, thereby overcoming limitations in scaling track widths for conventional perpendicular magnetic recording (PMR). Data is organized into logical bands or zones to manage overlaps, enabling sequential writes that boost storage efficiency without requiring major changes to existing HDD architectures.

The primary advantage of SMR lies in its capacity to enhance areal density by 10-25% per platter compared to non-overlapping PMR drives, enabling HDDs with capacities up to 32 TB and beyond as of 2025 and reducing the cost per terabyte of storage. By maintaining strong magnetic write fields through wider heads, SMR addresses the challenges of narrowing tracks in traditional recording, supporting sustained growth in demand for cloud, archival, and enterprise applications. It can also improve reliability by using fewer heads and platters in higher-capacity designs.

A key challenge with SMR is the inability to perform efficient random writes or in-place updates, as overwriting one track destroys data in overlapping adjacent tracks, often requiring an entire band to be read, modified, and rewritten. To address this, SMR implementations incorporate management techniques such as drive-managed systems (which handle shingling, over-provisioning, and garbage collection internally), host-managed approaches (where the operating system or application sequences writes), and host-aware models that share responsibilities between drive and host. These strategies make SMR suitable for sequential workloads like backups and analytics but may introduce latency for random-access patterns without optimization.
SMR technology emerged in the late 2000s as a bridge to extend the viability of magnetic recording beyond PMR limits, with Seagate becoming the first to commercially ship SMR-based drives in 2013, initially targeting capacities up to 5 TB. By the mid-2010s, it was integrated with advancements like two-dimensional magnetic recording (TDMR) to further mitigate inter-track interference during reads. As of 2025, SMR is widely adopted in high-capacity and nearline HDDs from major manufacturers, contributing to exabyte-scale data centers, with commercial integration of heat-assisted magnetic recording (HAMR) enabling capacities up to 30 TB, while hybrid systems combining SMR with SSD caching provide balanced performance.

Overview

Principles of Operation

Shingled magnetic recording (SMR) is a technology that increases areal density by writing successive data tracks such that they partially overlap, resembling the layout of shingles on a roof. This approach allows tracks to be narrower than in conventional recording while using existing perpendicular magnetic recording (PMR) media and heads, enabling up to a 25% increase in storage capacity without major manufacturing changes. In SMR, the write head is wider than the read head, producing a track that overlaps the previous one by 50-90% of its width during sequential writes, thereby squeezing more tracks into the same radial space. The narrower read head can then access the non-overlapped portion of each track without interference from adjacent tracks, preserving read performance comparable to PMR. This geometry trades random write capability for density, as overwriting a track would corrupt downstream overlaps, necessitating sequential operations.

To manage overlaps, the disk surface is divided into fixed-size bands, or zones (commonly 256 MB), of consecutive tracks treated as sequential-write regions. Within a zone, the drive supports sequential writes starting from a write pointer at the zone's beginning, filling the zone contiguously until full; reuse requires a zone reset or a read-modify-write cycle to relocate data and reclaim space. These zones prevent widespread rewrites from random updates by confining shingling effects. Compared to conventional PMR, where tracks are written non-overlapping with equal-width read and write heads, SMR achieves higher areal density through overlap but requires adapted data placement to avoid read-modify-write cycles for updates. This results in density gains of 20-30% in practical implementations, extending the scalability of magnetic recording beyond PMR limits.
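The write-pointer discipline described above can be sketched as a toy zone model; the class and field names are illustrative, not a real drive interface:

```python
# Minimal sketch of an SMR zone: only sequential writes at the write
# pointer are accepted, and the zone must be reset before reuse.
class Zone:
    def __init__(self, start_lba, num_blocks):
        self.start = start_lba          # first LBA of the zone
        self.size = num_blocks          # zone length in blocks
        self.write_pointer = start_lba  # next writable LBA

    def write(self, lba, num_blocks):
        """Accept a write only if it lands exactly on the write pointer."""
        if lba != self.write_pointer:
            raise ValueError("non-sequential write rejected")
        if self.write_pointer + num_blocks > self.start + self.size:
            raise ValueError("write exceeds zone capacity")
        self.write_pointer += num_blocks

    def reset(self):
        """Rewind the pointer, logically discarding the zone's contents."""
        self.write_pointer = self.start

zone = Zone(start_lba=0, num_blocks=65536)  # e.g. a 256 MiB zone of 4 KiB blocks
zone.write(0, 100)       # sequential write at the pointer succeeds
zone.write(100, 50)      # the next write must continue at the pointer
# zone.write(0, 1)       # would raise: in-place overwrite is not allowed
zone.reset()             # reclaim the zone for reuse
```

In a real drive-managed implementation this bookkeeping lives in firmware; in host-managed drives the host performs it explicitly, as discussed later in the article.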

Advantages and Challenges

Shingled magnetic recording (SMR) enables significant increases in areal density by overlapping adjacent tracks, allowing for up to 25% higher storage capacity compared to conventional perpendicular magnetic recording (PMR) drives of similar physical size. This density advantage has driven SMR adoption in high-capacity hard disk drives, with commercial products reaching 32 TB as of 2024, supporting transitions from earlier 10 TB PMR models to meet growing data demands. Consequently, SMR reduces the cost per terabyte, offering approximately 20% savings in total cost of ownership for hyperscale storage systems through efficient use of existing manufacturing processes. In data centers, SMR's higher density minimizes the number of drives required for equivalent storage volumes, leading to lower material consumption and improved energy efficiency by reducing overall power draw and cooling needs. This scalability supports exabyte-scale deployments for cloud and archival storage, where sequential workloads predominate, and the global SMR market is projected to grow from $4.15 billion in 2024 to $8.3 billion by 2033 at a CAGR of 8.4%. Such benefits position SMR as a cost-effective option for sustainable growth in energy-constrained environments.

Despite these gains, SMR introduces write amplification, where updating data in a shingled band necessitates reading, modifying, and rewriting the entire band—often shifting downstream data—to avoid corrupting overlapping tracks. This process can amplify write operations by factors proportional to band size, exacerbating latency in random write scenarios. Random write performance in SMR drives is significantly slower than in PMR equivalents, with sustained speeds dropping to 40 MB/s or less after cache exhaustion, compared to 80–160 MB/s for PMR, and potentially 4–10 times slower in worst-case zone management.
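The write amplification described above can be illustrated with rough arithmetic, assuming a 256 MiB band and a 4 KiB logical update (illustrative numbers, not vendor figures):

```python
# Back-of-the-envelope write amplification for an in-place update on SMR.
band_bytes = 256 * 1024 * 1024   # assumed shingled band size
update_bytes = 4 * 1024          # one 4 KiB logical block changed

# Worst case: the band containing the update must be rewritten in full.
write_amplification = band_bytes / update_bytes
print(write_amplification)  # 65536.0 -> a 4 KiB update can cost 256 MiB of writes
```

Real drives mitigate this with media caches and by batching updates per band, but the arithmetic shows why sustained random writes are the pathological case for SMR.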
Read performance remains generally comparable to PMR drives, as the shingled geometry imposes minimal constraints on retrieval, though occasional latency arises from zone-level indirection and potential fragmentation during heavy write activity. These trade-offs make SMR well suited to write-once or sequential-access applications while necessitating careful workload alignment to mitigate operational bottlenecks.

History

Early Research and Development

Shingled magnetic recording (SMR) was proposed in the late 2000s as a technique to extend the areal density limits of perpendicular magnetic recording (PMR) by allowing tracks to overlap like shingles on a roof, thereby addressing the superparamagnetic trilemma of signal-to-noise ratio, thermal stability, and writability without requiring major changes to existing media or heads. The concept was introduced by researchers at Hitachi Global Storage Technologies (HGST), with the seminal paper "The Feasibility of Magnetic Recording at 10 Terabits Per Square Inch on Conventional Media" published in February 2009 by Roger Wood and colleagues, demonstrating that shingled writing combined with two-dimensional readback could achieve densities up to 10 Tb/in² on conventional granular media. This approach aimed to sidestep the narrowing write head constraints in PMR by using wider heads for sequential writes, enabling higher track densities while maintaining readability through advanced readback methods.

Research on SMR accelerated in the following years, with laboratory prototypes showcasing areal density gains of 20-30% over conventional PMR drives through overlapping track geometries. Key milestones included HGST's 2010 spinstand demonstrations of shingled write/read operations achieving over 800 Gb/in² in controlled settings, alongside contributions from industry researchers and academic collaborations on head designs and signal-processing algorithms. These efforts involved academic institutions that co-authored foundational simulations showing viable error rates at high densities. Early laboratory work focused on mitigating challenges from overlapping tracks, such as inter-track interference and the need for sequential access, through the development of banding algorithms to group tracks into rewrite units and enhanced error correction codes tailored for shingled geometries.
HGST and Seagate filed initial patents in 2010 addressing these issues; for instance, HGST's US Patent 8,537,481 (filed August 2011, priority 2010) described methods to minimize far-track erasure in shingled bands, while Seagate's US Patent 8,896,961 (filed April 2010) covered reader positioning for overlapping tracks. By 2011, the transition to potential commercialization prompted early discussions within the INCITS T10 committee on zoned storage interfaces to support SMR's sequential-write semantics, laying the groundwork for the Zoned Block Commands (ZBC) standard. These efforts emphasized device-level management of zones to handle the non-overwritable nature of shingled tracks, influencing subsequent host-aware implementations.

Commercial Introduction and Adoption

The commercial introduction of shingled magnetic recording (SMR) technology marked a significant milestone in hard disk drive (HDD) development, beginning with Seagate's announcement in September 2013 of the first drives utilizing SMR to achieve up to a 25% increase in areal density, enabling capacities such as 2-4 TB in the Archive HDD series for nearline storage applications. These initial products targeted cost-sensitive, sequential-write workloads like archiving, providing a practical boost in capacity without immediate widespread disruption to conventional perpendicular magnetic recording (PMR) designs. Following closely, HGST (now part of Western Digital) launched the Ultrastar Archive Ha10, the industry's first enterprise-class 10 TB helium-filled SMR drive for archive storage, in June 2015, leveraging seven platters to set a new benchmark for high-capacity, low-power storage in cool/cold environments.

Early adoption faced hurdles, notably a 2021 class-action lawsuit against Western Digital alleging undisclosed SMR implementation in WD Red NAS drives, which resulted in a $2.7 million settlement and prompted industry-wide improvements in drive labeling and transparency standards to address compatibility issues in NAS RAID and random-write scenarios. By the early 2020s, SMR gained traction in specialized sectors, evolving from niche archive use to broader integration in enterprise storage. Toshiba also introduced SMR drives in the mid-2010s, contributing to diversified adoption across manufacturers.

Recent advancements as of 2025 underscore SMR's maturation, with Toshiba introducing a 24 TB SMR model in the S300 series for video surveillance applications on November 5, 2025, doubling prior capacities in this segment to support AI-driven analytics and continuous recording. Seagate has advanced toward 40 TB HAMR drives using SMR, with engineering samples shipped in 2025 for qualification, combining heat-assisted recording with shingled enhancements for hyperscale demands, though full production is slated for 2026.
Market projections indicate robust growth, with the SMR sector expected to expand from $1.65 billion in 2024 to approximately $7.33 billion by 2032 at a 20.5% CAGR, driven by capacity needs in exabyte-scale environments. Adoption trends reflect increasing acceptance, particularly in cloud storage where hyperscalers such as AWS, Google, and Meta have incorporated SMR drives by 2024 to achieve about 20% higher capacity per unit compared to conventional HDDs, optimizing costs for archival and sequential workloads. In surveillance and enterprise settings, SMR elements now feature in a growing share of new HDDs amid surging data from AI and video analytics.

Technical Fundamentals

Track Geometry and Zoning

In shingled magnetic recording (SMR), the track geometry is designed to exploit the difference between write and read head widths to achieve higher areal density. The write head is significantly wider than the read head, typically around twice as wide, allowing new tracks to overlap previously written ones while the narrower read head can still access the residual data in the non-overlapped portion of the older track. For example, early conceptual designs featured a write width of approximately 70 nm compared to a conventional read width of 25 nm, enabling overlaps that reduce the effective track pitch and yield a density multiplier of 1.3 to 1.5 times over perpendicular magnetic recording (PMR) in models, though commercial implementations typically achieve 10-25% gains. This geometry inherently introduces adjacent-track interference (ATI), where the magnetic fields from overlapping writes can degrade neighboring tracks, but it is mitigated through precise head positioning and signal-processing techniques.

SMR drives organize tracks into zones, which serve as sequential-write regions to manage the overwrite constraints imposed by shingling. A typical SMR drive is divided into thousands of such zones—for instance, a multi-terabyte drive might contain over 30,000 zones—each functioning as a contiguous region where data must be written sequentially to avoid disturbing prior data. Each zone commonly holds around 500 tracks, though this varies with track pitch and drive geometry, and random overwrites are prevented by treating the zone as an append-only structure until it is reset. Zones are separated by guard bands of non-shingled tracks to isolate interference and allow independent management.

Banding variations in SMR include fixed-band and variable-band configurations, adapting to different management needs. Fixed-band SMR uses uniform zone sizes across the drive, such as 256 MB per zone, which simplifies firmware handling in device-managed implementations but may lead to internal fragmentation in zones with varying track densities due to the drive's fixed zoned layout.
Variable-band SMR, in contrast, allows flexible zone capacities ranging from 32 to 512 MB, accommodating host-aware or host-managed schemes where zone boundaries can align with application workloads for better efficiency. As of 2024, modern commercial drives, such as those from Western Digital and Seagate, predominantly employ fixed-band zoning with 256 MB zones to standardize compliance with protocols like Zoned Block Commands (ZBC).

Error management in SMR track geometry incorporates built-in error correction to counter ATI, with shingled-specific error correction codes (ECC) enhancing reliability. Stronger ECC schemes, such as low-density parity-check (LDPC) codes tailored for two-dimensional readback, are applied across multiple adjacent tracks during reads to correct errors from partial overlaps without frequent rewrites. This adds a small overhead but ensures bit error rates remain below 10^{-15}, critical for enterprise applications, by integrating inter-track interference cancellation and iterative decoding algorithms.
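Using the head-width figures above, the density multiplier follows from simple pitch arithmetic; the 50 nm effective pitch below is an assumed value chosen to land in the cited 1.3-1.5x model range:

```python
# Illustrative areal-density arithmetic from SMR head geometry.
# In non-overlapping PMR, track pitch equals the write width; with
# shingling, only the non-overlapped residual of each track remains.
conventional_pitch_nm = 70.0   # assumed write width (PMR pitch)
shingled_pitch_nm = 50.0       # assumed effective pitch after overlap

density_multiplier = conventional_pitch_nm / shingled_pitch_nm
print(round(density_multiplier, 2))  # 1.4x over PMR in this simple model
```

Commercial gains are smaller (10-25%) because guard bands, servo margins, and error-rate constraints consume part of the geometric advantage.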

Read and Write Mechanisms

In shingled magnetic recording (SMR), the write process relies on sequential appending of tracks within defined zones, where each new track partially overlaps the previous one to maximize areal density. This is achieved using a write head wider than the target track width, typically offset to overlap only the trailing edge of the prior track in a one-directional manner, eliminating inter-track gaps and allowing track pitches as narrow as 10-20 nm. Overwriting an individual track is not possible without corrupting adjacent tracks; instead, an update necessitates a read-modify-write operation that rewrites the entire affected zone, producing write amplification factors that scale with zone size (often hundreds of tracks) and typically reach 100x or more for random single-track updates in commercial drives.

The read process in SMR is non-destructive and supports random access, leveraging narrower read heads (e.g., 10-20 nm width) that can isolate the readable portion of a shingled track despite overlaps from adjacent writes. These heads exploit the asymmetry between writing and reading capabilities: reading thin tracks is feasible even when writing them directly is challenging due to magnetic field requirements. While random reads are viable, sequential reads are preferred for efficiency, as they minimize head seeks and reduce inter-track interference effects, which can be mitigated further by techniques like two-dimensional magnetic recording (TDMR) using multiple offset read elements.

Head technology in SMR distinguishes between one-sided shingled writing, the predominant approach where overlaps occur unidirectionally, and experimental two-sided designs that enable shingling on both edges for potentially higher densities but increased complexity in track management. Seagate's implementations, for instance, primarily use one-sided writing with optimized head designs, yielding areal density gains of 7-15% over conventional perpendicular recording in capacities around 24 TB as of 2023.
To isolate zones and accommodate active writes without spillover, guard regions—implemented as gap bands or dedicated unshingled rewrite areas—separate shingled regions, typically accounting for 1-5% of total drive capacity.
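The read-modify-write cycle described above can be sketched as follows; `rewrite_band` is a hypothetical helper, and the band is reduced to a four-block toy:

```python
# Sketch of the read-modify-write cycle needed to update one block
# inside a shingled band; names and sizes are illustrative.
def rewrite_band(band, block_index, new_block):
    """Return the rewritten band and the number of blocks physically written."""
    staged = list(band)              # 1. read the whole band into a staging buffer
    staged[block_index] = new_block  # 2. apply the logical update in memory
    return staged, len(staged)       # 3. rewrite every block sequentially

band = ["A", "B", "C", "D"]          # a toy 4-block band
band, blocks_written = rewrite_band(band, 2, "C2")
print(band, blocks_written)          # ['A', 'B', 'C2', 'D'] 4
```

A one-block logical update thus costs a whole-band physical rewrite, which is exactly the amplification that zone-aware data placement tries to avoid.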

Data Management Approaches

Device-Managed SMR

Device-managed shingled magnetic recording (SMR), also referred to as drive-managed SMR, is an implementation where the hard disk drive's controller internally manages the sequential-write constraints and data shingling process to emulate the behavior of a conventional perpendicular magnetic recording (PMR) drive. This approach provides full compatibility with existing host systems and applications, as the drive exposes a standard block interface without requiring any modifications to host software or operating systems. The controller handles zone management, data rewriting, and track overlapping transparently, absorbing the complexities of SMR to maintain random read and write accessibility from the host's perspective.

In terms of implementation, device-managed SMR relies on a shingled translation layer within the drive firmware to map logical addresses to physical shingled tracks, often utilizing an on-drive media cache composed of conventional (non-shingled) CMR regions to absorb random writes and mitigate write amplification. For instance, Seagate's early device-managed SMR drives, such as the 8 TB Archive HDD, incorporate an "E-region" persistent cache equivalent to about 1% of the drive capacity (approximately 80 GB), alongside a 128 MB cache for immediate write buffering and zone mapping. This setup allows the drive to handle write amplification factors up to 10 times higher than PMR drives during intense random workloads by rewriting affected shingled bands in the background, without exposing these operations to the host.

Performance in device-managed SMR drives closely mirrors that of PMR drives for sequential reads and writes when operating within the cache capacity, achieving sustained throughput rates comparable to conventional HDDs, such as around 150-200 MB/s for sequential operations. However, under heavy random I/O workloads that exceed the cache limits, performance can degrade due to internal band rewrites and garbage collection, with sustained write speeds dropping by an order of magnitude or more, depending on drive utilization and garbage-collection efficiency.
Advanced translation schemes, like the track-based translation layer proposed in research, can reduce write amplification to as low as 0-6% in optimized scenarios, helping maintain near-PMR performance up to 90% drive fill levels. Examples of device-managed SMR drives include Seagate's Archive HDD series, which began shipping in 2013 with capacities up to 8 TB, designed for archival and nearline storage without necessitating host-side changes.
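A toy model of the translation-layer behavior described above might look like this; class names, cache sizing, and the destage policy are all illustrative simplifications, not any vendor's firmware:

```python
# Toy drive-managed SMR translation layer: random writes land in a small
# conventional (CMR) media cache; when it fills, the firmware destages
# them into shingled bands via full-band rewrites.
class DriveManagedSMR:
    def __init__(self, cache_blocks, band_size):
        self.cache = {}                # lba -> data, the CMR "E-region"
        self.cache_blocks = cache_blocks
        self.band_size = band_size     # blocks per shingled band
        self.physical_writes = 0       # tracks write amplification

    def write(self, lba, data):
        self.cache[lba] = data
        self.physical_writes += 1      # one block written into the cache region
        if len(self.cache) >= self.cache_blocks:
            self._destage()

    def _destage(self):
        # Group cached blocks by band and rewrite each affected band whole.
        touched_bands = {lba // self.band_size for lba in self.cache}
        for _ in touched_bands:
            self.physical_writes += self.band_size
        self.cache.clear()

drive = DriveManagedSMR(cache_blocks=4, band_size=100)
for lba in (5, 205, 410, 999):         # random writes spread over 4 bands
    drive.write(lba, b"x")
print(drive.physical_writes)           # 4 cache writes + 4 * 100 band rewrites = 404
```

Four one-block random writes cost 404 physical block writes here, which is why device-managed drives stay fast only until the cache saturates.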

Host-Managed SMR

Host-managed shingled magnetic recording (SMR) refers to an implementation where the host operating system or application directly controls the management of zones on the drive, issuing specialized zoned commands to append data sequentially to zones and explicitly resetting zones as needed, without any drive-level mechanisms to conceal the effects of track shingling. This approach shifts the responsibility for honoring zone ordering and sequential-write constraints entirely to the host, allowing the drive to expose its internal zone structure transparently via standards such as Zoned Block Commands (ZBC) for SCSI or Zoned ATA Commands (ZAC) for ATA interfaces. Unlike other SMR variants, host-managed drives reject non-sequential writes to zones, enforcing strict behavior to maintain performance and avoid internal garbage collection delays.

In typical workflows, applications interact with the drive as a zoned block device, querying zone states and capacities before appending data streams to open zones, which are fixed-size regions (often 256 MB) optimized for sequential filling. For instance, in archival backup or write-once storage scenarios like cloud object storage, the host sequentially fills zones with immutable data, minimizing overwrites and leveraging the drive's full areal density potential without random access interruptions. When a zone becomes full, the host must reset it—effectively erasing and preparing it for reuse—before appending new data, often coordinating this across multiple drives in distributed systems. Real-world deployments, such as Dropbox's exabyte-scale adoption of 14 TB host-managed SMR drives, demonstrate this by aligning data chunks sequentially within zones to support predictable ingestion of large, append-heavy workloads. Examples also include Western Digital's Ultrastar Archive Ha10 (10 TB, introduced 2015) and Ultrastar DC HC650 (20 TB, as of 2023), designed for enterprise nearline storage with host optimization.
The primary benefits of host-managed SMR include enhanced sustained write performance for sequential workloads, potentially achieving up to twice the throughput of conventional perpendicular magnetic recording (PMR) drives in optimized setups, with examples reaching 200–250 MB/s for large sequential appends due to the elimination of drive-side buffering and cleaning overheads. This leads to lower total cost of ownership through higher capacity per drive (e.g., up to 30% density gains) and reduced power consumption in data centers. However, it demands zoned-aware software stacks; Linux kernel support for zoned block devices, usable with ext4 via the dm-zoned device mapper for sequential emulation, was introduced in version 4.10 (2017), enabling compatibility for host-managed drives in environments like archival storage.

Limitations arise in workloads requiring frequent updates, as host-managed SMR performs poorly with random overwrites—common in databases—since updating data within a filled zone necessitates host-orchestrated resets and full rewrites, potentially causing latency spikes and inefficiency. Without application-level awareness, attempts at in-zone updates can lead to zone exhaustion and forced migrations, amplifying overhead compared to transparent drive-managed alternatives. Thus, it excels in append-dominant use cases but requires software adaptations to avoid pitfalls.
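A minimal sketch of the host-side zone bookkeeping these workflows require, assuming a hypothetical helper API rather than a real kernel interface:

```python
# Host-managed append workflow (illustrative): the host tracks each
# zone's fill level itself, appends only at write pointers, and resets
# a zone it no longer needs when no zone has room.
class HostManagedZone:
    def __init__(self, zone_id, capacity):
        self.zone_id = zone_id
        self.capacity = capacity      # zone size in bytes
        self.used = 0                 # host-tracked write pointer offset

def append(zones, nbytes):
    """Pick the first zone with room and append sequentially to it."""
    for z in zones:
        if z.used + nbytes <= z.capacity:
            z.used += nbytes          # would issue a sequential WRITE here
            return z.zone_id
    # No room anywhere: reset a victim zone (data there must be dead or
    # already migrated), then append into the freshly emptied zone.
    victim = zones[0]
    victim.used = 0                   # would issue RESET WRITE POINTER here
    victim.used = nbytes
    return victim.zone_id

zones = [HostManagedZone(i, capacity=256 * 2**20) for i in range(3)]
zid = append(zones, 64 * 2**20)       # first 64 MiB chunk lands in zone 0
print(zid)                            # 0
```

Real systems layer garbage collection and data-placement policy on top of this, but the core contract—append at the pointer, reset to reuse—is exactly what the drive enforces.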

Host-Aware SMR

Host-Aware Shingled Magnetic Recording (HA-SMR) represents a model in which the drive exposes key zoning information, such as zone types, write pointers, and status (e.g., empty or full), to the host through standardized queries like the REPORT ZONES command, enabling collaborative optimization while the drive internally manages buffering for non-sequential operations. This approach builds on the Zoned Block Commands (ZBC) and Zoned Device ATA Command Set (ZAC) standards, allowing the host to direct sequential writes to appropriate zones for efficiency, while the drive redirects random writes to a media cache and performs background migration and cleaning to maintain performance. Unlike fully host-managed systems, HA-SMR remains backward compatible with conventional block interfaces, making it suitable for gradual adoption in existing storage ecosystems.

Implementations of HA-SMR typically support a limited number of open zones—up to 128 sequential zones and 8-16 zones for random writes—beyond which sequential write throughput can degrade by approximately 57% due to increased cleaning overhead. The drive combines device-managed caching, for handling bursts of non-sequential I/O without immediate performance penalties, with host-provided hints to align writes and minimize rewrites. For instance, commercial drives evaluated in storage research incorporate HA-SMR features with partial zone exposure, allowing hosts to query and reset write pointers via the RESET WRITE POINTER command for optimized zone lifecycle management. This integration often involves host-side indirection buffers, such as circular-log structures spanning multiple zones, to further enhance I/O predictability.

HA-SMR excels in workloads with mixed sequential and bursty patterns, such as archival or hierarchical storage systems where it serves as a cost-effective second tier, delivering performance comparable to perpendicular magnetic recording (PMR) drives under light random write loads due to effective caching.
In scenarios involving irregular I/O, such as data-ingestion pipelines, the host's awareness of zone states reduces latency spikes from rewrites, though sustained random writes may still incur cleaning delays of 1-30 seconds per zone. Overall, optimized HA-SMR configurations can achieve higher and more predictable throughput than unoptimized device-managed SMR, making it viable for general-purpose enterprise applications.

The concept of HA-SMR emerged around 2016 as a superset of drive-managed and host-managed models, formalized through the INCITS T10 ZBC and T13 ZAC standards released in 2015-2016 to standardize zoned interfaces. As of 2025, HA-SMR and related zoned SMR technologies, primarily host-managed variants, have gained traction in data centers for capacities exceeding 30 TB, where shingled recording contributes to higher areal densities in enterprise drives from vendors such as Seagate and Western Digital, supporting the surge in cloud and AI-driven storage demands. Host-managed SMR dominates high-capacity deployments, with host-aware drives seeing use in hybrid scenarios.

Protocols and Standards

Zoned Block Commands

Zoned Block Commands (ZBC) and the Zoned Device ATA Command Set (ZAC) provide the standardized protocol interfaces for host systems to interact with zoned devices, including those using shingled magnetic recording (SMR). ZBC (INCITS 536-2016), developed by the T10 technical committee under INCITS, defines commands for SCSI-based zoned devices, while the semantically equivalent ZAC (INCITS 537-2016), from the T13 technical committee, applies to ATA/SATA interfaces. These protocols ensure interoperability by specifying how hosts query zone layouts, manage write pointers, and perform sequential writes, with ZBC and ZAC being mandatory for host-managed SMR drives since their ratification in 2016.

Central to these standards are commands for zone reporting and manipulation. The REPORT ZONES command retrieves detailed information about all zones on the device, including their starting logical block addresses (LBAs), sizes, types (conventional or sequential-write required), and current states, allowing hosts to map the device's zoned geometry. The RESET WRITE POINTER command repositions the write pointer of one or more specified zones back to the zone's starting LBA, logically erasing prior data and enabling fresh sequential writes; this is essential for host-managed garbage collection and zone reuse. For efficient sequential writing, the Zone Append command (introduced in later revisions) allows data to be appended directly at a zone's current write pointer without the host needing to track the exact LBA, reducing overhead in append-only workloads.

Zone management is handled through a set of commands that control zone activation and finalization. The OPEN ZONE command explicitly transitions an empty or closed sequential-write zone to an open state, allocating device resources for writing; CLOSE ZONE deactivates an open zone while preserving the write pointer for later resumption; and FINISH ZONE marks a zone as full, preventing further writes. These operations, along with RESET WRITE POINTER, form the core of zone lifecycle management.
Zones operate in defined states that reflect their condition: empty (write pointer at start, no data written), implicitly open (opened automatically by a write command), explicitly open (opened via host command), closed (inactive but writable later), full (write pointer at end), read-only (writes prohibited but reads allowed), and offline (unavailable due to media errors). State transitions occur in response to writes, resets, or zone management commands, ensuring predictable access patterns.

Interface-specific constraints differ between ZBC and ZAC. Under ZBC for SCSI, the REPORT ZONES command supports partial reporting options to retrieve descriptors for subsets of zones, managing response times for devices with large numbers of zones. ZAC for ATA provides equivalent reporting capabilities without fixed limits on the total number of zones. Command latencies vary by operation and device.

These standards have seen updates to accommodate evolving storage needs, with ZAC-2 (INCITS 549-2022) extending support for larger zone sizes—up to 1 TB or more—facilitating denser SMR and zoned SSD implementations while maintaining backward compatibility. ZBC-2 (INCITS 550-2023) similarly incorporates enhancements for expanded zone capacities.
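The zone states above can be modeled as a small state machine; this is a simplified sketch (read-only and offline states are omitted, and a write that exactly fills a zone is not modeled):

```python
# Simplified model of ZBC zone state transitions for
# sequential-write-required zones.
EMPTY, IMPLICIT_OPEN, EXPLICIT_OPEN, CLOSED, FULL = (
    "empty", "implicitly open", "explicitly open", "closed", "full")

def next_state(state, event):
    """Return the zone state after an event: write, open, close, finish, reset."""
    if event == "reset":
        return EMPTY                   # RESET WRITE POINTER works from any state
    transitions = {
        (EMPTY, "write"): IMPLICIT_OPEN,
        (EMPTY, "open"): EXPLICIT_OPEN,
        (CLOSED, "write"): IMPLICIT_OPEN,
        (CLOSED, "open"): EXPLICIT_OPEN,
        (IMPLICIT_OPEN, "close"): CLOSED,
        (EXPLICIT_OPEN, "close"): CLOSED,
        (IMPLICIT_OPEN, "finish"): FULL,
        (EXPLICIT_OPEN, "finish"): FULL,
    }
    return transitions.get((state, event), state)  # other events: no change

state = EMPTY
for event in ("write", "close", "open", "finish"):
    state = next_state(state, event)
print(state)                           # full
```

Walking the events write, close, open, finish takes a zone from empty through implicitly open, closed, and explicitly open to full, matching the lifecycle the standard describes.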

NVMe Zoned Namespaces

The NVMe Zoned Namespace (ZNS) command set, defined alongside the NVMe 2.0 specification (2021), provides a zoned storage protocol over the NVMe interface, applicable to zoned SSDs and conceptually aligned with SMR HDDs. ZNS follows the ZBC/ZAC zone model, supporting host-managed zoning with commands for zone management (e.g., Zone Open, Close, Reset, Finish) and the Zone Append command for efficient writes. It enables up to 65,536 zones per namespace and is increasingly used in enterprise environments for high-performance zoned storage.
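The distinctive property of Zone Append can be sketched as follows; the class is illustrative and shows only the key idea that the device, not the host, assigns the LBA:

```python
# Sketch of ZNS Zone Append semantics: the host submits data to a zone
# without naming an LBA; the device appends at its own write pointer and
# returns the LBA it assigned, so concurrent appends never collide.
class ZnsZone:
    def __init__(self, start_lba, nblocks):
        self.start = start_lba
        self.limit = start_lba + nblocks
        self.write_pointer = start_lba

    def zone_append(self, nblocks):
        """Append at the current write pointer; return the assigned LBA."""
        if self.write_pointer + nblocks > self.limit:
            raise IOError("zone full")
        assigned = self.write_pointer
        self.write_pointer += nblocks
        return assigned

zone = ZnsZone(start_lba=0x10000, nblocks=4096)
first = zone.zone_append(8)
second = zone.zone_append(8)
print(hex(first), hex(second))        # 0x10000 0x10008
```

Because the device serializes the appends, the host can keep many commands in flight without tracking the write pointer itself, which plain sequential WRITE commands would require.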

Drive Identification and Reporting

Shingled magnetic recording (SMR) drives are identified and their capabilities reported to the host using standard SCSI and ATA protocol commands, enabling the host to detect zoned storage support and configure accordingly for compatibility. In SCSI interfaces, SMR drives supporting Zoned Block Commands (ZBC) are detected via the INQUIRY command, which retrieves vital product data (VPD) pages indicating zoned capabilities; for example, the Block Device Characteristics VPD page indicates whether the device is host-aware or host-managed zoned. The REPORT ZONES command further reports zone statistics, including the total zone count, zone types (e.g., sequential write required or conventional), zone capacity, and the starting logical block address (LBA) of each zone; a representative SMR drive might report over 50,000 sequential zones, each with a 256 MB capacity, allowing the host to map data placement optimally.

In ATA interfaces, SMR drives supporting the Zoned Device ATA Command Set (ZAC) are identified through the IDENTIFY DEVICE command (opcode 0xEC), which returns a 512-byte data structure containing capability flags, including support for ZAC commands and zoned operation; the host can then issue the REPORT ZONES EXT command to retrieve similar zone details as in SCSI, such as zone descriptors listing start LBAs, lengths, and write pointer positions. This reporting ensures the host can distinguish SMR from conventional drives and adjust I/O patterns to avoid performance degradation. Following transparency concerns and post-2021 disclosure practices, SMR drives are explicitly flagged by manufacturers through model listings and indicators; for instance, Seagate maintains an online resource classifying drives as conventional magnetic recording (CMR) or SMR based on model numbers, aiding identification without relying solely on command responses.
Utilities such as hdparm for ATA IDENTIFY DEVICE output and sg_inq/sg_rep_zones from the sg3_utils package for SCSI INQUIRY and REPORT ZONES enable users to verify SMR presence and zone configurations directly on Linux systems. In hybrid SMR drives, which combine conventional and shingled zones, reporting can dynamically reflect the current zone allocation via extended REPORT ZONES responses, supporting adaptive host management.
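A sketch of parsing one zone descriptor, loosely modeled on the 64-byte REPORT ZONES descriptor layout; treat the exact field offsets here as illustrative rather than a substitute for the ZBC specification:

```python
import struct

# Parse a simplified 64-byte zone descriptor: zone type in the low
# nibble of byte 0, zone condition in the high nibble of byte 1, then
# big-endian 64-bit zone length, start LBA, and write pointer at
# offset 8 (offsets are illustrative, not normative).
def parse_zone_descriptor(buf):
    zone_type = buf[0] & 0x0F          # e.g. 1 = conventional, 2 = seq-write-required
    condition = (buf[1] >> 4) & 0x0F   # e.g. 1 = empty, 0xE = full
    length, start, wp = struct.unpack_from(">QQQ", buf, 8)
    return {"type": zone_type, "cond": condition,
            "length": length, "start": start, "write_pointer": wp}

# Build a fake descriptor for an empty 256 MiB sequential zone
# (524288 blocks of 512 bytes), as a tool like sg_rep_zones might see it.
desc = bytearray(64)
desc[0] = 0x02                          # sequential write required
desc[1] = 0x10                          # condition: empty
struct.pack_into(">QQQ", desc, 8, 524288, 0, 0)

print(parse_zone_descriptor(bytes(desc))["length"])   # 524288
```

Iterating such descriptors over the whole REPORT ZONES response is how host software builds the zone map it later uses for sequential placement.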

Applications and Software

Storage Use Cases

Shingled magnetic recording (SMR) technology excels in archival and cold-storage applications, where data is typically written once and accessed infrequently, aligning with the sequential write patterns that minimize shingling overhead. These workloads, such as backups and long-term retention, benefit from SMR's higher areal density, which enables greater capacity per drive without proportional increases in power or footprint requirements. In hyperscale data centers, SMR drives are deployed for petabyte-scale storage tiers, supporting massive datasets in cloud environments by reducing costs through fewer drives and lower infrastructure demands. As of 2025, the SMR market is projected to grow from USD 2.8 billion to USD 18.5 billion by 2035 at a CAGR of 20.7%.

For surveillance and media storage, SMR drives are optimized for continuous, sequential video recording in 24/7 environments, where high-capacity needs outweigh random-access performance demands. Toshiba's S300 series, incorporating SMR, provides dense storage for multi-camera feeds, enabling efficient retention of high-resolution footage in systems like video archives and analytics servers. These drives handle workloads up to 180 TB per year while supporting up to 64 camera streams, with sequential transfer rates reaching approximately 184 MB/s, making them suitable for sequential media ingestion without frequent overwrites.

In big-data analytics, SMR integrates well with distributed file systems like Hadoop's HDFS, where log files and sequential datasets dominate, allowing clusters to leverage higher capacities for cost-effective scaling. Research demonstrates that SMR-adapted Hadoop environments can process large-scale sequential workloads with minimal performance penalties, enabling exabyte deployments to achieve 20-30% lower cost per TB compared to conventional recording drives. Emerging use cases include AI training datasets, which often involve infrequent updates to vast, static corpora stored in data centers. SMR's capacity advantages make it well suited to these write-once scenarios.

Operating System and File System Support

Shingled magnetic recording (SMR) drives require specific adaptations in operating systems and file systems to handle their sequential-write nature and zoned architecture, particularly for host-managed and host-aware modes that expose zones to the host for better control. In Linux, support for zoned block devices, including SMR, was introduced in kernel version 4.10 through the zoned block device (blkzoned) interface, which enables the kernel to recognize and manage zoned storage semantics like sequential writes and zone resets. File systems have built on this foundation: Btrfs added zoned mode support starting with kernel 5.12 in 2021, allowing it to allocate data in append-only zones while maintaining compatibility with conventional modes. Similarly, F2FS (Flash-Friendly File System) includes native zoned support for host-managed SMR, optimizing for flash-like sequential patterns on HDDs. Conventional file systems such as ext4 can be used on zoned block devices via the dm-zoned device-mapper target, which emulates a conventional block device while enforcing sequential writes internally.

Support in other operating systems remains more limited as of 2025. Windows primarily handles SMR drives in drive-managed mode through standard drivers, with no native host-managed zoned support in consumer editions, though Windows Server versions enable Zoned Device ATA Command (ZAC) passthrough for advanced zoning control. macOS similarly relies on drive-managed SMR via generic drivers, without built-in host-aware features for zoned operations.

At the application level, databases and storage software have adapted to zoned namespaces (ZNS), a related NVMe standard that brings SMR-like zoning to SSDs and has influenced SMR optimizations. RocksDB, for instance, incorporates ZNS-aware logging and compaction to reduce write amplification on zoned media. Ceph, the distributed storage system, has optimized its OSD (Object Storage Daemon) for ZNS since 2023, enabling efficient zone management in clusters.
In 2025, NVMe over Fabrics (NVMe-oF) received extensions for zoned command sets, allowing SMR emulation in networked environments to bridge legacy HDDs with modern zoned protocols.
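The sequential-write constraint that these file systems accommodate can be illustrated with a toy in-memory model of zone write-pointer semantics. The class and exception names below are illustrative, not a real kernel or libzbc API; real zoned devices enforce the same rules in firmware and reject out-of-order writes with an I/O error.

```python
# Toy model of a sequential-write-required zone on a zoned block device.
# Writes must land exactly at the zone's write pointer; rewriting a zone
# requires resetting its write pointer first (discarding the zone's data).

class WritePointerError(Exception):
    """Raised when a write does not start at the zone's write pointer."""

class ZoneFullError(Exception):
    """Raised when a write would cross the zone boundary."""

class Zone:
    def __init__(self, start_lba, size_blocks):
        self.start = start_lba
        self.size = size_blocks
        self.write_pointer = start_lba  # next writable LBA

    def write(self, lba, nblocks):
        if lba != self.write_pointer:
            raise WritePointerError(f"write at {lba}, expected {self.write_pointer}")
        if self.write_pointer + nblocks > self.start + self.size:
            raise ZoneFullError("write crosses zone boundary")
        self.write_pointer += nblocks

    def reset(self):
        # Analogous to ZBC RESET WRITE POINTER: the zone becomes empty.
        self.write_pointer = self.start

zone = Zone(start_lba=0, size_blocks=8)
zone.write(0, 4)        # sequential append: accepted
zone.write(4, 2)        # continues at the write pointer: accepted
try:
    zone.write(1, 1)    # random overwrite: rejected
except WritePointerError:
    print("random write rejected")
zone.reset()
zone.write(0, 8)        # the zone is writable again only after a reset
```

Host-managed file systems like zoned-mode Btrfs keep this discipline themselves; drive-managed firmware instead hides it behind caching and background reorganization.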

Advanced Features

Dynamic Hybrid SMR

Dynamic Hybrid SMR refers to a reconfigurable feature in shingled magnetic recording (SMR) drives that enables individual zones to switch between conventional magnetic recording (CMR)-like operation and shingled modes during runtime, providing greater flexibility for workload adaptation. This capability was proposed in academic research in 2019 and by Western Digital around 2017-2018 via proposals to the T10 committee, building on zoned storage standards to allow firmware-level reconfiguration without hardware changes.

The mechanism relies on zoned block commands, specifically the REPORT ZONES command and the zone management commands defined in the Zoned Block Commands (ZBC) standard, to query zone status, extended with proposed conversion operations to reformat selected zones between CMR and SMR formats. For instance, a drive might initially operate all zones in CMR mode for high random I/O performance, then reconfigure approximately 20% of zones to SMR to expand capacity for sequential workloads, with each zone typically sized at 256 MiB for granular control. This dynamic allocation supports mixed access patterns by dedicating CMR zones to bursty random writes while using SMR for archival data.

By enabling runtime adjustments, Dynamic Hybrid SMR adapts to varying workloads, balancing capacity gains from shingling (up to 25% density increase) with CMR's superior random I/O throughput, thus mitigating the random-write penalties inherent in pure SMR configurations. In evaluations, this approach has demonstrated reduced I/O latency and improved overall efficiency for mixed-workload scenarios, such as data centers handling both active and archival data. It remains an optional extension in current specifications, including those evolving through the INCITS T10 and T13 standards as of 2025, with limited commercial adoption. Reconfiguration introduces overhead, with zone conversions taking approximately 16 seconds for a 2 GiB block (equivalent to several zones), during which I/O operations may be suspended or delayed to migrate valid data.
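The capacity trade-off described above is easy to model. The sketch below is a back-of-the-envelope calculation using the figures from the text (256 MiB zones, up to 25% density gain from shingling); the function name and exact inputs are illustrative.

```python
# Capacity model for a hybrid CMR/SMR zone layout.
# Assumes each zone holds ZONE_MIB when formatted as CMR and
# (1 + SMR_DENSITY_GAIN) times that when reformatted as SMR.

ZONE_MIB = 256           # per-zone capacity in CMR format, MiB
SMR_DENSITY_GAIN = 0.25  # extra capacity per zone in SMR format

def hybrid_capacity_mib(total_zones, smr_fraction,
                        zone_mib=ZONE_MIB, gain=SMR_DENSITY_GAIN):
    """Total usable capacity when smr_fraction of zones run shingled."""
    smr_zones = int(total_zones * smr_fraction)
    cmr_zones = total_zones - smr_zones
    return cmr_zones * zone_mib + smr_zones * zone_mib * (1 + gain)

all_cmr = hybrid_capacity_mib(1000, 0.0)  # all-CMR baseline
mixed = hybrid_capacity_mib(1000, 0.2)    # 20% of zones shingled
print(f"capacity uplift: {mixed / all_cmr - 1:.1%}")  # → capacity uplift: 5.0%
```

This shows why the feature is attractive: shingling only the 20% of zones holding cold data recovers a meaningful capacity uplift while leaving the bulk of the drive free of the read-modify-write penalty.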

Integration with Emerging Technologies

Shingled magnetic recording (SMR) synergizes with heat-assisted magnetic recording (HAMR) by leveraging localized heating to reduce track width and enable tighter shingling overlaps, thereby boosting areal density beyond conventional limits. In HAMR, the thermal assist allows media grains to be written more precisely, facilitating greater track overlap in SMR configurations without excessive inter-track interference. Seagate's Mozaic 3+ platform integrates this approach, with 40TB HAMR-SMR drives sampled to data center customers in 2025 and volume production slated for 2026, achieving areal densities of approximately 2.6 Tb/in² through 10-platter helium-sealed designs; as of November 2025, these drives remain in the sampling phase.

Microwave-assisted magnetic recording (MAMR) similarly enhances SMR by generating oscillating magnetic fields that strengthen write signals for overlapping tracks, improving recording efficiency in shingled bands. Western Digital employs energy-assisted perpendicular magnetic recording (ePMR) alongside SMR in its Ultrastar series, as seen in the 28TB DC HC680 drive ramped up in late 2023, which uses shingled ePMR with OptiNAND and UltraSMR elements for up to a 20% capacity uplift over non-shingled counterparts. This integration supports sequential workloads in data centers, with overlap efficiencies contributing to higher effective densities in nearline storage.

Projections position SMR as essential for scaling to 50TB+ drives, combining with helium-filled enclosures to enable more platters and lower power draw for energy-efficient exascale storage in hyperscale environments. Both Seagate and Western Digital roadmaps emphasize SMR's role in HAMR and ePMR hybrids to reach 44TB in 2027 and 50TB by 2028, addressing exponential data growth in AI and cloud infrastructures.

Challenges in these integrations include thermal management in shingled HAMR zones, where uneven heat application can induce track curvature and degrade write quality, leading to higher bit error rates. Advanced error correction codes, such as low-density parity-check (LDPC) algorithms tailored for inter-track interference, are thus critical to maintain reliability and read performance in overlapping regions.

Criticisms and Limitations

Performance and Reliability Issues

One of the primary performance challenges in shingled magnetic recording (SMR) drives arises from the constraints of their zoned architecture, which favors sequential over random writes. Sequential write speeds can reach up to 270 MB/s under optimal conditions, comparable to conventional perpendicular magnetic recording (PMR) drives. However, random write operations degrade markedly to 10-50 MB/s or lower due to the necessity of read-modify-write cycles across entire zones, leading to substantial write amplification. In benchmarks, random 4K writes on a 25TB host-managed SMR drive can fail outright under standard test conditions. This inefficiency manifests as elevated latency, with benchmarks showing SMR random write latencies significantly higher than PMR equivalents, often 2 to 10 times greater (PMR around 10-12 ms, with SMR exceeding this under mixed workloads), primarily from zone cleaning and garbage collection when open zones accumulate.

Reliability concerns in SMR stem largely from adjacent track erasure (ATE), where writes to overlapping tracks can degrade data on neighboring tracks, resulting in higher bit error rates (BER) than in non-shingled systems. While the industry-standard BER target remains 10^-15 for error correction viability, ATE introduces elevated risks that demand robust error-correction coding to maintain data fidelity. Enterprise SMR drives typically carry MTBF ratings of 2-2.5 million hours, aligning with PMR standards, but zone-full conditions exacerbate wear through repeated read-modify-write operations, potentially shortening effective lifespan under heavy update workloads.

Empirical testing underscores these issues; for instance, 2023 benchmarks by StorageReview on a 25TB host-managed SMR drive showed random write performance failing under mixed loads. Drive firmware plays a critical role in mitigation, with optimizations like background zone reorganization and media caching reducing write amplification by buffering non-sequential updates and scheduling cleans during idle periods.
Nonetheless, unoptimized deployments—such as those involving frequent random overwrites without adequate idle time—can double power consumption during intensive rewrite phases relative to baseline operations, due to heightened head activity and platter seeks.
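The write amplification behind these latency figures follows directly from the read-modify-write mechanics. The sketch below models the worst case, where a small random update forces an entire shingled band to be read and rewritten; the band size is an illustrative assumption, not a figure from any datasheet.

```python
# Worst-case read-modify-write amplification for an in-place update
# inside a shingled band. Band size is an illustrative assumption.

def rmw_amplification(update_kib, band_mib=256):
    """Bytes physically rewritten per byte logically updated when a
    random write forces the whole band to be read, modified, rewritten."""
    band_kib = band_mib * 1024
    return band_kib / update_kib

# A 4 KiB random write into a 256 MiB band rewrites the entire band:
print(rmw_amplification(4))     # → 65536.0
# Larger sequential updates amortize the cost:
print(rmw_amplification(1024))  # → 256.0
```

This is why drive-managed firmware buffers small writes in a media cache and flushes them sequentially during idle periods: it trades a bounded amount of staging capacity for orders of magnitude less rewriting.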

Industry Transparency and Adoption Challenges

The lack of transparency regarding the use of shingled magnetic recording (SMR) technology in hard disk drives (HDDs) came to a head between 2019 and 2021, when Western Digital faced significant backlash for incorporating undisclosed SMR drives into its WD Red product line. These drives, marketed for NAS applications, led to performance degradation and failures in RAID configurations due to SMR's sequential write limitations, which conflicted with the random write patterns typical in such setups. The controversy culminated in a $2.7 million class action settlement in 2021, requiring Western Digital to disclose SMR usage on affected products for at least four years and provide refunds to eligible purchasers. In October 2025, Western Digital launched an investigation into elevated failure rates in its 2020-era 2-6 TB SMR-based WD Blue and Red drives, attributed to fundamental flaws in the technology, reigniting concerns from the 2021 lawsuit.

Adoption of SMR technology has been hindered by incomplete support across operating systems and file systems, particularly in enterprise environments where random write-heavy workloads predominate. For instance, systems like ZFS in TrueNAS have shown limited compatibility, prompting warnings against SMR use in high-availability setups and slowing the shift from conventional magnetic recording (CMR) drives. As of 2023, SMR HDDs represented a modest portion of the overall market, with shipments reflecting growing but not dominant acceptance amid ongoing concerns over performance. The SMR market is projected to grow from USD 2.8 billion in 2025 to USD 18.5 billion by 2035 at a 20.7% CAGR, driven by capacity demands, though enterprise hesitation persists due to integration barriers.

Critics have highlighted misleading marketing practices, such as claims of full compatibility with CMR-based systems without clear SMR disclosure, which eroded consumer trust during the WD Red incident.
Industry observers have called for mandatory disclosure of recording technology and Zoned Block Command (ZBC/ZAC) support in drive specifications to enable better informed purchasing decisions, especially as SMR proliferates in mixed-use scenarios. Recent developments show incremental progress in disclosure and targeted adoption. Toshiba released a 24 TB SMR model in its S300 surveillance HDD series in 2025, suited for sequential video workloads in AI-driven systems, demonstrating viability in niche applications. However, resistance remains strong in database environments, where SMR's random-write penalties can disrupt transactional integrity and query performance, limiting broader enterprise uptake.
