
Bad sector

A bad sector is a defective area on a storage device, such as a hard disk drive (HDD) or solid-state drive (SSD), that cannot reliably store, read, or write data due to physical damage, wear, or errors, potentially leading to data loss or corruption. In HDDs, these defects occur on magnetic platters, where sectors are the smallest addressable units of data, typically 512 bytes or 4 KiB in modern drives; in SSDs, the equivalent is bad blocks in NAND flash memory cells, managed through wear-leveling and over-provisioning. Bad sectors are a common issue in storage media and signal potential drive degradation, though modern firmware often mitigates their impact through automatic remapping or retirement. Bad sectors are broadly categorized into two types: physical (or hard) bad sectors and logical (or soft) bad sectors. Physical bad sectors result from irreversible damage to the disk surface or cells, such as scratches, manufacturing flaws, or wear from prolonged use, rendering the sector permanently unusable. In contrast, logical bad sectors stem from temporary software or data errors, such as improper data writing during power interruptions or checksum mismatches, and can often be repaired by rewriting the data. While physical defects are inherent to the medium and increase over time—known as "grown defects" in HDDs or NAND degradation in SSDs—logical issues are more recoverable but may indicate broader system problems if recurrent. Regular backups and monitoring via S.M.A.R.T. attributes are essential, as accumulating bad sectors often foreshadow complete drive failure.

Fundamentals

Definition

A bad sector is a portion of a storage medium, such as the magnetic platters in hard disk drives (HDDs), that cannot reliably store or retrieve data due to physical damage or errors. This defect renders the affected area unusable for normal operations, leading to potential data loss in that specific location. In contrast to good sectors, which permit read and write operations within the tolerances of built-in error correction mechanisms, bad sectors consistently exceed these limits, making data access unreliable or impossible. Good sectors maintain data integrity through standard error detection and correction processes, whereas bad sectors fail these checks repeatedly, prompting the storage system to isolate them. The concept of bad sectors originated with the development of magnetic hard drives in the mid-20th century and has evolved alongside advancements in storage technology. Sectors represent the smallest addressable units of storage, traditionally 512 bytes in size on older drives, though modern Advanced Format drives use 4 KiB sectors to improve efficiency and density. A sector is marked as bad when its error-correcting code (ECC) fails to compensate for errors during multiple read or write attempts. In solid-state drives (SSDs), the analogous concept involves bad blocks in flash memory, which are larger units that can become unreliable due to wear or defects.
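Because sectors are the unit of addressing, a logical block address (LBA) maps to a byte offset by simple multiplication, and sector size determines how many sectors a given amount of data occupies. The sketch below uses illustrative function names to show both calculations for legacy 512-byte and Advanced Format 4 KiB sectors:

```python
def sector_offset(lba: int, sector_size: int = 512) -> int:
    """Byte offset on the medium where a logical sector begins."""
    if lba < 0 or sector_size not in (512, 4096):
        raise ValueError("unsupported LBA or sector size")
    return lba * sector_size

def sectors_for(length_bytes: int, sector_size: int = 512) -> int:
    """Number of whole sectors needed to hold length_bytes (rounded up)."""
    return -(-length_bytes // sector_size)

# A 1 MiB file spans 2048 legacy sectors but only 256 Advanced Format sectors,
# so a single bad 4 KiB sector puts eight times as much data at risk.
print(sectors_for(1 << 20, 512))   # 2048
print(sectors_for(1 << 20, 4096))  # 256
```

This is also why a single defect on a 4 KiB-sector drive affects more data than one on a 512-byte-sector drive: the addressable unit, and hence the unit lost, is eight times larger.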

Types

Bad sectors in storage devices are broadly classified into two categories: physical and logical, based on whether the issue stems from hardware damage or software-related errors. Physical bad sectors result from permanent damage to the storage medium, making the affected area permanently unusable for data storage and retrieval. In hard disk drives (HDDs), this can occur due to scratches on the platter surface or head crashes, where the read/write head physically contacts the spinning disk, gouging the magnetic coating. In solid-state drives (SSDs), physical bad blocks arise from blocks worn out by repeated program/erase cycles or manufacturing defects, leading to unreliable data retention. These sectors or blocks cannot be repaired and are typically remapped by the drive's firmware to spare areas. Logical bad sectors, in contrast, arise from remediable errors not involving physical damage, such as file system corruption, software bugs, or transient events like sudden power loss during writes. For example, a corrupted entry in the file allocation table (FAT) of a volume might mark a healthy sector as unusable, or interrupted write operations could leave inconsistent metadata. Unlike physical bad sectors, logical ones do not indicate hardware failure and can often be resolved through file system checks and repairs. The key behavioral difference lies in persistence: physical bad sectors remain problematic across system reboots, operating system reinstallations, or even drive reformatting, as the underlying hardware defect endures. Logical bad sectors, however, typically disappear after corrective software actions, such as running diagnostic utilities that rewrite erroneous metadata. This distinction is crucial for diagnosing storage issues, as it determines whether hardware replacement or software intervention is required.
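The persistence difference suggests a simple triage rule: if a sector that failed to read becomes readable again after being rewritten, the fault was logical; if errors persist across the rewrite, the damage is likely physical. A hedged sketch of that rule (the function name and labels are illustrative, not part of any real diagnostic tool):

```python
def classify_bad_sector(read_ok: bool, rewrite_ok: bool, reread_ok: bool) -> str:
    """Triage a sector using the rewrite test described in the text.

    read_ok:    did the initial read succeed?
    rewrite_ok: did rewriting the sector succeed?
    reread_ok:  did a read after the rewrite succeed?
    """
    if read_ok:
        return "healthy"
    if rewrite_ok and reread_ok:
        return "logical"    # rewriting cleared the error: a soft bad sector
    return "physical"       # errors persist across rewrites: likely media damage

print(classify_bad_sector(False, True, True))    # logical
print(classify_bad_sector(False, False, False))  # physical
```

Real utilities apply this logic per sector during a scan, retrying several times before committing to a verdict.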

Causes

Physical Factors

Physical bad sectors arise primarily from hardware imperfections and degradation in storage media, such as hard disk drives (HDDs). These are classified as primary defects, present from manufacture, or grown defects, developing over time from use. Manufacturing defects introduce bad sectors during production, including microscopic impurities or flaws in HDD platters or read/write heads that compromise reliability from the outset. Irregularities in the magnetic coating of platters can prevent reliable data recording. Wear and tear contributes to grown bad sectors through mechanical stress over time, particularly from contact between the read/write heads and platters. The heads maintain a flying height of approximately 1-3 nanometers above the platter surface; any variation, such as due to debris buildup or surface wear, can cause head-disk contact, damaging the magnetic layer and creating unreadable sectors. This degradation accelerates in the wear-out phase of the drive's lifecycle, where failure rates rise exponentially. Environmental influences exacerbate physical degradation in HDDs. Overheating warps platters or alters their magnetic properties, creating bad sectors, with failure rates doubling approximately every 15°C increase in operating temperature. Physical shocks, such as drops, can induce head crashes by slamming heads into the platters, scratching the surface and generating debris that damages additional areas. Strong magnetic interference disrupts the alignment of magnetic domains on the platters, corrupting data and forming bad sectors. Exposure to humidity promotes corrosion of internal components, while dust accumulation causes abrasive wear or head contamination.
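The heat figure quoted above implies a simple exponential model: if the failure rate doubles for every additional 15°C of operating temperature, the relative risk at any temperature follows directly. A rough illustration only; the baseline temperature and the formula itself are simplifying assumptions, not a vendor reliability model:

```python
def relative_failure_rate(temp_c: float, baseline_c: float = 40.0) -> float:
    """Failure-rate multiplier relative to a baseline temperature, assuming
    the rule of thumb from the text: the rate doubles every 15 degrees C."""
    return 2 ** ((temp_c - baseline_c) / 15.0)

print(relative_failure_rate(55))  # 2.0 -> 15 C hotter, twice the failure rate
print(relative_failure_rate(70))  # 4.0 -> 30 C hotter, four times the rate
```

Real drive reliability curves are not this clean, but the model conveys why even modest cooling improvements matter for grown-defect accumulation.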

Logical Factors

Logical bad sectors, which are not due to physical damage but rather software or operational issues, can manifest when the storage system's software incorrectly identifies usable sectors as faulty. File system corruption represents a primary logical cause, where inconsistencies in allocation tables—such as those in FAT or NTFS—erroneously mark sectors as bad. These errors frequently stem from improper shutdowns that interrupt ongoing write operations, leaving metadata in an inconsistent state. Transient errors contribute to logical bad sectors through temporary read or write failures induced by external operational conditions. Power fluctuations can disrupt data transfer mid-process, electromagnetic interference may alter signals during operations, and overheating can temporarily impair controller performance, causing sectors to be flagged as unreadable until conditions normalize. Firmware bugs in hard drives can lead to misreporting of sectors as bad due to outdated or defective code in the drive's controller. Such issues have been documented in certain older drive models, where firmware flaws trigger false positive detections of sector faults without underlying physical problems. Viruses and other malicious software pose another logical threat by deliberately overwriting or corrupting sector data structures, such as boot records or file allocation tables, which can result in sectors being deemed bad by the operating system. Destructive malware, including ransomware variants, exacerbates this by targeting storage integrity to encrypt or erase critical data.

Detection

Software Approaches

Software approaches to detecting bad sectors rely on user-accessible utilities and applications that perform disk surface scans and health monitoring at the operating system or application level. These methods focus on identifying sectors that fail read or write operations, often targeting both physical and logical issues by verifying data integrity across the drive. Built-in operating system utilities provide foundational tools for bad sector detection. In Windows, CHKDSK scans the file system and, when run with the /r parameter, performs a thorough read of every sector to locate bad sectors, recovers any readable data from them, and marks the faulty sectors in the file system's bad cluster list to prevent future allocation. In Linux environments, fsck (file system check) integrates with the badblocks utility to detect bad sectors; for ext2/ext3/ext4 file systems, the -c option in e2fsck invokes badblocks for a read-only scan of the device, identifying unreadable blocks and adding them to the file system's bad block inode for exclusion from use. In macOS, for legacy HFS+ volumes, fsck_hfs can scan for I/O errors indicative of bad sectors using the -S option. For APFS volumes, the default file system since macOS High Sierra (2017), Disk Utility's First Aid tool verifies and repairs file system integrity but does not conduct full surface scans for bad sectors; hardware-level detection is handled via S.M.A.R.T. or third-party tools. Third-party tools offer advanced, user-friendly interfaces for more comprehensive detection. HDDScan conducts surface tests to identify bad blocks and sectors by attempting reads and writes across the disk, while also displaying S.M.A.R.T. attributes that signal potential sector failures, such as reallocated sector counts. Victoria HDD/SSD performs diagnostic scans to detect errors and bad sectors through surface read tests, providing visual maps of problematic areas on the drive.
CrystalDiskInfo specializes in real-time S.M.A.R.T. monitoring, alerting users to thresholds like pending or reallocated sectors that indicate bad sector presence without full surface scans. Detection processes in these tools generally employ read/write verification tests, systematically accessing each sector to confirm data readability and writability, then logging failures as bad sectors for further analysis. Scans differ in approach based on risk to data: non-destructive scans only read sectors, verifying integrity without modification and making them suitable for live systems, whereas destructive scans write test patterns (e.g., fixed bit patterns or zeros) to sectors and read them back for verification, offering higher accuracy but requiring full backups because existing data is overwritten; some tools offer a middle ground that backs up each block's contents before the write test and restores them afterward.
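At its core, the non-destructive scan these utilities perform amounts to reading the medium chunk by chunk and recording where the kernel reports an I/O error. A minimal sketch in Python; it scans a path with plain reads rather than low-level ATA commands, and on a real device it would be pointed at something like /dev/sdb with appropriate privileges:

```python
import os

SECTOR = 4096  # scan granularity; real tools use the device's logical sector size

def read_scan(path: str) -> list[int]:
    """Non-destructive surface scan: read every sector-sized chunk and record
    the byte offsets where the kernel raises an I/O error. Works identically
    on a regular file and (with privileges) on a raw block device."""
    bad = []
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        for off in range(0, size, SECTOR):
            try:
                os.pread(fd, SECTOR, off)
            except OSError:      # EIO from the kernel marks an unreadable area
                bad.append(off)
    finally:
        os.close(fd)
    return bad
```

A destructive scan follows the same loop but writes a pattern first and compares it on read-back, which is why it demands a backup.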

Hardware Methods

Hardware methods for detecting bad sectors primarily involve firmware-embedded monitoring and specialized diagnostic tools that operate at the device level, focusing on physical defects such as surface imperfections on hard disk drives (HDDs) or NAND flash cell failures in solid-state drives (SSDs). Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) is a built-in feature in most modern storage devices that proactively monitors health indicators to flag potential bad sectors before they cause data loss. Key attributes include the Reallocated Sector Count (S.M.A.R.T. ID 0x05), which tracks the number of bad sectors detected and remapped to spare areas during operation, and the Current Pending Sector Count (S.M.A.R.T. ID 0xC5), which counts unstable sectors pending reallocation due to unrecoverable read errors. These counters provide early warnings of drive degradation, with rising values indicating increasing physical issues like media defects. The drive maintains this data internally without host intervention, and host software can query it via ATA commands to assess sector reliability. Manufacturer-provided diagnostic tools enable low-level scans on HDDs to identify bad sectors through direct access to the drive's raw sectors. Seagate's SeaTools performs deep diagnostic checks, including bootable low-level sector scans that read every track to detect read errors indicative of physical bad sectors. Similarly, Western Digital's Data Lifeguard Diagnostics uses an extended test to conduct a full media scan, identifying and logging bad sectors by verifying readability across the entire drive surface. These tools operate independently of the operating system, providing granular reports on sector health for professional diagnostics. For SSDs, detection relies on controller-integrated mechanisms like wear-leveling algorithms and bad block management, which identify failing blocks during read, program, or erase operations using error-correcting code (ECC) verification.
Wear-leveling distributes write cycles evenly across cells to prevent localized wear that could lead to bad blocks, while bad block management retires detected faulty blocks by remapping them to over-provisioned spares transparently to the user. The TRIM command integrates with these processes by notifying the controller of unused blocks, facilitating efficient garbage collection and indirectly helping to identify underperforming regions, though primary detection remains firmware-driven. Advanced hardware testing employs specialized equipment such as oscilloscopes to evaluate the analog signals from HDD read heads, revealing anomalies such as weak or noisy signals that correlate with emerging bad sectors. By probing the read channel output, technicians can measure read performance and identify head misalignment or media defects that manifest as bit errors during sector reads. This method provides precise diagnostics beyond standard tools and is typically used in labs to pinpoint physical issues.
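Monitoring software typically reads these counters from the drive and compares them against thresholds. The sketch below parses a hypothetical attribute table in the style printed by smartmontools' `smartctl -A`; the sample text and raw values are invented for illustration, and real output has more columns and attributes:

```python
# Hypothetical excerpt in the style of `smartctl -A` output (smartmontools).
SAMPLE = """\
ID# ATTRIBUTE_NAME          VALUE WORST THRESH RAW_VALUE
  5 Reallocated_Sector_Ct   100   100   036    8
197 Current_Pending_Sector  100   100   000    3
"""

def sector_health(report: str) -> dict[str, int]:
    """Pull the two bad-sector counters (IDs 5 and 197, i.e. 0x05 and 0xC5)
    out of an attribute table, keyed by attribute name."""
    counters = {}
    for line in report.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if fields and fields[0] in ("5", "197"):
            counters[fields[1]] = int(fields[-1])
    return counters

print(sector_health(SAMPLE))
# {'Reallocated_Sector_Ct': 8, 'Current_Pending_Sector': 3}
```

A nonzero and growing raw value in either counter is the usual early-warning signal the text describes.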

Handling

Operating System Responses

Operating systems detect and manage bad sectors primarily through their file system and storage driver layers during runtime I/O operations, aiming to maintain data integrity and system stability without immediate user intervention. When an I/O error occurs due to a bad sector, the kernel's I/O subsystem typically retries the operation a limited number of times before propagating the error up to the file system driver. This response allows the OS to isolate the affected area and initiate recovery measures, such as marking the sector unusable, while logging the incident for diagnostics. In Windows, the NTFS file system employs automatic cluster remapping to handle bad sectors transparently. Upon detecting a read or write error from the disk driver, NTFS dynamically reallocates the affected cluster to a spare area on the volume, updates its metadata to redirect future accesses, and records the bad cluster in the $BadClus metadata file to prevent reuse. This self-healing mechanism operates at the file system level, ensuring that applications experience minimal disruption as long as spare clusters are available. For instance, if a write operation encounters a bad sector, NTFS marks the cluster as allocated but unusable in the $Bitmap file and substitutes a new one, preserving file consistency. Windows further manages these errors through structured logging in the Event Viewer, where the System log captures details like Event ID 55 for file system corruption linked to bad sectors or failed I/O requests, and Event ID 7 for specific bad block detections on a device. The driver, upon receiving an unrecoverable error from the storage stack (e.g., via Storport or class drivers), marks the cluster unavailable and may trigger automatic repairs during subsequent volume checks. In Linux, the kernel's block layer handles I/O errors by retrying requests (up to a configurable limit, typically 3-5 times) and propagating failures via error codes like -EIO to the upper layers, where file systems like ext4 respond by returning errors to applications.
Unlike NTFS, Linux file systems do not perform fully automatic runtime remapping; instead, they rely on integration with the badblocks utility during file system creation, or on checks via e2fsck, to identify and mark bad blocks in the file system's bad block inode list, preventing their allocation to files. This process can be triggered automatically during boot if the root file system is remounted read-only due to errors, prompting fsck to run and incorporate bad block lists for isolation. The kernel logs these events in dmesg or /var/log/messages, detailing the device and error type for troubleshooting. In macOS, the Apple File System (APFS) and legacy HFS+ handle bad sectors through disk utility tools and kernel-level error detection. APFS uses container-based management with copy-on-write and metadata checksumming to limit the impact of errors, while tools like Disk Utility's First Aid or fsck_apfs can scan and repair file system issues, marking bad blocks similarly to Linux. Persistent errors may cause Time Machine backups to fail, with logs in Console.app detailing I/O failures. Across platforms, operating systems leverage caching mechanisms to mitigate temporary read errors from bad sectors. Page caches store recently accessed data in RAM, allowing subsequent reads to be served from memory if the initial disk access succeeded before degradation, effectively masking intermittent failures without hitting the faulty sector again. However, for persistent bad sectors, cache misses will still trigger I/O errors, invoking the file system's error-handling protocols. This buffering layer improves resilience for transient issues but does not resolve underlying physical defects.
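NTFS-style self-healing can be pictured with a toy model: on a failed write, the bad cluster is recorded (much as the $BadClus file does) and a spare cluster is substituted, with a mapping redirecting future accesses. Everything below is a simplified simulation with invented names, not the actual NTFS on-disk structures:

```python
class Volume:
    """Toy model of file-system-level bad cluster remapping."""

    def __init__(self, clusters: int, bad_media: set[int]):
        self.bad_media = bad_media          # clusters that fail physically
        self.bad_clus = set()               # clusters retired so far
        self.spares = list(range(clusters - 4, clusters))  # reserve last 4
        self.mapping = {}                   # logical cluster -> actual cluster

    def write(self, cluster: int) -> int:
        """Return the cluster actually used for a logical write."""
        target = self.mapping.get(cluster, cluster)
        if target in self.bad_media:        # simulated I/O error from the driver
            self.bad_clus.add(target)       # record it so it is never reused
            target = self.spares.pop(0)     # substitute a spare cluster
            self.mapping[cluster] = target  # redirect all future accesses
        return target

vol = Volume(clusters=100, bad_media={7})
print(vol.write(7))          # 96: remapped to the first spare
print(vol.write(7))          # 96: later writes follow the mapping
print(sorted(vol.bad_clus))  # [7]
```

The key property, as in NTFS, is that the application-visible cluster number never changes; only the mapping underneath it does, and only while spares remain.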

Controller-Level Interventions

Disk controllers manage bad sectors through firmware-level mechanisms that operate autonomously from the operating system, primarily addressing physical defects on storage media. In hard disk drives (HDDs), firmware employs sector slipping, where defective sectors identified during manufacturing or operation are skipped by shifting subsequent data to spare areas on the same track, effectively remapping without altering the logical block addressing visible to the host. This technique minimizes performance impacts by avoiding long seeks to distant spare sectors. For solid-state drives (SSDs), over-provisioning reserves a portion of the flash capacity—typically 7-28% depending on the drive model—unavailable to the user, which the controller uses to replace bad blocks dynamically through wear-leveling and garbage collection processes. Error correction and retry protocols further enable controllers to handle marginal sectors before declaring them bad. Controllers apply error-correcting codes (ECC), such as Reed-Solomon algorithms, to detect and correct bit errors in read data, often capable of fixing up to dozens of bits per sector. If initial reads fail, firmware initiates multi-level retry sequences, including signal processing adjustments like gain control or timing recovery, before escalating to remapping or reporting an uncorrectable error. These retry sequences are proprietary but share the intent of maximizing data recovery without host intervention. Controllers maintain internal defect lists to track and manage bad sectors throughout the drive's life. The primary defect list (P-list) records sectors deemed unreliable during factory testing, which are excluded from user-accessible space via initial formatting. Grown defect lists (G-lists) dynamically log sectors that degrade in use, prompting automatic remapping to spares, while pending defect lists monitor potentially unstable areas for confirmation.
In SSDs, these lists integrate with the flash translation layer (FTL) to handle NAND-specific failures, ensuring transparent substitution from over-provisioned reserves. Compliance with interface standards ensures consistent defect management across devices. SCSI interfaces define READ DEFECT DATA commands (opcode 0x37 for the 10-byte form, 0xB7 for the 12-byte form) to query the P-list and G-list, and REASSIGN BLOCKS (opcode 0x07) for explicit remapping; ATA/SATA drives instead handle reassignment entirely in firmware, exposing the results through S.M.A.R.T. counters. For NVMe SSDs, the SMART / Health Information log page (Log Identifier 02h) includes a Media and Data Integrity Errors field that tracks unrecovered data integrity events, and the NVMe specification mandates controller support for this log. These standards enable controllers to perform defect management without relying on host commands, preserving data integrity and performance.
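Sector slipping can be pictured as an address translation: every factory-recorded defect at or below the target position pushes the data one physical sector further along. A schematic sketch only; real firmware slips per track and per zone, while this flat model is purely illustrative:

```python
def slipped_physical(lba: int, p_list: list[int]) -> int:
    """Translate a logical sector number to its physical position when the
    sorted P-list entries at or below it have been slipped past."""
    phys = lba
    for defect in p_list:
        if defect <= phys:
            phys += 1       # slip past the defective physical sector
        else:
            break           # remaining defects lie beyond our position
    return phys

# Defects at physical sectors 3 and 10: logical 3 lands at 4, logical 9 at 11.
print(slipped_physical(2, [3, 10]))  # 2
print(slipped_physical(3, [3, 10]))  # 4
print(slipped_physical(9, [3, 10]))  # 11
```

Because the shift is applied at format time, sequential logical reads stay sequential on the platter, which is exactly the performance advantage over jumping to a distant spare.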

Recovery Techniques

Non-destructive recovery methods prioritize salvaging accessible data without altering the affected drive, often serving as the first line of intervention for users encountering bad sectors. Tools like GNU ddrescue, a Linux-based utility, facilitate this by copying data from a failing block device to a healthy one, systematically skipping problematic areas during initial passes and retrying them in subsequent phases to maximize readable content retrieval. This approach minimizes further stress on the drive by avoiding writes to bad sectors and using a progress mapfile to track and avoid redundant operations, making it suitable for both hard disk drives (HDDs) and solid-state drives (SSDs). Sector editing involves low-level manipulation using hex editors to access and potentially reconstruct data in damaged sectors, though it carries significant risks. Software such as UltraEdit enables scanning of raw sectors to extract partial data or identify file signatures in corrupted areas, allowing users to force reads or writes at the byte level. However, such interventions can exacerbate physical damage by repeatedly stressing faulty components, potentially leading to complete inaccessibility or drive failure if not performed with precise knowledge of storage structures. Logical bad sectors, which stem from software errors rather than physical degradation, generally offer higher recoverability through these methods compared to physical ones. For severe cases, professional services employ advanced techniques in controlled environments to bypass or repair bad sectors. For HDDs, technicians in cleanrooms may perform platter swaps, transplanting the undamaged platters from the affected unit into a donor drive to access data stored on healthy surfaces while avoiding the original drive's contaminated heads or failed motor. For SSDs, chip-off recovery involves desoldering the NAND flash chips, reading them directly with specialized programmers, and reconstructing the file system, which is effective when controller failures or wear-leveling obscure bad blocks.
Despite these techniques, recovery success varies: logical bad sectors generally achieve higher rates through software means than physical ones, which yield lower outcomes depending on severity, underscoring the importance of regular backups to prevent reliance on such efforts. Professional interventions, though capable of higher yields in complex scenarios, are not guaranteed and can be costly, emphasizing prevention over post-failure salvage.
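The multi-pass strategy described for GNU ddrescue (grab everything readable first, then circle back to the failures) can be sketched in a few lines. The flaky-drive function below simulates a device where one block succeeds only on a retry; it stands in for real block-device reads, and ddrescue itself additionally persists its map to a file:

```python
def rescue(read_block, n_blocks: int, max_retries: int = 2):
    """Two-phase copy in the spirit of GNU ddrescue: pass 1 takes everything
    readable and maps the failures; later passes retry only the mapped bad
    blocks. read_block(i) returns bytes or raises IOError."""
    image, bad = {}, []
    for i in range(n_blocks):           # pass 1: copy fast, skip failures
        try:
            image[i] = read_block(i)
        except IOError:
            bad.append(i)
    for _ in range(max_retries):        # later passes: retry only bad areas
        still_bad = []
        for i in bad:
            try:
                image[i] = read_block(i)
            except IOError:
                still_bad.append(i)
        bad = still_bad
        if not bad:
            break
    return image, bad

# Simulated flaky drive: block 2 succeeds only on the second attempt.
attempts = {}
def flaky(i):
    attempts[i] = attempts.get(i, 0) + 1
    if i == 2 and attempts[i] < 2:
        raise IOError("unreadable")
    return b"data%d" % i

image, bad = rescue(flaky, 4)
print(sorted(image))  # [0, 1, 2, 3]
print(bad)            # []
```

Deferring retries this way minimizes the time spent hammering damaged regions while the rest of the drive is still readable, which is the core of ddrescue's design.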

Impact and Prevention

Frequency of Occurrence

Bad sectors in hard disk drives (HDDs) were more prevalent in early drive generations, with user studies reporting annualized failure rates (AFR) as high as 6% due to higher rates of manufacturing and early-life defects. Modern HDDs benefit from advanced manufacturing processes, with overall AFRs typically ranging from 1% to 2% across large-scale studies as of 2024, and Q1 2025 data showing 1.42%. In solid-state drives (SSDs), bad blocks—analogous to bad sectors—emerge primarily during operation rather than at manufacture. Field studies indicate that 30-80% of SSDs develop at least one bad block within the first four years of deployment, with the median affected drive showing 2-4 bad blocks and means of up to 1,960 in severe cases. The frequency of bad sectors is influenced by drive age, with failure risks rising after three years for HDDs and age showing moderate correlation (0.2-0.4) with SSD bad block counts. Usage intensity plays a key role, as drives under constant high workloads exhibit higher rates than models with intermittent use. Technology differences also contribute, with HDDs more susceptible to physical bad sectors from mechanical wear, while SSDs experience bad blocks from flash cell degradation. Bad sectors account for a notable portion of HDD failures, affecting approximately 9% of drives through reallocation events that elevate AFR by 3-6 times.
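Annualized failure rates compound over a drive's service life. Under the simplifying assumption of a constant AFR (real drives follow a bathtub curve, so this understates both early-life and wear-out risk), the cumulative probability of failure is easy to compute:

```python
def cumulative_failure_prob(afr: float, years: float) -> float:
    """Probability a drive fails within `years`, assuming a constant
    annualized failure rate (a deliberate simplification)."""
    return 1 - (1 - afr) ** years

# At the 1.42% AFR cited above, roughly 7% of drives fail within five years.
print(round(cumulative_failure_prob(0.0142, 5), 3))  # 0.069
```

The same arithmetic explains why fleets with thousands of drives see failures weekly even at low per-drive AFRs.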

Mitigation Strategies

Mitigation strategies for bad sectors emphasize proactive measures to minimize their formation and mitigate their impact on data integrity and storage reliability. Central to these efforts is the implementation of robust backup protocols, which ensure data availability and reduce the risk of permanent loss even if sectors degrade. The widely adopted 3-2-1 backup rule recommends maintaining three copies of data across two different types of storage media, with at least one copy stored offsite, providing a layered defense against localized failures like bad sectors. This approach has been endorsed by storage experts for its simplicity and effectiveness in safeguarding against hardware degradation. Regular monitoring of drive health through Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) attributes allows users to detect early signs of potential bad sector development, such as increasing reallocated sector counts or error rates. Routine checks, performed monthly via tools integrated into operating systems or third-party software, enable timely intervention before widespread failure. Complementing this, maintaining optimal operating temperatures is crucial, particularly for hard disk drives (HDDs), where temperatures exceeding 40°C accelerate wear and increase the likelihood of physical sector damage; studies indicate that each 1°C reduction in average temperature can extend HDD lifespan by approximately 10%. Adhering to usage best practices further reduces the incidence of bad sectors. For HDDs, avoiding physical shocks—such as drops or vibrations during operation—prevents mechanical misalignment of read/write heads, which can scratch platter surfaces and create defective sectors. Employing uninterruptible power supplies (UPS) ensures stable voltage delivery, mitigating risks from sudden power fluctuations that could interrupt write operations and induce sector corruption.
For solid-state drives (SSDs), enabling the TRIM command optimizes garbage collection by informing the controller of unused blocks, thereby distributing write wear evenly and preventing the accumulation of unreliable NAND cells that manifest as bad blocks. Technological advancements enhance mitigation through built-in redundancy and advanced error handling. Redundant Array of Independent Disks (RAID) configurations, such as RAID 1 or 5, distribute data across multiple drives, allowing reconstruction from parity or mirrored copies if a bad sector occurs on one device, thereby maintaining availability without data loss. In modern SSDs, low-density parity-check (LDPC) codes serve as sophisticated error correction mechanisms, capable of recovering data from multiple bit errors per sector—far surpassing older BCH codes—and effectively neutralizing the impact of nascent bad blocks by remapping them transparently. These strategies collectively lower the effective rate of bad sector-related incidents by promoting resilience at both the user and hardware levels.
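The RAID 5 reconstruction mentioned above rests on the XOR property: the parity block is the XOR of the data blocks in a stripe, so any single missing block equals the XOR of all the survivors. A minimal demonstration:

```python
def xor_blocks(blocks):
    """XOR equal-length byte strings together, as RAID 5 parity does per stripe."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# A stripe of three data blocks plus their parity.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# If d1's sectors go bad, it is rebuilt from the surviving blocks and parity.
rebuilt = xor_blocks([d0, d2, parity])
print(rebuilt == d1)  # True
```

This is why a bad sector on one member of a RAID 5 array costs nothing as long as the corresponding sectors on the other members remain readable; two overlapping failures, however, defeat single-parity schemes.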

References

  1. [1]
    What do I do if my drive reports bad sectors? | Seagate US
    Bad sectors can often be corrected by using a spare sector built into the drive. However, any information written to a bad sector is usually lost.
  2. [2]
    SeaTools Bootable User Manual - Help Topic: "Bad Sector Found"
    A bad sector is a small area on the disc drive that is reporting errors and cannot be accessed properly. New bad sectors, sometimes called grown defects, are ...
  3. [3]
    Why Hard Drives Get Bad Sectors and What You Can Do About It
    Oct 9, 2013 · There are two types of bad sectors -- often divided into "physical" and "logical" bad sectors or "hard" and "soft" bad sectors. A physical -- ...
  4. [4]
    What Is a Bad Block? | Definition from TechTarget
    Jun 17, 2024 · A bad block is an area of storage media that is no longer reliable for storing and retrieving data because it has been physically damaged or corrupted.
  5. [5]
    What Are "Bad Sectors" On a Hard Drive? - Datarecovery.com
    Dec 20, 2021 · Bad sectors are when a hard drive cannot read data clusters, and the sector is permanently damaged, making it unusable and causing data loss.
  6. [6]
    What are Bad Sectors - R-Studio Data Recovery Software
    Rating 4.8 (373) Modern data storage devices generally store data within various sectors within the drive itself. Bad sectors occur when one of these sectors are damaged.
  7. [7]
    Transition to Advanced Format 4K Sector Hard Drives | Seagate US
    May 22, 2024 · While the physical size of the sectors on hard drives has shrunk, taking up smaller and smaller amounts of space, media defects have not.Missing: bad | Show results with:bad
  8. [8]
    How to Check and Repair Bad Sectors for Hard Drives or USB Drives?
    This page explains how to check and repair bad sectors for hard drives, external HDDs, USB flash drives, SSDs or SD cards using free bad sector repair tools ...
  9. [9]
    Understanding the Critical Role of Bad Block Management - Exascend
    Jul 23, 2025 · These bad blocks are inherent defects present in the NAND flash memory due to minor imperfections during the fabrication process. These factory ...What Causes Bad Sectors On... · Types Of Bad Blocks · 2. Dynamic Replacement &...
  10. [10]
    How to Repair Bad Sectors on a Hard Drive (3 Proven Methods)
    Rating 3.9 (7) May 23, 2024 · Bad sectors come in two forms: logical and physical. Logical ones are software-related and have a chance of being fixed, while physical ones ...
  11. [11]
    How to Fix Damaged Sectors on Hard Drive & Repair a Failing Disk ...
    Rating 5.0 (1) Aug 12, 2025 · Damage Causes: Physical damage can result from mechanical shocks, manufacturing defects, or surface scratches. Issues such as read/write head ...
  12. [12]
    How to Fix Bad Sectors in HDD & Replace Bad HDD Easily - AOMEI
    Jul 21, 2025 · Physical (hard) bad sectors: Caused by actual damage to the disk surface. This could be due to age, overheating, manufacturing defects, or ...
  13. [13]
    [PDF] Solid State Drives Data Reliability and Lifetime
    Apr 7, 2008 · These defects permit the charge on the floating gate to leak out into the substrate. Over time, more and more defects arise, which lead to a ...<|separator|>
  14. [14]
    [PDF] Hard Disk Drive - Reliability Overview - CERN Indico
    ➢ Head-disk interface failures → when flying at 10nm. ➢ Head – disk interactions, if they occur, will be limited to times of DFH actuation. ➢ Limits ...
  15. [15]
    [PDF] How I Learned to Stop Worrying and Love Flash Endurance - USENIX
    Manufacturer datasheets quote values that range from 10,000-100,000 program/erase (P/E) cy- cles for NAND flash endurance.
  16. [16]
    3 Signs of an Overheating Hard Drive - Datarecovery.com
    Oct 25, 2023 · Excessive heat can cause the surface of the platters to degrade or warp, leading to the creation of bad sectors. Bad sectors are areas on the ...Missing: environmental shock
  17. [17]
    What Causes Hard Drives to Fail? - Rossmann Repair Group Inc.
    Jul 31, 2024 · 12 Reasons - What Causes Hard Drives to Fail? · Mechanical Failures · Electrical Failures · Environmental Factors · Human Error · Logical Failures.
  18. [18]
    Hard Drive Failures Explained: What Causes Them and How to...
    3. Environmental Factors · Humidity: High humidity can cause moisture to build up inside the hard drive, leading to corrosion of the internal components. · Dust: ...
  19. [19]
    Check and Repair Bad Sectors on Hard Disks (3 Ways) - DiskGenius
    Apr 29, 2024 · About bad sectors on hard drives​​ Bad sectors are areas on the hard disk that cannot be read or written correctly, leading to errors when ...What to do when there are bad... · Method 1. Check & repair bad...
  20. [20]
    Logical Errors in File Systems: NTFS, EXT4, XFS, and ZFS
    Rating 5.0 (235) Common Causes of Logical Errors Include: · Sudden Shutdowns: · System Failures or Kernel Panic: · Ransomware or Destructive Malware Attacks: · Incorrect Command ...
  21. [21]
    What Is A Transient Fault? - ITU Online IT Training
    A transient fault in computing is a temporary error that occurs due to various factors such as power fluctuations, electromagnetic interference, or software ...Missing: sectors overheating
  22. [22]
    Show bad sectors from a previous HDD scan - Microsoft Learn
    Feb 25, 2025 · CrystalDiskInfo: This utility provides detailed SMART status reports, including information about reallocated sectors, which typically indicate ...
  23. [23]
    How to Fix or Recover a Corrupted Hard Drive - Secure Data Recovery
    May 31, 2024 · Here are some of the most common causes of corrupted hard drives: Power surges; Virus or malware; Bad sectors; Failing hard drive. Crashed ...
  24. [24]
    chkdsk - Microsoft Learn
    May 26, 2025 · Checking for bad sectors (with /r ) takes longer as every sector's physical integrity is checked and bad ones are replaced if possible. ...
  25. [25]
    badblocks(8) - Linux manual page - man7.org
    The `badblocks` command searches a device, usually a disk partition, for bad blocks.
  26. [26]
    fsck.ext4(8): check ext2/ext3/ext4 file system - Linux man page
    If any bad blocks are found, they are added to the bad block inode to prevent them from being allocated to a file or directory. If this option is specified ...
  27. [27]
    HDDScan - FREE HDD and SSD Test Diagnostics Software with ...
    The program can test storage device for errors (Bad-blocks and bad sectors), show S.M.A.R.T. attributes and change some HDD parameters such as AAM, APM, etc ...
  28. [28]
    Official website of the Victoria HDD/SSD program
    The best free program for diagnosing, examining, testing, and performing minor repairs on hard disks, SSDs, memory cards, and ...
  29. [29]
    CrystalDiskInfo - Crystal Dew World [en] -
    An HDD/SSD utility software which supports a part of USB, Intel RAID and NVMe.
  30. [30]
    How to Run a Disk Check to Fix Bad Sectors | Baeldung on Linux
    Jul 31, 2024 · The non-destructive read-write mode (default option) is the most accurate and safest but also the slowest. It makes a backup of the original ...
  31. [31]
    badblocks - ArchWiki
    Nov 2, 2025 · You can use badblocks to find bad sectors. Note that badblocks calls sectors "blocks". It supports a few scan modes. There is read-only mode ( ...
  32. [32]
    BadBlocks - Hard Drive Validation or Destructive Wipe - Calomel.org
    Jan 1, 2017 · The destructive test is especially useful when you are getting rid of the disk or returning it to the manufacturer for some reason. The non- ...
  33. [33]
  34. [34]
    [PDF] Seagate® Nytro® 1351, 1551 SSD
    Reallocated Sector Count (attribute 5): count of the number of blocks that have been reallocated, excluding pending sectors. Power-On Hours (attribute 9): count of the ...
  35. [35]
    SeaTools | Seagate US
    SeaTools Bootable. Use this kit to create a bootable USB that uses SeaTools to diagnose hard drives and monitor SSDs.
  36. [36]
    Basic Questions / Data Lifeguard Diagnostics for Windows
    Apr 6, 2018 · According to the Western Digital Knowledge Base, if I run the “EXTENDED TEST” this will detect bad sectors and attempt to repair them or mark the damaged ...
  37. [37]
    How to use SeaTools for Windows | Seagate US
    SeaTools for Windows has the ability to repair bad sectors using the “Fix All” option under the “Basic Tests” button. Chkdsk can also repair bad sectors.
  38. [38]
    [PDF] Samsung Solid State Drive
    Advanced wear-leveling code ensures that NAND cells wear out evenly (to prevent early drive failure and maintain consistent performance), while garbage ...
  39. [39]
    [PDF] Seagate® IronWolf® 525 SSD
    Seagate implements an efficient bad block management algorithm to detect the factory-produced bad blocks and manages bad blocks that appear with use. This ...
  40. [40]
    Measure a Disk-Drive's Read Channel Signals - EDN Network
    Aug 1, 1999 · Disk-drive engineers use several types of test equipment to measure the write-channel and read-channel signals.
  41. [41]
    Hard Drive Head Signals – Data Clinic – Data Recovery Services
    Jan 17, 2013 · We can successfully isolate the faults down to the actual read head that has developed a fault and is preventing the drive from initializing.
  42. [42]
    Handling I/O errors in the kernel - LWN.net
    Jun 12, 2018 · The kernel's handling of I/O errors was the topic of a discussion led by Matthew Wilcox at the 2018 Linux Storage, Filesystem, ...
  43. [43]
    NTFS overview | Microsoft Learn
    Jun 18, 2025 · When a bad sector is detected, NTFS dynamically remaps the affected cluster to a healthy one, marks the original cluster as unusable, and ...
  44. [44]
    Data corruption and disk errors troubleshooting guidance
    Jan 15, 2025 · The file system corruption occurs when one or more of the following issues occur: A disk has bad sectors. I/O requests that are delivered by ...
  45. [45]
    Volume Shadow Copy Service (VSS) - Microsoft Learn
    Jul 7, 2025 · Learn how to use Volume Shadow Copy Service to coordinate the actions that are required to create a consistent shadow copy for backup and ...
  46. [46]
    How to Check for Bad Sectors on a Hard Disk in Linux - Tecmint
    Apr 14, 2025 · 1. Check for Bad Sectors Using the badblocks Tool · Step 1: List All Disks and Partitions · Step 2: Scan for Bad Blocks · Step 3: Mark Bad Sectors ...
  47. [47]
    Write-back vs Write-Through caching? - Stack Overflow
    Nov 23, 2014 · The main difference between the two methods is that in the write-through method data is written to the main memory through the cache immediately, while in write-back ...
  48. [48]
    Handling writeback errors - LWN.net
    Apr 4, 2017 · When a writeback error occurs, the counter in address_space would be incremented and the error code recorded. At fsync() or close(), that error ...
  49. [49]
    [PDF] Computer forensics and the ATA interface.
    Modern disks use a technology called Defect Management to handle both kinds of defects. A number of spare sectors are available, and defect sectors can be ...
  50. [50]
    What Is SSD Over-Provisioning? | Seagate US
    Aug 29, 2024 · Over-provisioning significantly enhances an SSD's ability to handle random and sequential writes by providing the necessary space for efficient ...
  51. [51]
    [PDF] Over-provisioning | Samsung Semiconductor
    Guaranteeing free space to accomplish the NAND management tasks (GC, wear-leveling, bad block management) means the SSD does not have to waste time preparing ...
  52. [52]
    What Are Hard Drive Error Correction Codes (ECCs)?
    Oct 28, 2022 · Modern hard drives use error correction codes (ECCs) to make sure that data is readable, even when the write process is imperfect.
  53. [53]
    [PDF] A Study of Soft Error Consequences in Hard Disk Drives
    ▫ ECC: Reed-Solomon correction code. ▫ Retries: Proprietary sequence of retry steps. ▫ Hard error: Read request that returns with an error status (HDD ECC, ...
  54. [54]
    What are P-Lists and G-Lists? - Datarecovery.com
    Jan 4, 2016 · P-lists track unreliable sectors at manufacturing, while G-lists track sectors that become unreliable over time, impacting drive speed.
  55. [55]
    Definition of hard disk defect management - PCMag
    First Test - The P-List. The P-list is a "permanent" or "primary" defect table that contains all the bad or marginal sectors found on each platter after testing ...
  56. [56]
    [PDF] Serial ATA Revision 3.1 (Gold) - SATA-IO
    Jul 18, 2011 · Serial ATA International Organization: Serial ATA Revision 3.1 specification ("Final Specification") is available for download at http://www.
  57. [57]
    [PDF] NVMe™ SSD Management, Error Reporting and Logging Capabilities
    Jun 30, 2020 · • Bad user and system NAND blocks. • XOR recoveries. • Uncorrectable ... • The Persistent Event Log page contains information about significant ...
  58. [58]
    GNU ddrescue Manual
    Mar 21, 2025 · GNU ddrescue is a data recovery tool. It copies data from one file or block device (hard disc, cdrom, etc) to another, trying to rescue the good parts first in ...
  59. [59]
    Hex Editor Use Cases: Debugging, Analysis, File Recovery + More
    Mar 26, 2025 · A hex editor like UltraEdit allows users to: Manually scan and retrieve raw data from damaged sectors. Recover deleted files by identifying ...
  60. [60]
    Understanding Data Recovery Success Rates: Separating Fact from ...
    Data recovery success rates vary, with reported rates often inflated. A poll suggests an overall success rate of about 78%, but this is not definitive.
  61. [61]
    Seagate Hard Drive Recovery Australia: DIY Damage | Payam
    Jun 3, 2025 · In this case, a simple recovery with 90% success became a scratched platter job with only 30–50% success potential. Seagate Rosewood Recovery: ...
  62. [62]
    Chip-Off Digital Forensics Services | NAND Recovery ... - Gillware
    Like any other forensic technique, chip-off doesn't have a 100% success rate. For example, if the device is encrypted, as many smartphones today are ...
  63. [63]
    6 Causes of SSD Failure & Their Warning Signs
    May 23, 2024 · Bad sectors, also known as bad blocks, are parts of the storage space that have become defective. ... We have a 96% success rate with data ...
  64. [64]
    Clean Room Data Recovery: Here's What You Should Know
    Jun 7, 2024 · Success Rate: While clean room data recovery has a high success rate, it is not guaranteed. ... chips in an SSD are irreparably damaged, data ...
  65. [65]
    [PDF] Failure Trends in a Large Disk Drive Population - Google Research
    They report on a 2% failure rate for a population of 2489 disks during 2005, while mentioning that replacement rates have been as high as 6% in the past.
  66. [66]
    Hard Drive Failure Rates: The Official Backblaze Drive Stats for 2024
    Feb 11, 2025 · The 2024 AFR for all drives listed was 1.57%, this is down from 1.70% in 2023. We expect the overall failure rates to continue to fall in 2025, ...
  67. [67]
    [PDF] Flash Reliability in Production: The Expected and the Unexpected
    Feb 25, 2016 · In our study we distinguish blocks that fail in the field, versus factory bad blocks that the drive was shipped with. The drives in our study ...
  68. [68]
    Are Solid State Drives / SSDs More Reliable Than HDDs? - Backblaze
    Apr 5, 2024 · Much like bad sectors on HDDs, there are bad blocks on SSDs. If you have a bad block, the computer will typically try to read or save a file, ...
  69. [69]
    The 3-2-1 Backup Strategy - Backblaze
    May 23, 2024 · The 3-2-1 backup rule is a simple, effective strategy for keeping your data safe. It advises that you keep three copies of your data on two different media ...
  70. [70]
    3-2-1 Backup Rule Explained: Do I Need One? - Veeam
    The 3-2-1 rule is keeping three data copies on two different media types, with one stored off-site. Discover what makes Veeam's backup strategy unique.
  71. [71]
    The Ultimate Guide to Drive Health Management - Acronis
    May 13, 2024 · This includes monitoring SMART (Self-Monitoring, Analysis, and Reporting Technology) data, which provides insights into various aspects of drive ...
  72. [72]
    Hard Drive Temperatures: Be Afraid - Coding Horror
    Dec 17, 2006 · Each one-degree drop of HDD temperature is equivalent to a 10% increase of HDD service life. Hard drives are only rated to 55C in most cases.
  73. [73]
  74. [74]
    Uninterruptible power supply FAQ - Eaton
    With a wide range of cost-effective models available, a UPS system is an essential investment to prevent damage, data loss and downtime caused by power problems ...
  75. [75]
  76. [76]
    What is RAID (redundant array of independent disks)? - TechTarget
    Mar 13, 2025 · RAID (redundant array of independent disks) is a way of storing the same data in different places on multiple hard disks or solid-state drives (SSDs).
  77. [77]
    [PDF] LDPC-in-SSD: Making Advanced Error Correction Codes ... - USENIX
    LDPC codes are considered for SSDs due to their superior error correction, but their use can increase read latency. This paper presents techniques to mitigate ...