
Native Command Queuing

Native Command Queuing (NCQ) is a command protocol extension to the Serial ATA (SATA) interface standard that enables storage devices, such as hard disk drives (HDDs), to receive and manage multiple pending read and write commands from the host system simultaneously, allowing the device to internally reorder their execution for optimal efficiency. This technology optimizes data access by minimizing mechanical overhead, such as read/write head movements on HDDs, thereby reducing latency and improving overall input/output (I/O) performance in scenarios with high transactional workloads. NCQ supports a maximum queue depth of 32 commands, identified by unique tags ranging from 0 to 31, though individual drives may report lower supported depths via the Identify command. Development of NCQ began in 2003 as an advancement over Tagged Command Queuing (TCQ), a similar but less efficient queuing mechanism used in earlier Parallel ATA (PATA) and SCSI interfaces, with early implementation demonstrated by Seagate on drives that year. It was formally standardized in the Serial ATA Revision 2.0 specification, released in April 2004 by the Serial ATA International Organization (SATA-IO), which also introduced 3 Gbit/s transfer speeds and made NCQ a key feature for enhancing drive autonomy. Introduced as an optional feature in that revision, NCQ became widely adopted in subsequent revisions, including 3.0 and later, and requires host controller support via the Advanced Host Controller Interface (AHCI) mode rather than legacy IDE emulation. In practice, NCQ benefits HDDs most in random access patterns common to multitasking environments, desktops, and servers, where it can increase throughput by up to 50% or more compared to non-queued operations by streamlining command processing and reducing idle time. For solid-state drives (SSDs), which lack mechanical components, NCQ provides lesser but still notable gains through internal controller optimizations, such as better parallelization of flash operations.
By the mid-2000s, NCQ support became standard across major HDD manufacturers such as Seagate, Maxtor, and Western Digital, contributing to the evolution of SATA as the dominant storage interface in consumer and enterprise systems.

Fundamentals

Definition and Purpose

Native Command Queuing (NCQ) is an extension to the Serial ATA protocol that enables storage devices, particularly hard disk drives (HDDs), to manage up to 32 simultaneous read and write commands per port without requiring intervention from the host system. This feature allows the drive's firmware to receive, tag, and process multiple commands concurrently, marking a significant advancement in SATA's command handling capabilities. Unlike earlier interfaces, which processed commands sequentially, NCQ integrates directly into the SATA II specification to support efficient queuing at the device level. The primary purpose of NCQ is to facilitate internal reordering of commands by the storage device, optimizing the sequence in which operations are performed to minimize mechanical overhead in HDDs and alleviate general input/output (I/O) bottlenecks. In HDDs, this reordering reduces unnecessary head movements and rotational delays by prioritizing commands based on the physical location of data on the disk, such as through rotational position optimization. In broader block storage contexts, NCQ addresses a limitation of traditional ATA, where the host managed command ordering, often leading to suboptimal performance; SCSI interfaces, by contrast, had historically supported tagged command queuing (TCQ) for similar device-level optimization, so SATA required a native equivalent like NCQ to compete in multi-command environments. Core benefits of NCQ include higher operations per second (IOPS) for random workloads, where multiple commands can be handled in parallel to improve overall throughput and reduce latency. It maintains full backward compatibility with legacy systems that do not support queuing, ensuring seamless operation in mixed environments. Additionally, NCQ works in tandem with the Advanced Host Controller Interface (AHCI), which provides the necessary host-side support for queuing up to 32 commands, enabling first-party direct memory access (DMA) transfers and further reducing CPU involvement in I/O operations.
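As a conceptual sketch of the 32-command queue depth described above, the host side can be modeled as a pool of tags tracked in a bitmap, loosely mirroring the role of the AHCI PxSACT register; the class and method names here are illustrative, not part of any specification.

```python
class NcqTagPool:
    """Tracks the 32 NCQ tags as a bitmap, one bit per outstanding command."""

    QUEUE_DEPTH = 32

    def __init__(self):
        self.active = 0  # bit i set -> tag i is outstanding

    def allocate(self):
        """Return the lowest free tag, or None if the queue is full."""
        for tag in range(self.QUEUE_DEPTH):
            if not (self.active >> tag) & 1:
                self.active |= 1 << tag
                return tag
        return None

    def complete(self, tag):
        """Device reported completion for `tag`; free it for reuse."""
        self.active &= ~(1 << tag)


pool = NcqTagPool()
tags = [pool.allocate() for _ in range(4)]
print(tags)             # -> [0, 1, 2, 3]
pool.complete(tags[1])
print(pool.allocate())  # -> 1 (the freed tag is reused)
```

Once all 32 tags are outstanding, `allocate` returns `None`, which is exactly the point at which a host must wait for a completion before issuing further commands.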

Operational Mechanism

Native Command Queuing (NCQ) operates within the Serial ATA (SATA) protocol, facilitated by the Advanced Host Controller Interface (AHCI), to enable the queuing and reordering of up to 32 read and write commands per port. The host issues these commands using First Party DMA Queued (FPDMA Queued) commands, such as READ FPDMA QUEUED (opcode 60h) or WRITE FPDMA QUEUED (opcode 61h), which carry a unique tag in the Sector Count register to identify each command. The command flow begins when the host writes command headers to a command list in system memory, specified by the Port x Command List Base Address (PxCLB) register, and sets the corresponding bit in the Port x Command Issue (PxCI) register to notify the host bus adapter (HBA). The HBA fetches the command header and transmits the command to the device via the SATA link; the device clears the BSY (Busy) bit in its status register to allow immediate issuance of additional commands without waiting for prior completions. Upon receipt, the device firmware stores the commands in its internal queue, tagged by slot numbers (0-31), and prioritizes execution using algorithms such as elevator scheduling to optimize seek paths and rotational positioning, processing them out of order as needed. For data transfers, the device initiates First Party DMA (FPDMA) by sending a DMA Setup FIS (Frame Information Structure) to the host, which includes the command tag, transfer direction (the D bit, indicating read or write), buffer offset, and transfer count to program the DMA engine without host intervention. Data is then exchanged via Data FIS packets, supporting out-of-order delivery across commands. Upon completion of a command, the device sends a Set Device Bits (SDB) FIS to update the host's Port x SATA Active (PxSACT) register by clearing the bit corresponding to the completed tag, signaling readiness for the next command and enabling interrupt coalescing to batch multiple completions into a single interrupt, thereby reducing CPU overhead.
Key components include the 32 command slots, whose supported count is reported by the AHCI Capabilities field (CAP.NCS), which represent the maximum queue depth and allow simultaneous outstanding commands tracked via the 32-bit PxSACT register. Command tagging uses these slot indices in FIS structures for correlation, ensuring the host associates completions with specific requests. For error handling, if a command fails during transfer, the device logs the error in the NCQ Error Log (log page 10h, accessible via the READ LOG EXT command) and sends an SDB FIS with the ERR bit set or a D2H (Device to Host) Register FIS; the host can then retrieve details from error registers such as PxSERR. Abort mechanisms allow the host to terminate incomplete queues by clearing bits in the PxCI or PxSACT registers, or by resetting the port via PxCMD.ST, prompting the device to abort outstanding commands and clear the SActive register via an SDB FIS with all bits set. This process maintains queue integrity, with the device signaling a busy condition if its queue is full, preventing further command issuance until resolution. Conceptually, NCQ's queue depth and tagging form a logical pipeline in which commands enter via tagged slots, reorder in the device's firmware, and exit through batched completion notifications, optimizing multi-command scenarios by minimizing host-device synchronization.
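The command issue path above can be made concrete by packing the 20-byte Register Host-to-Device FIS for READ FPDMA QUEUED. The field offsets follow the layout described in the SATA specification (Features carries the sector count, and the tag occupies bits 7:3 of the Count field), but the function itself is a simplified sketch: it omits port multiplier, ICC, control, and auxiliary fields, and the names are ours.

```python
def read_fpdma_queued_fis(lba, sectors, tag):
    """Pack a minimal Register H2D FIS for READ FPDMA QUEUED (opcode 60h)."""
    assert 0 <= tag < 32, "NCQ tags range from 0 to 31"
    fis = bytearray(20)
    fis[0] = 0x27                    # FIS type: Register Host-to-Device
    fis[1] = 1 << 7                  # C bit: this FIS carries a new command
    fis[2] = 0x60                    # READ FPDMA QUEUED
    fis[3] = sectors & 0xFF          # Features 7:0  = sector count low
    fis[4] = lba & 0xFF              # LBA 7:0
    fis[5] = (lba >> 8) & 0xFF       # LBA 15:8
    fis[6] = (lba >> 16) & 0xFF      # LBA 23:16
    fis[7] = 1 << 6                  # Device: LBA addressing mode
    fis[8] = (lba >> 24) & 0xFF      # LBA 31:24
    fis[9] = (lba >> 32) & 0xFF      # LBA 39:32
    fis[10] = (lba >> 40) & 0xFF     # LBA 47:40
    fis[11] = (sectors >> 8) & 0xFF  # Features 15:8 = sector count high
    fis[12] = (tag & 0x1F) << 3      # Count 7:0, bits 7:3 hold the NCQ tag
    return bytes(fis)


fis = read_fpdma_queued_fis(lba=0x12345678, sectors=8, tag=5)
print(fis[2] == 0x60, fis[12] >> 3)  # -> True 5
```

Note how the sector count is displaced into the Features field: in the FPDMA Queued commands the Count field is repurposed to carry the tag, which is what lets the device correlate later DMA Setup and SDB FISes with this request.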

Historical Development

Origins in Storage Interfaces

Prior to the advent of Native Command Queuing (NCQ), Parallel ATA (PATA) interfaces operated under a single-command model, where commands were executed strictly in the order received by the drive, resulting in significant inefficiencies during multitasking scenarios. This sequential processing exacerbated seek and rotational latencies in hard disk drives, particularly as operating systems increasingly handled multiple concurrent I/O requests from applications, leading to elevated overall system latency in environments with high transactional workloads. Early Serial ATA (SATA) drives, which reached the market around 2003 under the SATA 1.0 specification, inherited these limitations from PATA, lacking native support for advanced queuing and thus failing to fully capitalize on SATA's serial topology for improved parallelism. NCQ emerged as a key extension to address these shortcomings, jointly developed by Intel and Seagate, with the foundational whitepaper published in July 2003 outlining a protocol that allows up to 32 outstanding commands per drive. It was formally introduced as an optional feature in the Serial ATA Revision 2.0 specification, released in April 2004, enabling drives to reorder commands internally using rotational position optimization to minimize mechanical overhead. This development was motivated by the growing I/O demands of multi-threaded operating systems, including Microsoft Windows and Linux, particularly in anticipation of enhanced multitasking in future releases such as Windows Vista, which would leverage advanced storage features for better performance. A primary influencing factor was the evolution from SCSI interfaces, which had long supported tagged command queuing (TCQ) for efficient handling of multiple commands in enterprise environments, toward more cost-effective ATA/SATA drives aimed at consumer desktops.
NCQ served as a bridge, adapting SCSI-like queuing natively into the ATA ecosystem to facilitate convergence between desktop and server storage needs, allowing consumer-grade drives to approach enterprise-level I/O efficiency without the complexity of legacy SCSI hardware. Early prototypes and testing of NCQ focused on hard disk drives to validate its reliability under sustained workloads, with initial implementations appearing in models such as the Maxtor MaXLine III series in late 2004, which incorporated NCQ alongside other second-generation SATA features for high-capacity, mission-critical applications. These tests emphasized error recovery and queue management before broader consumer rollout, ensuring the technology's robustness in reordering commands without compromising data integrity.

Standardization and Adoption

Native Command Queuing (NCQ) was formally defined by the Serial ATA International Organization (SATA-IO) as part of the Serial ATA Revision 2.0 specification, released in April 2004, which also introduced 3 Gbit/s transfer speeds. This revision extended the original 1.5 Gbit/s interface by incorporating NCQ to optimize command execution in high-I/O workloads. While NCQ implementation remained optional in Revision 2.0, subsequent revisions such as 2.5 (August 2005) and 3.0 (May 2009) built upon it by adding related features like NCQ streaming commands, ensuring compatibility and encouraging broader support across SATA-compliant devices. Adoption of NCQ accelerated following its specification, with Seagate introducing the first consumer hard disk drives supporting it in late 2004, including the Barracuda 7200.7 series. By the mid-2000s, NCQ had become widespread in the HDD market, integrated into models such as the Barracuda 7200.9 and Western Digital Caviar series, as manufacturers shifted to SATA II-compliant products. Operating system support also matured around this time; the Linux kernel added native NCQ support via its AHCI-compatible libata stack in version 2.6.19, released in November 2006. For Windows XP, NCQ required third-party AHCI drivers from chipset vendors such as Intel, with integration improving around the release of Service Pack 3 (April 2008), facilitating its use in consumer setups. The push for NCQ adoption was driven by the rise of multi-core processors and virtualization technologies in the mid-2000s, which increased concurrent I/O demands and benefited from NCQ's ability to reorder commands for reduced latency. By the end of the decade, NCQ support had reached near-universal levels in consumer HDDs, with major vendors such as Seagate, Western Digital, and Hitachi incorporating it as a standard feature to meet expectations in multitasking and server environments. Early challenges to NCQ deployment included compatibility with legacy firmware and drivers that defaulted to IDE emulation mode, disabling AHCI and thus NCQ functionality, leading to suboptimal performance or boot issues on pre-2005 systems.
These were largely resolved through updated chipsets, such as Intel's ICH8 introduced in 2006, which provided robust AHCI support out of the box, and the gradual transition to UEFI firmware starting around 2007, which better handled AHCI modes without emulation.

Implementation in Hard Disk Drives

Performance Benefits

Native Command Queuing (NCQ) provides significant efficiency gains for hard disk drives (HDDs) by allowing the drive to reorder incoming commands, thereby minimizing head movements and optimizing access patterns on the platter. This reordering reduces the effective seek time in multi-command scenarios, with total processing time for clusters of 4 to 8 I/O operations potentially halved compared to strict sequential execution, effectively cutting overhead by up to 50% in random workloads. Such optimizations are particularly beneficial for HDDs, where seek times dominate due to the physical nature of rotating platters and actuator arms. In benchmarks using 7200 RPM drives such as the Seagate Barracuda 7200.7, NCQ demonstrates measurable improvements in input/output operations per second (IOPS) for random workloads, often achieving up to 2x higher throughput in database simulations by batching and resequencing commands to avoid inefficient head jumps. For instance, in PassMark PerformanceTest simulations of server and database environments, NCQ-enabled configurations read approximately 22% more data (594 MB vs. 488 MB) at higher transfer rates (2.03 MB/s vs. 1.66 MB/s) compared to non-NCQ setups. These gains translate to representative scaling from around 100 IOPS in basic random reads without queuing to 200 IOPS or more with NCQ, enhancing responsiveness in I/O-intensive applications. Additionally, NCQ's interrupt aggregation lowers CPU utilization by reducing the frequency of host interrupts during batched command completions, freeing processor cycles for other tasks. NCQ excels in multitasking scenarios on HDDs, such as system startup or gaming, where overlapping read and write requests are common. On a 7200 RPM Barracuda 7200.7, enabling NCQ shortened boot times by 23% (30 seconds vs. 39 seconds) and cut game load times by 9% (59 seconds vs. 65 seconds), by prioritizing proximate data accesses and mitigating seek bottlenecks during concurrent operations.
In database environments with random I/O patterns, the technology similarly boosts efficiency by reordering commands to align with platter geometry, reducing worst-case rotational latency toward a single revolution (8.3 ms at 7200 RPM). The efficacy of NCQ in HDDs scales with queue depth, as deeper queues allow more opportunities for intelligent reordering, though performance plateaus at the standard limit of 32 commands per port. At lower depths (e.g., 4-8), gains are modest, but as depth approaches 32, latency and throughput improvements become more pronounced in high-concurrency workloads, limited only by the drive's firmware and mechanical constraints. This scaling makes NCQ ideal for environments with sustained multi-threaded I/O, such as file servers or creative software suites, where command queuing prevents idle time on the actuator.
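The seek-reduction argument above can be illustrated with a toy model comparing first-in-first-out execution of a queue against a greedy nearest-first schedule. The track numbers are invented, and the greedy scheduler is far simpler than real rotational position optimization, which also accounts for angular position; the point is only the relative head travel.

```python
def head_travel(start, tracks):
    """Total head movement (in tracks) to service requests in the given order."""
    pos, total = start, 0
    for t in tracks:
        total += abs(t - pos)
        pos = t
    return total


def greedy_schedule(start, pending):
    """Repeatedly service the request closest to the current head position."""
    pending, pos, order = list(pending), start, []
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order


requests = [980, 40, 900, 60, 850, 120]   # queued track numbers (illustrative)
fifo = head_travel(500, requests)
reordered = head_travel(500, greedy_schedule(500, requests))
print(fifo, reordered, reordered < fifo)  # -> 4640 1420 True
```

Even this crude scheduler cuts head travel by more than a factor of three on a scattered queue, which is the mechanism behind the throughput gains reported in the benchmarks above.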

Safety and Data Integrity Features

Native Command Queuing (NCQ) incorporates several mechanisms to enhance data integrity in hard disk drives (HDDs), particularly addressing risks from transfer errors and power disruptions. A key feature is the Forced Unit Access (FUA) bit, available in the WRITE FPDMA QUEUED command, which mandates that write data bypass the volatile write cache and be committed directly to the non-volatile media before the drive reports completion to the host. This ensures that acknowledged writes survive sudden power loss, mitigating the risk of data loss from uncommitted cached data. NCQ's error handling framework supports partial command completions, enabling host software to track and retry only the affected portions of queued operations via updated command statuses and Physical Region Descriptor (PRD) byte counts, rather than aborting entire queues. Upon detecting an error, the drive transmits a Register FIS with the ERR bit set and BSY cleared, halting further processing until the host responds, often by issuing a READ LOG EXT command (log page 10h) to retrieve details such as the failed command tag and shadow register contents for precise fault isolation. Retry mechanisms involve host software reissuing aborted commands after clearing errors in registers like PxCI and PxSERR, while the host bus adapter (HBA) automatically retries non-Data FIS transmissions for transient issues such as R_ERR signals. The SError register (PxSERR) logs Serial ATA interface errors, including communication and protocol faults, facilitating firmware-level isolation of issues without propagating them across the queue. Integration with host error-recovery flows helps manage unexpected shutdowns by supporting queued command flushes, where the host can abort and clear outstanding NCQ slots via PxSACT before power-off, ensuring orderly completion of critical operations. In RAID configurations, this structured error isolation and flush capability reduces the likelihood of array-wide corruption during abrupt power events, as partial queue states can be resolved without inconsistent data propagation.
Despite these safeguards, limitations exist: enabling FUA incurs performance overhead due to the synchronous nature of media commits, bypassing caching optimizations for throughput. Additionally, NCQ's effectiveness in error handling heavily depends on the quality of the drive's firmware; suboptimal implementations can lead to persistent NCQ timeouts or unhandled errors, as observed in certain Seagate IronWolf models and resolved via firmware updates.
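As a minimal illustration of the FUA mechanism described above: in WRITE FPDMA QUEUED the FUA flag is carried as bit 7 of the Device field (bit 6 selects LBA addressing mode), so requesting write-through semantics is a one-bit change in the command FIS. The helper below is a hypothetical sketch, not driver code.

```python
def write_fpdma_device_byte(fua):
    """Build the Device field for WRITE FPDMA QUEUED (opcode 61h)."""
    device = 1 << 6            # bit 6: LBA addressing mode
    if fua:
        device |= 1 << 7       # bit 7: Forced Unit Access, bypass write cache
    return device


print(hex(write_fpdma_device_byte(False)))  # -> 0x40 (normal cached write)
print(hex(write_fpdma_device_byte(True)))   # -> 0xc0 (write-through to media)
```

The single bit is also why FUA's cost is hard to avoid: the drive cannot acknowledge the command until the media commit finishes, serializing what would otherwise be a cached, reorderable write.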

Implementation in Solid-State Drives

Architectural Adaptations

In solid-state drives (SSDs), Native Command Queuing (NCQ) is adapted through the Serial ATA Advanced Host Controller Interface (AHCI), where the SSD controller receives and manages up to 32 outstanding commands in a single queue, enabling efficient handling of parallel I/O operations without the mechanical seek optimizations required for hard disk drives (HDDs). The Flash Translation Layer (FTL) in the SSD maps these queued logical block address commands to physical flash pages and blocks, performing out-of-place updates to accommodate the erase-before-write nature of NAND flash. Unlike HDDs, where NCQ primarily minimizes head-movement latency, SSD firmware leverages the queued commands to integrate with internal processes like garbage collection and wear leveling, scheduling these maintenance tasks during idle queue periods to maintain performance without mechanical constraints. This adaptation allows the controller to distribute commands across multiple parallel channels and dies via interleaving techniques, maximizing throughput by executing operations concurrently on independent flash units. A key architectural difference is the absence of seek-based reordering; instead, NCQ facilitates merging small, scattered writes into larger units aligned with page sizes (typically 4-16 KB), thereby reducing write amplification by minimizing partial-page updates and associated overhead. SSD controllers retain full backward compatibility with legacy AHCI modes, but performance is optimized when paired with host drivers that minimize Forced Unit Access (FUA) commands, as excessive FUA usage bypasses caching and increases direct media writes, exacerbating write amplification.
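The write-merging idea above can be sketched as grouping queued writes by the flash page they touch; the 8 KiB page size and the (offset, length) representation of writes are assumptions chosen purely for illustration.

```python
PAGE = 8192  # assumed flash page size in bytes


def coalesce(writes):
    """Group queued (offset, length) writes by the flash page(s) they touch."""
    pages = {}
    for off, length in writes:
        first = off // PAGE
        last = (off + length - 1) // PAGE
        for page in range(first, last + 1):
            pages.setdefault(page, []).append((off, length))
    return pages


# Four small queued writes that an FTL could satisfy with two page programs.
queued = [(0, 4096), (4096, 4096), (8192, 2048), (10240, 2048)]
merged = coalesce(queued)
print(len(queued), "writes ->", len(merged), "page programs")  # -> 4 writes -> 2 page programs
```

Halving the number of page programs for the same data is exactly the partial-page-update saving the text describes: fewer read-modify-write cycles, hence lower write amplification.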

Performance Implications

Native Command Queuing (NCQ) substantially improves SSD throughput and reduces latency in random and mixed I/O workloads by enabling the controller to reorder and parallelize up to 32 commands across multiple NAND flash channels, optimizing resource utilization without host intervention. In mixed workloads, this results in significant performance gains, with benchmarks showing SSDs achieving up to 2-3x higher IOPS when NCQ is enabled compared to configurations limited to single-command operation, as the latter restricts parallelism and increases effective latency. For example, early enterprise-oriented SSDs like the Samsung 845DC PRO demonstrated peak random read performance of 92,000 IOPS under NCQ-supported conditions, highlighting efficient command batching that sustains high throughput in demanding environments. NCQ's benefits are particularly pronounced in virtualization and database scenarios, where concurrent I/O requests from multiple virtual machines or query threads create deep command queues. By allowing the SSD to process non-conflicting commands in parallel, NCQ minimizes stalls and latency spikes, ensuring consistent responsiveness even under bursty loads. This is especially valuable during TRIM operations, which notify the drive of unused blocks for garbage collection; queued TRIM support in SATA 3.1 enables these commands to be batched with ongoing reads and writes, preventing interruptions that could otherwise significantly degrade performance in non-queued modes. Similarly, during internal over-provisioning and wear-leveling tasks, NCQ maintains I/O flow by prioritizing user commands, reducing latency spikes in write-intensive database workloads. However, NCQ's advantages diminish in sequential transfer scenarios, where interface bandwidth limits (typically 6 Gbit/s for SATA III) dominate over queuing efficiency, yielding minimal gains beyond basic command handling.
Potential bottlenecks arise when queue depths exceed the controller's internal parallelism, such as in SSDs with 4-8 NAND channels; excessive depth can lead to internal contention and increased latency without proportional throughput improvements. In early SSDs like the Intel X25-M, which featured a 10-channel controller and was among the first SSDs to exploit NCQ, benchmarks illustrated these dynamics: database workloads reached approximately 3,500 IOPS with latencies as low as 85 μs, nearly doubling performance over prior non-NCQ SSDs in mixed workstation tasks, though sequential speeds remained interface-bound. As of 2025, NCQ remains relevant for SATA SSDs but is overshadowed by NVMe in data centers and high-end consumer systems.
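The channel-parallelism bottleneck can be captured in a one-line model: throughput scales with queue depth only until every flash channel is kept busy, after which extra depth buys nothing. The channel count of 8 is an assumption for illustration.

```python
def relative_throughput(queue_depth, channels=8):
    """Fraction of peak throughput: channels busy per service interval."""
    return min(queue_depth, channels) / channels


for qd in (1, 4, 8, 32):
    print(qd, relative_throughput(qd))
# -> 1 0.125
#    4 0.5
#    8 1.0
#    32 1.0
```

Real controllers are not this clean (commands can conflict on a die, and firmware overhead grows with depth), but the saturation shape matches the observation that gains plateau once depth matches internal parallelism.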

Tagged Command Queuing

Tagged Command Queuing (TCQ) served as a predecessor to Native Command Queuing (NCQ) in ATA interfaces, introduced in the ATA-4 specification in 1998 to enhance disk performance by allowing multiple read and write commands to be queued simultaneously. This SCSI-derived technology enabled the host to send up to 32 tagged commands to the drive, where each command included a unique tag for identification, and the drive could reorder them for optimal execution while preserving any host-specified ordering. Unlike earlier ATA queuing methods, TCQ supported host-managed ordering through task attributes, such as SIMPLE or ORDERED, giving the initiator control over execution priorities and sequences to minimize seek times and rotational delays. In contrast to NCQ, which automates command tagging and reordering entirely at the device level within the Serial ATA protocol, TCQ imposes explicit host responsibilities that increase protocol overhead. Host drivers must assign tags to each command and manage error recovery through per-command handshakes, including a SERVICE command to poll for completions, resulting in up to two interrupts per command and added latency in lightly queued workloads. NCQ, as an evolution within SATA interfaces, eliminates these host interventions by using a single Set Device Bits status FIS that can report up to 32 simultaneous completions, reducing interrupt overhead to at most one per command and enabling first-party DMA without additional host software involvement. This automation in NCQ makes it more suitable for modern multitasking environments, whereas TCQ's host-centric approach, while flexible for priority-based ordering, demands more computational resources from the host, leading to inefficiencies in error-prone or high-latency scenarios. TCQ's adoption waned after 2004 as Serial ATA II standardized NCQ, rendering TCQ largely obsolete for new designs while support lingered in some controllers for legacy drives.
For instance, some early SATA controllers emulated TCQ to support older operating systems whose native drivers could leverage tagged queuing on compatible hardware without full NCQ protocol upgrades. Although TCQ offered advantages in simple controller environments by allowing explicit priority ordering, such as executing high-priority commands ahead of queued ones, its reliance on host-managed ordering and tagging made it less efficient for concurrent operations compared to NCQ's device-optimized approach. Overall, TCQ's design suited its enterprise SCSI roots but proved cumbersome for consumer ATA evolution, paving the way for NCQ's broader efficiency gains in SATA ecosystems.
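The interrupt arithmetic above can be sketched directly: legacy TCQ needed up to two interrupts per command (service request plus completion), while NCQ's Set Device Bits FIS can batch several completions into one interrupt. The batching factor is workload-dependent, so the value of 4 here is purely illustrative.

```python
def tcq_interrupts(commands):
    """Worst case for legacy TCQ: a service interrupt plus a completion each."""
    return 2 * commands


def ncq_interrupts(commands, batch=4):
    """One interrupt per batched SDB FIS; worst case is one per command."""
    return -(-commands // batch)  # ceiling division


print(tcq_interrupts(32), ncq_interrupts(32))  # -> 64 8
```

For a full 32-command queue, the model gives 64 interrupts under TCQ versus 8 under moderately coalesced NCQ, which is the kind of host-CPU saving the comparison in the text is pointing at.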

Modern Interfaces like NVMe

Non-Volatile Memory Express (NVMe) is a high-performance storage protocol designed specifically for non-volatile memory devices like solid-state drives (SSDs) connected via the Peripheral Component Interconnect Express (PCIe) interface. Unlike traditional storage interfaces, NVMe employs a scalable queue-pair architecture featuring submission queues (SQs) and completion queues (CQs), where commands are posted to SQs by the host and completions are returned via CQs by the device. This setup supports up to 64K queues, each capable of holding up to 64K commands, enabling massive parallelism and efficient handling of I/O operations far beyond the limitations of earlier protocols. In relation to Native Command Queuing (NCQ), which is an extension of the Advanced Host Controller Interface (AHCI) limited to a single queue with up to 32 outstanding commands primarily for SATA-connected drives, NVMe serves as a modern evolution optimized for SSDs in high-throughput environments. While AHCI with NCQ remains the standard for SATA-based SSDs and hard disk drives (HDDs), NVMe primarily operates over PCIe, with extensions like NVMe over Fabrics (NVMe-oF) enabling support for other transports such as RDMA, Fibre Channel, and TCP/IP, and lacks direct compatibility with AHCI/NCQ, requiring dedicated controllers and drivers. This positions NVMe as the preferred interface for enterprise and performance-oriented SSDs, while NCQ persists in consumer and legacy applications. The transition to NVMe was driven by its ability to deliver significantly lower latency, typically under 10 microseconds end to end, compared to the millisecond-range access times associated with NCQ in mechanical HDDs or the higher protocol overhead of AHCI for SSDs, making it ideal for data center workloads requiring rapid I/O processing. The NVMe specification version 1.0 was released on March 1, 2011, by the NVM Express organization, with widespread adoption accelerating in data centers thereafter due to its support for over 1 million IOPS and reduced CPU utilization.
Coexistence between NVMe and NCQ-enabled systems is common in hybrid setups, such as modern motherboards that include both PCIe slots for NVMe SSDs and SATA ports for AHCI/NCQ drives, allowing users to mix high-performance and legacy storage. As of 2025, NCQ continues to be relevant in budget hardware, embedded systems, and legacy installations where compatibility is prioritized over peak performance.
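The queue-capacity gap between the two protocols reduces to simple arithmetic: AHCI/NCQ exposes a single queue of 32 commands per port, while NVMe allows up to 64K queues of 64K commands each.

```python
# Maximum outstanding commands per device under each protocol's queue model.
ahci_outstanding = 1 * 32              # one NCQ queue, depth 32
nvme_outstanding = 65_536 * 65_536     # up to 64K queues x 64K entries

print(ahci_outstanding)                         # -> 32
print(nvme_outstanding)                         # -> 4294967296
print(nvme_outstanding // ahci_outstanding)     # -> 134217728
```

The eight-orders-of-magnitude difference in addressable parallelism, more than raw link speed, is what makes NVMe's per-core queue pairs suit many-core data center hosts where a single shared NCQ queue would serialize submissions.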
