
SCSI

Small Computer System Interface (SCSI) is a set of standards developed by the American National Standards Institute (ANSI) for connecting computers to peripheral devices, such as hard disk drives, tape drives, scanners, and printers, enabling efficient data transfer and device control through a shared bus. It defines mechanical, electrical, and functional requirements, including command sets that allow initiators (like computers) to communicate with targets (peripheral devices) in a device-independent manner. The origins of SCSI trace back to the late 1970s, evolving from Shugart Associates' SASI interface, which was proposed to ANSI's X3T9.2 in 1981 and formalized as the first SCSI standard (SCSI-1, ANSI X3.131-1986) in 1986. This initial version supported asynchronous and optional synchronous data transfers up to 5 MB/s, accommodating up to eight devices on a single bus with unique ID addresses from 0 to 7. SCSI-2 followed in 1990 (later approved as ANSI X3.131-1994), introducing enhancements like faster transfer rates of 10 MB/s in Fast mode (20 MB/s in Fast Wide), wider 16-bit buses, and additional command support for more device types, including processors and write-once media. Under the ANSI-accredited INCITS T10 technical committee (formerly X3T9.2), SCSI evolved into the modular SCSI-3 family of standards in the mid-1990s, separating the architecture into components like the SCSI Architecture Model (SAM), command sets (e.g., the SCSI Primary Commands), and transport protocols, which facilitated adaptations to new physical interfaces. Notable evolutions include Serial Attached SCSI (SAS, INCITS 376-2003), which supports up to 65,536 devices at speeds exceeding 12 Gb/s, and iSCSI for IP-based networking, maintaining SCSI's command structure while transitioning from parallel to serial topologies for greater scalability. Despite competition from interfaces like SATA and NVMe, SCSI remains influential in enterprise storage and data centers as of 2025, with ongoing T10 projects like SCSI to NVMe Translation (SNT).

Overview

Definition and Purpose

The Small Computer System Interface (SCSI) is a set of American National Standards Institute (ANSI) and InterNational Committee for Information Technology Standards (INCITS) specifications that define protocols for communication and data transfer between computer hosts and peripheral devices. These standards establish an input/output bus for interconnecting computers with peripherals, emphasizing standardized command sets to ensure interoperability across diverse devices. At its core, SCSI separates the logical command set from the physical transport, allowing consistent operation over various interconnects. The primary purpose of SCSI is to enable high-speed, reliable communication between hosts and peripheral devices, including hard disk drives, tape drives, and optical drives. It supports multitasking by permitting a host to issue and queue multiple commands to devices simultaneously, often through features like tagged command queuing, which optimizes performance in environments with concurrent operations. Additionally, SCSI facilitates the connection of multiple devices—up to 16 on a single bus in some configurations—via a shared topology, allowing efficient resource sharing without requiring dedicated channels for each peripheral. SCSI's fundamental architecture revolves around key roles and components: the host adapter serves as the interface connecting the host computer to the SCSI bus, typically functioning in the initiator role to originate commands. Storage devices and other peripherals operate in the target role to receive, process, and respond to these commands. The bus topology provides a shared medium for communication, enabling multiple initiators and targets to arbitrate access dynamically. Over its development, SCSI has evolved from parallel bus implementations to serial and IP-based protocols, with the SCSI Architecture Model (SAM) ensuring backward compatibility by preserving the core command set across transport layers. This progression, detailed in successive SAM generations like SAM-5, coordinates standards for diverse physical interfaces while maintaining functional consistency for legacy and modern devices.

Key Components and Architecture

The SCSI architecture relies on distinct hardware components to facilitate communication between hosts and peripherals. The initiator, typically implemented as a host bus adapter (HBA), is the device that originates SCSI commands and requests to access storage or other peripherals. In contrast, the target is the recipient device, such as a disk drive or tape unit, that executes the received commands and returns data or status information. For parallel bus implementations, terminators are required at both ends of the bus to absorb signals and prevent reflections that could corrupt data transmission. In serial variants like Serial Attached SCSI (SAS), expanders serve as intelligent hubs that route connections between multiple initiator and target ports, enabling scalable topologies beyond direct point-to-point links. At the logical level, SCSI operations proceed through a sequence of bus phases that manage access, command delivery, data transfer, and completion. These include the BUS FREE phase, where the bus is idle; ARBITRATION, where devices compete for bus control based on ID priority; SELECTION, where an initiator chooses a target; RESELECTION, allowing targets to reconnect for ongoing tasks; COMMAND, for sending operation instructions; DATA IN and DATA OUT, for bidirectional transfers; STATUS, for reporting outcomes; and MESSAGE IN and MESSAGE OUT, for control exchanges like acknowledgments. Parallel SCSI operates in half-duplex mode, with data flowing in one direction at a time, while serial protocols like SAS support full-duplex operation for simultaneous bidirectional transfers on separate lanes. Transfers can be asynchronous, using handshaking signals for timing, or synchronous, employing a negotiated clock rate for higher throughput in compatible phases. Addressing in SCSI uses unique identifiers to distinguish devices on the bus. Each device is assigned a SCSI ID, ranging from 0 to 7 in 8-bit narrow configurations or 0 to 15 in 16-bit wide setups, with the initiator often defaulting to the highest ID (e.g., 7) for arbitration priority. The bus width determines the data path: 8-bit narrow supports basic connectivity, while 16-bit wide doubles throughput by utilizing additional data lines. Multi-initiator environments are supported through shared bus arbitration, allowing multiple hosts to access targets concurrently while resolving conflicts via priority-based selection. Error handling ensures data integrity through layered detection and recovery mechanisms. In parallel SCSI, odd parity checking on the data bus detects single-bit errors during transfers, triggering an error message if unrecoverable. Later implementations, such as Ultra160 and the serial protocols, employ cyclic redundancy checks (CRC) for robust end-to-end validation, particularly in domain validation sequences that test negotiated transfer support. Retry mechanisms involve reattempting failed transfers via messages like INITIATOR DETECTED ERROR or by reinitializing phases, with domain invalidation occurring after repeated failures to prompt renegotiation.
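
The odd-parity rule on the parallel data bus is easy to state in code. The following C sketch, illustrative only and not tied to any driver API, computes the parity bit a sender would drive so that the nine lines (eight data plus parity) always carry an odd number of ones:

```c
#include <stdio.h>
#include <stdint.h>

/* Compute the parity bit a parallel SCSI sender would drive for one data
 * byte: odd parity means the 9 lines (8 data + parity) carry an odd
 * number of ones. Illustrative sketch, not driver code. */
static uint8_t scsi_odd_parity(uint8_t data) {
    uint8_t ones = 0;
    for (int bit = 0; bit < 8; bit++)
        ones += (data >> bit) & 1;
    /* If the data already has an odd number of ones, the parity line is 0;
     * otherwise it is 1 so that the total count becomes odd. */
    return (ones % 2 == 0) ? 1 : 0;
}

int main(void) {
    uint8_t samples[] = {0x00, 0x01, 0xFF, 0x28};
    for (size_t i = 0; i < sizeof samples; i++)
        printf("data=0x%02X parity=%u\n", samples[i], scsi_odd_parity(samples[i]));
    return 0;
}
```

A receiver recomputes the same function over the sampled data lines and flags a transfer error when the result disagrees with the received parity line.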

History

Origins and Early Development

The origins of SCSI trace back to 1979, when Shugart Associates developed the Shugart Associates System Interface (SASI) as a proprietary standard for connecting hard disk drives to small computers. Led by engineer Larry Boucher, the team at Shugart aimed to create a standardized, high-performance interface that addressed the limitations of earlier proprietary standards like Seagate's ST-506 and ST-412, which tied disk drives closely to specific controllers and restricted easy upgrades or multi-device sharing. SASI was designed to support up to eight devices on a single bus, emphasizing a device-independent command set to facilitate integration of intelligent peripherals without requiring custom hardware for each vendor's drive. This motivation stemmed from customer feedback highlighting the need for a more flexible, cost-effective alternative to bespoke interfaces prevalent in the late 1970s minicomputer era. Efforts to standardize SASI began in 1981, when Shugart Associates, in collaboration with NCR Corporation, submitted it to the American National Standards Institute (ANSI) for approval, though initial attempts failed due to concerns over naming and scope. NCR's adoption of SASI for its systems played a key role in promoting its broader use and influencing the push toward a universal standard. By 1986, ANSI approved a modified version of SASI as the SCSI-1 standard (ANSI X3.131-1986), renaming it the Small Computer System Interface to reflect its applicability beyond storage to various peripherals, while incorporating enhancements like parity checking and synchronous data transfer options. This formalization marked SCSI's transition from a vendor-specific protocol to an industry-wide specification, enabling interoperability across different manufacturers. The initial SCSI-1 specifications defined an 8-bit parallel bus with optional parity, supporting asynchronous transfers up to 1.5 MB/s and synchronous modes reaching a maximum of 5 MB/s at a 5 MHz clock rate, limited by cable lengths of up to 6 meters. It featured a basic command set primarily oriented toward block-oriented devices like hard disks and tapes, including mandatory operations such as READ, WRITE, and REQUEST SENSE, with logical unit addressing to handle multiple devices via unique SCSI IDs from 0 to 7. The design prioritized simplicity and extensibility, using a half-duplex bus for command, data, and status phases between one initiator (typically the host adapter) and up to seven targets. Early adoption of SCSI accelerated in the mid-1980s, with Apple introducing support in the Macintosh Plus in 1986, allowing external hard drives to expand storage for creative and professional applications. IBM integrated SCSI adapters into its PS/2 line starting in 1987, enhancing server and workstation capabilities in enterprise environments. Unix-based systems, particularly Sun Microsystems workstations from 1983 onward (initially using pre-standard SASI equivalents), embraced SCSI for its compatibility with multi-user, networked setups, solidifying its role in scientific computing and early client-server architectures.

Parallel SCSI Evolution

The evolution of parallel SCSI began with the SCSI-2 standard, ratified by ANSI in 1994 as X3.131-1994, which built upon the foundational asynchronous and synchronous transfer modes of SCSI-1 by introducing enhancements for higher performance and better device management. Key additions included Fast SCSI, enabling synchronous data transfers at 10 megatransfers per second (MT/s) for an effective throughput of 10 MB/s on narrow (8-bit) buses, and Wide SCSI, which expanded the bus to 16 bits for doubled bandwidth, achieving 20 MB/s in Fast Wide configurations. SCSI-2 also formalized tagged command queuing, allowing up to 256 commands per logical unit per initiator to optimize I/O operations in multi-device environments. From the mid-1990s into the early 2000s, the SCSI-3 family of standards, developed under ANSI's INCITS T10 committee, further advanced parallel interfaces through the SCSI Parallel Interface (SPI) specifications, focusing on speed increases and signaling improvements. Ultra SCSI (also known as Fast-20) raised synchronous rates to 20 MT/s (20 MB/s narrow, 40 MB/s wide), while the SCSI Parallel Interface-2 standard (SPI-2, ANSI X3.302-1998) defined Low Voltage Differential (LVD) signaling for Ultra2 SCSI implementations, reducing signaling voltages from 5 V levels to low-voltage differential pairs around a 2.5 V common mode while supporting cable lengths up to 25 meters in compatible configurations. Later, Ultra160 SCSI (SPI-3, 2000) incorporated domain validation, a procedure that tests the bus at the negotiated speed by sending patterned packets, ensuring reliable operation across the domain. These developments included critical enhancements for reliability and integration, such as refined tagged command queuing mechanisms that supported ordered, simple, head-of-queue, and untagged variants to prioritize tasks efficiently. Error recovery was bolstered through mandatory cyclic redundancy checks (CRC) in Ultra160, enabling detection and retransmission of corrupted packets to maintain data integrity in noisy environments. Parallel SCSI also facilitated RAID integration by providing robust command sets and queuing that allowed hardware controllers to manage arrays of up to 15 devices (using SCSI IDs 0-15, excluding the initiator), enabling fault-tolerant configurations like RAID 5 without host intervention. Despite these advances, parallel SCSI faced inherent limitations that contributed to its eventual decline in favor of serial alternatives. Cable length constraints, even with LVD at up to 25 meters, restricted deployment in expansive server racks, while electromagnetic interference (EMI) from parallel signaling caused crosstalk and signal degradation at higher speeds. Scalability was capped at 15 target devices per bus due to addressing constraints, limiting its suitability for large-scale storage systems. Additionally, SCSI Enclosure Services (SES), introduced in 1998 as part of the SCSI-3 command sets (NCITS 305-1998), provided monitoring of enclosure components like power supplies and fans but could not overcome the physical bottlenecks of parallel buses.

Shift to Serial and Networked Interfaces

The shift from parallel to serial SCSI interfaces in the early 2000s was driven by the physical and performance constraints of parallel buses, which reached their practical limits with speeds beyond 320 MB/s, short cable lengths of a few meters due to signal integrity issues like crosstalk and skew, and support for only up to 16 devices per bus including the controller. Attempts to develop Ultra640 SCSI, targeting 640 MB/s, were abandoned as unreliable without prohibitively expensive mitigations for signal degradation. The emergence of Serial ATA (SATA) for cost-effective, high-volume storage further accelerated this transition by demonstrating the benefits of serial signaling, such as reduced pin counts and easier cabling, prompting SCSI to evolve similarly to maintain competitiveness. The pinnacle of parallel SCSI came with the SCSI Parallel Interface-4 (SPI-4) standard, ratified in 2001 as ANSI INCITS 362 by the T10 committee, which defined the Ultra320 implementation with features like packetized transfers and cyclic redundancy checks to maximize reliability at 320 MB/s. This marked the end of significant parallel advancements, as attention turned to serial alternatives. In 2003, the ANSI T10 committee introduced Serial Attached SCSI (SAS) through INCITS 376, a point-to-point serial protocol designed to encapsulate SCSI commands over differential signaling, overcoming parallel bottlenecks while enabling compatibility with SATA drives for mixed environments. Networked SCSI protocols paralleled this serial shift, providing scalable alternatives for storage area networks (SANs). Fibre Channel, standardized by ANSI in 1994 as FC-PH, emerged as an early serial interface for mapping SCSI over high-speed fiber links, supporting distances up to 10 km and switched topologies ideal for shared enterprise storage. Complementing this, iSCSI was developed to leverage existing IP networks, with the Internet Engineering Task Force ratifying RFC 3720 (published in April 2004) to transport SCSI commands via TCP/IP, facilitating cost-effective Ethernet-based SANs without dedicated hardware. SAS milestones underscored its rapid adoption for enterprise storage. SAS-1, released in 2004, operated at up to 3 Gbit/s (300 MB/s effective per lane after 8b/10b encoding overhead) and supported up to 128 devices in basic configurations. SAS-2 advanced to 6 Gbit/s in 2009, adding capabilities like port selectors and subtractive routing while integrating expanders to scale to 65,536 devices per domain, vastly expanding enterprise connectivity options.

Standards and Protocols

SCSI Command Set

The SCSI command set defines the software interface for communication between initiators and targets, independent of the underlying transport protocol. It is specified primarily in the SCSI Primary Commands (SPC) standard, which outlines core operations applicable to all device types, and in device-specific command sets such as the SCSI Block Commands (SBC) for block storage devices. Commands are encoded in a Command Descriptor Block (CDB), a fixed- or variable-length structure sent during the COMMAND phase, with the first byte as the operation code (opcode) and the last byte as the control byte. Opcodes are grouped by CDB length, using bits 7-5 of the opcode to indicate the group code (e.g., 000b for 6-byte commands). CDB formats vary to accommodate different parameter needs and address spaces:
| Format Length | Group Codes | Key Fields | Example Opcodes | Limitations/Use |
|---|---|---|---|---|
| 6-byte | 000b (group 0) | Opcode (1 byte), parameters (e.g., 21-bit LBA, 8-bit transfer length), Control (1 byte) | 00h (TEST UNIT READY), 03h (REQUEST SENSE), 12h (INQUIRY), 08h (READ(6)), 0Ah (WRITE(6)) | Basic operations; limited LBA range (2^21 blocks) and transfer size (up to 256 blocks). Suitable for simple commands without large addresses. |
| 10-byte | 001b, 010b (groups 1–2) | Opcode (1 byte), flags (e.g., protection, caching), 32-bit LBA, 16-bit transfer length, Control (1 byte) | 28h (READ(10)), 2Ah (WRITE(10)) | Extended addressing (up to 2 TB with 512-byte blocks); common for block I/O. Includes bits for read protection, disable page out, force unit access. |
| 12-byte | 101b (group 5) | Opcode (1 byte), service action (if used), 32-bit LBA, 32-bit transfer length, Control (1 byte) | A0h (REPORT LUNS), 4Ch (LOG SELECT) | Larger transfer lengths; used for management commands like LUN reporting. |
| 16-byte | 100b (group 4) | Opcode (1 byte), flags, 64-bit LBA, 32-bit transfer length, Control (1 byte) | 88h (READ(16)), 8Ah (WRITE(16)) | Supports very large storage (up to 2^64 logical blocks); essential for modern high-capacity drives. Examples include extended READ/WRITE with 64-bit addressing. |
Variable-length CDBs (opcode 7Fh) allow additional bytes for service actions, enabling further extensions while maintaining compatibility. SCSI operations proceed through a sequence of logical phases that structure the command lifecycle: ARBITRATION (devices compete for bus access), SELECTION (initiator addresses the target using target ID and LUN), COMMAND (CDB transfer to the target), DATA IN or DATA OUT (optional data movement, e.g., blocks during READ/WRITE), STATUS (target reports a completion code, such as 00h for GOOD), and MESSAGE (exchange of control information, e.g., COMMAND COMPLETE). These phases ensure ordered execution, with the target driving phase transitions via control signals in parallel implementations or equivalent exchanges in serial variants. The COMMAND, DATA, STATUS, and MESSAGE phases collectively form the information transfer phases. Common commands span inquiry, status checks, and data access, with mandatory support in SPC for core functionality across devices. The INQUIRY command (opcode 12h, 6- or 10-byte CDB) retrieves device identification, such as vendor, product, and revision via standard or Vital Product Data pages. TEST UNIT READY (opcode 00h, 6-byte CDB) polls device readiness without data transfer, returning GOOD status if operational. REQUEST SENSE (opcode 03h, 6-byte CDB) fetches sense data for error diagnosis, including sense keys like ILLEGAL REQUEST (05h). For block devices under SBC, READ (e.g., opcode 28h for 10-byte) transfers specified logical blocks from the medium to the initiator, while WRITE (opcode 2Ah) does the reverse, both supporting caching controls like Force Unit Access (FUA) to bypass the write cache. These commands enable fundamental block access, with parameters like logical block address (LBA) and transfer length defining the scope. The protocol supports advanced features for efficient multi-command handling. Disconnect/reconnect allows the target to release the connection during long operations (e.g., seeking) and reconnect later, configured via the Disconnect-Reconnect mode page (code 02h) in MODE SELECT/SENSE commands to tune queue depth and timeouts. Linked commands chain multiple CDBs as a single task (LINK bit set in the control byte), processing them atomically until a LINKED COMMAND COMPLETE message, enhancing performance for sequential operations. Conditional completion, signaled by LINKED COMMAND COMPLETE (WITH FLAG), enables early task termination if a condition (e.g., buffer full) is met, as defined in task management functions. These features, rooted in the SCSI Architecture Model (SAM), facilitate queuing and optimize resource use across varied workloads.
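
As a concrete illustration of the 10-byte format described above, the following C sketch assembles a READ(10) CDB. The helper name build_read10 and the standalone main are illustrative; a real initiator would hand the resulting bytes to its HBA driver or an OS pass-through interface rather than print them:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Build a 10-byte READ(10) CDB (opcode 28h) for a given LBA and transfer
 * length. Field offsets follow the layout in the table above; flags such
 * as FUA live in byte 1. Transport submission is deliberately omitted. */
static void build_read10(uint8_t cdb[10], uint32_t lba,
                         uint16_t blocks, int fua) {
    memset(cdb, 0, 10);
    cdb[0] = 0x28;                   /* READ(10) opcode          */
    cdb[1] = fua ? (1u << 3) : 0;    /* FUA bit bypasses cache   */
    cdb[2] = (uint8_t)(lba >> 24);   /* 32-bit LBA, big-endian   */
    cdb[3] = (uint8_t)(lba >> 16);
    cdb[4] = (uint8_t)(lba >> 8);
    cdb[5] = (uint8_t)lba;
    cdb[7] = (uint8_t)(blocks >> 8); /* 16-bit transfer length   */
    cdb[8] = (uint8_t)blocks;
    cdb[9] = 0x00;                   /* control byte (LINK=0)    */
}

int main(void) {
    uint8_t cdb[10];
    build_read10(cdb, 0x12345678, 8, 0); /* read 8 blocks */
    for (int i = 0; i < 10; i++) printf("%02X ", cdb[i]);
    printf("\n");
    return 0;
}
```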

Parallel SCSI Specifications

Parallel SCSI specifications define the physical, electrical, and timing characteristics for the parallel implementation of the interface, enabling reliable data transfer over a shared bus. These standards, developed by the ANSI and later INCITS committees under the T10 technical group, evolved from basic asynchronous operation to high-speed synchronous modes with differential signaling. The specifications emphasize compatibility, termination requirements, and bus configuration to minimize signal reflections and crosstalk. Electrical specifications for parallel SCSI include three primary signaling methods: Single-Ended (SE), High Voltage Differential (HVD), and Low Voltage Differential (LVD). SE uses unbalanced signaling with TTL-compatible voltage levels, where logic low is 0.0–0.8 V and logic high is 2.0–5.25 V, requiring active termination at both ends of the bus to regulate voltage and prevent reflections; passive termination is discouraged for SE at higher speeds due to its susceptibility to noise over distances beyond 3 meters. HVD employs balanced differential signaling with voltage swings of ±2.0 V around a common mode of 5.0 V, allowing passive termination and supporting cable lengths up to 25 meters, though it draws higher power and is largely obsolete in modern systems. LVD, introduced for improved noise immunity and speed, uses balanced signaling with voltage swings of ±250–600 mV around a 1.25 V bias (common mode 2.5 V), mandating active termination biased at approximately 2.85 V; multimode devices detect the signaling type via the DIFFSENS line, where voltages below 0.5 V indicate SE, 0.7–1.9 V indicate LVD, and above 2.2 V indicate HVD. Timing and transfer speeds in parallel SCSI progressed through generations, starting with asynchronous operation at up to 5 MB/s for an 8-bit bus in SCSI-1, where data transfer occurs without a common clock using handshaking via the REQ/ACK lines with a minimum strobe width of 50 ns. Synchronous modes, enabled from SCSI-1 onward, use a clocking mechanism to achieve higher throughput: SCSI-1 synchronous reaches 5 MB/s (5 MHz transfer rate), Fast SCSI (SCSI-2) doubles to 10 MB/s (50 ns period), Ultra SCSI (Fast-20, SPI) attains 20 MB/s (25 ns period), Ultra2 (Fast-40, SPI-2) reaches 40 MB/s (12.5 ns period), Ultra160 (Fast-80, SPI-3) achieves 160 MB/s (wide only) with packetized transfers and domain validation at 80 MT/s, and Ultra320 (Fast-160, SPI-4) delivers 320 MB/s via double-transition (DT) clocking on both edges of the REQ/ACK signals. For 16-bit wide buses, the narrow rates effectively double due to the wider data path. Asynchronous modes remained at 3–5 MB/s across generations for compatibility with legacy devices. Configuration details for parallel SCSI buses specify narrow (8-bit) and wide (16-bit) variants. Narrow buses support 8 device IDs (0–7), with a maximum of 7 targets plus 1 initiator (typically assigned ID 7 for highest priority); wide buses expand to 16 IDs (0–15), accommodating up to 15 targets plus the initiator at ID 7. Each device must have a unique ID set via jumpers, switches, or software, and the bus requires proper termination only at the physical ends, regardless of device count. Cabling requirements vary by signaling: SE uses 25-pair flat ribbon or twisted-pair cables with overall shielding, limited to 6 meters total length; HVD permits up to 25 meters with twisted-pair cabling; LVD recommends 34-pair twisted-pair cables with individual pair shielding for lengths up to 12 meters to maintain signal integrity at higher speeds.
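
The DIFFSENS thresholds quoted above lend themselves to a small decision routine. This C sketch, a minimal illustration with hypothetical names, classifies a measured DIFFSENS voltage into SE, LVD, or HVD and treats the unspecified transition bands as indeterminate:

```c
#include <stdio.h>

/* Classify the signaling mode a multimode SCSI device would detect from
 * the DIFFSENS line voltage, using the thresholds quoted above. Voltages
 * falling in the unspecified gaps are reported as indeterminate. */
typedef enum { MODE_SE, MODE_LVD, MODE_HVD, MODE_UNKNOWN } scsi_signaling;

static scsi_signaling classify_diffsens(double volts) {
    if (volts < 0.5)                  return MODE_SE;
    if (volts >= 0.7 && volts <= 1.9) return MODE_LVD;
    if (volts > 2.2)                  return MODE_HVD;
    return MODE_UNKNOWN;  /* transition band: do not drive the bus */
}

int main(void) {
    const double probes[] = {0.2, 1.25, 3.0, 0.6};
    const char *names[] = {"SE", "LVD", "HVD", "indeterminate"};
    for (int i = 0; i < 4; i++)
        printf("%.2f V -> %s\n", probes[i], names[classify_diffsens(probes[i])]);
    return 0;
}
```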
The evolution of these specifications is documented in key standards: SCSI-1 (ANSI X3.131-1986) established basic asynchronous and synchronous operation at 5 MB/s; SCSI-2 (ANSI X3.131-1994) introduced Fast modes and command enhancements; the SCSI Parallel Interface (SPI, ANSI X3.253-1995) defined the initial SCSI-3 parallel interface; SPI-2 (INCITS 302-1998) added LVD and Ultra2; SPI-3 (INCITS 336-2000) incorporated Ultra160; and SPI-4 (INCITS 362-2002) specified Ultra320 with DT clocking. These documents outline the electrical levels, timing parameters, and configuration rules for compliant implementations.
| Standard | Max Transfer Rate (8-bit / 16-bit) | Key Features |
|---|---|---|
| SCSI-1 | 5 MB/s / 10 MB/s | Asynchronous/synchronous, SE signaling |
| SCSI-2 (Fast) | 10 MB/s / 20 MB/s | Faster synchronous, command set expansion |
| Ultra (SPI) | 20 MB/s / 40 MB/s | 20 MHz clock, SE/HVD signaling |
| Ultra2 (SPI-2) | 40 MB/s / 80 MB/s | LVD mandatory, 40 MHz |
| Ultra160 (SPI-3) | 160 MB/s (wide only) | Packetized transfers, domain validation, CRC |
| Ultra320 (SPI-4) | 320 MB/s (wide only) | Double-transition clocking, 80 MHz effective |

Serial and Protocol-Mapped SCSI

The SCSI Architecture Model (SAM) establishes a layered framework that abstracts SCSI commands from underlying transport protocols, enabling their mapping to serial and other interfaces while maintaining core SCSI semantics. This model defines key elements such as tasks, initiators, targets, and service delivery subsystems, which facilitate command encapsulation, delivery ordering, and error handling independent of the physical or link layer. By specifying how SCSI Information Units (IUs) are transported, SAM ensures consistent behavior across diverse environments, from point-to-point serial connections to networked fabrics. SAM has progressed through multiple revisions to accommodate new transport protocols and evolving requirements. SAM-2, published in 2003, introduced foundational support for serial transports by refining task routing and logical unit models. SAM-3, released in 2004, enhanced asymmetric access controls and task set features. SAM-4, issued in 2008, improved exception processing and application client interfaces. SAM-5 (INCITS 515-2016) incorporates further task management and protocol extensions for high-performance mappings. The latest revision, SAM-6 (INCITS 546-2021), builds on these with further refinements as of 2025. For serial implementations, mappings adapt SCSI commands to point-to-point or switched topologies. In Serial Attached SCSI (SAS), the Serial SCSI Protocol (SSP) encapsulates SCSI commands into frames for direct device communication, handling sequencing and acknowledgments at the transport layer. Fibre Channel employs the Fibre Channel Protocol (FCP), which packages SCSI commands into sequence sets within FC frames, supporting fabric-based routing and buffer-to-buffer credits for flow control. These mappings preserve SCSI's request-response model while leveraging serial link efficiencies. Protocol extensions further integrate SCSI with specialized transports. In Fibre Channel environments, SCSI sessions are established via Port Login (PLOGI), which negotiates parameters such as class of service and sequence initiative before FCP exchanges begin. The SCSI RDMA Protocol (SRP), standardized by T10, enables SCSI transport over RDMA-capable networks like InfiniBand, mapping SCSI tasks to RDMA operations with credit-based resource allocation to minimize CPU involvement. Interoperability across these mappings relies on a unified command set, decoupled from transport specifics. For instance, SCSI Block Commands-5 (SBC-5, INCITS 571-2025), the current revision as of 2025, defines consistent operations for block storage devices, such as read/write and format unit, applicable regardless of whether transported via SSP, FCP, or SRP. Transport layers add headers for task tagging, ordering, and credit management, ensuring reliable delivery without altering command semantics. This abstraction allows devices to support multiple protocols seamlessly, promoting broad ecosystem compatibility.
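
The layering SAM describes can be sketched as a transport-neutral command handed to a pluggable delivery routine. In the following C sketch every name (scsi_task, transport_ops, fake_send) is hypothetical and not taken from the standards; it only illustrates how one CDB could be dispatched unchanged over different mappings such as SSP or FCP:

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal sketch of SAM's layering: the application client builds a
 * transport-neutral task, and a pluggable "service delivery subsystem"
 * (SSP, FCP, SRP, ...) moves it. All identifiers here are hypothetical. */
struct scsi_task {
    uint64_t lun;
    uint8_t  cdb[16];
    uint8_t  cdb_len;
};

struct transport_ops {
    const char *name;
    int (*send_task)(const struct scsi_task *t); /* returns SCSI status */
};

/* Stand-in delivery routine; a real mapping would frame the task as an
 * SSP frame, an FCP_CMND IU, or an SRP request. */
static int fake_send(const struct scsi_task *t) {
    printf("delivering %u-byte CDB, opcode 0x%02X\n", t->cdb_len, t->cdb[0]);
    return 0x00; /* GOOD status */
}

static int execute(const struct transport_ops *ops, const struct scsi_task *t) {
    printf("via %s: ", ops->name);
    return ops->send_task(t); /* command semantics never change per transport */
}

int main(void) {
    struct scsi_task t = { .lun = 0, .cdb = {0x00}, .cdb_len = 6 }; /* TEST UNIT READY */
    struct transport_ops ssp = { "SSP (SAS)", fake_send };
    struct transport_ops fcp = { "FCP (Fibre Channel)", fake_send };
    execute(&ssp, &t);
    execute(&fcp, &t);
    return 0;
}
```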

Physical Layer and Connectivity

Parallel Interface and Cabling

Parallel SCSI employs multi-pin connectors to carry the data, control, and ground signals of the parallel bus. The primary connector types include the 50-pin Centronics-style connector for narrow (8-bit) buses, commonly used in early SCSI implementations for external connections; the 68-pin High-Density (HD) connector for wide (16-bit) buses, which supports higher data widths and is prevalent in SCSI-2 and later standards; and the 80-pin Single Connector Attachment (SCA) for hot-plug drives, integrating power, data, and configuration signals in a single connector suitable for backplane or blind-mate applications. Very High Density (VHD) variants of the 68-pin connector offer a more compact form factor while maintaining pin compatibility, often seen in legacy host bus adapters (HBAs) for interoperability with modern cabling. Cabling for parallel SCSI is differentiated by internal and external applications, with specifications designed to minimize signal noise and attenuation. Internal cabling typically uses flat ribbon cables, limited to approximately 1 meter to reduce crosstalk, while external cabling employs shielded twisted-pair designs for lengths up to 6 meters in single-ended (SE) configurations and 25 meters in low-voltage differential (LVD) modes, adhering to ANSI SPI-5 standards. The daisy-chain topology connects multiple devices in series via these cables, allowing up to 16 devices per bus, with each segment requiring precise characteristic impedance—typically 90 ohms or greater for shielded cables—to ensure signal integrity. Installation of parallel SCSI systems mandates strict adherence to termination and addressing protocols to prevent bus errors. Active or passive terminators must be installed at both physical ends of the bus, typically on the host adapter and the farthest device, using resistors compliant with SCSI-3 specifications to absorb signal reflections; internal ribbon cables often incorporate integrated termination options. Each device requires a unique SCSI ID from 0 to 15, set via jumpers or software to avoid conflicts, with higher IDs granting priority during bus access. Power budgeting is critical in multi-device chains, particularly with SCA connectors that supply +5 V and +12 V directly, necessitating checks of total draw against supply capacity to prevent overloads in enclosed systems.

Fibre Channel Implementation

Fibre Channel provides a high-speed serial interface for implementing SCSI in enterprise storage area networks (SANs), enabling reliable data transfer over distances far exceeding those of parallel SCSI. Defined by the INCITS T11 technical committee, it layers SCSI commands atop its protocol stack, with the physical and link layers handling transmission. The Fibre Channel physical layer (FC-0) supports both optical and electrical media, optimized for low-latency, lossless delivery in mission-critical environments. It utilizes multimode fiber for distances up to several hundred meters at higher speeds, single-mode fiber for kilometer-scale links, and twinaxial (twinax) copper cables for short-range, cost-effective connections typically under 10 meters. Speeds have evolved from 1 Gbit/s in early implementations to 128 Gbit/s in modern standards, with incremental generations including 2, 4, 8, 16, 32, and 64 Gbit/s; for example, FC-PI-7 specifies 64 Gbit/s per lane, with four-lane variants providing aggregated throughput. These rates support full-duplex operation, yielding effective data rates after encoding overhead, such as approximately 800 MB/s at 8 Gbit/s. Fibre Channel supports multiple topologies to suit varying network scales: point-to-point for direct device connections, arbitrated loop (FC-AL) as a legacy ring configuration limited to 127 devices for simpler setups, and switched fabric (FC-SW) for scalable networks interconnecting thousands of nodes via switches. In switched fabrics, zoning partitions the network logically for security and isolation, restricting visibility to authorized devices and preventing unauthorized access in multi-tenant environments. At the link layer, FC-1 handles encoding and decoding for clock recovery and DC balance, transitioning from 8b/10b schemes in speeds up to 8 Gbit/s to more efficient 64b/66b encoding from 16 Gbit/s onward to reduce overhead and support higher rates. FC-2 manages framing, flow control, and error detection through sequences (ordered frame sets) and exchanges (end-to-end communication sessions), ensuring ordered delivery without retransmissions in fabric topologies. The Fibre Channel Protocol (FCP) maps SCSI commands into these frames for transport, with integration established via fabric login processes. Key standards include FC-PI for physical interfaces, FC-FLA for fabric login procedures, and FC-SW for fabric services, enabling up to 2^24 (about 16 million) unique N_Port IDs for addressing in large fabrics.
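
The encoding overheads mentioned above make effective throughput straightforward to estimate. This C sketch applies the 8b/10b (80%) and 64b/66b (about 97%) efficiencies to nominal line rates, ignoring framing and flow-control overhead, and reproduces the roughly 800 MB/s figure quoted for 8 Gbit/s:

```c
#include <stdio.h>

/* Estimate effective payload bandwidth of a link from its nominal line
 * rate and coding scheme; framing and flow control are ignored, so these
 * are back-of-the-envelope figures only. */
static double effective_mb_per_s(double line_gbit, double efficiency) {
    return line_gbit * 1e9 * efficiency / 8.0 / 1e6; /* bits -> MB/s */
}

int main(void) {
    printf("8GFC  (8b/10b):  %.0f MB/s\n",
           effective_mb_per_s(8.0, 8.0 / 10.0));    /* ~800 MB/s  */
    printf("16GFC (64b/66b): %.0f MB/s\n",
           effective_mb_per_s(16.0, 64.0 / 66.0));  /* ~1940 MB/s */
    return 0;
}
```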

Serial Attached SCSI (SAS)

Serial Attached SCSI (SAS) serves as a point-to-point serial interface that replaces the parallel SCSI bus, enabling higher speeds and greater scalability in direct-attach storage environments. Developed by the T10 technical committee, SAS maintains compatibility with SCSI command sets while serializing the physical interface to support longer cables and more devices without the arbitration overhead of parallel buses. It uses differential signaling over copper cables, with each physical link (phy) operating independently to form wider ports by aggregating up to four lanes for increased throughput. The physical layer of SAS consists of point-to-point serial links, where each phy supports data rates starting at 1.5 Gbit/s in SAS-1, doubling to 3 Gbit/s in SAS-1.1, 6 Gbit/s in SAS-2, 12 Gbit/s in SAS-3 (standardized in 2013), and 22.5 Gbit/s per lane in SAS-4 (INCITS 534-2019, marketed as 24G), with products shipping as of 2025. Ports can combine up to four phys into wide configurations, such as a 4x port aggregating 90 Gbit/s of raw line rate in SAS-4 before encoding overhead. SAS expanders employ subtractive routing, where unmatched addresses are forwarded to a subtractive port, or table-based routing, allowing precise address resolution without flooding the network. Connectivity relies on daisy-chaining through expanders, which act as intelligent switches supporting table routing to connect up to 65,535 targets in a single domain, far exceeding parallel SCSI's limit of 16 devices. Common connectors include the SFF-8482 for SAS drive plugs, providing 29 pins for dual-port signal and power integration, and the internal Mini-SAS HD (SFF-8643) for high-density backplane connections supporting 12 Gbit/s per lane. SAS protocol layers are divided into PHY, link, and transport sublayers to manage serialization, error detection, and command framing. The SAS PHY layer handles out-of-band (OOB) signaling for link initialization and speed negotiation, using patterns like ALIGN primitives to align data streams. The link layer oversees connection management, flow control via credits—where initiators request credits before sending frames—and primitives such as DWS (dword synchronization) to maintain dword alignment and detect errors. The transport layer encapsulates three protocols: SSP (Serial SCSI Protocol) for native SCSI commands over SAS, STP (SATA Tunneling Protocol) for bridging SATA drives, and SMP (Serial Management Protocol) for domain discovery and configuration. Key features of SAS include dual-port architecture, where each device supports two independent ports for path redundancy and failover, enhancing reliability in storage arrays. Hot-plug capability is inherent in the connector design and OOB signaling, allowing devices to be inserted or removed without powering down the system, provided proper sequencing is followed. Additionally, SAS ensures compatibility with SATA drives through STP, enabling cost-effective mixing of SAS and SATA devices in the same domain while tunneling SATA commands over SAS links.
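
A wide port's aggregate bandwidth follows directly from the lane count, line rate, and coding efficiency. The following C sketch computes raw and approximate payload rates for a 4x port; it assumes 8b/10b coding through SAS-3 and 128b/150b for SAS-4, which are assumptions of this sketch rather than figures from the text above:

```c
#include <stdio.h>

/* Aggregate bandwidth of a SAS wide port: lanes x line rate, scaled by
 * the assumed coding efficiency per generation. Illustrative arithmetic
 * only; framing and protocol overhead are ignored. */
struct sas_gen { const char *name; double gbit_per_lane; double efficiency; };

int main(void) {
    struct sas_gen gens[] = {
        { "SAS-3 (12G, 8b/10b)",      12.0, 8.0 / 10.0 },
        { "SAS-4 (22.5G, 128b/150b)", 22.5, 128.0 / 150.0 },
    };
    int lanes = 4; /* typical 4x wide port */
    for (int i = 0; i < 2; i++) {
        double raw_gbit = gens[i].gbit_per_lane * lanes;
        double payload_gbyte = raw_gbit * gens[i].efficiency / 8.0;
        printf("%-26s raw %5.1f Gbit/s, ~%.1f GB/s payload (x%d)\n",
               gens[i].name, raw_gbit, payload_gbyte, lanes);
    }
    return 0;
}
```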

Networked and Attached Variants

Internet SCSI (iSCSI)

Internet Small Computer Systems Interface (iSCSI) is a transport protocol that enables the encapsulation and transmission of SCSI commands over TCP/IP networks, facilitating block-level storage access in IP-based environments. Developed to extend SCSI functionality beyond local buses to wide-area networks, iSCSI allows initiators—such as servers—to communicate with targets like storage arrays using standard Ethernet infrastructure. This approach promotes converged networking by leveraging existing Ethernet ecosystems, supporting distances far exceeding those of parallel SCSI while maintaining SCSI's command semantics. The protocol structure relies on Protocol Data Units (PDUs) to carry SCSI elements over TCP connections, with the default TCP port 3260 assigned for iSCSI traffic. Key PDUs include the SCSI Command PDU (opcode 0x01), which encapsulates the SCSI Command Descriptor Block (CDB), task attributes, and sequence numbers like CmdSN for ordered delivery, and the SCSI Data-In PDU (opcode 0x25), which transfers read data from target to initiator along with status information via StatSN. Other PDUs handle tasks such as SCSI Response (opcode 0x05) for command completion and Ready To Transfer (opcode 0x31) for write operations. These PDUs support immediate data, bidirectional transfers, and extensions for larger CDBs, ensuring compatibility with SCSI standards while adding iSCSI-specific headers for session control. iSCSI operation begins with login phases to establish a session: the Security Negotiation phase (CSG=0) negotiates authentication methods such as CHAP; the Login Operational Negotiation phase (CSG=1) sets parameters such as maximum data segment lengths and error recovery levels; and the Full Feature phase (CSG=3) enables full SCSI command execution. Sessions, identified by an Initiator Session ID (ISID) and Target Portal Group Tag, support multiple connections for redundancy and throughput (multiple connections per session, or MC/S). Integrity is maintained via optional digests—HeaderDigest and DataDigest using CRC32C—for error detection on headers and payloads. Error recovery operates at configurable levels: Level 0 restricts recovery to session restart; Level 1 handles within-command issues like digest errors; and Level 2 adds connection-level recovery within a session, including failover to alternate connections. At the physical and data link layers, iSCSI utilizes Ethernet from 1 Gbit/s to 100 Gbit/s, integrating seamlessly with existing network infrastructure and supporting features like VLAN tagging for traffic isolation and jumbo frames (up to 9,000 bytes MTU) to reduce overhead and improve throughput for large data transfers. Initiators can be software-based, integrated into operating systems like Microsoft Windows or Linux, or hardware-accelerated via dedicated iSCSI HBAs for offloading protocol processing. The standards were initially defined in RFC 3720 (April 2004) and consolidated in RFC 7143 (April 2014), which obsoletes the earlier version while preserving core functionality and adding clarifications for interoperability. iSCSI's primary advantages lie in IP convergence, allowing storage traffic to share infrastructure with general networking, which lowers costs by avoiding specialized hardware like Fibre Channel switches and enables scalable, long-distance deployments using commodity Ethernet components. However, it faces challenges with latency due to TCP's congestion control and retransmission mechanisms, particularly in high-traffic or distant networks, where performance may lag behind dedicated protocols without optimizations like dedicated VLANs or RDMA alternatives.
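
The fixed 48-byte Basic Header Segment makes the PDU structure concrete. The following C sketch models the SCSI Command PDU header following the field order in RFC 3720/7143; byte-order conversion and the data segment are elided, so treat it as an illustration rather than wire-ready code:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the 48-byte iSCSI Basic Header Segment for a SCSI Command PDU
 * (opcode 0x01). Multi-byte fields are big-endian on the wire; the needed
 * conversions and serialization are deliberately omitted here. */
struct iscsi_scsi_cmd_bhs {
    uint8_t  opcode;          /* 0x01, plus I (immediate) bit 0x40       */
    uint8_t  flags;           /* F (final), R (read), W (write), attrs   */
    uint8_t  rsvd[2];
    uint8_t  total_ahs_len;   /* additional header segments, in words    */
    uint8_t  data_seg_len[3]; /* 24-bit length of the data segment       */
    uint8_t  lun[8];
    uint32_t init_task_tag;   /* identifies the command in the session   */
    uint32_t exp_data_len;    /* expected data transfer length           */
    uint32_t cmd_sn;          /* CmdSN: ordered command delivery         */
    uint32_t exp_stat_sn;     /* acknowledges the target's StatSN        */
    uint8_t  cdb[16];         /* the SCSI CDB itself                     */
};

int main(void) {
    struct iscsi_scsi_cmd_bhs bhs;
    memset(&bhs, 0, sizeof bhs);
    bhs.opcode = 0x01;        /* SCSI Command PDU            */
    bhs.flags  = 0x80 | 0x40; /* F=1 (final), R=1 (read)     */
    bhs.cdb[0] = 0x28;        /* READ(10) carried in the PDU */
    printf("BHS is %zu bytes\n", sizeof bhs); /* expect 48 */
    return 0;
}
```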

SCSI RDMA Protocol (SRP)

The SCSI RDMA Protocol (SRP) is a transport protocol that enables the mapping of SCSI commands and data transfers over Remote Direct Memory Access (RDMA) networks, allowing initiators to access remote SCSI targets with minimal CPU involvement. It defines a set of request and response operations that encapsulate SCSI commands within RDMA work requests, utilizing RDMA read and write operations for data movement. A key mechanism is the use of an R_Key (remote key), which provides secure direct access to registered memory regions on the target, facilitating zero-copy transfers where data moves directly between application buffers without intermediate copying. This design is particularly suited for high-performance computing (HPC) and database environments requiring low-latency, high-throughput storage access. SRP operations begin with connection establishment using the InfiniBand Connection Manager (CM), where the initiator sends an SRP_LOGIN_REQ to the target, specifying parameters such as the maximum number of information units and supported RDMA operations; the target responds with SRP_LOGIN_RSP to accept or reject the login, forming an I_T nexus (initiator-target association). Once connected, SCSI commands are queued using tag-based mechanisms, with each request including a unique tag for ordering and identification, supporting multiple concurrent channels per nexus for parallelism. Data transfer modes include immediate data, where small payloads are embedded in the request, and unsolicited data, allowing the target to initiate RDMA writes for write commands without prior reads, enhancing efficiency in asymmetric I/O scenarios. Task management functions, such as aborting tasks, are also mapped to SRP operations to maintain SCSI semantics over the RDMA fabric. At the physical layer, SRP primarily operates over InfiniBand fabrics using copper twinaxial or optical cables, supporting data rates up to 200 Gbit/s in high-data-rate (HDR) configurations, though it is adaptable to faster variants. For Ethernet-based deployments, SRP runs over RoCE (RDMA over Converged Ethernet) using standard Ethernet cabling and switches, or iWARP (Internet Wide Area RDMA Protocol) over TCP/IP, enabling integration with existing IP networks. Multipath I/O is supported through the underlying RDMA fabric's routing capabilities, allowing load balancing and failover across multiple physical paths to improve reliability and throughput in clustered setups. The protocol is standardized in INCITS 365-2002 (reaffirmed 2017) for the original SRP, with extensions in SRP-2 (INCITS 551-2019, reaffirmed 2024) adding support for advanced RDMA features like enhanced error handling and larger transfer sizes.
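
The zero-copy mechanism SRP builds on can be shown with the standard libibverbs API: registering a buffer yields the rkey an initiator would embed in an SRP memory descriptor. The sketch below uses real verbs calls (ibv_reg_mr and friends) but trims error handling and assumes an RDMA-capable device is present:

```c
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

/* Register a buffer with the RDMA NIC so a remote SRP target can read or
 * write it directly. Error handling is trimmed; a real initiator would
 * also set up queue pairs and the CM connection, which is not shown. */
int main(void) {
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA device\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);           /* protection domain */

    size_t len = 1 << 20;                            /* 1 MiB I/O buffer  */
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
        IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ |
        IBV_ACCESS_REMOTE_WRITE);                    /* target may RDMA in/out */

    /* The initiator would now send {addr, len, mr->rkey} inside an SRP
     * memory descriptor; data then moves without further CPU copies. */
    printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);

    ibv_dereg_mr(mr); free(buf);
    ibv_dealloc_pd(pd); ibv_close_device(ctx); ibv_free_device_list(devs);
    return 0;
}
```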

USB Attached SCSI (UAS)

USB Attached SCSI (UAS) is a transport protocol that enables the transmission of SCSI commands and data over the Universal Serial Bus (USB), allowing USB-connected storage devices to utilize the full SCSI command set for efficient data access. Defined as a bridge between SCSI architectures and USB transports, UAS supports features like command queuing and out-of-order completion, making it suitable for high-performance applications such as external hard drives and solid-state drives. Unlike earlier USB mass storage protocols, UAS leverages USB's bulk transfer capabilities to pipeline multiple commands, reducing latency and improving throughput, particularly on USB 3.0 and later interfaces. The protocol operates by encapsulating SCSI commands, data, and status within dedicated USB pipes, replacing the limitations of the Bulk-Only Transport (BOT) protocol used in traditional USB mass storage. UAS employs a four-pipe model consisting of a command bulk-out pipe for sending SCSI commands from the host, a status bulk-in pipe for asynchronous task status notifications, a data-in bulk-in pipe for reading data from the device, and a data-out bulk-out pipe for writing data to the device. This separation prevents mixing of command, data, and status phases, enabling true pipelining where multiple SCSI commands can be issued and processed concurrently without waiting for prior operations to complete. SCSI commands are transported using USB bulk streams in USB 3.0 and higher, with stream IDs associating data transfers to specific commands; for example, up to 256 streams per endpoint are possible, though typical implementations use fewer for efficiency. The protocol supports scatter-gather operations for non-contiguous data transfers, allowing devices to handle complex I/O patterns directly, and includes mechanisms for autosense data delivery and task management functions like abort and reset, all compliant with SCSI Architecture Model-4 (SAM-4). In operation, a UAS device is enumerated in the USB mass storage class (base class code 0x08) with interface protocol code 0x62, distinct from BOT's 0x50. Upon connection, the host issues SCSI commands via the command pipe, each tagged with a unique identifier for queuing up to 32 or more commands depending on device capabilities. Data transfers utilize the bulk pipes with stream support, enabling simultaneous inbound and outbound operations that exploit USB's full-duplex nature. Status updates, including completion codes and sense data, are reported asynchronously via the status pipe, minimizing polling overhead. This setup supports advanced SCSI features like tagged command queuing, improving random I/O performance over BOT by allowing the device to reorder and optimize commands locally. For instance, in benchmarks with USB 3.0 drives, UAS can achieve up to 30% higher sequential read speeds compared to BOT due to reduced host intervention and better bus utilization. Physically, UAS devices connect via standard USB interfaces, including legacy Type-A, Type-B, Micro-USB, and modern Type-C connectors, with compatibility spanning USB 2.0 full-speed (12 Mbit/s) and high-speed (480 Mbit/s) modes, though optimal performance requires SuperSpeed (5 Gbit/s) or faster. The protocol integrates seamlessly with USB 3.1's SuperSpeed+ (10 Gbit/s) and USB 3.2's multi-lane configurations up to 20 Gbit/s, as well as USB4's asymmetric speeds reaching 40 Gbit/s in one direction while maintaining backward compatibility with USB 3.2. Power management follows USB standards, including selective suspend for idle devices and link power states to reduce consumption, while hot-plug support allows dynamic connection and disconnection without system reboot.
UAS targets consumer and enterprise external storage, enabling low-cost enclosures for HDDs and SSDs without needing specialized controllers. The UAS standard originated from T10 project 2095-D, culminating in ANSI INCITS 471-2010 (reaffirmed 2015 and 2020, with further reaffirmation in progress as of 2024), which specifies the transport mappings; an updated UAS-3 version, ANSI INCITS 572-2021 (ISO/IEC 14776-253:2023), extends support to USB 3.x enhancements and SAM-6. The USB Implementers Forum (USB-IF) formalized the device class aspects in the USB Attached SCSI Protocol (UASP) v1.0 specification released in 2009, ensuring interoperability through defined descriptors and capabilities negotiation. Key benefits include enhanced scalability for multi-command workloads, lower CPU utilization via asynchronous status handling, and native USB features like plug-and-play enumeration, making UAS ideal for portable storage in modern environments. These advantages position UAS as an efficient, SCSI-native alternative to BOT, particularly for bandwidth-intensive applications on high-speed USB links.
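
The tagged, pipelined model becomes clearer with the command information unit itself. The C sketch below mirrors the Command IU layout used by the Linux uas driver (IU ID 0x01, a 16-bit tag matching the stream ID, task attributes, LUN, and CDB); field positions here are illustrative rather than a normative rendering of the UAS standard:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch of a UAS Command IU, modeled on the structure in the Linux uas
 * driver. The tag ties the command to its data/status streams so several
 * commands can be in flight at once. Treat offsets as illustrative. */
struct uas_command_iu {
    uint8_t  iu_id;      /* 0x01 = command IU                         */
    uint8_t  rsvd1;
    uint16_t tag;        /* big-endian on the wire; matches stream ID */
    uint8_t  prio_attr;  /* priority + task attribute (e.g., SIMPLE)  */
    uint8_t  rsvd5;
    uint8_t  len;        /* extra CDB length beyond 16 bytes, if any  */
    uint8_t  rsvd7;
    uint8_t  lun[8];
    uint8_t  cdb[16];
};

int main(void) {
    struct uas_command_iu iu;
    memset(&iu, 0, sizeof iu);
    iu.iu_id  = 0x01;
    iu.tag    = 2;    /* queued command #2; endianness conversion elided */
    iu.cdb[0] = 0x2A; /* WRITE(10) travels over the command pipe         */
    printf("command IU is %zu bytes\n", sizeof iu); /* expect 32 */
    return 0;
}
```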

Device Management

Identification and Addressing

In parallel SCSI implementations, devices are identified using a SCSI ID, with narrow (8-bit data transfer) buses supporting IDs in the range 0-7 and wide (16-bit data transfer) buses supporting IDs in the range 0-15. These IDs are typically configured via jumpers, switches, or DIP switches on the device, though some systems allow software-based selection. During the selection phase, an initiator asserts the selection line and places a bit pattern on the data bus corresponding to the device's ID; the target responds if its ID matches, enabling connection establishment and command transfer. Serial and networked SCSI variants employ more scalable addressing schemes based on worldwide unique identifiers to support larger topologies. In Serial Attached SCSI (SAS), each port uses a 64-bit SAS address, which functions as a World Wide Name (WWN) for unique identification across the fabric. Fibre Channel implementations utilize a 24-bit N_Port ID assigned by the fabric for routing, alongside WWNs for node and port identification. For iSCSI, devices are identified by an iSCSI Qualified Name (IQN), a string in the format "iqn.yyyy-mm.naming-authority:unique-name" that ensures global uniqueness without relying on network addresses. In multi-path environments, Asymmetric Logical Unit Access (ALUA) enables initiators to query path states (e.g., active/optimized vs. active/non-optimized) across multiple target ports sharing the same logical unit, optimizing path selection and load balancing. Device discovery and LUN addressing follow standardized processes to enumerate accessible resources. In parallel SCSI, initiators perform a bus scan by sequentially attempting selection for each possible ID (0-15 on wide buses), issuing an INQUIRY command to retrieve basic device information once a target responds. For serial variants like SAS and Fibre Channel, discovery involves port login procedures—such as the SAS IDENTIFY sequence or Fibre Channel PLOGI—followed by targeted enumeration. The INQUIRY command with the EVPD bit set retrieves Vital Product Data (VPD) pages, including the Device Identification VPD page (code 83h) for detailed identifiers like WWNs and port names. LUN enumeration uses the REPORT LUNS command, which lists accessible logical units through the target port, supporting up to 256 LUNs per target in the peripheral device addressing method defined in SPC-4. Extensions in networked environments, such as NVMe over Fabrics (NVMe-oF) mapped to SCSI protocols, introduce namespace IDs as additional addressing layers, translating NVMe namespaces into SCSI LUNs while preserving discovery semantics via REPORT LUNS and VPD pages for interoperability. This ensures seamless integration in hybrid fabrics, where NVMe namespaces (typically numbered 1 to 1024) are exposed as addressable units akin to traditional SCSI LUNs.
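
Discovery as described above ultimately reduces to issuing small CDBs. This C sketch builds the 6-byte INQUIRY CDB, with the EVPD bit and page 83h selecting the Device Identification VPD page; the helper name is hypothetical, and the CDB would be submitted through an HBA or OS pass-through interface, which is not shown:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Build the 6-byte INQUIRY CDB used during discovery: with EVPD=0 it
 * fetches standard inquiry data; with EVPD=1 and page 83h it fetches the
 * Device Identification VPD page holding WWN-style identifiers. */
static void build_inquiry(uint8_t cdb[6], int evpd,
                          uint8_t page, uint16_t alloc_len) {
    memset(cdb, 0, 6);
    cdb[0] = 0x12;                     /* INQUIRY opcode            */
    cdb[1] = evpd ? 0x01 : 0x00;       /* EVPD bit selects VPD mode */
    cdb[2] = evpd ? page : 0x00;       /* VPD page code (83h = IDs) */
    cdb[3] = (uint8_t)(alloc_len >> 8);
    cdb[4] = (uint8_t)alloc_len;       /* response buffer size      */
    cdb[5] = 0x00;                     /* control byte              */
}

int main(void) {
    uint8_t cdb[6];
    build_inquiry(cdb, 1, 0x83, 255);  /* Device Identification VPD */
    for (int i = 0; i < 6; i++) printf("%02X ", cdb[i]);
    printf("\n");
    return 0;
}
```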

Device Types and Logical Units

In SCSI, peripheral device types are identified through the Peripheral Device Type (PDT) field in the response to the INQUIRY command, which specifies the general category of the device connected to a logical unit. This 5-bit field, located in bits 0 through 4 of byte 0 in the standard INQUIRY data, allows initiators to determine the appropriate command set for interaction with the device. For example, a PDT value of 0x00 indicates a direct access block device, such as a hard disk drive or solid-state drive, supporting storage operations; 0x01 denotes a sequential access device like a tape drive for linear data access; and 0x05 represents a CD/DVD device, typically for read-only optical media. These types ensure compatibility across diverse peripherals while enabling device-specific behaviors. Logical units (LUNs) in SCSI represent addressable components within a target device, virtualizing physical or abstract storage entities for initiator access. The LUN structure is defined as a 64-bit (8-byte) identifier, supporting both single-level and hierarchical addressing schemes as outlined in the SCSI Architecture Model. In single-level addressing, LUNs range from 0 to 255, using a simple 8-bit peripheral device method suitable for basic configurations where all units are directly addressed under a target. Hierarchical addressing, conversely, employs the full 8-byte format to support multi-level hierarchies, such as those in complex storage arrays, allowing up to four levels of addressing (e.g., bus, target, controller, and device). RAID arrays, for instance, are commonly presented as virtual direct access LUNs (PDT 0x00) by array controllers, aggregating multiple physical disks into a single logical block device for fault-tolerant storage. Similarly, processor logical units (PDT 0x03) are used on host bus adapters (HBAs) for management tasks, such as configuration and diagnostics, without representing storage media. The SCSI Primary Commands-6 (SPC-6) standard, published as INCITS 566-2025, governs core mechanisms for logical units, including unit attention conditions that notify initiators of asynchronous events like resets or mode changes, and reservations that enable exclusive access to prevent concurrent modifications. For block-oriented devices, SCSI Block Commands-5 (SBC-5, INCITS 571-2025) extends these with commands for read/write operations, formatting, and error recovery on direct access LUNs. Sequential access devices, such as tapes, rely on SCSI Stream Commands-5 (SSC-5, INCITS 503-2022) for tape-specific functions like space and locate operations on sequential LUNs. Modern extensions enhance LUN virtualization in networked environments. In storage area networks (SANs), virtual LUNs abstract underlying physical storage, allowing dynamic provisioning and migration across arrays while maintaining SCSI semantics. Additionally, in SCSI-over-NVMe mappings, NVMe namespaces are presented as equivalent LUNs, enabling legacy SCSI applications to interface with high-performance NVMe devices through one-to-one translations of block commands to NVMe operations.
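
Because the PDT lives in the low five bits of the first INQUIRY byte, decoding it is a one-line mask. The following C sketch, with an illustrative lookup of a few common PDT values, extracts both the peripheral qualifier and the device type from a sample response byte:

```c
#include <stdio.h>
#include <stdint.h>

/* Decode byte 0 of standard INQUIRY data: bits 4-0 carry the Peripheral
 * Device Type, bits 7-5 the peripheral qualifier. Only a few common PDT
 * values are named here. */
static const char *pdt_name(uint8_t pdt) {
    switch (pdt) {
    case 0x00: return "direct access block device (disk)";
    case 0x01: return "sequential access device (tape)";
    case 0x03: return "processor device";
    case 0x05: return "CD/DVD device";
    case 0x0D: return "enclosure services device";
    default:   return "other/unknown";
    }
}

int main(void) {
    uint8_t inquiry_byte0 = 0x05;         /* sample response byte */
    uint8_t pdt  = inquiry_byte0 & 0x1F;  /* bits 4-0             */
    uint8_t qual = inquiry_byte0 >> 5;    /* bits 7-5             */
    printf("qualifier=%u, PDT=0x%02X: %s\n", qual, pdt, pdt_name(pdt));
    return 0;
}
```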

Enclosure Services

SCSI Enclosure Services (SES) defines a standardized SCSI command set for initiators to access and manage services within enclosures housing multiple SCSI devices, such as just a bunch of disks (JBODs), enabling monitoring and control of non-device elements like environmental sensors and indicators. The protocol targets enclosure service processes that respond to diagnostic commands, facilitating fault-tolerant operations in storage systems. SES-3, approved as INCITS 518-2017 with a reaffirmation in 2022, introduced enhanced support for subenclosures and detailed element descriptors, while SES-4, published as INCITS 555 in 2020, refined these with improved protocol integration for modern interfaces. Enclosure services processes, identified by Peripheral Device Type 0Dh in INQUIRY data, operate as standalone logical units or embedded within devices like SAS expanders. Core functionality relies on the SEND DIAGNOSTIC (opcode 1Dh) and RECEIVE DIAGNOSTIC RESULTS (opcode 1Ch) commands defined in SPC-5, which transfer diagnostic page data to and from the enclosure services process for control and reporting. These commands access diagnostic pages, such as the Enclosure Status page (code 02h) for overall health and the Threshold In/Out pages for sensor limits, allowing retrieval and setting of element statuses. For example, power supply elements report voltage, current, and failure states, while SAS expander elements provide phy-level details like link errors and speed. Temperature sensors monitor ambient and component-specific values with configurable high/low critical and warning thresholds, triggering alerts via sense data (e.g., ASC/ASCQ 0B/01 for exceeded limits). Cooling elements report fan speed and status (e.g., failed or stalled), ensuring cooling integrity. SES enables precise control of visual indicators, including LED states for device slots to aid identification and maintenance; the Enclosure Control page (code 02h) issues blink commands to flash LEDs on specific slots, using control bits such as RQST IDENT to request identify patterns. Integration occurs primarily in SAS environments, where SES is embedded in expanders for topology discovery and element management, supporting JBOD configurations by reporting on connectors, slots, and expanders via the Additional Element Status page (code 0Ah). As a legacy equivalent for parallel SCSI, SCSI Accessed Fault-Tolerant Enclosures (SAF-TE), an industry specification developed by nStor and Intel in the late 1990s, provided similar in-band monitoring but with less flexibility, predating SES's broader adoption in serial protocols. SES maintains backward compatibility with SAF-TE-style elements while extending capabilities for Serial Attached SCSI (SAS) and Fibre Channel enclosures.
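
Reading an SES page reduces to one small CDB. The C sketch below assembles the RECEIVE DIAGNOSTIC RESULTS command with the PCV bit set and page code 02h for the Enclosure Status page; the helper name is hypothetical, and parsing the returned page is omitted:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Build the RECEIVE DIAGNOSTIC RESULTS CDB (opcode 1Ch) an initiator
 * sends to read an SES diagnostic page; PCV=1 selects a specific page,
 * here the Enclosure Status page (02h). */
static void build_recv_diag(uint8_t cdb[6], uint8_t page, uint16_t alloc_len) {
    memset(cdb, 0, 6);
    cdb[0] = 0x1C;                      /* RECEIVE DIAGNOSTIC RESULTS */
    cdb[1] = 0x01;                      /* PCV: page code is valid    */
    cdb[2] = page;                      /* 02h = Enclosure Status     */
    cdb[3] = (uint8_t)(alloc_len >> 8);
    cdb[4] = (uint8_t)alloc_len;        /* response buffer size       */
    cdb[5] = 0x00;                      /* control byte               */
}

int main(void) {
    uint8_t cdb[6];
    build_recv_diag(cdb, 0x02, 4096);
    for (int i = 0; i < 6; i++) printf("%02X ", cdb[i]);
    printf("\n");
    return 0;
}
```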

Applications and Legacy

Historical and Current Uses

During the 1990s, SCSI emerged as the dominant interface for storage in enterprise servers from major manufacturers, where it facilitated high-performance data access for applications in Unix-based environments. Parallel SCSI, in particular, powered RAID configurations that became standard for fault-tolerant storage in these systems, enabling scalable arrays for database and file serving workloads. On workstations, including Apple's Macintosh line, SCSI connected hard drives, scanners, and tape backups, supporting creative and scientific computing tasks until the mid-1990s when integrated IDE/ATA storage began to appear in consumer models. In personal computers, SCSI remained prevalent for high-performance add-on storage until the introduction of Serial ATA (SATA) in 2003, which offered simpler cabling and lower costs for consumer applications. As of 2025, SCSI variants continue to play essential roles in enterprise environments despite consumer decline. Serial Attached SCSI (SAS) dominates enterprise drive deployments, particularly in hyperscale cloud infrastructures from providers like AWS and Azure, where its dual-port architecture ensures redundancy and up to 2.8 GB/s per lane for mission-critical workloads. Internet Small Computer Systems Interface (iSCSI) is widely adopted in small-to-medium business (SMB) network-attached storage (NAS) setups, leveraging Ethernet for cost-effective block-level access in virtualization and backup scenarios. Fibre Channel (FC) persists in mainframe systems, such as IBM zSeries, for low-latency, high-throughput connectivity in financial and transaction processing workloads. USB Attached SCSI (UAS) enables efficient protocol handling in external solid-state drives (SSDs), improving transfer speeds over the USB Mass Storage Class in portable tools. In niche applications, SCSI supports tape libraries using Linear Tape-Open (LTO) technology for archival storage, with SCSI stream commands managing data ingestion and retrieval in systems like IBM's TS4500. Automation devices, identified by Peripheral Device Type (PDT) 0x08 as SCSI Medium Changers, control robotic cartridge handling in these libraries for efficient media swapping. Additionally, SCSI translation layers bridge legacy software to NVMe-based SSDs, mapping commands like READ and WRITE to maintain compatibility in hybrid storage arrays without full stack overhauls. SCSI's market presence has waned in consumer segments, supplanted by SATA and NVMe for everyday computing, but it remains vital in enterprise storage for reliability in data centers. The latest SAS-4 standard, ratified in 2019 and marketed as 24 Gbit/s, entered production around 2021, enhancing throughput for dense racks while backward compatibility sustains legacy investments.

Comparison with Modern Interfaces

Serial Attached SCSI (SAS), a serial evolution of the SCSI standard, maintains compatibility with SATA drives, allowing SATA devices to connect to SAS controllers while providing enhanced robustness for enterprise environments. SAS supports advanced command queuing with depths up to 254 commands and dual-port configurations for redundancy and failover, enabling multiple paths to devices that improve availability in mission-critical systems. In contrast, SATA, optimized for consumer applications, limits queuing to 32 commands via Native Command Queuing (NCQ) and typically operates with single-port interfaces, making it simpler and more cost-effective for desktop hard disk drives (HDDs) and basic storage needs. This distinction positions SAS as superior for high-I/O workloads in servers, where its higher mean time between failures (MTBF) of approximately 1.6 million hours supports 24/7 operations, compared to SATA's 1.2 million hours for intermittent use. Compared to NVMe, which leverages PCIe for direct low-latency access to solid-state storage, SCSI commands are often translated into NVMe equivalents in networked extensions like NVMe over Fabrics (NVMe-oF) to bridge legacy software stacks. For instance, SCSI READ commands map to NVMe Read operations, and status codes like CHECK CONDITION translate to NVMe-specific errors, preserving compatibility without full redesign. NVMe excels in speed, offering up to 35% higher throughput and lower latency (e.g., reducing latency from 5,871 μs to 5,089 μs, a decrease of about 13%, at 16 KiB blocks) due to its SSD-optimized design and reduced protocol overhead, making it ideal for modern flash-based storage. However, SCSI remains preferable for legacy tape and optical drives, where its broad device-agnostic command set supports archival formats like LTO tapes connected via SAS interfaces, as seen in recent solutions for long-term retention. In Ethernet-based storage, iSCSI extends SCSI over TCP/IP to compete with file-level protocols like SMB3 and NFS, but introduces encapsulation overhead that can impact efficiency in high-throughput scenarios. iSCSI's block-level access provides direct disk manipulation, yielding higher performance and reliability for database applications by avoiding file-system abstractions, unlike NFS's RPC-based file semantics or SMB3's emphasis on file sharing and session services. This block-oriented reliability ensures consistent I/O for transactional workloads, though iSCSI may lag in simplicity for shared file access compared to NFS's built-in locking. SCSI's enduring advantages include multi-initiator support, allowing multiple hosts to access shared storage for clustering, and its suitability for long-term archiving in environments reliant on tape media. Parallel SCSI variants were largely phased out by the 2010s in favor of serial protocols like SAS and Fibre Channel (FC), driven by scalability limits in parallel bus topologies. As of 2025, SAS and FC persist in enterprise storage area networks (SANs), with FC holding about 40% market share by technology due to its low-latency, high-availability features in large-scale deployments. In cloud contexts, SCSI's influence persists through iSCSI-like block storage in services such as AWS Elastic Block Store (EBS), which emulates SAN-style access for virtual machines. Recent advancements, including SAS-4's 24 Gbit/s specification ratified in 2019 and ongoing USB Attached SCSI (UAS) driver enhancements for faster USB 3.0+ transfers, underscore SCSI's adaptation to 2020s hybrid environments.
