SCSI
Small Computer System Interface (SCSI) is a set of standards developed by the American National Standards Institute (ANSI) for connecting computers to peripheral devices, such as hard disk drives, tape drives, scanners, and printers, enabling efficient data transfer and device control through a shared bus interface.[1] It defines mechanical, electrical, and functional requirements, including command sets that allow initiators (such as computers) to communicate with targets (peripheral devices) in a device-independent manner.[1]

The origins of SCSI trace back to the late 1970s, evolving from Shugart Associates' SASI interface, which was proposed to ANSI's X3T9.2 committee in 1981 and formalized as the first SCSI standard (SCSI-1, ANSI X3.131-1986) in 1986.[2] This initial version supported asynchronous and optional synchronous data transfers up to 5 MB/s, accommodating up to eight devices on a single bus with unique ID addresses from 0 to 7.[3] SCSI-2 followed in the early 1990s (ratified as ANSI X3.131-1994), introducing enhancements such as Fast transfer rates of 10 MB/s (20 MB/s in Fast Wide configurations), wider 16-bit buses, and additional command support for more device types, including processors and write-once media.[2][3] Under the ANSI-accredited INCITS T10 technical committee (formerly X3T9.2), SCSI evolved into the SCSI-3 architecture in the mid-1990s, modularizing the standards into components like the SCSI Architecture Model (SAM), command sets (e.g., SPC for primary commands), and transport protocols, which facilitated adaptations to serial interfaces.[4] Notable evolutions include Serial Attached SCSI (SAS, first standardized as INCITS 376-2003), which supports up to 65,536 devices and has scaled to link speeds of 12 Gb/s and beyond, and iSCSI for IP-based networking, both maintaining SCSI's command structure while transitioning from parallel to serial topologies for enterprise storage.[2] Despite competition from interfaces like SATA and NVMe, SCSI remains influential in high-performance computing and data centers as of 2025, with ongoing T10 projects like SCSI to NVMe Translation (SNT).[5]
Overview
Definition and Purpose
The Small Computer System Interface (SCSI) is a set of American National Standards Institute (ANSI) and InterNational Committee for Information Technology Standards (INCITS) specifications that define protocols for parallel and serial data transfer between computer hosts and peripheral devices.[6] These standards establish an input/output bus architecture for interconnecting computers with peripherals, emphasizing standardized command sets to ensure interoperability across diverse hardware.[1] At its core, SCSI separates the logical command protocol from the physical transport layer, allowing consistent operation over various interconnects.[7]

The primary purpose of SCSI is to enable high-speed, reliable communication for storage and peripheral devices, including hard disk drives, tape drives, and optical media. It supports multitasking by permitting a host to issue and queue multiple commands to devices simultaneously, often through features like tagged command queuing, which optimizes performance in environments with concurrent operations.[8] Additionally, SCSI facilitates the connection of multiple devices (up to 16 on a single bus in some configurations) via a shared topology, allowing efficient resource sharing without requiring dedicated channels for each peripheral.[9]

SCSI's fundamental architecture revolves around key roles and components: the host adapter serves as the interface connecting the host computer to the SCSI bus, typically functioning in the initiator role to originate commands.[10] Target devices, such as storage peripherals, operate in the target role to receive, process, and respond to these commands.[11] The bus topology provides a shared medium for communication, enabling multiple initiators and targets to arbitrate access dynamically.[10]

Over its development, SCSI has evolved from parallel bus implementations to serial and IP-based protocols, with the SCSI Architecture Model (SAM) ensuring backward compatibility by preserving the core command set across transport layers.[11] This progression, detailed in successive SAM generations like SAM-5, coordinates standards for diverse physical interfaces while maintaining functional consistency for legacy and modern devices.[11]
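To make these roles concrete, the following minimal Python sketch models an initiator issuing a 6-byte INQUIRY command descriptor block to a target and receiving a status code plus response data. It is an abstract illustration only; the class and method names are hypothetical and do not correspond to any real driver or library API.

```python
# Minimal, hypothetical model of the SCSI initiator/target exchange.
# Class and method names are illustrative only.

class Target:
    """A simplified SCSI target that answers a 6-byte INQUIRY CDB."""
    def __init__(self, vendor: str, product: str):
        self.vendor = vendor.ljust(8)[:8]
        self.product = product.ljust(16)[:16]

    def execute(self, cdb: bytes) -> tuple[int, bytes]:
        GOOD, CHECK_CONDITION = 0x00, 0x02          # SCSI status codes
        if cdb and cdb[0] == 0x12:                  # INQUIRY opcode
            data = bytes(8) + self.vendor.encode() + self.product.encode()
            return GOOD, data
        return CHECK_CONDITION, b""                 # unsupported command

class Initiator:
    """A simplified initiator (host bus adapter) that sends CDBs to targets."""
    def inquiry(self, target: Target) -> bytes:
        alloc_len = 36
        cdb = bytes([0x12, 0, 0, 0, alloc_len, 0])  # 6-byte INQUIRY CDB
        status, data = target.execute(cdb)
        if status != 0x00:
            raise IOError(f"INQUIRY failed with status {status:#04x}")
        return data

if __name__ == "__main__":
    disk = Target("ACME", "VirtualDisk")
    print(Initiator().inquiry(disk)[8:].decode())   # vendor and product strings
```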
Key Components and Architecture
The SCSI architecture relies on distinct hardware components to facilitate communication between hosts and peripherals. The initiator, typically implemented as a host bus adapter (HBA), is the device that originates SCSI commands and task management requests to access storage or other peripherals.[12] In contrast, the target is the recipient device, such as a disk drive or tape unit, that executes the received commands and returns data or status information.[12] For parallel SCSI implementations, terminators are required at both ends of the bus to absorb signals and prevent reflections that could corrupt data transmission.[12] In serial variants like Serial Attached SCSI (SAS), expanders serve as intelligent hubs that route connections between multiple initiator and target ports, enabling scalable topologies beyond direct point-to-point links.[12]

At the logical level, SCSI operations proceed through a sequence of bus phases that manage access, command delivery, data transfer, and completion. These include the BUS FREE phase, where the bus is idle; ARBITRATION, where devices compete for control based on priority; SELECTION, where an initiator chooses a target; RESELECTION, allowing targets to reconnect for ongoing tasks; COMMAND, for sending operation instructions; DATA IN and DATA OUT, for bidirectional transfers; STATUS, for reporting outcomes; and MESSAGE, for control exchanges like acknowledgments.[1] Parallel SCSI operates in half-duplex mode, with data flowing in one direction at a time, while serial protocols like SAS support full-duplex operation for simultaneous bidirectional transfers on separate lanes.[1] Transfers can be asynchronous, using handshaking signals for timing, or synchronous, employing a clock signal for higher throughput in compatible phases.[1]

Addressing in SCSI uses unique identifiers to distinguish devices on the bus. Each device is assigned a SCSI ID, ranging from 0 to 7 in 8-bit narrow configurations or 0 to 15 in 16-bit wide setups, with the initiator often defaulting to the highest ID (e.g., 7) for arbitration priority.[1] The bus width determines the data path: 8-bit narrow supports basic connectivity, while 16-bit wide doubles throughput by utilizing additional lines.[12] Multi-initiator environments are supported through shared bus arbitration, allowing multiple hosts to access targets concurrently while resolving conflicts via priority-based selection.[12]

Error handling ensures data integrity through layered detection and recovery mechanisms. In parallel SCSI, odd parity checking on the data bus detects single-bit errors during transfers, triggering a MESSAGE PARITY ERROR if unrecoverable.[1] Serial implementations such as SAS employ cyclic redundancy checks (CRC) for robust end-to-end validation, while later parallel generations apply CRC during domain validation sequences that test protocol support.[13] Retry protocols involve reattempting failed transfers via messages like INITIATOR DETECTED ERROR or by reinitializing phases, with domain invalidation occurring after repeated failures to prompt renegotiation.[13]
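The sketch below illustrates two of these mechanisms in Python: arbitration priority and odd parity generation. It assumes the conventional parallel-SCSI priority ordering, in which IDs 7 down to 0 outrank IDs 15 down to 8 on a wide bus; it is a simplified illustration, not driver code.

```python
# Illustrative sketch of two mechanisms described above: the device with the
# highest-priority ID wins ARBITRATION, and each 8-bit transfer carries an
# odd-parity bit so the total number of 1s (data + parity) is odd.

def arbitration_winner(competing_ids: list[int], wide: bool = False) -> int:
    """Return the SCSI ID that wins arbitration.

    On an 8-bit bus, priority runs from 7 (highest) down to 0 (lowest).
    On a 16-bit bus, IDs 7..0 outrank IDs 15..8, so ID 7 is still highest
    and ID 8 is lowest.
    """
    def priority(scsi_id: int) -> int:
        if not wide or scsi_id <= 7:
            return scsi_id + 8        # IDs 7..0 map to priorities 15..8
        return scsi_id - 8            # IDs 15..8 map to priorities 7..0
    return max(competing_ids, key=priority)

def odd_parity_bit(data_byte: int) -> int:
    """Parity bit that makes the total count of 1 bits (data + parity) odd."""
    ones = bin(data_byte & 0xFF).count("1")
    return 0 if ones % 2 == 1 else 1

if __name__ == "__main__":
    print(arbitration_winner([2, 5, 7]))              # -> 7 (initiator wins)
    print(arbitration_winner([8, 15, 3], wide=True))  # -> 3 (outranks 8..15)
    print(odd_parity_bit(0b10110000))                 # 3 ones -> parity bit 0
```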
History
Origins and Early Development
The origins of SCSI trace back to 1979, when Shugart Associates developed the Shugart Associates System Interface (SASI) as a proprietary standard for connecting hard disk drives to small computers. Led by engineer Larry Boucher, the team at Shugart aimed to create a standardized, high-performance interface that addressed the limitations of device-level interfaces such as Seagate's ST-506 and ST-412, which tied disk drives closely to specific controllers and restricted easy upgrades or multi-device sharing. SASI was designed to support up to eight devices on a single bus, emphasizing a device-independent command set to facilitate integration of intelligent peripherals without requiring custom hardware for each vendor's drive. This motivation stemmed from customer feedback highlighting the need for a more flexible, cost-effective alternative to the bespoke interfaces prevalent in the late-1970s minicomputer era.[14][15]

Efforts to standardize SASI began in 1981, when Shugart Associates, in collaboration with NCR Corporation, submitted it to the American National Standards Institute (ANSI) for approval, though initial attempts stalled over concerns about naming and scope. NCR's adoption of SASI for its systems played a key role in promoting its broader use and influencing the push toward a universal standard. By 1986, ANSI approved a modified version of SASI as the SCSI-1 standard (ANSI X3.131-1986), renaming it the Small Computer System Interface to reflect its applicability beyond storage to various peripherals, while incorporating enhancements like parity checking and synchronous data transfer options. This formalization marked SCSI's transition from a vendor-specific protocol to an industry-wide specification, enabling interoperability across different manufacturers.[1][15]

The initial SCSI-1 specifications defined an 8-bit parallel bus with single-ended signaling, supporting asynchronous transfers up to 1.5 MB/s and synchronous modes reaching a maximum of 5 MB/s at a 5 MHz clock rate, limited to cable lengths of up to 6 meters. It featured a basic command set primarily aimed at block-oriented devices like hard disks and tapes, including mandatory operations such as READ, WRITE, and REQUEST SENSE, with logical unit addressing to handle multiple devices via unique SCSI IDs from 0 to 7. The design prioritized simplicity and extensibility, using a half-duplex bus for command, data, and status phases between one initiator (typically the host) and up to seven targets.[1]

Early adoption of SCSI accelerated in the mid-1980s, with Apple introducing support in the Macintosh Plus in 1986, allowing external hard drives to expand storage for creative and professional applications. IBM integrated SCSI adapters into its PS/2 line starting in 1987, enhancing server and workstation capabilities in enterprise environments. Unix-based systems, particularly Sun Microsystems workstations from 1983 onward (initially using pre-standard SASI equivalents), embraced SCSI for its compatibility with multi-user, networked setups, solidifying its role in scientific computing and early client-server architectures.[16][17]
Parallel SCSI Evolution
The evolution of parallel SCSI began with the SCSI-2 standard, ratified by ANSI in 1994 as X3.131-1994, which built upon the foundational asynchronous and synchronous transfer modes of SCSI-1 by introducing enhancements for higher performance and better device management.[2] Key additions included Fast SCSI, enabling synchronous data transfers at 10 megatransfers per second (MT/s) for an effective throughput of 10 MB/s on narrow (8-bit) buses, and Wide SCSI, which expanded the bus to 16 bits for doubled bandwidth, achieving 20 MB/s in Fast Wide configurations.[18] SCSI-2 also formalized tagged command queuing, allowing up to 256 commands per logical unit per initiator to optimize I/O operations in multi-device environments.[19]

In the late 1990s and early 2000s, the SCSI-3 family of standards, developed under ANSI's INCITS T10 committee, further advanced parallel interfaces through the SCSI Parallel Interface (SPI) specifications, focusing on speed increases and signal integrity improvements. Ultra SCSI (also known as Fast-20) raised synchronous rates to 20 MT/s (20 MB/s narrow, 40 MB/s wide), while the SCSI Parallel Interface-2 standard (SPI-2, INCITS 302-1998) introduced Low Voltage Differential (LVD) signaling for Ultra2 SCSI implementations, reducing voltage swings from 5 V to 2.5 V differential while supporting cable lengths up to 25 meters in compatible configurations.[20] Later, Ultra160 SCSI (SPI-3, 2000) incorporated domain validation, a negotiation protocol that tests bus integrity at the highest possible speed by sending patterned data packets, ensuring reliable operation across the domain.[19]

These developments included critical enhancements for reliability and integration, such as refined tagged command queuing mechanisms that supported ordered, simple, head-of-queue, and untagged variants to prioritize tasks efficiently (illustrated in the sketch at the end of this subsection).[21] Data integrity was further bolstered by the mandatory cyclic redundancy checks (CRC) of Ultra160, enabling detection and retransmission of corrupted data packets to maintain integrity in noisy environments.[20] Parallel SCSI also facilitated RAID integration by providing robust command sets and queuing that allowed hardware controllers to manage arrays of up to 15 devices (drawn from SCSI IDs 0–15, excluding the initiator), enabling fault-tolerant configurations like RAID 5 without host intervention.[22]

Despite these advances, parallel SCSI faced inherent limitations that contributed to its eventual decline in favor of serial alternatives. Cable length constraints, even with LVD at up to 25 meters, restricted deployment in expansive server racks, while electromagnetic interference (EMI) from parallel signaling caused crosstalk and signal degradation at higher speeds.[20] Scalability was capped at 15 devices per bus due to addressing constraints, limiting its suitability for large-scale storage systems.[22] Additionally, SCSI Enclosure Services (SES), introduced in 1998 as part of the SCSI-3 command sets (INCITS 305-1998), provided monitoring of enclosure components like power supplies and fans but could not overcome the physical bottlenecks of parallel topology.
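As an illustration of the queuing refinements mentioned above, the following abstract Python sketch models how SIMPLE, ORDERED, and HEAD OF QUEUE task attributes constrain execution order. It is a conceptual model only, not pseudocode from any SCSI standard or driver.

```python
# Conceptual model of tagged-command-queuing task attributes: SIMPLE tasks
# may be reordered freely, an ORDERED task acts as a barrier, and a
# HEAD OF QUEUE task jumps ahead of everything already queued.

from collections import deque

class TaskQueue:
    """Conceptual model of a per-logical-unit task set."""
    def __init__(self):
        self._queue = deque()                          # (attribute, tag) pairs

    def enqueue(self, attribute: str, tag: int) -> None:
        if attribute == "HEAD_OF_QUEUE":
            self._queue.appendleft((attribute, tag))   # jumps the line
        else:                                          # SIMPLE, ORDERED, UNTAGGED
            self._queue.append((attribute, tag))

    def reorderable_groups(self) -> list[list[int]]:
        """Group tags the target may freely reorder: SIMPLE/untagged tasks
        between barriers form one group; each ORDERED or HEAD OF QUEUE task
        is its own single-element group and must stay in place."""
        groups, current = [], []
        for attribute, tag in self._queue:
            if attribute in ("ORDERED", "HEAD_OF_QUEUE"):
                if current:
                    groups.append(current)
                    current = []
                groups.append([tag])
            else:
                current.append(tag)
        if current:
            groups.append(current)
        return groups

if __name__ == "__main__":
    q = TaskQueue()
    for attr, tag in [("SIMPLE", 12), ("SIMPLE", 7), ("ORDERED", 20),
                      ("SIMPLE", 3), ("HEAD_OF_QUEUE", 1)]:
        q.enqueue(attr, tag)
    print(q.reorderable_groups())   # [[1], [12, 7], [20], [3]]
```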
Shift to Serial and Networked Interfaces
The shift from parallel to serial SCSI interfaces in the early 2000s was driven by the physical and performance constraints of parallel buses, which reached their practical limits at around 320 MB/s, with short cable lengths of a few meters due to signal integrity issues like skew and crosstalk, and support for only up to 16 devices per bus including the controller.[23] Attempts to develop Ultra640 SCSI, targeting 640 MB/s, were abandoned as unreliable without prohibitively expensive mitigations for signal degradation.[23] The emergence of Serial ATA (SATA) for cost-effective, high-volume storage further accelerated this transition by demonstrating the benefits of serial signaling, such as reduced pin counts and easier scalability, prompting enterprise SCSI to evolve similarly to maintain competitiveness.[24]

The pinnacle of parallel SCSI came with the SCSI Parallel Interface-4 (SPI-4) standard, completed by the T10 committee in 2001 and published as ANSI INCITS 362-2002, which defined the Ultra320 implementation with features like packetized transfers and cyclic redundancy checks to maximize reliability at 320 MB/s.[25] This marked the end of significant parallel advancements, as attention turned to serial alternatives. In 2003, the ANSI T10 committee introduced Serial Attached SCSI (SAS) through INCITS 376, a point-to-point serial protocol designed to encapsulate SCSI commands over differential signaling, overcoming parallel bottlenecks while enabling compatibility with SATA drives for mixed environments.[26]

Networked SCSI protocols paralleled this serial shift, providing scalable alternatives for storage area networks (SANs). Fibre Channel, standardized by ANSI in 1994 as FC-PH, emerged as an early serial interface for mapping SCSI over high-speed fiber links, supporting distances up to 10 km and switched topologies ideal for shared enterprise storage.[27] Complementing this, iSCSI was developed to leverage existing IP networks, with the Internet Engineering Task Force approving it in 2003 and publishing it as RFC 3720 in 2004 to transport SCSI commands via TCP/IP, facilitating cost-effective Ethernet-based SANs without dedicated hardware.[28]

SAS milestones underscored its rapid adoption for direct-attached storage. SAS-1, released in 2004, operated at 3 Gbit/s (300 MB/s effective per link after encoding overhead) and supported up to 128 devices in basic configurations.[29] SAS-2 advanced to 6 Gbit/s in 2009, adding capabilities like port selectors and subtractive routing while integrating expanders to scale to 65,536 devices per domain, vastly expanding enterprise connectivity options.[29][30]
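The per-link figures above follow from the line code: assuming the 8b/10b encoding used by early SAS generations, only 8 of every 10 bits on the wire carry data, as the short Python calculation below shows.

```python
# Worked example of the "encoding overhead" arithmetic mentioned above,
# assuming an 8b/10b line code: every 8 data bits are sent as 10 bits on
# the wire, so usable throughput is 80% of the raw line rate.

def effective_mb_per_s(line_rate_gbit: float) -> float:
    data_bits_per_s = line_rate_gbit * 1e9 * 8 / 10   # strip 8b/10b overhead
    return data_bits_per_s / 8 / 1e6                  # bits -> megabytes

for rate in (3.0, 6.0):
    print(f"SAS at {rate:.0f} Gbit/s -> {effective_mb_per_s(rate):.0f} MB/s per link")
# SAS at 3 Gbit/s -> 300 MB/s per link
# SAS at 6 Gbit/s -> 600 MB/s per link
```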
Standards and Protocols
SCSI Command Set
The SCSI command set defines the software interface for communication between initiators and targets, independent of the underlying transport protocol. It is specified primarily in the SCSI Primary Commands (SPC) standard, which outlines core operations applicable to all device types, and in device-specific command sets such as the SCSI Block Commands (SBC) for block storage devices. Commands are encoded in a Command Descriptor Block (CDB), a fixed- or variable-length structure sent during the COMMAND phase, with the first byte as the operation code (opcode) and the last byte as the control byte. Opcodes are grouped by CDB length, using bits 7-5 of the opcode to indicate the group code (e.g., 000b for 6-byte commands).[31][32] CDB formats vary to accommodate different parameter needs and address spaces, as summarized in the following table; a brief CDB-construction sketch follows the table.
| Format Length | Group Codes | Key Fields | Example Opcodes | Limitations/Use |
|---|---|---|---|---|
| 6-byte | 000b | Opcode (1 byte), parameters (e.g., 21-bit LBA, 8-bit transfer length), Control (1 byte) | 00h (TEST UNIT READY), 03h (REQUEST SENSE), 12h (INQUIRY), 08h (READ(6)), 0Ah (WRITE(6)) | Basic operations; limited LBA (2^21 blocks, about 1 GB with 512-byte blocks) and transfer size (up to 256 blocks). Suitable for simple commands without large addresses.[31] |
| 10-byte | 001b–010b (groups 1–2) | Opcode (1 byte), flags (e.g., protection, caching), 32-bit LBA, 16-bit transfer length, Control (1 byte) | 28h (READ(10)), 2Ah (WRITE(10)) | Extended addressing (up to 2 TB with 512-byte blocks); common for block I/O. Includes bits for read protection, disable page out, force unit access.[32] |
| 12-byte | 101b | Opcode (1 byte), service action (if used), 32-bit LBA, 32-bit transfer length, Control (1 byte) | A0h (REPORT LUNS), A8h (READ(12)) | Larger transfer lengths; used for commands such as LUN reporting and large block transfers.[31] |
| 16-byte | 100b | Opcode (1 byte), flags, 64-bit LBA, 32-bit transfer length, Control (1 byte) | 88h (READ(16)), 8Ah (WRITE(16)) | Supports very large capacities (64-bit addressing of 2^64 logical blocks); essential for modern high-capacity drives. Examples include extended READ/WRITE with 64-bit addressing.[32] |
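The table's field layouts can be made concrete with a short Python sketch that packs READ(10) and READ(16) CDBs as big-endian byte strings; flag, group number, and protection fields are left at zero here for simplicity.

```python
# Minimal sketch of building the 10- and 16-byte READ CDBs from the table
# above: opcode first, big-endian LBA and transfer length, control byte last.

import struct

def read10_cdb(lba: int, blocks: int) -> bytes:
    # opcode 28h, flags, 32-bit LBA, group number, 16-bit length, control
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def read16_cdb(lba: int, blocks: int) -> bytes:
    # opcode 88h, flags, 64-bit LBA, 32-bit length, group number, control
    return struct.pack(">BBQIBB", 0x88, 0, lba, blocks, 0, 0)

cdb10 = read10_cdb(lba=0x12345678, blocks=8)
cdb16 = read16_cdb(lba=0x123456789A, blocks=8)
assert len(cdb10) == 10 and len(cdb16) == 16
print(cdb10.hex())   # 28001234567800000800
print(cdb16.hex())   # 8800000000123456789a000000080000
```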
Parallel SCSI Specifications
Parallel SCSI specifications define the physical, electrical, and timing characteristics for the parallel implementation of the SCSI interface, enabling reliable data transfer over a shared bus. These standards, developed by the ANSI and later INCITS committees under the T10 technical group, evolved from basic asynchronous operation to high-speed synchronous modes with differential signaling. The specifications emphasize compatibility, termination requirements, and bus configuration to minimize signal reflections and electromagnetic interference.[11]

Electrical specifications for parallel SCSI include three primary signaling methods: Single-Ended (SE), High Voltage Differential (HVD), and Low Voltage Differential (LVD). SE uses unbalanced signaling with TTL-compatible voltage levels, where logic low is 0.0–0.8 V and logic high is 2.0–5.25 V, and requires termination at both ends of the bus to regulate voltage and prevent reflections; passive termination, permitted in early SCSI-1 installations, is unsuitable at higher speeds because of its susceptibility to noise over distances beyond 3 meters. HVD employs balanced differential signaling with voltage swings of ±2.0 V around a common mode of 5.0 V, allowing passive termination and supporting cable lengths up to 25 meters, though it draws higher power and is largely obsolete in modern systems. LVD, introduced for improved noise immunity and speed, uses balanced differential signaling with voltage swings of ±250–600 mV around a 1.25 V bias (common mode 2.5 V), mandating active termination via a linear voltage regulator at approximately 2.85 V. Multimode devices detect the signaling type via the DIFFSENS line, where voltages below 0.5 V indicate SE, 0.7–1.9 V indicate LVD, and voltages above 2.2 V indicate HVD.[34][35][36][37]

Timing and transfer speeds in parallel SCSI progressed through generations, starting with asynchronous operation at up to 5 MB/s for an 8-bit bus in SCSI-1, where data transfer occurs without a clock signal using handshaking via REQ/ACK lines with a minimum strobe width of 50 ns. Synchronous modes, enabled from SCSI-1 onward, use a clocking mechanism to achieve higher throughput: SCSI-1 synchronous reaches 5 MB/s (200 ns transfer period), Fast SCSI (SCSI-2) doubles to 10 MB/s (100 ns period), Ultra SCSI (Fast-20, SPI) attains 20 MB/s (50 ns period), Ultra2 (Fast-40, SPI-2) reaches 40 MB/s (25 ns period), Ultra160 (Fast-80, SPI-3) achieves 160 MB/s (wide only) at 80 MT/s with packetized transfers and domain validation, and Ultra320 (SPI-4) delivers 320 MB/s via double-transition (DT) clocking on both edges of the REQ/ACK signals. For 16-bit wide buses, these rates effectively double due to parallel data paths. Asynchronous modes remained at 3–5 MB/s across generations for compatibility with legacy devices.[38][11][39]

Configuration details for parallel SCSI buses specify narrow (8-bit) and wide (16-bit) variants. Narrow buses support 8 device IDs (0–7), with a maximum of 7 targets plus 1 initiator (typically assigned ID 7 for highest arbitration priority); wide buses expand to 16 IDs (0–15), accommodating up to 15 targets plus the initiator at ID 7. Each device must have a unique ID set via jumpers, switches, or software, and the bus requires proper termination only at the physical ends, regardless of device count.
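The DIFFSENS detection logic described above can be sketched directly from the quoted thresholds; the following Python function is an illustrative classifier, not firmware from any transceiver.

```python
# Sketch of DIFFSENS-based mode detection, using the voltage thresholds quoted
# above (below 0.5 V -> SE, 0.7-1.9 V -> LVD, above 2.2 V -> HVD); voltages
# between those bands are treated as indeterminate guard regions.

def diffsens_mode(volts: float) -> str:
    if volts < 0.5:
        return "SE"                    # single-ended bus segment
    if 0.7 <= volts <= 1.9:
        return "LVD"                   # low-voltage differential
    if volts > 2.2:
        return "HVD"                   # high-voltage differential (legacy)
    return "INDETERMINATE"             # guard bands between defined ranges

for v in (0.2, 1.25, 3.0, 0.6):
    print(f"{v:.2f} V -> {diffsens_mode(v)}")
# 0.20 V -> SE
# 1.25 V -> LVD
# 3.00 V -> HVD
# 0.60 V -> INDETERMINATE
```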
Cable requirements vary by signaling: SE uses 25-pair flat ribbon or twisted-pair cables with overall shielding, limited to 6 meters total length; HVD permits up to 25 meters with twisted-pair cabling; LVD recommends 34-pair twisted-pair cables with individual pair shielding for lengths up to 12 meters to maintain signal integrity at higher speeds.[40][39]

The evolution of these specifications is documented in key standards: SCSI-1 (ANSI X3.131-1986) established basic asynchronous and synchronous operation at 5 MB/s; SCSI-2 (ANSI X3.131-1994) introduced Fast modes and command enhancements; the SCSI Parallel Interface (SPI, ANSI X3.253-1995) defined the initial SCSI-3 physical layer; SPI-2 (INCITS 302-1998) added LVD and Ultra2; SPI-3 (INCITS 336-2000) incorporated Ultra160; and SPI-4 (INCITS 362-2002) specified Ultra320 with DT clocking. These documents define the electrical parameters, timing values, and configuration rules for compliant implementations, summarized in the table below; a worked transfer-rate calculation follows the table.[11]
| Standard | Max Transfer Rate (8-bit / 16-bit) | Key Features |
|---|---|---|
| SCSI-1 | 5 MB/s (8-bit only) | Asynchronous/synchronous, SE signaling |
| SCSI-2 (Fast) | 10 MB/s / 20 MB/s | Faster synchronous, command set expansion |
| Ultra (SPI) | 20 MB/s / 40 MB/s | 20 MHz clock, SE/HVD signaling |
| Ultra2 (SPI-2) | 40 MB/s / 80 MB/s | LVD mandatory, 40 MHz |
| Ultra160 (SPI-3) | 160 MB/s (wide only) | Packetized, cyclic redundancy check |
| Ultra320 (SPI-4) | 320 MB/s (wide only) | Double-transition clocking, 80 MHz effective |
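As a rough consistency check on the table, the peak rates can be reproduced from the bus clock, the number of transitions per clock (two for double-transition clocking), and the bus width in bytes; the short Python sketch below assumes those three factors fully determine the burst rate.

```python
# Worked check of the rates in the table above: throughput equals the bus
# clock in MHz, times transitions per clock cycle (1 for single-transition,
# 2 for DT clocking), times the bus width in bytes.

def scsi_rate_mb_s(clock_mhz: float, transitions: int, width_bytes: int) -> float:
    return clock_mhz * transitions * width_bytes

print(scsi_rate_mb_s(10, 1, 1))   # Fast SCSI, narrow    -> 10 MB/s
print(scsi_rate_mb_s(20, 1, 2))   # Ultra, wide          -> 40 MB/s
print(scsi_rate_mb_s(40, 2, 2))   # Ultra160 (DT), wide  -> 160 MB/s
print(scsi_rate_mb_s(80, 2, 2))   # Ultra320 (DT), wide  -> 320 MB/s
```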