Parallel SCSI
Parallel SCSI, formally known as the SCSI Parallel Interface (SPI), is a set of American National Standards Institute (ANSI) standards defining a parallel bus architecture for high-speed data transfer between computers and peripheral devices such as hard disk drives, tape drives, and optical storage.[1] It employs a shared bus topology with multiple electrical connections that transmit data bits simultaneously across 8-bit (narrow) or 16-bit (wide) data paths, supporting both asynchronous and synchronous transfer modes to achieve rates from 5 MB/s in early implementations to 320 MB/s in Ultra320 variants.[2] The interface uses a request-acknowledge (REQ/ACK) handshake protocol for reliable communication, incorporating control signals like Busy (BSY), Select (SEL), Control/Data (C/D), Input/Output (I/O), and Message (MSG) to manage bus phases such as arbitration, selection, command, data, status, and message exchange.[1] Introduced in the mid-1980s as part of the original SCSI-1 specification (ANSI X3.131-1986), Parallel SCSI evolved to address growing demands for faster and more reliable storage interconnects in enterprise environments.[3] Subsequent revisions, including SCSI-2 (ANSI X3.131-1994), which added wide transfers and fast synchronous modes up to 10 MB/s, and SCSI-3 standards like SPI-2 through SPI-5 (T10/1525D), introduced enhancements such as low-voltage differential (LVD) signaling for longer cable lengths up to 25 meters, packetized protocols for improved efficiency, and cyclic redundancy check (CRC-32) error detection in double-transition data phases.[2] These developments enabled support for up to 16 devices on a single bus (or 8 in narrow configurations) with features like tagged command queuing, disconnect/reconnect for multitasking, and quick arbitration and selection (QAS) to reduce latency.[3] The architecture ensures backward compatibility with earlier SCSI generations, allowing mixed-device environments while gracefully rejecting unsupported 
extensions, and relies on self-configuring software for automatic device detection and addressing via SCSI IDs from 0 to 7 (or 15 in wide mode).[1] Signaling options include single-ended (SE) for shorter distances up to 6 meters or differential (HVD/LVD) for noise-resistant, longer runs, with connectors standardized as 50-pin or 68-pin high-density types.[2] Although largely superseded by serial interfaces like Serial Attached SCSI (SAS) and Serial ATA (SATA) in modern systems due to scalability limits in parallel topologies, Parallel SCSI remains notable for its role in pioneering intelligent, multi-device storage networking in servers and workstations through the 1990s and early 2000s.[3]
History
Origins
The origins of Parallel SCSI trace back to 1979, when Larry Boucher, an engineer at Shugart Associates, led the development of the Shugart Associates System Interface (SASI), a parallel bus designed to connect small computers to storage peripherals like hard disk drives.[4] SASI emerged in response to the limitations of proprietary interfaces prevalent in the late 1970s, such as Seagate's ST-506, which restricted connectivity to single-device hard drives with fixed capacities and lacked support for multiple peripherals or broader interoperability.[5] This innovation addressed the growing demand for a more flexible, device-independent standard amid the expansion of minicomputers and early personal computing, where affordable VLSI controllers enabled intelligent peripherals to handle their own error correction and operations, reducing reliance on host-based controllers.[5] In 1981, Shugart Associates and NCR Corporation proposed SASI to the American National Standards Institute (ANSI) for broader adoption, leading to the formation of the X3T9.2 technical committee in 1981 to refine and extend the interface into an open standard.[6] The committee, chaired by figures like William E. 
Burr, focused on creating a peer-to-peer I/O bus that supported up to eight devices (expandable later), asynchronous and synchronous data transfers, and commands for diverse peripherals, motivated by the need to prevent market fragmentation in the burgeoning multibillion-dollar storage industry.[5] Boucher, often credited as a key architect, left Shugart in 1981 to co-found Adaptec, which became a major producer of SASI and early SCSI host adapters.[4] By 1986, the X3T9.2 committee had transformed SASI into the Small Computer System Interface (SCSI), ratified as ANSI X3.131-1986 (SCSI-1), which renamed the protocol to remove vendor-specific branding while retaining core SASI compatibility.[7] This standard emphasized a single physical bus for efficient multi-tasking environments, supporting moderate cable lengths up to 6 meters and features like self-configuring devices via the mandatory INQUIRY command.[5] Early implementations targeted minicomputers from vendors like DEC and Sun Microsystems, as well as emerging personal computers such as the Apple Macintosh and IBM PC compatibles, primarily for attaching hard disk drives and tape drives to enable reliable, high-capacity data storage in professional and small business settings.[4]
Standardization and Evolution
The standardization of Parallel SCSI began with the American National Standards Institute (ANSI) ratifying the SCSI-1 specification, designated as X3.131-1986, in 1986. This initial standard established the foundational parallel interface for connecting computers to peripheral devices, supporting asynchronous and synchronous data transfers at up to 5 MB/s over an 8-bit bus.[7] SCSI-2 followed, with an initial publication in August 1990 under ANSI X3.T9.2/86-109 and final approval in January 1994 as X3.131-1994, introducing enhancements such as command queuing, wider 16-bit bus options, and support for faster synchronous transfers up to 10 MB/s. The SCSI-3 architecture, initiated in the mid-1990s, marked a shift toward modular specifications by separating the protocol into distinct layers; the SCSI Parallel Interface (SPI) profile, first defined in X3.253-1995, focused on the physical and electrical characteristics of the parallel bus. Subsequent SPI revisions—SPI-2 (INCITS 302-1998), SPI-3 (INCITS 336-2000), SPI-4 (INCITS 362-2002), and SPI-5 (INCITS 367-2003)—extended through the early 2000s, incorporating serial extensions within the broader SCSI framework while maintaining parallel variants.[7][8][9] Key drivers in this evolution included the demand for higher data rates, achieved through faster clock speeds and synchronous modes; expansion to wider buses for increased throughput; and the adoption of low-voltage differential (LVD) signaling over single-ended (SE) to improve noise immunity, cable lengths up to 25 meters, and overall reliability in enterprise environments. 
These advancements culminated in Ultra-640 (SPI-5), ratified in 2003, which supported transfer rates up to 640 MB/s using a 16-bit LVD bus with double-edge clocking.[10][11] Following the 2003 ratification of SPI-5, development of parallel SCSI variants declined as serial alternatives gained prominence; the introduction of Serial Attached SCSI (SAS) under INCITS 376-2003 offered comparable performance with simpler cabling and better scalability, while Serial ATA (SATA) addressed consumer needs, effectively supplanting parallel interfaces in new designs by the mid-2000s.[12][13]
Standards
Comparison Table
The following table provides a comparison of key parameters across major Parallel SCSI standards, focusing on maximum synchronous transfer rates, bus widths, primary signaling types, maximum number of devices (including the host adapter), and maximum cable lengths under typical configurations (e.g., for low device counts and appropriate termination). Data is derived from official specifications and manufacturer documentation.[2][14]
| Standard | Max Transfer Rate (MB/s) | Bus Width | Signaling Type | Max Devices | Max Cable Length |
|---|---|---|---|---|---|
| SCSI-1 | 5 | 8-bit | SE | 8 | 6 m |
| SCSI-2 (Fast) | 10 | 8-bit | SE | 8 | 3 m |
| SCSI-2 (Fast Wide) | 20 | 16-bit | SE | 16 | 3 m |
| Ultra SCSI | 20 | 8-bit | SE | 8 | 1.5 m |
| Wide Ultra SCSI | 40 | 16-bit | SE | 16 | 1.5 m |
| Ultra-2 | 80 | 16-bit | LVD | 16 | 12 m |
| Ultra-160 | 160 | 16-bit | LVD | 16 | 12 m |
| Ultra-320 | 320 | 16-bit | LVD | 16 | 12 m |
| Ultra-640 | 640 | 16-bit | LVD | 16 | 0.5 m (typical internal/multi-device) |
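The peak rates above follow from simple arithmetic: bytes per transfer multiplied by transfers per second, with double-transition (DT) clocking (Ultra-160 and later) moving data on both edges of the REQ/ACK signal. A minimal Python sketch of that calculation (illustrative only; the function name is not drawn from any standard):

```python
def scsi_bandwidth_mbs(mega_transfers_per_sec, bus_width_bits, double_transition=False):
    """Peak synchronous bandwidth in MB/s: transfers/s times bytes per transfer.

    With double-transition (DT) clocking, data moves on both REQ/ACK
    edges, doubling the effective transfer rate for a given clock.
    """
    transfers = mega_transfers_per_sec * (2 if double_transition else 1)
    return transfers * (bus_width_bits // 8)

# Fast SCSI: 10 MT/s on an 8-bit bus -> 10 MB/s
assert scsi_bandwidth_mbs(10, 8) == 10
# Wide Ultra SCSI: 20 MT/s on a 16-bit bus -> 40 MB/s
assert scsi_bandwidth_mbs(20, 16) == 40
# Ultra-320: 80 MHz clock with DT on a 16-bit bus -> 320 MB/s
assert scsi_bandwidth_mbs(80, 16, double_transition=True) == 320
```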
SCSI-1
SCSI-1, ratified as American National Standard ANSI X3.131-1986 on June 23, 1986, established the foundational architecture for parallel Small Computer System Interface (SCSI) technology, enabling communication between initiators such as host adapters and targets like disk drives or tape units.[5] This standard introduced a client-server-like initiator/target model, where initiators issue commands and targets respond, supporting up to eight devices on the bus through unique SCSI IDs while allowing daisy-chaining for connectivity.[5] The protocol emphasized simplicity for early personal computers and workstations, prioritizing reliability in local storage environments over high-speed networking. The bus operated as an 8-bit narrow parallel interface using single-ended (SE) signaling, which transmitted data as voltage levels referenced to ground and limited the maximum cable length to 6 meters to mitigate signal noise and reflections.[15] Asynchronous transfer mode was mandatory, providing reliable handshaking at speeds up to 1.5 MB/s, while synchronous mode was optional and achieved up to 5 MB/s through clocked data bursts, though adoption varied due to implementation complexity.[5] Connections utilized a 50-pin Centronics-style connector, resembling the parallel printer interface but adapted for bidirectional data paths including control signals, data lines, and optional parity for basic error detection.[15] The basic command set focused on essential operations for direct-access devices, mandating six core commands: Test Unit Ready (00h) to check device readiness, Inquiry (12h) to retrieve device identification and capabilities, Request Sense (03h) for error status, Read (08h/28h) for data retrieval, Write (0Ah/2Ah) for data storage, and Format Unit (04h) for media initialization.[5] These commands used fixed-length command descriptor blocks (CDBs), chiefly the 6-byte format (the extended 28h and 2Ah variants use 10-byte CDBs), without support for command queuing or linked commands, enforcing strict sequential execution.[5] 
Notably, SCSI-1 lacked built-in error correction mechanisms beyond optional odd parity on the bus and sense data reporting; medium-level errors relied on device-specific error-correcting codes (ECC), but the interface itself provided no retransmission or forward error correction.[5] These limitations, including the absence of advanced error handling and the 8-bit constraint, positioned SCSI-1 as a robust but basic standard for 1980s storage needs, later addressed in SCSI-2 through enhancements like expanded command sets.[15]
SCSI-2
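The fixed 6-byte CDB layout described above can be illustrated by packing a READ command (opcode 08h), assuming the standard field positions: a 3-bit LUN and a 21-bit logical block address. The helper name below is hypothetical, not part of any SCSI specification:

```python
def build_read6_cdb(lba, length, lun=0):
    """Pack a 6-byte READ(6) CDB (opcode 08h).

    Byte 0: opcode; byte 1: 3-bit LUN plus the top 5 bits of the 21-bit
    LBA; bytes 2-3: remaining LBA bits; byte 4: transfer length, where
    0 conventionally encodes 256 blocks; byte 5: control (0 here).
    """
    if not 0 <= lba < 1 << 21:
        raise ValueError("READ(6) addresses only 21-bit LBAs")
    if not 1 <= length <= 256:
        raise ValueError("READ(6) transfers 1 to 256 blocks")
    if not 0 <= lun <= 7:
        raise ValueError("parallel SCSI LUN field is 3 bits")
    return bytes([
        0x08,                               # operation code: READ(6)
        (lun << 5) | ((lba >> 16) & 0x1F),  # LUN + LBA bits 20-16
        (lba >> 8) & 0xFF,                  # LBA bits 15-8
        lba & 0xFF,                         # LBA bits 7-0
        length & 0xFF,                      # 256 wraps to 0 by convention
        0x00,                               # control byte
    ])

cdb = build_read6_cdb(lba=0x12345, length=16)
assert cdb == bytes([0x08, 0x01, 0x23, 0x45, 0x10, 0x00])
```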
SCSI-2, ratified as ANSI X3.131-1994, represented a significant evolution from its predecessor by enhancing reliability, performance, and interoperability for parallel bus operations in computing environments.[16] This standard introduced mandatory features that addressed limitations in error detection and multi-device coordination, while optional extensions allowed for higher throughput in demanding applications.[17] A key performance improvement was the support for synchronous data transfer rates up to 10 MB/s on an 8-bit bus (Fast SCSI), achieved through a minimum transfer period of 100 ns.[17] Additionally, an optional 16-bit wide mode (Wide SCSI) doubled the bandwidth; combined with Fast timing it supported up to 20 MB/s, which facilitated faster data movement between hosts and peripherals like hard drives and tape units.[10] These enhancements were complemented by defined electrical specifications, including single-ended signaling for shorter cable lengths up to 6 meters and differential signaling for extended runs up to 25 meters, alongside provisions for optical interfaces to support emerging fiber-based connections.[17] To bolster reliability, SCSI-2 mandated parity checking with odd parity on the data bus, ensuring detection of transmission errors across all connected devices.[17] It also added command queuing capabilities, including tagged queuing with up to 256 commands per initiator per logical unit using tags such as HEAD OF QUEUE, SIMPLE QUEUE, and ORDERED QUEUE, which allowed devices to manage and prioritize multiple pending operations efficiently.[17] SCSI-2 also incorporated the Common Command Set (CCS), a de facto industry extension of SCSI-1 comprising 18 essential commands like INQUIRY and READ for broad device support, along with fixed-block addressing conventions for standardized block-level access to direct-access devices.[17] SCSI-2 supported a maximum of 8 devices on a narrow (8-bit) bus or 16 devices on a wide (16-bit) configuration, with one reserved as the 
initiator.[10] Bus access was governed by formalized arbitration and selection phases: during arbitration, devices with higher priority (based on SCSI ID) gained control within 10 microseconds or less, followed by the selection phase where the initiator addressed a specific target via the SEL line and SCSI ID bits.[17] These phases ensured orderly multi-device operation while minimizing contention on shared buses.[17]
SCSI-3 and SPI Series
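Tagged queuing works by attaching a two-byte queue tag message to each command during the message-out phase. The sketch below uses the SCSI-2 message codes (SIMPLE QUEUE TAG 20h, HEAD OF QUEUE TAG 21h, ORDERED QUEUE TAG 22h); the helper function itself is hypothetical:

```python
# Two-byte queue tag messages defined by SCSI-2: a message code
# followed by a one-byte tag identifying the queued command.
SIMPLE_QUEUE_TAG = 0x20
HEAD_OF_QUEUE_TAG = 0x21
ORDERED_QUEUE_TAG = 0x22

def queue_tag_message(kind, tag):
    """Build the MESSAGE OUT bytes that attach a queue tag to a command."""
    if kind not in (SIMPLE_QUEUE_TAG, HEAD_OF_QUEUE_TAG, ORDERED_QUEUE_TAG):
        raise ValueError("unknown queue tag message code")
    if not 0 <= tag <= 255:
        raise ValueError("queue tags are one byte, allowing up to 256 pending commands")
    return bytes([kind, tag])

assert queue_tag_message(SIMPLE_QUEUE_TAG, 5) == b"\x20\x05"
assert queue_tag_message(ORDERED_QUEUE_TAG, 255) == b"\x22\xff"
```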
The SCSI-3 standards introduced a modular framework for the SCSI protocol family, decoupling command sets, transport protocols, and physical interfaces to facilitate broader interoperability and future extensions. The core SCSI-3 architecture was ratified as ANSI X3.270-1996, establishing a reference model for coordinating input/output operations across diverse device types and environments. This framework emphasized peer-to-peer communication and layered abstractions, allowing independent evolution of components without disrupting existing implementations.[18] Central to this modularity is the SCSI Architecture Model (SAM), which defines common services, tasks, and mappings for SCSI commands, while the SCSI Parallel Interface (SPI) series specifies the physical and electrical characteristics for parallel bus implementations. SAM documents outline application-layer behaviors, such as task management and error handling, applicable to both parallel and serial transports, whereas SPI standards focus exclusively on parallel signaling, arbitration, and data transfer mechanisms. This separation enabled SCSI-3 to support a unified command set across varied interfaces, promoting standardization through dedicated project documents from the T10 technical committee.[18] The SPI series evolved progressively to address performance and reliability in parallel environments, with SPI-2 ratified as ANSI INCITS 302-1998, SPI-3 as ANSI INCITS 336-2000, SPI-4 as ANSI INCITS 362-2002, and SPI-5 as ANSI INCITS 367-2003. These iterations refined bus negotiation, timing, and error detection protocols, culminating in SPI-5's comprehensive definitions for high-speed parallel operations. A pivotal feature in SPI-3 was domain validation, a process where initiators and targets exchange test data patterns to confirm the bus topology's integrity, ensuring negotiated parameters like width and speed are sustainable without excessive errors. 
Subsequent standards, notably SPI-4, incorporated cyclic redundancy checking (CRC) for information units, providing robust end-to-end data protection by detecting transmission faults in real time.[8][19][20] SCSI-3 also served as a transitional architecture toward serial interconnects, integrating mappings for protocols like Fibre Channel and Serial Storage Architecture (SSA) under the same command umbrella, which extended SCSI's reach to longer-distance, higher-bandwidth networks while maintaining backward compatibility with parallel variants.[18]
Fast/Wide and Ultra Variants
The Fast SCSI variant emerged as part of the SCSI-2 standard, approved by ANSI in 1994, which doubled the synchronous transfer rate from 5 MB/s to 10 MB/s over the traditional 8-bit narrow bus by increasing the clock frequency to 10 MHz.[21] This enhancement maintained compatibility with existing SCSI-1 devices while enabling faster data throughput for applications demanding improved performance, such as early server environments.[21] Complementing Fast SCSI, the Wide SCSI configuration, also defined in SCSI-2, expanded the data bus to 16 bits using a 68-pin connector, supporting up to 16 devices on the bus instead of 8.[21] When paired with Fast timing, Wide SCSI achieved transfer rates of 20 MB/s, effectively doubling the bandwidth of narrow Fast SCSI and addressing growing storage needs in mid-1990s computing systems.[21] This combination, often termed Fast Wide SCSI, became a common setup for workstations and entry-level servers, utilizing high-density connectors for denser cabling.[22] Building on these advancements, Ultra SCSI—formally known as Fast-20 and incorporated into the SCSI-3 Parallel Interface (SPI) standards—was introduced in 1996, raising the clock speed to 20 MHz for 20 MB/s transfers on 8-bit narrow buses and 40 MB/s on 16-bit wide buses.[23] This iteration retained single-ended (SE) signaling for most implementations, prioritizing cost-effective upgrades over differential methods at the time.[24] Low-voltage differential (LVD) signaling, which improved noise immunity and cable lengths for higher speeds, was later integrated starting with the Ultra2 variant in 1998.[25] Ultra SCSI employed standard single-edge clocking, where data was transferred on one edge of the REQ/ACK handshake signal, allowing higher rates through clock acceleration without altering the fundamental protocol.[3] Achieving full Ultra speeds necessitated compatible host controllers and target drives, as mismatched components would negotiate down to the lowest common rate—such 
as Fast or SCSI-2 levels—ensuring backward compatibility but limiting performance.[24] These foundational enhancements in speed and width paved the way for subsequent Ultra series developments.
Ultra-2 to Ultra-640
The Ultra-2 SCSI standard, introduced in 1997 as part of the SCSI Parallel Interface-2 (SPI-2) specification, doubled the transfer rate of previous Ultra SCSI variants to achieve 80 MB/s on a 16-bit wide bus (or 40 MB/s narrow) through a 40 MHz clock rate, also known as Fast-40.[2] It introduced multimode support for both low-voltage differential (LVD) signaling, which improved noise immunity and allowed cable lengths up to 12 meters, and single-ended (SE) signaling for backward compatibility, though SE limited high-speed performance to shorter distances.[2] Key advancements included optional packetized data transfers and domain validation to ensure signal integrity across mixed signaling environments.[2] Building on SPI-2, the Ultra-3 SCSI standard, ratified in 1999 under the SCSI Parallel Interface-3 (SPI-3) specification and often referred to as Ultra160, increased speeds to 160 MB/s on wide buses at 80 megatransfers per second (Fast-80), achieved by double-transition clocking of a 40 MHz REQ/ACK signal, and incorporated mandatory paced data transfers and cyclic redundancy check (CRC) for enhanced error detection.[2] Domain validation became a core feature, enabling initiators and targets to negotiate optimal transfer parameters by testing bus conditions during initialization, which mitigated skew and noise issues at higher frequencies while maintaining LVD signaling exclusivity for reliability.[2] This iteration emphasized information unit transfers, packetizing commands, data, and status to streamline high-speed operations without altering the fundamental parallel bus topology.[2] The Ultra-320 standard, defined in the 2002 SCSI Parallel Interface-4 (SPI-4) specification, further accelerated transfers to 320 MB/s at 160 megatransfers per second (Fast-160), using double-transition synchronous transfers in which data is clocked on both rising and falling edges of an 80 MHz REQ/ACK signal for doubled throughput.[2] It mandated CRC protection across all data phases and introduced advanced training sequences to compensate for cable skew up to 2.5 ns, ensuring robust performance 
in LVD environments but requiring premium cabling to sustain 12-meter lengths.[2] These enhancements prioritized error-free high-bandwidth applications, such as enterprise storage arrays, by integrating paced DT (double-transition) data phases as standard.[2] Ultra-640, standardized in 2003 via the SCSI Parallel Interface-5 (SPI-5) specification (INCITS 367-2003), pushed parallel SCSI to its theoretical peak of 640 MB/s on wide buses at 320 megatransfers per second (Fast-320, double-transition clocking of a 160 MHz signal), employing a fully packetized protocol for information units that bundled commands, data, status, and extended error correction mechanisms.[26] This required sophisticated skew compensation and advanced LVD signaling, but practical implementations were constrained to very short internal cables of 0.5 meters due to severe signal degradation, heat generation, and crosstalk at such frequencies.[2] The escalating challenges of maintaining signal integrity, including increased power consumption and electromagnetic interference, ultimately highlighted the physical limits of parallel architectures, paving the way for the transition to serial interfaces like Serial Attached SCSI (SAS).[2]
Electrical Characteristics
Signaling Types
Parallel SCSI utilizes three primary electrical signaling types: single-ended (SE), high-voltage differential (HVD), and low-voltage differential (LVD). These methods determine the voltage levels, noise resilience, power efficiency, and maximum cable lengths for data transmission across the bus.[27] SE and HVD represent earlier implementations, while LVD became prevalent in later standards for improved performance and compatibility.[2] Single-Ended (SE) signaling operates as an unbalanced system, where signals are driven relative to ground using voltage levels ranging from 0 to 5 V. This approach is prone to electromagnetic interference and crosstalk, especially over longer distances, limiting reliable operation to a maximum cable length of 6 meters for slower modes or 3 meters at higher speeds like Ultra SCSI.[27][28] Due to its simplicity and lower cost, SE was widely adopted in early personal computing and mid-range storage applications.[27] High-Voltage Differential (HVD) signaling employs a balanced differential transmission mode, with differential voltage swings typically between ±2 V and ±5 V across the paired lines (per RS-485 standards). The differential nature rejects common-mode noise effectively, supporting cable lengths up to 25 meters and making HVD suitable for enterprise environments requiring robust, long-distance connections.[27][29] However, HVD consumes more power and requires specialized, more expensive transceivers compared to other types.[27] It predates LVD and is now largely obsolete in modern systems due to incompatibility with newer multimode designs.[2] Low-Voltage Differential (LVD) signaling also uses balanced differential transmission but at reduced voltage levels, with a common-mode voltage of 0.845 V to 1.655 V and differential voltages from 375 mV to 800 mV for logic states. 
This lower-voltage design (based on 3.3 V logic) results in decreased power consumption, less heat generation, and support for cable lengths up to 12 meters, enabling higher transfer rates like Ultra-160 and Ultra-320.[30][27] LVD offers superior noise immunity over SE while maintaining compatibility through multimode transceivers.[2] Multimode transceivers facilitate backward compatibility in LVD systems by automatically detecting and switching to SE mode when connected to a single-ended bus, ensuring mixed-device environments function without requiring separate cabling or adapters.[27] This feature, defined in SCSI SPI standards, prevents electrical mismatches and supports seamless integration of legacy SE devices.[2] HVD, however, cannot interoperate with multimode setups and requires dedicated infrastructure.[27]
SCSI Signals and Lines
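Multimode behavior hinges on the voltage a transceiver reads on the bus's DIFFSENS line: near ground indicates an SE device is present, a mid-band voltage selects LVD, and a high voltage signals HVD (prompting multimode parts to go high-impedance). A sketch of that classification, using threshold values commonly cited for SPI multimode detection (below 0.6 V for SE, 0.7–1.9 V for LVD, above 2.4 V for HVD); the function is illustrative, not a standard API:

```python
def bus_mode_from_diffsens(volts):
    """Classify the bus signaling mode from the DIFFSENS line voltage.

    Voltages falling in the undefined gaps between the detection bands
    are reported as 'indeterminate'.
    """
    if volts < 0.6:
        return "SE"            # single-ended device pulls DIFFSENS to ground
    if 0.7 <= volts <= 1.9:
        return "LVD"           # mid-band bias selects low-voltage differential
    if volts > 2.4:
        return "HVD"           # multimode transceivers must disengage
    return "indeterminate"

assert bus_mode_from_diffsens(0.0) == "SE"
assert bus_mode_from_diffsens(1.3) == "LVD"
assert bus_mode_from_diffsens(3.0) == "HVD"
```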
Parallel SCSI employs a set of dedicated electrical signals transmitted over shared bus lines to facilitate communication between initiators and targets. In the narrow bus configuration, defined in the original SCSI standard (ANSI X3.131-1986), there are 18 signal lines: 9 bidirectional control lines and 9 data-related lines consisting of 8 data bits and 1 parity bit. These lines enable asynchronous and synchronous data transfers, with control signals managing arbitration, selection, and handshaking, while data lines carry commands, status, messages, and payload information.[5] The core control signals, all active-low and OR-tied for multi-drop compatibility, include:
- BSY (Busy): Asserted by the initiator or target to indicate the bus is in use during arbitration, selection, or reselection phases; it prevents other devices from interfering.[5]
- SEL (Select): Driven by the initiator to select a target or by the target to reselect the initiator, enabling device addressing on the bus.[5]
- RST (Reset): An OR-tied signal asserted by any device to reset the entire bus, clearing all states and pending operations.[5]
- REQ (Request): Asserted by the target to initiate a data or command transfer handshake, signaling readiness to send or receive information.[5][1]
- ACK (Acknowledge): Driven by the initiator to confirm receipt of data or a command in the REQ/ACK handshake protocol.[5][1]
- ATN (Attention): Asserted by the initiator to notify the target of an impending message, often used to request a message-out phase.[5][2]
- MSG (Message): Set true by the target to indicate a message phase, where control information such as identify or disconnect messages is exchanged.[5][2]
- C/D (Control/Data): Driven by the target to differentiate control information (true, e.g., commands or status) from data (false) on the data bus.[5]
- I/O (Input/Output): Asserted true by the target to indicate data flow toward the initiator (input) or false for output from the initiator.[5][2]
Addressing and Configuration
SCSI IDs and LUNs
In Parallel SCSI, devices on the bus are addressed using unique SCSI IDs, which range from 0 to 7 on narrow (8-bit) buses supporting up to eight devices, and from 0 to 15 on wide (16-bit) buses supporting up to sixteen devices.[1] The host adapter, acting as the initiator, typically uses SCSI ID 7 by default, as this ID provides the highest priority during bus access contention.[1] SCSI IDs are configured through hardware mechanisms such as DIP switches or jumpers on the device, or via software settings in some implementations, ensuring no two devices share the same ID on the bus.[5] During bus arbitration, devices assert their ID bits on the data bus while driving the BSY signal, and the device with the highest-priority ID wins control: ID 7 has the highest priority, descending to ID 0, and on wide buses IDs 15 down to 8 rank below IDs 0 through 7, so legacy narrow devices retain their priority positions.[1] Logical Unit Numbers (LUNs) extend addressing within a target device at a given SCSI ID, allowing multiple logical units—such as individual drives in a RAID array—to be accessed independently.[32] In the classic parallel protocol, the IDENTIFY message carries a 3-bit LUN field addressing units 0 to 7; the later SCSI-3 architecture defines LUNs as 64-bit identifiers whose common single-level format addresses up to 256 logical units per target, with the REPORT LUNS command enabling discovery and management of these units.[32] This scheme supports complex storage configurations by mapping logical units to physical resources without requiring additional physical IDs.[32]
Termination Practices
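Arbitration priority can be modeled directly: ID 7 is highest, descending to 0, and on wide buses IDs 8–15 rank below all of 0–7 so that legacy narrow devices keep their positions. A hypothetical sketch:

```python
def arbitration_winner(contending_ids):
    """Return the SCSI ID that wins arbitration among contending devices.

    Priority is not simply numeric: 7 is highest, descending to 0, and
    on wide buses IDs 15 down to 8 rank below all of 0-7.
    """
    # Priority order: 7, 6, ..., 0, 15, 14, ..., 8
    priority_order = list(range(7, -1, -1)) + list(range(15, 7, -1))
    for scsi_id in priority_order:
        if scsi_id in contending_ids:
            return scsi_id
    raise ValueError("no devices arbitrating")

assert arbitration_winner({3, 5, 12}) == 5   # 5 outranks both 3 and 12
assert arbitration_winner({0, 15}) == 0      # any 0-7 ID outranks 8-15
assert arbitration_winner({8, 14}) == 14     # within 8-15, higher number wins
```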
Proper termination is essential in Parallel SCSI systems to prevent signal reflections that can degrade data integrity and cause transmission errors. The SCSI bus operates as a transmission line, and without appropriate termination at the ends, signals can bounce back, leading to ghosting or ringing that corrupts data transfers.[1] Passive termination, the simplest method used in early SCSI implementations, employs resistor networks to match the bus impedance and absorb signals. For single-ended (SE) signaling, each signal line connects through a 220 Ω resistor to TERMPWR (typically +5 V) and a 330 Ω resistor to ground, providing an effective impedance of approximately 132 Ω. This configuration is suitable for low-speed buses but draws significant current (up to 24 mA per line) and performs poorly in noisy environments or longer cables. Low-voltage differential (LVD) passive termination uses a 150 Ω resistor across each differential pair for common-mode impedance and an additional 105–110 Ω differential resistor, ensuring signal absorption without active regulation.[33][34] Active termination improves upon passive methods by incorporating voltage regulation and dynamic impedance control, reducing power consumption and enhancing signal quality, particularly for high-speed variants like Ultra SCSI. In SE mode, active terminators use a linear regulator to maintain a precise 2.85 V reference (within 2.7–3.0 V), paired with 110 Ω resistors per line, allowing the bus to handle higher currents (up to 48 mA) without voltage droop. For LVD and multimode systems, active terminators auto-sense the bus type via the DIFFSENS line: below 0.6 V indicates SE mode, while 0.7–1.9 V selects LVD with 105 Ω differential and 150 Ω common-mode impedance, plus a 112 mV fail-safe bias to prevent floating states. 
Multimode terminators support voltage ranges of 2.7–5.25 V for SE and auto-switch to high-impedance mode if high-voltage differential (HVD) is detected (>2.4 V on DIFFSENS).[34][35] Termination must occur exactly at the two physical ends of the bus: typically the host adapter and the last device in the chain, with no more than two sets active to avoid over-termination, which increases loading and skew. For wide SCSI (68-pin), three terminator ICs (e.g., DS2117M or UCC5672) are required per end to cover all 34 differential pairs or 68 single-ended lines, powered by TERMPWR (4.0–5.25 V). Modern SCSI drives often feature auto-termination, enabling or disabling based on sensing additional devices via the bus, simplifying configuration while adhering to end-only rules.[1][34][35] Improper termination, such as missing or excessive terminators, leads to signal reflections, non-monotonic edges, and data errors like parity failures or bus hangs, particularly in LVD systems where mismatched terminators (e.g., SE on LVD bus) can damage drivers. LVD requires dedicated low-voltage terminators, as SE types cause overcurrent and failure; always verify compatibility for multimode buses.[1][34]
Bus Operation
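The 132 Ω figure quoted for passive termination is simply the Thevenin equivalent of the 220 Ω/330 Ω divider between TERMPWR and ground. A short sketch of that arithmetic (the helper name is illustrative):

```python
def passive_terminator_thevenin(v_termpwr=5.0, r_up=220.0, r_down=330.0):
    """Thevenin equivalent of the classic SE passive terminator.

    Each signal line sees r_up ohms to TERMPWR and r_down ohms to
    ground; electrically this is one resistor to a bias voltage.
    """
    v_th = v_termpwr * r_down / (r_up + r_down)   # bias voltage on an undriven line
    r_th = (r_up * r_down) / (r_up + r_down)      # effective termination impedance
    return v_th, r_th

v_th, r_th = passive_terminator_thevenin()
assert round(v_th, 2) == 3.0    # volts seen on an idle line
assert round(r_th, 0) == 132.0  # ohms, matching the figure cited above
```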
Bus Phases
The Parallel SCSI bus operates through a series of distinct phases that govern the flow of control, commands, data, and status between initiators and targets, ensuring orderly access in a multi-device environment. These phases are essential for managing bus contention and information transfer, with the bus transitioning between them based on the state of key control signals and handshaking mechanisms. The eight primary phases are BUS FREE, ARBITRATION, SELECTION, RESELECTION, COMMAND, DATA, STATUS, and MESSAGE, though not all occur in every transaction. The DATA phase supports transfers in either direction (DATA IN from target to initiator or DATA OUT from initiator to target), and the MESSAGE phase supports messages in either direction, as determined by the I/O signal.[2][5] Transitions between phases are driven by the REQ/ACK handshake protocol, where the REQ signal from the target requests a data byte transfer, and the ACK signal from the initiator acknowledges it, allowing phase changes only after ACK negation. The specific phase is determined by the combination of three control lines: C/D (Command/Data), I/O (Input/Output), and MSG (Message). For instance, the information transfer phases—COMMAND, DATA, STATUS, and MESSAGE—are defined by unique states of these lines: COMMAND is C/D=1, I/O=0, MSG=0; DATA is C/D=0, I/O=varies, MSG=0; STATUS is C/D=1, I/O=1, MSG=0; and MESSAGE is C/D=1, I/O=varies, MSG=1. The target device typically controls these transitions during information phases, while bus management phases like ARBITRATION and SELECTION are initiator-driven.[2] The following table summarizes the eight bus phases, their purposes, and control line states:
| Phase | Purpose | Control Lines (C/D, I/O, MSG) | Key Characteristics |
|---|---|---|---|
| BUS FREE | Idle state with no device controlling the bus; all signals released. | N/A | BSY and SEL false for at least one bus settle delay (400 ns minimum). Transitions to ARBITRATION when a device asserts BSY and SEL.[2] |
| ARBITRATION | Devices contend for bus control; highest-priority SCSI ID wins. | N/A | Devices assert BSY and their ID on data lines after a bus free delay (800 ns minimum); winner determined after arbitration delay (2.4 µs minimum). Ensures fairness per ANSI SPI-5.[2] |
| SELECTION | Initiator selects a specific target to initiate a command. | N/A | Initiator asserts SEL, BSY, and target ID on data lines; target responds by asserting BSY within 200 µs (selection abort time); I/O false.[2] |
| RESELECTION | Target reconnects to initiator for ongoing or new tasks (e.g., after disconnect). | N/A | Similar to SELECTION but target-initiated; I/O true to distinguish; response within 200 µs.[2] |
| COMMAND | Initiator transfers command descriptor block (CDB) to target. | 1, 0, 0 | Data flows initiator to target via REQ/ACK.[2] |
| DATA | Data transfer between initiator and target. | 0, varies, 0 | DATA IN (I/O=1): target to initiator; DATA OUT (I/O=0): initiator to target; via REQ/ACK.[2] |
| STATUS | Target reports command completion status to initiator. | 1, 1, 0 | Single-byte status sent target to initiator; follows command execution.[2] |
| MESSAGE | Exchange of control messages (e.g., Command Complete, Identify). | 1, varies, 1 | MESSAGE IN (I/O=1): target to initiator; MESSAGE OUT (I/O=0): initiator to target; single-byte or multi-byte.[2] |
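The control-line encodings in the table above can be expressed as a small decoder. The sketch below is purely illustrative (the function name and return strings are hypothetical, not part of any SCSI standard or driver API), assuming a value of 1 means the line is asserted:

```python
# Decode a Parallel SCSI information-transfer phase from the C/D, I/O,
# and MSG control lines (1 = asserted), per the phase table above.
# Illustrative sketch only; not taken from any real SCSI stack.

def decode_phase(c_d: int, i_o: int, msg: int) -> str:
    """Return the information-transfer phase for a control-line triple."""
    if msg:
        # MESSAGE phases have both MSG and C/D asserted; C/D=0 with MSG=1
        # is a reserved combination in the standard.
        if not c_d:
            return "RESERVED"
        return "MESSAGE IN" if i_o else "MESSAGE OUT"
    if c_d:
        # C/D asserted without MSG: STATUS flows inbound, COMMAND outbound.
        return "STATUS" if i_o else "COMMAND"
    # Neither C/D nor MSG asserted: a DATA phase; I/O gives the direction.
    return "DATA IN" if i_o else "DATA OUT"
```

Bus management phases (BUS FREE, ARBITRATION, SELECTION, RESELECTION) are distinguished by BSY and SEL rather than these three lines, so they fall outside such a decoder.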
Command, Data, and Status Transfer
In Parallel SCSI, the command phase occurs after the initiator has selected a target device, allowing the initiator to transfer a Command Descriptor Block (CDB) to the target over the data bus.[2] The CDB specifies the operation to be performed, such as reading or writing data, and includes parameters like logical block addresses and transfer lengths.[36] Standard CDB formats are fixed-length structures of 6, 10, or 12 bytes, with the length determined by the operation code in the first byte: 6-byte CDBs support basic commands with 21-bit addressing and transfers of up to 256 blocks, 10-byte CDBs extend to 32-bit addressing for commands like READ(10), and 12-byte CDBs provide further extensions for advanced operations like READ(12).[36] During this phase, the target asserts the REQ signal and the initiator responds with ACK in a handshake to transfer each byte sequentially.[2]

Data phases handle the exchange of information between initiator and target, using either asynchronous or synchronous modes negotiated earlier in the bus operation.[1] In asynchronous transfers, each byte requires a full REQ/ACK handshake with no offset, ensuring strict pacing but limiting throughput.[2] Synchronous transfers, common in higher-speed variants, employ a REQ/ACK offset value greater than zero, allowing the target to assert multiple REQ signals ahead of ACK responses from the initiator and enabling burst transfers of multiple bytes or words without interlocks after the initial handshake.[1] The offset, typically up to 8 in early standards and higher (e.g., 64) in Ultra variants, combined with the transfer period (e.g., 25 ns for Ultra2 SCSI), determines the effective burst size and rate, supporting wide 16-bit or 32-bit data paths for increased efficiency.[2] Data phases move information from target to initiator (DATA IN) or from initiator to target (DATA OUT), with the I/O line indicating the direction of flow.[36]

The status phase follows the completion of any data transfer, where the target reports the outcome of the command to the initiator using a single-byte status code on the data bus.[2] Common codes include GOOD (00h), indicating successful task completion without errors, and CHECK CONDITION (02h), signaling an exception such as a medium error or invalid command parameter that requires further investigation.[36] This phase uses a REQ/ACK handshake similar to command transfer, with the C/D and I/O lines asserted appropriately.[2] Upon receiving status, the bus typically transitions to a message phase, where the target sends a completion message like Command Complete (00h) to acknowledge the end of the nexus transaction.[36]

Error handling in Parallel SCSI relies on sense data to diagnose issues reported via CHECK CONDITION status.[36] The initiator issues a separate REQUEST SENSE command (operation code 03h) in a new transaction to retrieve 18 or more bytes of sense information from the target, including a sense key (e.g., ILLEGAL REQUEST), an additional sense code, and a qualifier detailing the error context, such as a parity failure during transfer.[36] This command supports fixed or descriptor formats and clears the pending sense data, enabling recovery or logging without resetting the bus.[2] In cases of parity or cyclic redundancy check errors during synchronous data phases, the initiator may assert ATN to initiate a message for aborting the task.[2]

Physical Interfaces
External Connectors
External Parallel SCSI connections primarily utilized two main connector families for narrow (8-bit) and wide (16-bit) buses, with designs intended to preserve signal integrity over external cabling. For narrow SCSI, the 50-pin Centronics-style connector was standard in SCSI-1 implementations, featuring a low-density ribbon-style interface that supported asynchronous and synchronous data transfers up to 5 MB/s. This connector evolved in SCSI-2 and later standards into the high-density 50-pin (HD50) D-subminiature variant, which offered a more compact footprint while maintaining compatibility with existing cables and providing improved shielding options for faster signaling rates up to 10 MB/s in Fast SCSI.[2] Wide SCSI, introduced in SCSI-2, employed 68-pin connectors to accommodate the expanded 16-bit data path, enabling transfer rates up to 20 MB/s in Fast Wide configurations. The high-density 68-pin (HD68) connector became the predominant external interface for SCSI-2 and SCSI-3, utilizing a shielded D-subminiature design that included additional pins for parity and control signals. For Ultra SCSI variants, which supported speeds up to 40 MB/s, the Very High Density Cable Interconnect (VHDCI) connector provided a smaller, more robust alternative to the HD68, reducing the physical size by roughly two-thirds while preserving full 68-pin functionality for external applications.[2][37]

External cabling for Parallel SCSI was designed to minimize noise and crosstalk, with twisted-pair wiring recommended for single-ended (SE) signaling to maintain signal quality. Shielded cables, typically using 30- or 32-gauge wire, were mandatory for electromagnetic interference (EMI) compliance, with narrow cables having a diameter of 6.35 mm and wide cables up to 12.70 mm.
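The succession of external connector generations described above can be summarized in a small lookup. This is a hedged sketch: the dictionary keys are informal labels (not standardized designations), and the speed figures are simply the per-generation maximums quoted in the surrounding prose:

```python
# Informal summary of external Parallel SCSI connector generations,
# mirroring the prose above. Labels and structure are illustrative only.

EXTERNAL_CONNECTORS = {
    # label:              pins, bus width (bits), fastest variant served (MB/s)
    "Centronics-50":     {"pins": 50, "bus_width_bits": 8,  "max_mb_s": 5},
    "HighDensity-50":    {"pins": 50, "bus_width_bits": 8,  "max_mb_s": 10},
    "HD68":              {"pins": 68, "bus_width_bits": 16, "max_mb_s": 20},
    "VHDCI-68":          {"pins": 68, "bus_width_bits": 16, "max_mb_s": 40},
}

def connectors_for_width(bits: int) -> list[str]:
    """Return the connector labels that carry a bus of the given width."""
    return [name for name, c in EXTERNAL_CONNECTORS.items()
            if c["bus_width_bits"] == bits]
```

Note that the wide connectors both carry 68 pins; what distinguishes VHDCI is density and ruggedness, not signal count.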
Maximum cable lengths varied by signaling type and speed: SE configurations supported up to 6 meters at asynchronous and 5 MB/s rates, dropping to 3 meters for Fast-10 and 1.5 meters for Fast-20, while low-voltage differential (LVD) signaling extended this to 12 meters for Ultra-2 and Ultra-3 implementations, allowing greater flexibility in system layouts.[2][37]

Pin assignments for external connectors followed standardized layouts defined in the ANSI SPI specifications, allocating dedicated pins for ground returns, data lines, control signals, and in some cases termination power. The 50-pin connectors dedicated pins 26-33 to data bits DB(0) through DB(7), with control signals such as BSY on pin 43, REQ on pin 49, and ACK on pin 44, interspersed with multiple ground pins (e.g., 1-25) for noise reduction. Wide 68-pin connectors extended this with pins 32-35 for DB(8) through DB(11) and additional pins for the higher data bits, plus control pins such as DIFFSENS on pin 16 for LVD/SE detection and TERMPWR on pins 17 and 18 to supply voltage for bus termination.

The following table summarizes key pin functions for both connector types:

| Pin Type | Function | 50-pin Position | 68-pin Position |
|---|---|---|---|
| Ground | Signal return | 1-25, etc. | 1, 19, 20, etc. |
| Data | DB(0) (LSB) | 26 | 7 |
| Data | DB(7) (MSB narrow) | 33 | 14 |
| Data | DB(8) (wide LSB) | - | 32 |
| Data | DB(15) (wide MSB) | - | 5 |
| Control | BSY (Busy) | 43 | 25 |
| Control | REQ (Request) | 49 | 24 |
| Control | ACK (Acknowledge) | 44 | 23 |
| Power | TERMPWR | 38 | 17, 18 |
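The signal-to-pin mapping in the table can be mirrored in a small lookup, useful when scripting cable or adapter checks. This is a sketch built only from the rows above (the helper name is hypothetical, and real wiring work should consult the full ANSI SPI pinout tables):

```python
# Signal-to-pin lookup mirroring the external-connector table above.
# Tuples hold (50-pin position, 68-pin position); None means the signal
# does not exist on that connector. Illustrative sketch only.

PIN_MAP = {
    "DB0": (26, 7),     # data bit 0 (LSB)
    "DB7": (33, 14),    # data bit 7 (MSB of the narrow bus)
    "DB8": (None, 32),  # wide-bus data bits are absent on 50-pin
    "DB15": (None, 5),
    "BSY": (43, 25),
    "REQ": (49, 24),
    "ACK": (44, 23),
}

def pin_for(signal: str, wide: bool):
    """Return the pin for a signal on the narrow (50-pin) or wide (68-pin)
    external connector, or None if the signal is absent there."""
    narrow_pin, wide_pin = PIN_MAP[signal]
    return wide_pin if wide else narrow_pin
```

A query such as `pin_for("DB8", wide=False)` returning None reflects the narrow bus lacking the upper data byte, which is why wide-to-narrow adapters must terminate the high byte separately.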