
Peripheral Component Interconnect

Peripheral Component Interconnect (PCI) is an industry-standard local bus architecture designed for connecting hardware components, such as add-in cards and peripherals, to a computer's motherboard. Developed by Intel as a response to fragmented bus standards like ISA and VESA's VL-Bus, the original PCI specification was released in 1992 and first implemented in 1993 alongside the Pentium processor. The PCI Special Interest Group (PCI-SIG), an open industry consortium that grew to more than 1,000 members, was formed to maintain and evolve the standard, ensuring broad industry compatibility through royalty-free licensing. As a parallel bus operating at 33 MHz with a 32-bit data width (expandable to 64 bits), conventional PCI supported a maximum theoretical throughput of 133 MB/s and featured plug-and-play auto-configuration for resources like interrupts and addressing. It quickly became ubiquitous in personal computers, enabling faster data transfer for devices like graphics cards and network adapters, and was named PC Magazine's Product of the Year for its role in standardizing hardware integration.

Later revisions introduced 66 MHz speeds and 3.3V signaling for improved efficiency, while PCI-X extended the design for servers with bandwidth up to roughly 1 GB/s. By the early 2000s, limitations of the parallel design prompted the transition to PCI Express (PCIe), a serial point-to-point interface launched in 2003, which offers scalable lanes and dramatically higher speeds while maintaining backward compatibility with PCI software. Today, while legacy PCI slots are rare in consumer hardware, its foundational principles underpin modern successors like PCIe 7.0, supporting data rates up to 128 GT/s for applications in artificial intelligence, data centers, and networking.

Overview

Definition and Purpose

The Peripheral Component Interconnect (PCI) is a high-speed parallel computer expansion bus standard developed by Intel and introduced in 1992 as a local bus system for connecting peripheral devices to a computer's motherboard. Designed to enable modular hardware expansion, PCI provides a standardized interface for add-in cards—such as graphics accelerators, sound cards, and network interfaces—to interface directly with the central processing unit (CPU) and system memory. The primary purpose of PCI is to facilitate efficient, high-bandwidth communication between the host processor and peripheral devices, supporting burst-mode data transfers at speeds up to 133 MB/s in its original 32-bit configuration operating at 33 MHz. This capability addressed the limitations of earlier expansion buses like the Industry Standard Architecture (ISA), which was constrained to 8.33 MB/s, and the VESA Local Bus (VLB), a short-lived interim solution offering theoretical bandwidth up to 133 MB/s at 33 MHz but lacking robust standardization, electrical stability for multiple devices, and plug-and-play support. By incorporating auto-configuration mechanisms, PCI simplified device installation and resource allocation, promoting broader adoption in personal computers during the mid-1990s.

Fundamentally, PCI employs a shared bus with multiple expansion slots connected via a common set of address, data, and control lines, allowing roughly five devices per bus segment due to electrical loading limits. Transactions occur in a master-slave model, where a bus master (such as the CPU or a peripheral card) initiates read or write operations to a target slave device, enabling arbitrated and synchronized data exchange across the system. Later revisions expanded these foundations to include 66 MHz clock rates and 64-bit data widths for enhanced throughput.

Key Features and Advantages

The Peripheral Component Interconnect (PCI) bus operates synchronously, utilizing a shared clock to coordinate all transactions among connected devices, which ensures predictable timing and simplifies implementation compared to asynchronous buses. The base specification defines a 33 MHz clock, delivering a theoretical peak bandwidth of 133 MB/s for 32-bit transfers, with later revisions supporting 66 MHz for doubled performance. It employs a multiplexed 32-bit address and data bus, which can be extended to 64 bits via optional REQ64#/ACK64# signaling for enhanced capacity in high-bandwidth applications. Architecturally, PCI supports up to 32 devices per bus through unique device numbering in its configuration mechanism, though electrical loading constraints typically limit unbuffered implementations to around 10 loads, including the host bridge and slots.

A primary advantage of PCI is its burst transfer mode, which enables multiple consecutive data phases following a single address phase, allowing efficient transfers to memory or I/O without repeated addressing overhead. This contrasts sharply with the ISA bus, where each data transfer requires a dedicated address cycle, capping ISA throughput at approximately 8 MB/s even at its 8 MHz clock, while PCI achieves significantly higher effective rates for burst-oriented operations like graphics or disk I/O. Bus-mastering capabilities further reduce CPU involvement by permitting peripheral devices to initiate direct memory access (DMA) transactions, offloading data movement and minimizing processor interrupts for sustained transfers.

PCI's plug-and-play auto-configuration, facilitated by a 256-byte configuration space per device accessible via standardized reads and writes during system initialization, enables dynamic resource assignment through BIOS or operating system software, obviating the manual jumper or switch settings common in ISA systems. This promotes ease of use and scalability across diverse hardware. The bus also ensures interoperability with slower devices, as all components adhere to the same protocol but can signal readiness at reduced speeds without disrupting higher-speed peers. In specialized implementations, the PCI Hot-Plug specification allows runtime insertion or removal of cards with controlled power sequencing and surprise removal detection, enhancing reliability in server or telecommunications environments.
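
The peak figures quoted above follow directly from the clock rate multiplied by the bus width; the short sketch below illustrates the arithmetic under idealized assumptions (fully utilized bursts, decimal megabytes, no arbitration or wait-state overhead):

```c
#include <stdio.h>

/* Peak PCI throughput = clock frequency x bus width in bytes.
 * These are theoretical ceilings; real transfers lose cycles to
 * address phases, arbitration, and target wait states. */
static double peak_mbytes_per_s(double clock_mhz, int bus_bits)
{
    return clock_mhz * 1e6 * (bus_bits / 8) / 1e6;   /* MB/s (decimal) */
}

int main(void)
{
    printf("32-bit @ 33 MHz: %.0f MB/s\n", peak_mbytes_per_s(33.33, 32)); /* ~133 */
    printf("64-bit @ 33 MHz: %.0f MB/s\n", peak_mbytes_per_s(33.33, 64)); /* ~267 */
    printf("64-bit @ 66 MHz: %.0f MB/s\n", peak_mbytes_per_s(66.66, 64)); /* ~533 */
    return 0;
}
```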

History and Development

Origins and Initial Design

The Peripheral Component Interconnect (PCI) standard originated in the early 1990s as a response to the growing performance demands of personal computers, particularly with the impending release of Intel's Pentium processor. Intel Architecture Labs began developing the local bus around 1990 to create a high-performance, processor-independent interface for connecting peripherals directly to the CPU, bypassing the limitations of existing expansion buses. The primary motivations were the shortcomings of the Industry Standard Architecture (ISA) bus, which operated at only 8.33 MHz with a 16-bit data width, resulting in a maximum throughput of about 8 MB/s and lacking support for efficient bus mastering or plug-and-play configuration, and the Extended Industry Standard Architecture (EISA) bus, which, while offering 32-bit addressing and transfers at up to 8.33 MHz (around 33 MB/s), was overly complex, expensive to implement, and primarily suited for servers rather than desktops. In late 1991, Intel collaborated with key industry partners—including IBM, Compaq, and Digital Equipment Corporation (DEC)—to refine the design and promote it as an open standard, culminating in the formation of the PCI Special Interest Group (PCI-SIG) in June 1992. The PCI-SIG, with these founding members at its core, aimed to ensure broad adoption by managing compliance and evolution of the specification.

The initial PCI Local Bus Specification, version 1.0, was released by Intel in June 1992, defining a 32-bit bus operating at 33 MHz for a theoretical maximum bandwidth of 133 MB/s, supporting both burst transfers and plug-and-play resource allocation to simplify system integration. This design targeted desktop and server systems, emphasizing simplicity, low cost, and scalability over proprietary or fragmented alternatives like the VESA Local Bus. Early adoption accelerated in 1993 following the launch of Intel's Pentium processor in March, with the company's 430LX chipset (codenamed Mercury) integrating PCI support as the first such implementation for Pentium-based systems. Demonstrated widely at the Comdex trade show in November 1993, PCI quickly gained traction in PC manufacturing, enabling faster I/O for graphics, networking, and storage peripherals in an era of rapidly advancing CPU speeds. By integrating PCI into mainstream chipsets, Intel and its partners marked the transition to a unified, high-speed expansion standard that dominated PC architectures for the next decade.

Standardization and Revisions

The PCI Special Interest Group (PCI-SIG) was established in 1992 by Intel, IBM, Compaq, DEC, and other prominent industry players to govern the PCI specification, ensuring its evolution through collaborative development and compliance testing. The group quickly grew to include hundreds of members, fostering widespread adoption by standardizing the interface for peripheral connectivity across diverse hardware ecosystems.

Subsequent revisions to the PCI Local Bus Specification refined its capabilities to meet emerging computational demands. Version 2.0, released on April 30, 1993, formalized the core connector design, pinout, and electrical signaling, providing a stable foundation for implementation. Version 2.1, issued June 1, 1995, introduced support for 66 MHz operation to double potential bandwidth over the original 33 MHz clock and added optional 64-bit address and data extensions for enhanced performance in high-end systems. These updates enabled broader compatibility with faster processors while maintaining backward compatibility with earlier designs. Further enhancements came in Version 2.2, published December 18, 1998, which incorporated refinements to power management protocols, including better support for low-power states and hot-plug capabilities through companion specifications. Version 2.3, effective March 29, 2002, addressed limitations in 64-bit addressing for systems exceeding 4 GB of memory by modifying the configuration space to handle larger mappings, while deprecating 5 V signaling in favor of 3.3 V for improved efficiency and safety.

These revisions solidified PCI as a dominant industry standard, with implementations in chipsets from vendors such as Intel, VIA, and SiS, enabling seamless integration in billions of personal computers and servers. By 2003, the PCI-SIG shifted primary development efforts toward PCI Express, recognizing the need for serial interconnects to support escalating bandwidth requirements, though conventional PCI continued to receive errata updates and support thereafter. This transition marked the maturation of PCI as a foundational interconnect technology, with its specifications remaining influential in embedded and industrial applications.

Physical and Electrical Specifications

Connector Design and Pinout

The PCI connector utilizes an edge-card design with gold-plated contacts, known as "gold fingers," on the add-in card that insert into a matching slot on the motherboard or backplane. The standard 32-bit PCI connector consists of 62 pin positions per side (124 total contacts), with 120 dedicated to signals and power and 4 occupied by the voltage keyway to prevent incompatible insertions. For 64-bit PCI support, an extension adds 32 pins per side (64 additional contacts, positions 63 through 94), enabling wider data paths while maintaining compatibility with 32-bit cards, for a total of 94 pins per side (188 contacts).

Key signal pins are assigned as follows: the multiplexed address and data lines AD[31:0] occupy designated positions across both sides (e.g., B20 for AD31, A20 for AD30, A58/B58 for AD0/AD1), allowing bidirectional transfer of 32-bit addresses and data. Bus command signals C/BE[3:0]# (A52 for C/BE0#, B44 for C/BE1#, B33 for C/BE2#, B26 for C/BE3#) indicate the type of transaction, such as memory read or I/O write. Control signals include FRAME# (A34) to delineate the start and duration of a bus transaction, IRDY# (B35) and TRDY# (A36) for initiator and target ready states, DEVSEL# (B37) for device select assertion, and STOP# (A38) to request transaction termination. Power and ground pins are distributed throughout, with +5V (e.g., A5/B5 and A61/B61), +3.3V (e.g., A21, B25), and multiple GND connections (e.g., B3, A18) for stable operation.

Signals are grouped logically for efficient routing and signal integrity: the address/data and command/byte-enable pins form the core multiplexed bus in the middle of the connector, while power, interrupt, and JTAG signals cluster near the connector ends. Keying positions at pins 12/13 (3.3 V) and 50/51 (5 V) differentiate 5V-only, 3.3V-only, and universal voltage environments, ensuring electrical compatibility. A 32-bit PCI card, using only the first 62 pin positions, can insert into a 64-bit slot with compatible voltage keying, though the extension remains unused; conversely, 64-bit cards require a full 64-bit slot to access the additional AD[63:32] and C/BE[7:4]# pins.
Signal Group | Example Pins (Side A / Side B) | Description
Address/Data (AD) | A20 (AD30), A58 (AD0) / B20 (AD31), B58 (AD1) | Multiplexed 32-bit lines for addresses and data
Bus Commands (C/BE#) | A52 (C/BE0#), B44 (C/BE1#) / B33 (C/BE2#), B26 (C/BE3#) | Command/byte-enable signals (4 bits for 32-bit)
Transaction Control | A34 (FRAME#), A36 (TRDY#), A38 (STOP#) / B35 (IRDY#), B37 (DEVSEL#) | Bus phase and handshake signals
Power/Ground | A5, A61 (+5V), A21 (+3.3V), A18 (GND) / B5, B61 (+5V), B25 (+3.3V), B3 (GND) | Supply and reference voltages
64-bit Extension | A63–A94 / B63–B94 | Additional AD[63:32], C/BE[7:4]#, and PAR64 (REQ64#/ACK64# sit at pin 60 of the base connector)
This table lists representative pin assignments from the 32-bit base connector; full details span all 124 positions in the specification.

Voltage Levels and Keying

The original PCI Local Bus Specification, released in 1992, supported only 5V signaling and power supply for add-in cards and slots. To address increasing power demands and enable lower consumption in denser systems, 3.3V signaling was introduced in Revision 2.0 of the specification in 1993, with further refinements for universal compatibility in Revision 2.1 in 1995. Universal slots accommodate both voltage levels by providing separate power pins—VCC for 5V and VCC3.3 for 3.3V—allowing cards to detect the available signaling voltage through the VI/O pins and configure their I/O buffers accordingly.

Mechanical keying prevents the insertion of incompatible cards into slots by using notches on the card's edge connector that align with raised tabs in the slot. 3.3V-only cards feature a notch at the pin 12/13 position (approximately 56 mm from the card's backplate), while 5V-only cards have a notch at the pin 50/51 position (approximately 104 mm from the backplate); universal cards include both notches to fit either slot type. These keying positions ensure that a 3.3V card cannot be inserted into a 5V-only slot (and vice versa), avoiding potential electrical mismatches. Pin assignments for the power rails are detailed in the connector design specifications.

Power delivery to PCI slots occurs primarily through the +5V and +3.3V rails, with add-in cards limited to a maximum of 25 W combined from these rails, as encoded by the card's presence detect pins (PRSNT1# and PRSNT2#) at levels of 7.5 W, 15 W, or 25 W. Auxiliary +12 V and -12 V rails are available for specialized needs, such as analog components or EEPROM programming, though they supply only modest current and their use depends on system implementation. Forcing a 5V-only card into a 3.3V environment can lead to improper signaling levels that cause unreliable operation, while the greater risk arises from exposing a 3.3V card to 5V signaling, where the higher voltage can exceed the card's tolerances and cause immediate failure, particularly in hot-plug scenarios without proper sequencing. These mechanisms collectively ensure safe and reliable voltage handling in PCI systems.

Form Factors and Compatibility

PCI add-in cards adhere to defined form factors to ensure compatibility with various chassis sizes while maintaining a standardized edge connector for insertion into slots. The full-length form factor measures 312 mm (12.28 inches) in length, providing ample space for components requiring extensive board area. Half-length cards are limited to 175 mm (6.9 inches), suitable for systems with restricted internal dimensions. Low-profile variants, intended for slimline cases, utilize shorter lengths—MD1 at 119.91 mm (4.72 inches) for basic 32-bit cards and up to 167.64 mm (6.6 inches) for more complex designs—with a maximum height of 64.41 mm (2.54 inches) including the connector, yet all employ the identical 32-bit or 64-bit edge connector as full-size cards.

Compatibility across form factors emphasizes backward and forward integration. A 32-bit PCI card fits securely into a 64-bit slot, occupying the initial 32-bit portion of the longer connector without requiring an adapter, though performance remains limited to 32-bit capabilities. Universal slots and cards facilitate voltage compatibility by supporting both 3.3 V and 5 V signaling through dual-keying mechanisms that prevent incorrect insertions.

Mini PCI, a compact variant introduced by the PCI-SIG in late 1999, addresses space constraints in portable devices like laptops with a reduced board size of approximately 59.6 mm × 50.95 mm. It supports 32-bit operations at 33 MHz and connects directly to the motherboard rather than through a conventional slot. The specification defines three card types: Type I and Type II cards use a 100-pin stacking connector, with Type II adding integrated I/O connectors such as modem or Ethernet jacks, while Type III cards use a 124-pin edge connector, similar to a SO-DIMM socket, to accommodate additional power and signal pins. Voltage keying in Mini PCI mirrors standard PCI practices to avoid electrical mismatches, and Mini PCI cards can interface with CardBus bridges to enable hot-plug capabilities in supported systems.

Configuration Mechanisms

Auto-Configuration Process

The auto-configuration process in PCI allows the system to dynamically discover, identify, and initialize connected devices during boot without requiring manual jumper or switch settings. This software-driven mechanism is initiated by the host bridge under BIOS or operating system control, which systematically scans the PCI bus hierarchy starting from bus 0. The scan probes each possible bus (0-255), device (0-31), and function (0-7 for multifunction devices) by issuing configuration read transactions to the 256-byte configuration space allocated per device/function. These transactions use Type 00h cycles for devices on the local bus and Type 01h cycles for propagating to downstream buses via bridges, enabling enumeration of the entire hierarchy.

PCI defines two configuration access mechanisms to facilitate this probing, with Mechanism #1 serving as the primary method. Mechanism #1 employs I/O-mapped ports—0x0CF8 for setting a 32-bit address (an enable bit plus bus, device, function, and register offset fields) and 0x0CFC for data transfer—while the host bridge maps the device number onto the appropriate IDSEL line for targeted access. Mechanism #2, which maps configuration space into a system-defined I/O window in the range 0xC000h-0xCFFFh, was deprecated for new designs and retained only for compatibility; Mechanism #1 remains the standard for auto-configuration in subsequent revisions.

Central to device identification are standardized registers in the first 64 bytes of the configuration space header (offsets 00h-3Fh). The 16-bit Vendor ID at offset 00h uniquely identifies the manufacturer (e.g., 0x8086 for Intel), and a value of 0xFFFF indicates no device is present, allowing the scan to skip empty slots. The adjacent 16-bit Device ID at 02h specifies the exact product variant. The 8-bit Revision ID at offset 08h identifies the silicon revision, and the 24-bit Class Code at offsets 09h-0Bh (programming interface at 09h, subclass at 0Ah, base class at 0Bh) defines the device's functional category, such as 0x010000 for SCSI controllers or 0x020000 for Ethernet adapters, enabling software to recognize and load appropriate drivers. These fields, read early in the scan, confirm device presence and type before proceeding to resource setup.

Resource allocation follows detection and relies on the six Base Address Registers (BARs) at offsets 10h-24h in the configuration header, which describe the device's memory or I/O space needs. To determine requirements, software writes 0xFFFFFFFF to a BAR and reads back the value; the bits that remain writable reveal the region's size and alignment, while bit 0 distinguishes I/O from memory space and bits [2:1] of a memory BAR distinguish 32-bit from 64-bit addressing. The BIOS or OS then allocates non-overlapping base addresses—writing them back to the BARs—for memory regions, I/O ports, and expansion ROM, ensuring devices can map into the host's address space. Interrupt resources are assigned similarly via the Interrupt Pin and Line registers, integrating with broader interrupt handling mechanisms. This allocation completes device enablement by setting the Command register bits for bus mastering, memory/I/O access, and other functions.
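
As an illustration of Configuration Mechanism #1, the sketch below shows how x86 firmware or a hobby operating system might read a function's Vendor ID through ports 0x0CF8/0x0CFC. The `outl`/`inl` port helpers are assumptions written as inline assembly here, not part of the PCI specification itself; only the CONFIG_ADDRESS bit layout comes from the standard.

```c
#include <stdint.h>

/* Minimal x86 port I/O helpers (GCC/Clang inline asm); assumed, not part of PCI. */
static inline void outl(uint16_t port, uint32_t val)
{
    __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
}
static inline uint32_t inl(uint16_t port)
{
    uint32_t val;
    __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

#define PCI_CONFIG_ADDRESS 0x0CF8
#define PCI_CONFIG_DATA    0x0CFC

/* Configuration Mechanism #1: build the 32-bit address (enable bit, bus,
 * device, function, register offset), write it to 0xCF8, read data at 0xCFC. */
static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t reg)
{
    uint32_t addr = (1u << 31)                       /* enable configuration cycle    */
                  | ((uint32_t)bus << 16)
                  | ((uint32_t)(dev & 0x1F) << 11)
                  | ((uint32_t)(fn  & 0x07) << 8)
                  | (reg & 0xFC);                    /* dword-aligned register offset */
    outl(PCI_CONFIG_ADDRESS, addr);
    return inl(PCI_CONFIG_DATA);
}

/* A Vendor ID of 0xFFFF means no function responds at this address. */
static int pci_function_present(uint8_t bus, uint8_t dev, uint8_t fn)
{
    return (pci_cfg_read32(bus, dev, fn, 0x00) & 0xFFFF) != 0xFFFF;
}
```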

Interrupt Handling

In traditional PCI systems, interrupt requests from peripheral devices are managed using four dedicated signal lines per expansion slot: INTA#, INTB#, INTC#, and INTD#. These lines are optional for devices but provide a standardized mechanism for signaling events to the host processor. The signals operate as level-sensitive interrupts, asserted low (active low) using open-drain output buffers, which enables wired-OR sharing among multiple devices connected to the same line without electrical conflicts.

The interrupt handling process begins when a device asserts its assigned INTx# line to indicate an event requiring CPU attention. This assertion is routed through PCI bridges or directly to the system's interrupt controller, such as the Intel 8259 programmable interrupt controller (PIC) or an Advanced Programmable Interrupt Controller (APIC), where it is mapped to a specific system IRQ line based on configuration space settings established during the auto-configuration process. The interrupt controller then notifies the CPU, which suspends its current execution, saves the processor state, and vectors to the corresponding interrupt service routine (ISR) via the interrupt vector table. Since the interrupts are level-sensitive, the device must deassert the INTx# line only after the ISR has serviced the request to avoid continuous triggering; shared lines require all asserting devices to deassert before the interrupt can be cleared.

In multi-slot or hierarchical PCI topologies, interrupt lines are routed via PCI-to-PCI bridges, which typically remap downstream INTx# signals to upstream lines using a rotational offset (e.g., INTA# from a downstream device may map to INTD# on the bridge) to balance load and enable sharing across segments. This routing ensures scalability in systems with multiple buses while maintaining compatibility.

To address limitations of pin-based interrupts, such as the fixed number of lines and sharing overhead, Message Signaled Interrupts (MSI) were introduced as an optional feature in Revision 2.2 of the PCI Local Bus Specification. With MSI, a device signals an interrupt by issuing a dedicated memory write transaction to a system-assigned address with a system-assigned data value, rather than asserting a physical pin; this write is treated as a posted transaction and routed through the PCI fabric to the interrupt controller. MSI supports up to 32 vectors per device (using a 16-bit message data field) and employs edge-triggered semantics, where each write is a distinct event without requiring deassertion, enhancing efficiency in high-device-density environments. Configuration occurs via the MSI capability structure in the device's configuration space, where the system allocates the target address and data value during initialization.

Interrupt signaling on the INTx# lines operates independently of bus arbitration for data transactions; while devices compete for bus mastery using separate REQ# and GNT# signals, a device can assert its INTx# line at any time without owning the bus, whereas MSI writes are ordinary posted memory writes handled through normal arbitration. This separation allows low-latency event notification even when the bus is occupied by other operations.
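
The rotational remapping that bridges apply to INTx# lines is conventionally expressed as a simple modulo-4 "swizzle." The helper below is a minimal sketch of that convention as commonly implemented in firmware and operating systems, not a routine taken from any particular codebase.

```c
/* Conventional INTx "swizzle" across a PCI-to-PCI bridge: the pin seen on the
 * upstream side depends on the downstream device number and interrupt pin.
 * pin: 0 = INTA#, 1 = INTB#, 2 = INTC#, 3 = INTD#.
 * The rotation spreads INTA# of adjacent devices across different upstream lines. */
static unsigned pci_swizzle_intx(unsigned device, unsigned pin)
{
    return (pin + device) % 4;
}

/* Example: device 2 asserting INTA# (0) appears as INTC# (2) upstream. */
```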

Bus Architecture and Operations

Address Spaces and Memory Mapping

The PCI bus utilizes three primary address spaces to enable host-to-device communication: the configuration space, the I/O space, and the memory space. The configuration space is a per-function space limited to 256 bytes, accessed through specialized mechanisms distinct from standard I/O or memory transactions, allowing enumeration and setup of devices during system initialization. The I/O space provides a flat addressing model for legacy device control, supporting either a 16-bit range (up to 64 KB total) or a 32-bit extension (up to 4 GB), depending on the host bridge implementation. In contrast, the memory space facilitates memory-mapped I/O operations, offering a 32-bit range by default (up to 4 GB) with optional 64-bit extensions for larger systems.

Device memory mapping is managed through Base Address Registers (BARs) located in the configuration space header (offsets 0x10 to 0x24 for standard devices), where each BAR specifies the type, size, and location of the device's addressable regions. During enumeration, the operating system probes each BAR by writing all 1s to it and reading back the value; the low-order bits that remain 0 indicate the device's requested region size, which must be a power of 2 (e.g., 4 KB, 16 KB, 1 MB, or up to 2 GB per BAR). The OS then assigns non-overlapping base addresses from the available I/O or memory ranges, writing these values back to the BARs to map the device's registers or buffers into the system's address map, ensuring alignment and avoiding conflicts across multiple devices.

Within the memory space, BARs distinguish between prefetchable and non-prefetchable regions to optimize performance. A prefetchable BAR (indicated by bit 3 set in the BAR) denotes a region without read side effects, allowing the host CPU or bridges to perform speculative burst reads across cache line and 4 KB boundaries without risking data corruption or unnecessary stalls, which enhances throughput for sequential access patterns like frame buffer transfers. Non-prefetchable regions (bit 3 clear) are used for areas with potential side effects on reads, such as control registers, and restrict prefetching to prevent errors, though they may incur higher latency due to aligned access requirements.

For systems exceeding 4 GB of addressable memory, PCI supports 64-bit addressing through extensions in the memory space. A 64-bit BAR is signaled by setting bits [2:1] to 10b in the lower BAR, consuming two consecutive 32-bit BARs: the first holds the lower 32 bits of the base address, while the second provides the upper bits ([63:32]). Transactions targeting these addresses employ a dual address cycle mechanism, in which the low-order bits are transferred in the first address phase (with the DAC command) and the high-order bits in the second, enabling devices to respond to addresses beyond the 32-bit limit while maintaining compatibility with legacy 32-bit systems. This extension is particularly vital for prefetchable regions in high-memory environments, as it allows mapping large device buffers without fragmentation.
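
A hedged sketch of the BAR sizing probe described above: `cfg_read32`/`cfg_write32` stand in for whatever configuration access method the platform provides (for example, Mechanism #1 shown earlier) and are assumed, not standardized names.

```c
#include <stdint.h>

/* Assumed platform hooks for configuration space access (e.g., Mechanism #1). */
uint32_t cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t reg);
void     cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t reg, uint32_t val);

/* Probe one 32-bit BAR: write all 1s, read back, and derive the size from the
 * writable bits. Bit 0 selects I/O vs. memory space, bits [2:1] the memory
 * type (00b = 32-bit, 10b = 64-bit), and bit 3 the prefetchable attribute. */
static uint32_t bar_size(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t bar_reg)
{
    uint32_t original = cfg_read32(bus, dev, fn, bar_reg);
    cfg_write32(bus, dev, fn, bar_reg, 0xFFFFFFFFu);
    uint32_t probed = cfg_read32(bus, dev, fn, bar_reg);
    cfg_write32(bus, dev, fn, bar_reg, original);        /* restore original value */

    if (probed == 0)                      /* BAR not implemented */
        return 0;

    uint32_t mask = (probed & 1u) ? 0xFFFFFFFCu          /* I/O space     */
                                  : 0xFFFFFFF0u;         /* memory space  */
    return (~(probed & mask)) + 1u;       /* size is always a power of two */
}
```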

Command Codes and Transaction Types

In the PCI bus protocol, bus commands are encoded on the C/BE[3:0]# lines during the address phase to specify the type of transaction a master device intends to perform. These four-bit encodings allow for 16 possible commands, though some are reserved or specific to extensions. The primary commands include Interrupt Acknowledge (0000), Special Cycle (0001), I/O Read (0010), I/O Write (0011), Memory Read (0110), Memory Write (0111), Configuration Read (1010), and Configuration Write (1011), with additional memory-related variants such as Memory Read Multiple (1100), Dual Address Cycle (1101), Memory Read Line (1110), and Memory Write and Invalidate (1111).
Command | Encoding (C/BE[3:0]#) | Description
Interrupt Acknowledge | 0000 | Master reads interrupt vector from an interrupting device; implicitly addressed to the interrupt controller.
Special Cycle | 0001 | Broadcast message to all agents on the bus, without a target response; used for system-wide signals like shutdown.
I/O Read | 0010 | Master reads from I/O space; supports single or burst transfers, non-posted to ensure completion acknowledgment.
I/O Write | 0011 | Master writes to I/O space; non-posted, requiring target acknowledgment before completion.
Reserved | 0100 | Not used in standard PCI.
Memory Read | 0110 | Master reads from memory space; supports single or burst transfers, targeting specific address spaces like system memory or expansion ROM.
Memory Write | 0111 | Master writes to memory space; posted, allowing the master to proceed without waiting for target acknowledgment to improve performance.
Reserved | 1000 | Not used in standard PCI.
Configuration Read | 1010 | Master reads from a device's configuration space for initialization; uses Type 0 or Type 1 addressing.
Configuration Write | 1011 | Master writes to a device's configuration space; non-posted.
Memory Read Multiple | 1100 | Optimized memory read supporting bursts across multiple cache lines.
Dual Address Cycle | 1101 | Precedes a 64-bit address transaction for 64-bit addressing support.
Memory Read Line | 1110 | Memory read optimized for filling a full cache line in a burst.
Memory Write and Invalidate | 1111 | Memory write that invalidates cache lines, combining write and coherency operations.
Transaction types in PCI are categorized as reads and writes, with variations for single or burst modes to transfer multiple doublewords efficiently. Reads are generally non-posted, requiring the target to complete data transfer before the master proceeds, while memory writes are posted to decouple the master from target latency, though I/O and configuration transactions remain non-posted for reliability. These commands target distinct address spaces, such as I/O for legacy device control or memory for bulk data access. Masters initiate transactions by asserting the FRAME# signal during the address phase, driving the command on C/BE[3:0]# and the target address on AD[31:0], after having obtained the bus through REQ#/GNT# arbitration. Targets respond to valid commands by asserting DEVSEL# to claim the transaction, with decode timing classified as fast (DEVSEL# asserted one clock after the address phase), medium (two clocks), or slow (three clocks), plus a later subtractive-decode option, to accommodate varying device decoding speeds. Devices must claim transactions matching their enabled address ranges via the configuration space registers, ensuring only the intended target responds. Special Cycles differ by not requiring DEVSEL#, as they are broadcasts without a specific target.

Latency Management and Delayed Transactions

In the PCI bus architecture, latency arises primarily from the time required for a target device to respond to an initiator's request, with the specification mandating that the target complete the initial data phase—by asserting TRDY# for ready or STOP# for termination—within 16 clock cycles of the assertion of FRAME#. This response window, often ranging from 7 to 15 cycles in practice depending on device capabilities and bus conditions, ties up the shared bus and reduces overall throughput in multi-device configurations where fast initiators must wait for slower targets.

Delayed transactions were introduced in the PCI Local Bus Specification revision 2.1 to mitigate these constraints, allowing an initiator to issue a request that the target accepts but cannot immediately fulfill. The initiator releases the bus after the target signals a retry, while the target latches the request internally and processes it asynchronously; completion occurs later when the initiator retries the exact same transaction, at which point the target provides the data or acknowledgment without re-decoding the address or command.

The core mechanism for delayed transactions employs the STOP# signal to disconnect the current bus cycle and the DEVSEL# signal to confirm the target's claim of the transaction during the address phase. If DEVSEL# is asserted but no data is transferred (TRDY# remains deasserted), the target issues a retry via STOP#, prompting the initiator to relinquish the bus and attempt completion on a future cycle. This supports delayed read transactions fully and non-posted write transactions (such as I/O and configuration writes), and it is compatible with PCI's optional 64-bit data extensions. By decoupling request acceptance from immediate completion, delayed transactions enhance system performance by permitting bus reuse for other masters during the target's processing delay, avoiding wait states that would otherwise bottleneck the interconnect. This capability is especially vital for PCI bridges interfacing with slower subsystems, such as legacy I/O buses, where native response times exceed PCI's initial latency limit of 16 clocks.
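
The following is a conceptual model of a delayed read at a target, written as a software sketch rather than a hardware description: the target latches the first request, answers retry until its back-end access completes, and only returns data when the initiator repeats the identical transaction. All type and function names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Conceptual model only: responses a target can give to a read attempt. */
typedef enum { RESP_RETRY, RESP_DATA } target_resp_t;

typedef struct {
    bool     pending;       /* a delayed request has been latched           */
    bool     data_ready;    /* slow back-end access has completed           */
    uint32_t latched_addr;  /* address of the latched request               */
    uint32_t data;          /* result returned once the initiator retries   */
} delayed_target_t;

/* Called for every read attempt seen on the bus for this target. */
static target_resp_t target_read(delayed_target_t *t, uint32_t addr, uint32_t *out)
{
    if (!t->pending) {                        /* first attempt: latch and retry   */
        t->pending = true;
        t->data_ready = false;
        t->latched_addr = addr;
        return RESP_RETRY;                    /* STOP# with no data transferred   */
    }
    if (addr == t->latched_addr && t->data_ready) {
        *out = t->data;                       /* matching retry: complete now     */
        t->pending = false;
        return RESP_DATA;                     /* TRDY# asserted, data transferred */
    }
    return RESP_RETRY;                        /* still busy, or unrelated request */
}

/* Back-end (e.g., a slow bus behind a bridge) finishing some time later. */
static void target_backend_complete(delayed_target_t *t, uint32_t value)
{
    t->data = value;
    t->data_ready = true;
}
```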

Bridge Functionality

PCI-to-PCI Bridges

PCI-to-PCI bridges facilitate the expansion of PCI systems beyond a single bus by establishing connections between a primary bus—typically the one interfacing with the host processor—and a secondary bus, thereby supporting hierarchical topologies that allow for greater device connectivity without overwhelming the main bus. The core function of the bridge involves address decoding to determine which requests belong to the other bus's address ranges, selective forwarding of transactions initiated by masters on either side to appropriate targets on the opposite bus, and traffic isolation to ensure that activities on the secondary bus do not propagate unnecessarily to the primary bus or host, thus preserving bandwidth and electrical loading margins across segments.

Central to the bridge's operation are its configuration registers, which include dedicated fields for specifying bus numbers—each an 8-bit value ranging from 0 to 255—for the primary bus, secondary bus, and subordinate bus (the highest-numbered bus downstream of the bridge), enabling precise routing of configuration transactions during system initialization. These registers work in conjunction with programmable address windows for memory and I/O spaces, which define the ranges of addresses to be forwarded downstream (from primary to secondary) or passed upstream, allowing the bridge to direct traffic efficiently based on decoded address matches.

In handling write transactions, PCI-to-PCI bridges implement support for posted writes, particularly for memory write operations, by queuing the data internally and forwarding it to the destination bus without waiting for an acknowledgment, which prevents the originating master from stalling and assumes reliable eventual delivery to minimize latency in pipelined systems. The bridge's internal arbitration prioritizes requests from the primary bus over those from the secondary bus to favor host-side operations and maintain overall system responsiveness, while the bridge may also implement subtractive decode, whereby, when enabled, it claims transactions on the primary bus whose addresses do not match any positive decode windows and forwards them to the secondary bus.
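
The sketch below is a simplified view of the forwarding decisions a bridge makes from its programmed bus numbers and memory window; register packing and granularity details are abbreviated relative to the real PCI-to-PCI Bridge Architecture Specification, and the struct/function names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified routing state of a PCI-to-PCI bridge. In real hardware the memory
 * base/limit live in 16-bit registers with 1 MB granularity, and the bus
 * numbers sit in the Type 1 configuration header. */
typedef struct {
    uint8_t  primary_bus;
    uint8_t  secondary_bus;
    uint8_t  subordinate_bus;   /* highest bus number behind this bridge */
    uint32_t mem_base;          /* start of downstream memory window     */
    uint32_t mem_limit;         /* inclusive end of the window           */
} pci_bridge_t;

/* Forward a memory transaction seen on the primary bus downstream only if it
 * falls inside the programmed window (positive decode). */
static bool bridge_forward_mem_downstream(const pci_bridge_t *br, uint32_t addr)
{
    return addr >= br->mem_base && addr <= br->mem_limit;
}

/* Route a configuration cycle by bus number: convert Type 1 to Type 0 on the
 * secondary bus, pass Type 1 further down, or ignore it entirely. */
typedef enum { CFG_IGNORE, CFG_TYPE0_ON_SECONDARY, CFG_TYPE1_DOWNSTREAM } cfg_route_t;

static cfg_route_t bridge_route_cfg(const pci_bridge_t *br, uint8_t target_bus)
{
    if (target_bus == br->secondary_bus)
        return CFG_TYPE0_ON_SECONDARY;
    if (target_bus > br->secondary_bus && target_bus <= br->subordinate_bus)
        return CFG_TYPE1_DOWNSTREAM;
    return CFG_IGNORE;
}
```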

Write Optimization Techniques

PCI bridges employ several techniques to optimize write transactions, particularly memory writes, by reducing latency and improving bus utilization across the primary and secondary interfaces. A key method is the use of posted writes, where a bridge accepts write commands from the primary bus without waiting for acknowledgment from the target on the secondary bus. This posting buffers the write data internally, allowing the initiator on the primary side to proceed immediately, thereby minimizing wait states and enhancing overall system throughput. The PCI Local Bus Specification defines posted writes as applicable to the Memory Write and Memory Write and Invalidate commands, enabling bridges to decouple transaction completion signals between buses.

To further streamline posted writes, bridges implement combining, which merges separate writes targeting sequential doubleword addresses into a single larger burst transaction on the secondary bus. For instance, two consecutive 32-bit writes to adjacent doubleword addresses can be combined into one two-doubleword burst, reducing the number of bus cycles required while preserving transaction order. This optimization is recommended for bridges handling posted memory writes, as it decreases overhead and boosts bandwidth utilization without altering the semantic outcome of the operations. The specification emphasizes that combining must maintain the order of writes to ensure data integrity.

Merging complements combining by consolidating partial-byte writes within a doubleword into a single contiguous write. Bridges can, for example, fill individual byte lanes of a doubleword from multiple partial writes, transforming fragmented operations into efficient full-width transfers. This technique is particularly beneficial for sequential data transfers, such as graphics or frame buffer operations, where it minimizes delays and maximizes data moved per transaction. Byte merging, a subset of this process, specifically handles sub-doubleword writes by assembling them into full doublewords before forwarding. The PCI-to-PCI Bridge Architecture Specification details how merging preserves address ordering and supports burst modes to optimize secondary bus performance.
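
A sketch of the combining idea: a queue of posted doubleword writes with consecutive addresses is collapsed into burst descriptors before being replayed on the secondary bus. The structure and function names are illustrative, not drawn from the specification.

```c
#include <stddef.h>
#include <stdint.h>

/* A posted 32-bit memory write waiting in a bridge's buffer. */
typedef struct { uint32_t addr; uint32_t data; } posted_write_t;

/* A burst to issue on the secondary bus: starting address plus dword count. */
typedef struct { uint32_t start_addr; size_t ndwords; } burst_t;

/* Combine queued posted writes with sequential dword addresses into bursts,
 * preserving the original ordering (as the specification requires).
 * Returns the number of bursts written to 'out'. */
static size_t combine_posted_writes(const posted_write_t *q, size_t n,
                                    burst_t *out, size_t out_cap)
{
    size_t nbursts = 0;
    for (size_t i = 0; i < n && nbursts < out_cap; ) {
        burst_t b = { q[i].addr, 1 };
        /* Extend the burst while the next queued write targets the next dword. */
        while (i + b.ndwords < n &&
               q[i + b.ndwords].addr == b.start_addr + 4 * (uint32_t)b.ndwords)
            b.ndwords++;
        out[nbursts++] = b;
        i += b.ndwords;
    }
    return nbursts;
}
```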

Signal Protocol and Timing

Core Bus Signals

The core bus signals of the Peripheral Component Interconnect (PCI) form the electrical interface that enables communication between the host and peripheral devices on the parallel bus. These signals are defined in the PCI Local Bus Specification and are categorized into multiplexed address/data lines, control signals for transaction management, arbitration signals for bus access, and power and status lines. All signals operate synchronously to the PCI clock except for reset and interrupts, with most being tri-state to allow shared bus usage among multiple agents.

Multiplexed Signals

The multiplexed address and data signals are central to PCI transactions, allowing efficient use of pins by reusing the same lines for both addressing and data transfer. The AD[31:0] lines serve as the bidirectional, tri-state multiplexed address and data bus, carrying a 32-bit address during the address phase and 32-bit data during subsequent data phases of a transaction. These signals support burst transfers with one or more data phases, where the initiator drives the lines during the address phase and the data source (initiator for writes, target for reads) drives them during data phases. Accompanying the AD lines, the PAR signal provides even parity coverage for the AD[31:0] and C/BE[3:0]# signals during both address and data phases; it is driven by the agent asserting FRAME# for the address phase and by the data source during data phases, ensuring integrity through parity checking. The C/BE[3:0]# (Command/Byte Enable) lines, also bidirectional and tri-state with active-low assertion, encode the transaction command (such as memory read, I/O write, or configuration access) during the address phase and specify which byte lanes are active during data phases, enabling partial-width transfers for efficiency.

Control Signals

Control signals manage the timing and flow of individual transactions on the PCI bus. The FRAME# signal, driven by the initiator and active low, delineates the start and duration of a transaction: it is asserted to begin the address phase and deasserted during the final data phase to signal completion or early termination. The IRDY# (Initiator Ready) signal, active low and driven by the current bus master, indicates when the initiator is ready to complete the current data phase—presenting valid data for writes or accepting data for reads—allowing the initiator to control data transfer pacing. Complementing this, TRDY# (Target Ready), also active low and driven by the target, signals that the target is ready to accept or provide data, enabling wait states if necessary without halting the bus. The DEVSEL# (Device Select) signal, active low and driven by the target, acknowledges selection by asserting within a specified number of clocks after the address phase, confirming the device has decoded the address as intended for it; fast, medium, or slow timings are supported to accommodate varying device latencies. Similarly, STOP#, active low and target-driven, requests the initiator to halt the current transaction, either as a retry or disconnect (when the target cannot continue the transfer now) or as a target abort on an unrecoverable error, preventing bus lockup.

Arbitration Signals

Arbitration signals facilitate fair access to the shared bus among multiple potential masters. Each PCI master has a dedicated REQ# (Request) line, active low and point-to-point, which the device asserts to signal its intent to become the bus master; the central arbiter monitors these lines to decide ownership. The corresponding GNT# (Grant) line per device, active low and driven solely by the arbiter, indicates permission to use the bus. These point-to-point signals support centralized arbitration, with bus parking allowing the most recent master to retain its grant when no other requests are pending, minimizing arbitration latency. The CLK (Clock) signal provides the synchronous timing reference for all PCI operations, distributed to every device as a free-running input at 33 MHz (or 66 MHz in optional modes), with all other signals sampled or driven on the rising edge except the asynchronous reset. The RST# (Reset) signal, active low and asynchronous, initializes all PCI devices upon system power-up or reset, remaining asserted for at least 1 ms and ensuring all outputs are tri-stated and configuration registers return to their default states.

Power and Status Signals

Power and status signals ensure reliable operation, with the supply pins delivering 5 V (or 3.3 V in later variants) to devices and ground pins providing the reference, supporting up to 25 W per slot. Interrupt signals INTA# through INTD#, active low and open-drain, allow up to four interrupt lines per multi-function device, shared across slots with level-sensitive assertion; these are used for interrupt delivery to the host interrupt controller. The SERR# (System Error) signal, active low and open-drain, reports critical errors such as address parity failures or other issues not covered by per-transaction data parity reporting, allowing system-wide signaling across the bus. The PERR# (Parity Error) signal, active low, reports data parity errors detected during data phases, asserted by the agent receiving the data two clock cycles after the data phase in error. For 64-bit PCI extensions, additional signals such as AD[63:32], C/BE[7:4]#, and PAR64 expand the bus width while maintaining compatibility with 32-bit modes.

Arbitration and Access Control

In PCI, bus access is managed through a centralized arbitration scheme implemented by the host controller or a dedicated arbiter, which grants ownership to bus masters via individual REQ#/GNT# (request/grant) signal pairs, with one pair per potential master device to ensure dedicated signaling. This approach allows multiple devices to compete for the bus without contention beyond the arbiter itself, supporting up to 16 or more masters depending on system design. The arbitration algorithm is not strictly defined in the specification but typically employs fairness methods, such as round-robin rotation, to balance access among requesters while incorporating a parking mechanism that defaults the bus to the last active master when no other requests are pending, thereby minimizing latency for subsequent transactions from the same device.

The request process begins when a bus master requiring access asserts its REQ# signal while the bus is idle or during an ongoing transaction from another master, as arbitration overlaps with data phases to hide latency. Upon detecting the request, the arbiter evaluates priorities and, if granting access, first deasserts the current GNT# line (if active) to release the bus, followed by a turnaround clock where needed to prevent signal contention, before asserting the new GNT# for the requesting master. The master samples its GNT# on the rising clock edge and, upon assertion, may initiate a transaction once the bus is idle; this overlapped (hidden) arbitration enables efficient bus utilization without dedicated idle cycles for granting.

To tolerate varying arbitration delays, the PCI protocol permits a number of clock cycles to elapse between a master's REQ# assertion and the corresponding GNT# assertion, accommodating complex arbiter decisions in multi-master environments while maintaining overall low latency. Additionally, each master's configuration space includes a programmable Latency Timer register, which counts bus clocks during ownership and forces the master to wind down its transaction (by deasserting FRAME#) once the timer expires and its grant has been removed because other masters are requesting, thereby preventing any single device from monopolizing the bus and ensuring equitable access. The REQ# and GNT# signals, as core PCI bus lines, provide the point-to-point communication between each master and the arbiter.

Multi-function PCI devices, which integrate multiple independent functions on a single chip or card, share a single REQ#/GNT# pair across all functions to present only one electrical load and one arbitration interface to the bus, simplifying wiring and arbiter complexity while requiring internal coordination among functions for request prioritization. This shared-pair design ensures that the device as a whole competes as a single master, with internal logic arbitrating among its functions before asserting REQ#.
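
The following is a behavioral sketch (in C rather than RTL) of a round-robin arbiter with bus parking, the kind of policy described above; the specification leaves the actual algorithm to the implementation, so this is only one plausible scheme with illustrative names.

```c
#include <stdint.h>

#define NUM_MASTERS 4

/* Behavioral model of a central arbiter: 'req' holds one REQ# flag per master
 * (1 = asserted), 'last_grant' remembers the most recently granted master. */
typedef struct {
    int last_grant;     /* index of the parked / most recent owner */
} arbiter_t;

/* Round-robin selection starting after the last grant; if nobody is requesting,
 * park the bus on the previous owner so it can start immediately next time.
 * Returns the index whose GNT# would be asserted. */
static int arbiter_select(arbiter_t *a, const uint8_t req[NUM_MASTERS])
{
    for (int i = 1; i <= NUM_MASTERS; i++) {
        int candidate = (a->last_grant + i) % NUM_MASTERS;
        if (req[candidate]) {
            a->last_grant = candidate;
            return candidate;
        }
    }
    return a->last_grant;   /* bus parking: keep GNT# on the last master */
}
```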

Address and Data Phases

In PCI bus transactions, the process begins with an address phase followed by one or more data phases, enabling efficient multiplexed transfer of addressing and payload information on the shared lines. The address phase spans exactly one clock cycle in standard 32-bit operations, during which the initiator places the target address on the AD[31:0] lines, encodes the transaction command on the C/BE#[3:0] lines, and asserts FRAME# low to signal the transaction's initiation to all potential targets on the bus. This command on C/BE#[3:0] during the address phase indicates the operation type, such as a memory read or I/O write.

For transactions requiring 64-bit addressing—specifically memory reads or writes above the 4 GB boundary—a dual address cycle extends addressing to two clock cycles to accommodate the full address width on the 32-bit AD bus. In this dual-cycle mode, the first clock drives the lower 32 address bits on AD[31:0] along with the Dual Address Cycle command encoding on C/BE#[3:0], while the second clock drives the upper 32 address bits on AD[31:0] together with the actual transaction command.

Following the address phase(s), data phases commence, with FRAME# remaining asserted until the final data phase (during which it is deasserted), allowing for variable-length transfers in burst mode to optimize bus utilization for sequential accesses. Each data phase operates via a ready handshake between initiator and target: the initiator asserts IRDY# low when its data is stable and ready for transfer (for writes) or when it is prepared to latch incoming data (for reads), while the target asserts TRDY# low to confirm its readiness, with an actual data transfer occurring only when both signals are sampled low on a clock edge. During write data phases, the C/BE#[3:0] lines function as byte enables, each bit qualifying the corresponding byte on AD[31:0] (e.g., C/BE0# low enables the least significant byte), permitting partial-word writes without affecting unselected bytes. For read data phases, the initiator continues to drive C/BE#[3:0] to indicate the byte lanes of interest, although targets commonly return all four bytes regardless.

Accesses to a PCI device's 256-byte configuration space, which holds registers for device identification, capabilities, and base addresses, employ specialized addressing distinct from memory or I/O spaces. In a Type 0 configuration cycle, used for devices on the bus where the cycle originates, the initiator issues a configuration read or write command with AD[1:0] set to 00b, the function number on AD[10:8], the register number on AD[7:2], and the target selected by asserting its dedicated IDSEL pin (wired to one of the upper AD lines). A Type 1 configuration access, used in multi-bus hierarchies with bridges, instead encodes the bus number on AD[23:16], the device number on AD[15:11], and the function number on AD[10:8], with AD[1:0] set to 01b, allowing bridges to route the cycle across the hierarchy and convert it to a Type 0 cycle on the destination bus without relying on IDSEL pins at intermediate levels.
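
For the encodings just described, the helper below builds the AD[31:0] pattern driven during a configuration address phase; it is an illustrative sketch of the Type 0/Type 1 layouts, with the IDSEL wiring modeled as a single upper AD bit chosen by the host bridge.

```c
#include <stdint.h>

/* Type 0 configuration address: AD[10:8] = function, AD[7:2] = register,
 * AD[1:0] = 00b; one of AD[31:11] is driven high through the target's IDSEL
 * wiring (modeled here as a bit index chosen per device by the host bridge). */
static uint32_t cfg_type0_addr(unsigned idsel_ad_line, unsigned fn, unsigned reg)
{
    return (1u << idsel_ad_line)            /* idsel_ad_line in 11..31  */
         | ((fn  & 0x7u)  << 8)
         | ((reg & 0x3Fu) << 2);            /* AD[1:0] = 00b (Type 0)   */
}

/* Type 1 configuration address: AD[23:16] = bus, AD[15:11] = device,
 * AD[10:8] = function, AD[7:2] = register, AD[1:0] = 01b; bridges pass it
 * downstream until the bus number matches their secondary bus. */
static uint32_t cfg_type1_addr(unsigned bus, unsigned dev, unsigned fn, unsigned reg)
{
    return ((bus & 0xFFu) << 16)
         | ((dev & 0x1Fu) << 11)
         | ((fn  & 0x7u)  << 8)
         | ((reg & 0x3Fu) << 2)
         | 0x1u;                            /* AD[1:0] = 01b (Type 1) */
}
```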

Transaction Termination and Burst Modes

In PCI, transaction termination is managed through specific signal interactions during the data phase to ensure orderly completion or interruption of bus activity. Normal termination occurs at the final data transfer: the initiator deasserts FRAME# while keeping IRDY# asserted to mark the current data phase as the last, the target completes it by asserting TRDY#, and both parties then deassert their respective signals—IRDY#, TRDY#, and DEVSEL#—returning the bus to an idle state. This process prevents bus contention and allows immediate arbitration for the next transaction.

Initiators terminate burst transactions without errors by controlling the length of the transfer, deasserting FRAME# only after the desired number of data phases, provided the target has not asserted STOP#. This mechanism supports efficient bursts, where addresses increment linearly by one doubleword (four bytes) per data phase, enabling sequential accesses without repeated single-cycle overhead. The initiator must also observe the specification's data-phase latency limits to avoid stalling the bus, maintaining efficiency for multi-doubleword transfers.

Targets terminate bursts to manage resource constraints or errors, using the STOP# signal in combination with DEVSEL#, TRDY#, and FRAME#. A disconnect—suitable for buffer or boundary limitations during bursts—occurs when the target asserts STOP# during a data phase, prompting the initiator to finish the current phase and release the bus, with the remainder of the transfer eligible for resumption as a new transaction. A target abort signals an unrecoverable error by asserting STOP# with DEVSEL# deasserted, immediately halting the transaction and reporting the fault via status registers. For delayed transactions (retry), the target asserts STOP# before any data has been transferred while DEVSEL# remains asserted, causing the initiator to deassert FRAME# promptly and reissue the request for later completion, avoiding bus stalls in latency-sensitive scenarios.

Burst addressing in PCI optimizes sequential transfers, with modes selected by the initiator during the address phase using the AD[1:0] encoding for memory read and write commands. Linear mode employs incrementing addressing, advancing the address by one doubleword (four bytes) per data phase without boundary restrictions or wrapping, which is mandatory for all targets and supports bursts across cache line boundaries. The wrap (cacheline wrap) mode, optional for targets, causes the address to loop back to the start of the cache line after reaching its boundary (e.g., after four doublewords for a 16-byte line), facilitating efficient cache line fills; its extent is bounded by the system's programmed cacheline size, ensuring compatibility across implementations. These termination and burst mechanisms integrate with the data phase handshakes, where IRDY# and TRDY# synchronize transfers before any termination signals take effect.

64-Bit Extensions and Parity

The 64-bit PCI extension provides an optional enhancement to the standard 32-bit bus, enabling higher bandwidth through wider paths for both addresses and data. Defined in the PCI Local Bus Specification, this feature adds 64 pins to the connector—32 on each side—as a physically longer segment beyond the 32-bit portion, keyed so that 64-bit and 32-bit cards and slots intermate correctly. The primary added signals include AD[63:32] for the upper 32 address/data lines, C/BE[7:4]# for the corresponding upper byte enables, and PAR64 for parity protection on those lines. Additionally, the M66EN signal indicates support for 66 MHz operation in compatible slots, allowing the bus to run at the higher frequency when all agents support it.

Addressing in 64-bit PCI employs a dual address cycle mechanism to handle addresses beyond 32 bits. When a master initiates a transaction requiring a 64-bit address, it issues a Dual Address Cycle (DAC) command: the first cycle carries the lower 32 bits on AD[31:0] with the DAC encoding on C/BE[3:0]#, followed immediately by the upper 32 bits and the actual bus command in the next cycle. To negotiate 64-bit data transfer capability, the master asserts REQ64# low during the address phase if it supports or requires the wider path. The target samples REQ64# and, if compatible, asserts ACK64# low to confirm 64-bit operation; otherwise, the transaction defaults to 32-bit transfers using only AD[31:0] and C/BE[3:0]#. This ensures interoperability with 32-bit devices.

Parity mechanisms in PCI, including the 64-bit extensions, provide error detection across the bus. Even parity is generated over the 36 bits comprising AD[31:0] and C/BE[3:0]# and carried on PAR, with a second parity bit, PAR64, covering AD[63:32] and C/BE[7:4]# in 64-bit mode; each parity signal is driven by the agent sourcing the address or data one clock after the corresponding AD and C/BE# signals to allow computation time. All agents must check parity if enabled via the Parity Error Response bit in their Command register; a detected mismatch during a data phase prompts the receiving agent to assert PERR# low two clocks after the erroneous data phase, indicating an error that typically triggers retry or driver-level handling. For more severe issues, such as address parity errors, master aborts, or target aborts, the SERR# signal is asserted by the detecting agent to notify the system of unrecoverable conditions, often leading to non-maskable interrupts or system error handling. These signals build on the core signal framework by extending coverage to the additional 64-bit lines.

Low-latency optimizations available to conventional PCI, including 64-bit implementations, include fast DEVSEL# assertion, where a target capable of rapid decoding drives DEVSEL# active on the clock immediately following the address phase, minimizing master wait states compared to medium or slow timings. Complementing this, fast back-to-back transactions permit the same initiating agent to start a new transaction immediately after the previous one terminates—without the standard turnaround idle clock—provided the preceding transaction was a write and the targets involved support fast back-to-back operation; this reduces idle cycles and boosts efficiency for bursty workloads.
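
Even parity over AD[31:0] and C/BE[3:0]# means PAR is chosen so that the 37 driven bits (including PAR itself) contain an even number of ones; the small sketch below shows that computation and the corresponding receiver check (PAR64 would be computed the same way over the upper lanes). Function names are illustrative.

```c
#include <stdint.h>

/* Count of set bits, used to compute even parity. */
static unsigned ones(uint64_t v)
{
    unsigned n = 0;
    while (v) { n += (unsigned)(v & 1u); v >>= 1; }
    return n;
}

/* PAR is driven so that AD[31:0], C/BE[3:0]#, and PAR together carry an even
 * number of ones; i.e., PAR equals the XOR of the 36 covered bits. */
static unsigned pci_par(uint32_t ad, uint8_t cbe)
{
    return (ones(ad) + ones(cbe & 0x0Fu)) & 1u;
}

/* Receiver-side check: returns nonzero if PERR# should be signaled
 * (total count of ones, including the received PAR bit, is odd). */
static unsigned pci_parity_error(uint32_t ad, uint8_t cbe, unsigned par)
{
    return (ones(ad) + ones(cbe & 0x0Fu) + par) & 1u;
}
```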

Legacy and Modern Relevance

Obsolete Features

The original PCI Local Bus specification provided optional hardware support for cache snooping to ensure coherency between bus-master initiated writes and the CPU cache, particularly for write-back caching modes. This involved the dedicated pins SDONE (snoop done) and SBO# (snoop backoff), which allowed a cache controller to monitor address phases on the bus and signal snoop completion or hits to modified lines, with SBO# forcing a back-off during write cycles. The mechanism enabled the PCI bus to notify the CPU cache of potential invalidations or flushes, preventing stale data issues in systems where the cache controller interfaced directly with the bus. However, with the integration of on-chip caches and host-bridge-managed coherency in processors like the Intel Pentium starting in 1993, bus-level snooping support became unnecessary; by PCI revision 2.2 in 1998 these pins were designated as obsolete and must be left unconnected in subsequent implementations.

Special Cycles represented a unique transaction type in early PCI designs, functioning as broadcast operations without a targeted device, intended for system-level signaling such as shutdown events or vendor-specific messages. These cycles used a specific command encoding on the C/BE# lines and propagated across a bus segment but did not cross PCI-to-PCI bridges, limiting their scope. Due to infrequent adoption—stemming from challenges in ensuring compatibility across diverse hardware and software ecosystems—and the availability of more reliable alternatives like configuration space accesses, Special Cycles saw minimal real-world use and were dropped in the transition to PCI Express, which uses message transactions instead.

PCI version 2.2 introduced foundational power management features, defining device power states from D0 (fully operational) to D3 (powered off) and supporting suspend/resume operations through registers in the configuration space, allowing software to control power consumption and clock gating for peripherals. These capabilities, outlined in the PCI Power Management Interface Specification 1.0 (released in 1997 and referenced by PCI 2.2), enabled basic energy savings but lacked comprehensive system integration. They were subsequently subsumed into the Advanced Configuration and Power Interface (ACPI) framework, first published in 1996, which extends PCI power states into an OS-managed model for coordinated device and platform control, making standalone use of the original PCI mechanisms uncommon in modern systems.

Early PCI interrupt handling relied on wired-OR signaling across four shared pins (INTA# to INTD#), where devices asserted the lines low to request service and system firmware maintained routing tables to map them onto a limited pool of IRQs, with the operating system polling all drivers sharing a line to resolve which device raised it. This pin-based approach, while simple, suffered from scalability limitations in high-density configurations, as shared lines increased servicing overhead and required centralized routing tables for assignment. To address these issues, Message Signaled Interrupts (MSI) were introduced as an optional feature in PCI 2.2, enabling devices to generate interrupts via dedicated memory write transactions addressed to an APIC or similar controller, eliminating the physical wires and supporting up to 32 vectors per device for better performance in dense and multi-function setups; consequently, wired-OR interrupts were phased out in favor of MSI (and later MSI-X) for new designs, particularly in server and embedded systems.

Transition to Successors

As bandwidth demands escalated in the late 1990s and early 2000s, the inherent limitations of PCI's parallel shared-bus architecture became increasingly apparent, prompting the development of a successor. The shared bus required all devices to compete for access, leading to contention and reduced effective throughput as more peripherals were added; even in its highest configuration of 64-bit width at 66 MHz, PCI delivered a theoretical maximum of only 533 MB/s, shared across the segment. In contrast, PCI Express (PCIe) introduced a serial, point-to-point architecture that eliminated bus contention by giving each device dedicated lanes, enabling scalable bandwidth that began at 250 MB/s per lane in its initial version and later reached 32 GT/s per lane and beyond in subsequent generations. The PCI Special Interest Group (PCI-SIG) formalized this shift by releasing the PCIe 1.0 specification in 2003, marking the official debut of the new standard as a high-speed serial interconnect designed to supplant conventional PCI while maintaining essential compatibility features.

To ensure a smooth transition, PCIe incorporated backward compatibility at the software level, allowing legacy PCI drivers and configuration software to detect and operate PCIe devices transparently; in addition, hardware bridges that translate between the serial and parallel domains enabled older PCI cards to function in newer PCIe-based systems. Despite the dominance of PCIe, conventional PCI retains niche relevance in modern contexts, particularly within industrial and embedded systems where compatibility is prioritized over peak performance, such as industrial control panels, servers, and specialized equipment, as of 2025. Adapters that convert PCIe slots to PCI interfaces further support this persistence, allowing integration of older expansion cards in contemporary setups without full system overhauls.

The era of active PCI development effectively concluded with the release of the PCI Local Bus Specification Revision 3.0 in February 2004, which removed support for the 5V keyed connector while maintaining 3.3V signaling, after which no further revisions were issued by PCI-SIG, signaling a strategic pivot to PCIe. In consumer personal computers, the migration to PCIe was essentially complete by the mid-2010s, as motherboard manufacturers phased out PCI slots in favor of the more efficient standard, with Intel notably dropping native PCI support from its chipsets around 2010.
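
The bandwidth figures cited above follow directly from the signaling parameters; the short C calculation below reproduces them, assuming decimal megabytes and including the 8b/10b encoding overhead of first-generation PCIe.

    #include <stdio.h>

    int main(void)
    {
        /* Conventional PCI: a parallel bus moves (width / 8) bytes per clock,
           and that bandwidth is shared by every device on the segment. */
        double pci_64_66 = (64.0 / 8.0) * 66.666e6;         /* ~533 MB/s, shared */

        /* PCIe 1.0: 2.5 GT/s per lane with 8b/10b encoding (8 payload bits per
           10 transferred bits), per direction of each dedicated lane. */
        double pcie_gen1_lane = 2.5e9 * (8.0 / 10.0) / 8.0; /* 250 MB/s per lane, per direction */

        printf("PCI 64-bit @ 66 MHz : %.0f MB/s (shared bus)\n", pci_64_66 / 1e6);
        printf("PCIe 1.0, one lane  : %.0f MB/s per direction\n", pcie_gen1_lane / 1e6);
        return 0;
    }

Unlike the single 533 MB/s figure, which is shared by every device on the bus, the 250 MB/s applies to each direction of every lane, so even a first-generation x16 link provided roughly 4 GB/s per direction to a single device.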
