Peripheral Component Interconnect
Peripheral Component Interconnect (PCI) is an industry-standard local bus architecture designed for connecting hardware components, such as add-in cards and peripherals, to a computer's motherboard.[1] Developed by Intel as a response to fragmented bus standards like ISA and VESA's VL-Bus, the original PCI specification was released in 1992 and first implemented in 1993 alongside the Pentium processor.[2][3] The PCI Special Interest Group (PCI-SIG), an open consortium established in 1992 that grew to more than 1,000 member companies, maintains and evolves the standard, ensuring broad industry compatibility through royalty-free licensing.[4][5] As a parallel bus operating at 33 MHz with a 32-bit data width (expandable to 64 bits), conventional PCI supported a maximum theoretical throughput of 133 MB/s, featuring plug-and-play auto-configuration for resources like interrupts and memory addressing.[1][6] It quickly became ubiquitous in PCs, enabling faster data transfer for devices like graphics cards and network adapters, and was named PC Magazine's Product of the Year in 1993 for its role in standardizing hardware integration.[3] Later revisions introduced 66 MHz speeds and 3.3V signaling for improved efficiency, while PCI-X extended it for servers with higher bandwidth up to 1 GB/s.[1][7] By the early 2000s, limitations of the parallel design prompted the transition to PCI Express (PCIe), a serial point-to-point interface launched in 2003, which offers scalable lanes and dramatically higher speeds while maintaining backward compatibility with PCI software.[4][8] Today, while legacy PCI slots are rare in consumer hardware, its foundational principles underpin modern expansions like PCIe 7.0, supporting data rates up to 128 GT/s for applications in AI, storage, and networking.[9][10]
Overview
Definition and Purpose
The Peripheral Component Interconnect (PCI) is a high-speed parallel computer expansion bus standard developed by Intel and introduced in 1992 as a local bus system for connecting peripheral devices to a computer's motherboard.[2] Designed to enable modular hardware expansion, PCI provides a standardized interface for add-in cards—such as graphics accelerators, sound cards, and network interfaces—to communicate directly with the central processing unit (CPU) and system memory.[11] The primary purpose of PCI is to facilitate efficient, high-bandwidth communication between the host processor and peripheral devices, supporting burst-mode data transfers at speeds up to 133 MB/s in its original 32-bit configuration operating at 33 MHz.[11] This capability addressed the limitations of earlier expansion buses like the Industry Standard Architecture (ISA), which was constrained to 8.33 MB/s, and the VESA Local Bus (VLB), a short-lived interim solution offering theoretical bandwidth up to 133 MB/s at 33 MHz but lacking robust standardization, electrical stability for multiple devices, and plug-and-play support.[2] By incorporating auto-configuration mechanisms, PCI simplified device installation and resource allocation, promoting broader adoption in personal computers during the mid-1990s.[2] Fundamentally, PCI employs a shared parallel bus architecture with multiple expansion slots connected via a common set of address, data, and control lines, typically allowing four or five devices per bus segment owing to electrical loading limits.[11] Transactions occur in a master-slave model, where a bus master (such as the CPU or a peripheral card) initiates read or write operations to a target slave device, enabling direct memory access and synchronized data exchange across the system.[11] Later revisions expanded these foundations to include 66 MHz clock rates and 64-bit data widths for enhanced performance.[9]
Key Features and Advantages
The Peripheral Component Interconnect (PCI) bus operates synchronously, utilizing a shared clock signal to coordinate all transactions among connected devices, which ensures predictable timing and simplifies protocol implementation compared to asynchronous buses.[12] The base specification defines a 33 MHz clock rate, delivering a theoretical peak bandwidth of 133 MB/s for 32-bit transfers, with later revisions supporting 66 MHz for doubled performance.[13] It employs a multiplexed 32-bit address and data bus, which can be extended to 64 bits via optional signaling for enhanced capacity in high-bandwidth applications.[13] Architecturally, PCI supports up to 32 devices per bus through unique device numbering in its configuration mechanism, though electrical loading constraints typically limit unbuffered implementations to around 10 loads, including the host bridge and slots.[14] A primary advantage of PCI is its burst transfer mode, which enables multiple consecutive data phases following a single address phase, allowing efficient sequential access to memory or I/O without repeated addressing overhead.[15] This contrasts sharply with the ISA bus, where each data transfer requires a dedicated address cycle, capping ISA throughput at approximately 8 MB/s even at its 8 MHz clock, while PCI achieves significantly higher effective rates for burst-oriented operations like graphics or disk I/O.[16] Bus mastering capabilities further reduce CPU involvement by permitting peripheral devices to initiate direct memory access (DMA) transactions, offloading data movement and minimizing processor interrupts for sustained transfers.[17] PCI's plug-and-play auto-configuration, facilitated by a 256-byte configuration space per device accessible via standardized reads and writes during system initialization, enables dynamic resource allocation through BIOS or operating system enumeration, obviating the manual jumper or switch settings common in ISA systems.[18] This promotes ease of use and scalability across diverse hardware. The bus also accommodates slower devices: a target can insert wait states by delaying its ready signal, so mixed-speed peripherals share the same protocol without disrupting faster peers.[16] In specialized implementations, the PCI Hot-Plug specification allows runtime insertion or removal of cards under operating-system control, with defined power sequencing and slot signaling, enhancing reliability in server or industrial environments.[9]
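The bandwidth figures above follow directly from the clock rate and bus width, and the benefit of bursting can be estimated with simple arithmetic. The sketch below is illustrative only: it assumes an idealized transaction of one address phase followed by N back-to-back data phases, with no wait states, arbitration, or turnaround cycles, so real-world throughput is lower.

```c
/* Back-of-the-envelope throughput figures for the bus parameters quoted
 * above.  Simplified model: one address phase plus N data phases per burst,
 * no wait states or arbitration overhead, decimal megabytes. */
#include <stdio.h>

int main(void) {
    const double clock_hz  = 33.33e6; /* conventional PCI clock            */
    const double bus_bytes = 4.0;     /* 32-bit data path = 4 bytes/phase  */

    /* Peak rate: one data phase per clock. */
    double peak = clock_hz * bus_bytes;            /* ~133 MB/s */

    /* Effective rate for an N-phase burst: N data clocks + 1 address clock. */
    for (int n = 1; n <= 16; n *= 4) {
        double effective = peak * n / (n + 1.0);
        printf("burst of %2d data phases: %6.1f MB/s\n", n, effective / 1e6);
    }
    printf("theoretical peak: %.1f MB/s\n", peak / 1e6);
    return 0;
}
```

Even short bursts recover most of the peak rate, which is why burst-oriented devices such as graphics and disk controllers benefited most from PCI.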
History and Development
Origins and Initial Design
The Peripheral Component Interconnect (PCI) standard originated in the early 1990s as a response to the growing performance demands of personal computers, particularly with the impending release of Intel's Pentium processor. Intel's Architecture Labs began developing the PCI local bus around 1990 to create a high-performance, processor-independent interface for connecting peripherals directly to the CPU, bypassing the limitations of existing expansion buses.[2] The primary motivations were the shortcomings of two earlier buses. The Industry Standard Architecture (ISA) bus operated at only 8.33 MHz with a 16-bit data width, resulting in a maximum throughput of about 8 MB/s, and lacked support for efficient bus mastering or plug-and-play configuration. The Extended Industry Standard Architecture (EISA) bus, while offering 32-bit addressing and bus mastering at up to 8.33 MHz (around 33 MB/s), was overly complex, expensive to implement, and primarily suited for servers rather than desktops.[18][19] In late 1991, Intel collaborated with key industry partners—including IBM, Compaq, and Digital Equipment Corporation (DEC)—to refine the design and promote it as an open standard, culminating in the formation of the PCI Special Interest Group (PCI-SIG) in June 1992.[20] The PCI-SIG, with these founding members at its core, aimed to ensure broad adoption by managing compliance and evolution of the specification. The initial PCI Local Bus Specification, version 1.0, was released by Intel in June 1992, defining a 32-bit bus operating at 33 MHz for a theoretical maximum bandwidth of 133 MB/s and supporting both burst transfers and plug-and-play resource allocation to simplify system integration.[16] This design targeted desktop and server systems, emphasizing simplicity, low cost, and scalability over proprietary or fragmented alternatives like the VESA Local Bus.[21] Early adoption accelerated in 1993 following the launch of Intel's Pentium processor in March, with the company's 430LX chipset (codenamed Mercury) integrating PCI support as the first such implementation for Pentium-based systems.[22] Unveiled publicly at the Comdex trade show in November 1993, PCI quickly gained traction in PC manufacturing, enabling faster I/O for graphics, networking, and storage peripherals in an era of rapidly advancing CPU speeds.[3] By integrating PCI into mainstream chipsets, Intel and its partners marked the transition to a unified, high-speed expansion standard that dominated PC architectures for the next decade.[23]
Standardization and Revisions
The PCI Special Interest Group (PCI-SIG) was established in 1992 by Intel, Compaq, IBM, DEC, and other prominent industry players to govern the PCI specification, ensuring its evolution through collaborative development and compliance testing. This consortium quickly grew to include hundreds of members, fostering widespread adoption by standardizing the interface for peripheral connectivity across diverse hardware ecosystems. Subsequent revisions to the PCI Local Bus Specification refined its capabilities to meet emerging computational demands. Version 2.0, released on April 30, 1993, formalized the core connector design, pinout, and electrical signaling, providing a stable foundation for implementation. Version 2.1, issued June 1, 1995, introduced support for 66 MHz operation to double potential bandwidth over the original 33 MHz clock and added optional 64-bit address and data extensions for enhanced performance in high-end systems.[24] These updates enabled broader compatibility with faster processors while maintaining backward compatibility with earlier designs.[25] Further enhancements came in Version 2.2, published December 18, 1998, which incorporated refinements to power management protocols, including better support for low-power states and hot-plug capabilities through companion specifications.[26] Version 2.3, effective March 29, 2002, addressed limitations in 64-bit addressing for systems exceeding 4 GB of RAM by modifying the configuration space to handle extended memory mappings, while deprecating 5 V-keyed add-in cards in favor of 3.3 V and universal keying for improved efficiency and safety.[27] These revisions solidified PCI as a de facto industry standard, with implementations in chipsets from vendors like Intel, AMD, and VIA Technologies, enabling seamless integration in billions of personal computers and servers.[2] By 2003, the PCI-SIG shifted primary development efforts toward PCI Express, recognizing the need for serial interconnects to support escalating bandwidth requirements, though conventional PCI continued to receive errata updates and legacy support thereafter. This transition marked the maturation of PCI as a foundational technology, with its specifications remaining influential in embedded and industrial applications.[9]
Physical and Electrical Specifications
Connector Design and Pinout
The PCI connector utilizes an edge-card design with gold-plated contacts, known as "gold fingers," on the add-in card that insert into a slot on the motherboard or host adapter.[11] The standard 32-bit PCI connector has 62 positions per side (124 in total), four of which serve as keying positions, leaving 120 contacts for signals, power, and ground.[11] For 64-bit PCI support, an extension adds 32 positions per side (64 additional contacts), bringing the connector to 94 positions per side (188 in total) while remaining mechanically compatible with 32-bit cards.[11] Key signal pins are assigned as follows: the multiplexed address and data lines AD[31:0] are interleaved along both sides of the connector (for example, AD31 at B20 and AD30 at A20), allowing bidirectional transfer of 32-bit addresses and data.[11] Bus command signals C/BE[3:0]# indicate the type of transaction, such as memory read or I/O write, and double as byte enables during data phases.[11] Control signals include FRAME# (A34) to delineate the start and duration of a bus transaction, IRDY# (B35) and TRDY# (A36) for initiator and target ready handshaking, DEVSEL# (B37) for device-select assertion, and STOP# (A38) to request transaction termination.[11] Power and ground pins—+5V, +3.3V, +12V, −12V, and multiple GND returns—are distributed along the length of the connector for stable operation.[11] Signals are grouped logically for efficient routing and noise reduction: the multiplexed address/data and parity pins form the core of the connector, while clock, reset, arbitration, interrupt, and JTAG pins cluster near the card bracket.[11] Key positions at pins 12/13 (3.3 V) and 50/51 (5 V) differentiate 5V-only, 3.3V-only, and universal voltage environments, ensuring electrical compatibility.[11] A 32-bit PCI card, using only the first 62 positions, can insert into a 64-bit slot of matching voltage keying, leaving the extension unused; conversely, 64-bit cards need a 64-bit slot to access the additional AD[63:32] and C/BE[7:4]# pins, although a 64-bit card placed in a shorter slot can still operate in 32-bit mode if its design permits.[11]
| Signal Group | Example Pins (Side A/B) | Description |
|---|---|---|
| Address/Data (AD) | A20 (AD30) / B20 (AD31) | Multiplexed 32-bit lines for addresses and data |
| Bus Commands (C/BE#) | B26 (C/BE3#), B33 (C/BE2#) | Command/byte-enable signals (4 bits for 32-bit transfers) |
| Transaction Control | A34 (FRAME#), A36 (TRDY#), A38 (STOP#) / B35 (IRDY#), B37 (DEVSEL#) | Bus phase and handshake signals |
| Power/Ground | A2 (+12V), A5 (+5V) / B1 (−12V), B3 (GND), B5 (+5V), plus distributed +3.3V and GND pins | Supply and reference voltages |
| 64-bit Extension | Positions 63–94, sides A and B | Additional AD[63:32], C/BE[7:4]#, and PAR64 for 64-bit transfers |
Voltage Levels and Keying
The original PCI Local Bus Specification, released in 1992, supported only 5V signaling and power supply for add-in cards and slots.[9] To address increasing power demands and enable lower consumption in denser systems, 3.3V signaling was introduced in Revision 2.0 of the specification in 1993, with further refinements for universal compatibility in Revision 2.1 in 1995.[11] Every slot supplies both power rails—VCC (5V) and VCC3.3 (3.3V)—while the slot's signaling voltage is carried on the VI/O pins; universal add-in cards reference VI/O so that their I/O buffers adapt automatically to either signaling environment.[11] Mechanical keying prevents the insertion of incompatible cards into slots by using notches on the card's edge connector that align with plastic keys in the slot. 3.3V-only cards feature a notch at connector positions 12 and 13 (approximately 56 mm from the card's backplate), while 5V-only cards have a notch at positions 50 and 51 (approximately 104 mm from the backplate); universal cards include both notches to fit either slot type.[11] These keying positions ensure that a 3.3V card cannot be inserted into a 5V-only slot (and vice versa), avoiding potential electrical mismatches. Pin assignments for the power rails are detailed in the connector design specifications.[11] Power delivery to PCI slots occurs primarily through the +5V and +3.3V rails, with add-in cards limited to a maximum of 25 W combined from these rails, as encoded by the card's presence detect pins (PRSNT1# and PRSNT2#) in steps of 7.5 W, 15 W, and 25 W.[28] Auxiliary +12 V and -12 V rails are available for specialized needs, such as analog components or EEPROM programming, typically supporting up to 1 A on +12 V and 0.5 A on -12 V, though these are optional and depend on system implementation.[11] If the keying were defeated, inserting a 5V-only card into a 3.3V-only slot could produce improper signaling levels and unreliable operation, while the greater risk lies in forcing a 3.3V-only card into a 5V slot, where the higher signaling voltage can exceed the card's tolerances and cause immediate failure, particularly in hot-plug scenarios without proper sequencing.[29][30] These mechanisms collectively ensure safe and reliable voltage handling in PCI systems.[11]
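The presence-detect pins lend themselves to a compact decode. The sketch below is illustrative only and assumes the commonly tabulated PRSNT1#/PRSNT2# encoding (open/open = no card, ground/open = 25 W, open/ground = 15 W, ground/ground = 7.5 W); verify the mapping against the specification revision in use.

```c
/* Illustrative decode of the PCI presence-detect pins.  The mapping is the
 * commonly tabulated one and should be checked against the exact spec:
 *   PRSNT1#  PRSNT2#   meaning
 *   open     open      no add-in card present
 *   ground   open      card present, 25 W maximum
 *   open     ground    card present, 15 W maximum
 *   ground   ground    card present, 7.5 W maximum */
#include <stdbool.h>
#include <stdio.h>

/* true = pin tied to ground on the card, false = left open */
static const char *decode_prsnt(bool prsnt1_grounded, bool prsnt2_grounded) {
    if (!prsnt1_grounded && !prsnt2_grounded) return "no card present";
    if ( prsnt1_grounded && !prsnt2_grounded) return "card present, 25 W max";
    if (!prsnt1_grounded &&  prsnt2_grounded) return "card present, 15 W max";
    return "card present, 7.5 W max";
}

int main(void) {
    printf("%s\n", decode_prsnt(true, false));   /* a full-power 25 W card */
    printf("%s\n", decode_prsnt(false, false));  /* empty slot             */
    return 0;
}
```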
Form Factors and Compatibility
PCI add-in cards adhere to defined form factors to ensure compatibility with various chassis sizes while maintaining a standardized edge connector for insertion into slots. The full-length form factor measures 312 mm (12.28 inches) in length, providing ample space for components requiring extensive board area. Half-length cards are limited to 175 mm (6.9 inches), suitable for systems with restricted internal dimensions. Low-profile variants, intended for slimline cases, utilize shorter lengths—MD1 at 119.91 mm (4.72 inches) for basic 32-bit cards and MD2 up to 167.64 mm (6.6 inches) for more complex designs—with a maximum height of 64.41 mm (2.54 inches) including the connector, yet all employ the identical 32-bit or 64-bit edge connector as full-size cards.[31][32] Compatibility across form factors emphasizes backward and forward integration. A 32-bit PCI card fits securely into a 64-bit slot, occupying the initial 32-bit portion of the longer connector without requiring an adapter, though performance remains limited to 32-bit capabilities. Universal cards facilitate voltage compatibility by supporting both 3.3 V and 5 V signaling, with dual keying notches that prevent incorrect insertions.[33] Mini PCI, a compact variant introduced by PCI-SIG in late 1999, addresses space constraints in portable devices like laptops with a reduced board size of approximately 59.6 mm × 50.95 mm. It supports 32-bit operation at 33 MHz and mounts in a dedicated socket on or near the motherboard rather than in a rear expansion slot. The specification defines three card types: Type I and Type II use a 100-pin stacking connector, with Type II integrating I/O jacks such as RJ11/RJ45 directly on the card, while Type III uses a 124-pin card-edge connector suited to thin assemblies. Voltage keying in Mini PCI mirrors standard PCI practices to avoid electrical mismatches. Furthermore, Mini PCI cards can interface with CardBus bridges to enable hot-plug capabilities in supported systems.[9][34][35][36]
Configuration Mechanisms
Auto-Configuration Process
The auto-configuration process in PCI allows the system to dynamically discover, identify, and initialize connected devices during boot without requiring manual jumper settings or switches. This software-driven mechanism is initiated by the host bridge under BIOS or operating system control, which systematically scans the PCI bus hierarchy starting from bus 0. The scan probes each possible bus (0-255), device (0-31), and function (0-7 for multifunction devices) by issuing configuration read transactions to the 256-byte configuration space allocated per device/function. These transactions use Type 00h cycles for devices on the local bus and Type 01h cycles for propagating to downstream buses via bridges, enabling enumeration of the entire topology.[11] PCI defines two configuration access mechanisms to facilitate this probing, with Mechanism #1 serving as the primary method in version 1.0 and later. Mechanism #1 employs I/O-mapped ports—0x0CF8 for setting a 32-bit configuration address (including bus, device, function, and register offset) and 0x0CFC for data transfer—with the device field of that address selecting the target's IDSEL line during the configuration cycle. Version 2.0 deprecated Mechanism #2 for new designs, retaining it only for legacy compatibility using a system-defined I/O address space in the range 0xC000h-0xCFFFh (or equivalent). Mechanism #1 remains the standard for auto-configuration in subsequent revisions.[11] Central to device identification are standardized registers in the first 64 bytes of the configuration space header (offsets 00h-3Fh). The 16-bit Vendor ID at offset 00h uniquely identifies the manufacturer (e.g., 0x8086 for Intel), and a read value of 0xFFFF indicates no device is present, allowing the scan to skip empty slots. The adjacent 16-bit Device ID at 02h specifies the exact product variant. The 8-bit Revision ID resides at offset 08h, and the 24-bit Class Code at offsets 09h-0Bh (programming interface at 09h, subclass at 0Ah, base class at 0Bh) defines the device's functional category, such as 0x010000 for SCSI controllers or 0x020000 for Ethernet adapters, enabling software to recognize and load appropriate drivers. These fields, read early in the scan, confirm device presence and type before proceeding to resource setup.[11] Resource allocation follows detection and relies on the six Base Address Registers (BARs) at offsets 10h-24h in the configuration header, which describe the device's memory or I/O space needs. To determine requirements, software writes 0xFFFFFFFF to a BAR and reads back the value; the device hard-wires the low-order address bits to zero according to the size of its region, so the returned value reveals both size and alignment, while bit 0 distinguishes memory (0) from I/O (1) space and bits [2:1] of a memory BAR indicate 32-bit or 64-bit addressing. The BIOS or OS then allocates non-overlapping base addresses—writing them back to the BARs—for memory regions, I/O ports, and expansion ROM, ensuring devices can map to the host's address space. Interrupt resources are assigned similarly via the Interrupt Pin and Line registers, integrating with broader interrupt handling mechanisms. This allocation completes device enablement by setting the Command register bits for bus mastership, memory/I/O access, and other functions.[11]
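As a concrete illustration of Mechanism #1, the following sketch enumerates bus/device/function numbers and reads the Vendor ID, Device ID, and Class Code through ports 0xCF8/0xCFC. It assumes an x86 Linux environment with port-I/O privileges (iopl) and is purely illustrative: firmware and the operating system normally perform this scan, and the helper names are not from any particular codebase.

```c
/* Minimal sketch of PCI Configuration Mechanism #1 enumeration on x86.
 * Assumptions: Linux, x86, root privileges for iopl(), and no other agent
 * touching ports 0xCF8/0xCFC concurrently.  Illustrative, not production. */
#include <stdint.h>
#include <stdio.h>
#include <sys/io.h>          /* outl/inl/iopl (Linux, x86) */

#define CONFIG_ADDRESS 0xCF8
#define CONFIG_DATA    0xCFC

/* Build the 32-bit address: enable bit, bus, device, function, dword offset. */
static uint32_t cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off) {
    uint32_t addr = (1u << 31) | ((uint32_t)bus << 16) |
                    ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | (off & 0xFC);
    outl(addr, CONFIG_ADDRESS);
    return inl(CONFIG_DATA);
}

int main(void) {
    if (iopl(3) != 0) { perror("iopl"); return 1; }

    for (unsigned bus = 0; bus < 256; bus++) {
        for (unsigned dev = 0; dev < 32; dev++) {
            for (unsigned fn = 0; fn < 8; fn++) {
                uint32_t id = cfg_read32(bus, dev, fn, 0x00);
                if ((id & 0xFFFF) == 0xFFFF)        /* no function present */
                    continue;
                uint32_t class_rev = cfg_read32(bus, dev, fn, 0x08);
                printf("%02x:%02x.%u vendor=%04x device=%04x class=%06x\n",
                       bus, dev, fn,
                       id & 0xFFFF, id >> 16, class_rev >> 8);
                /* Probe functions 1-7 only if the header type's
                 * multifunction bit (offset 0x0E, bit 7) is set. */
                if (fn == 0) {
                    uint32_t hdr = cfg_read32(bus, dev, 0, 0x0C);
                    if (!((hdr >> 16) & 0x80))
                        break;
                }
            }
        }
    }
    return 0;
}
```

On modern systems the same information is exposed more safely by the operating system (for example, sysfs or lspci on Linux), which is the preferred route for diagnostics.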
Interrupt Handling
In traditional PCI systems, interrupt requests from peripheral devices are managed using four dedicated signal lines per expansion slot: INTA#, INTB#, INTC#, and INTD#. These lines are optional for devices but provide a standardized mechanism for signaling events to the host processor.[11] The signals operate as level-sensitive interrupts, asserted low (active low) using open-drain output buffers, which enables wired-OR sharing among multiple devices connected to the same line without electrical conflicts.[11] The interrupt handling process begins when a device asserts its assigned INTx# line to indicate an event requiring CPU attention. The assertion is routed to the system's interrupt controller, such as the Intel 8259 Programmable Interrupt Controller (PIC) or Advanced Programmable Interrupt Controller (APIC), where it is mapped to a specific system IRQ line, a mapping recorded in the device's configuration space during the auto-configuration process. The interrupt controller then notifies the CPU, which suspends its current execution, saves the context, and vectors to the corresponding interrupt service routine (ISR) via the interrupt descriptor table.[37] Since the interrupts are level-sensitive, the device must deassert the INTx# line only after the ISR has serviced the request to avoid continuous triggering; shared lines require all asserting devices to deassert before the interrupt can be cleared.[11] In multi-slot or hierarchical PCI topologies, the interrupt pins of devices behind PCI-to-PCI bridges are conventionally "swizzled": the pin observed on the upstream side is rotated according to the device number (for example, INTA# on one downstream device may appear as INTB# or INTD# upstream), distributing load across the four lines and enabling sharing across segments.[38] This routing ensures scalability in systems with multiple buses while maintaining compatibility. To address limitations of pin-based interrupts, such as the fixed number of lines and sharing overhead, Message Signaled Interrupts (MSI) were introduced as an optional feature in Revision 2.2 of the PCI Local Bus Specification.[39] With MSI, a device signals an interrupt by issuing a dedicated memory write transaction to a locally assigned address and data value, rather than asserting a physical pin; this write is treated as a posted transaction and routed through the PCI fabric to the interrupt controller.[39] MSI supports up to 32 vectors per function (using a 16-bit message data field) and employs edge semantics, where each write is a distinct event without requiring deassertion, enhancing efficiency in high-device-density environments.[39] Configuration occurs via capability structures in the device's PCI configuration space, where the system allocates the target address during initialization. Interrupt signaling in PCI operates largely independently of bus arbitration for data transactions: devices compete for bus mastery using separate REQ# and GNT# signals, assertion of an INTx# line requires no bus ownership at all, and MSI notifications are ordinary posted memory writes that arbitrate like any other transaction.[11] This separation allows low-latency event notification even when the bus is busy with other operations.
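The bridge swizzle described above is commonly expressed as a small modular rotation. The sketch below shows the conventional form used by most firmware and operating systems; actual pin routing is ultimately defined by the board design, so treat the formula as an assumption rather than a guarantee.

```c
/* Conventional INTx "swizzle" for a device behind a PCI-to-PCI bridge.
 * pin is 0-based (0 = INTA#, 3 = INTD#); device is the downstream device
 * number.  The result is the pin as seen on the bridge's primary side.
 * Board routing may differ; this is only the common convention. */
#include <stdio.h>

static unsigned swizzle_intx(unsigned device, unsigned pin) {
    return (device + pin) % 4;
}

int main(void) {
    const char *names = "ABCD";
    for (unsigned dev = 0; dev < 4; dev++)
        printf("device %u, INTA# -> INT%c# at the bridge\n",
               dev, names[swizzle_intx(dev, 0)]);
    return 0;
}
```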
Bus Architecture and Operations
Address Spaces and Memory Mapping
The PCI bus utilizes three primary address spaces to enable host-to-device communication: the configuration space, the I/O space, and the memory space. The configuration space is a per-function register space limited to 256 bytes, accessed through specialized mechanisms distinct from standard I/O or memory transactions, allowing enumeration and setup of devices during system initialization.[40] The I/O space provides a flat addressing model for legacy device control, supporting either a 16-bit address range (up to 64 KB total) or a 32-bit extension (up to 4 GB), depending on the host bridge implementation.[41] In contrast, the memory space facilitates memory-mapped I/O operations, offering a 32-bit address range by default (up to 4 GB) with optional 64-bit extensions for larger systems.[40] Device memory mapping is managed through Base Address Registers (BARs) located in the configuration space header (offsets 0x10 to 0x24 for standard devices), where each BAR specifies the type, size, and location of the device's addressable regions.[40] During enumeration, the operating system probes each BAR by writing all 1s to it and reading back the value; the hard-wired low-order bits that remain 0 indicate the device's requested region size, which must be a power of 2 (e.g., 4 KB, 16 KB, 1 MB, or up to 2 GB per 32-bit BAR).[41] The OS then assigns non-overlapping base addresses from the available I/O or memory space, writing these values back to the BARs to map the device's registers or buffers into the system's address map, ensuring isolation and avoiding conflicts across multiple devices.[40] Within the memory space, BARs distinguish between prefetchable and non-prefetchable regions to optimize performance. A prefetchable BAR (indicated by bit 3 set in the BAR) denotes a memory region without read side effects, allowing host bridges to prefetch and merge accesses speculatively without risking data corruption, which enhances throughput for sequential access patterns like DMA transfers.[40] Non-prefetchable regions (bit 3 clear) are used for areas with potential side effects on reads, such as control registers, and prohibit prefetching to prevent errors, at the cost of higher latency because accesses cannot be combined.[41] For systems exceeding 4 GB of addressable memory, PCI supports 64-bit addressing through extensions in the memory space. A 64-bit BAR is signaled by setting bits [2:1] to 10b in the lower BAR, consuming two consecutive 32-bit BARs: the first holds the lower 32 bits of the base address, while the second provides the upper 32 bits (address bits [63:32]).[40] Transactions targeting such addresses employ the dual address cycle mechanism, in which the low-order 32 bits are driven in the first address phase together with the Dual Address Cycle command, and the high-order 32 bits follow in the second address phase together with the actual bus command, enabling devices to respond to addresses beyond the 32-bit limit while maintaining compatibility with legacy 32-bit systems.[40] This extension is particularly vital for prefetchable regions in high-memory environments, as it allows mapping large device buffers without fragmentation.[41]
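The flag bits described above occupy the low-order positions of each BAR, so decoding them is a matter of a few masks. The sketch below uses hypothetical struct and function names and decodes a raw BAR value only; the write-all-1s sizing step is assumed to happen elsewhere.

```c
/* Decoding the low-order bits of a Base Address Register value, per the
 * layout described above.  bar_lo/bar_hi are raw register values. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct bar_info {
    bool     is_io;         /* bit 0: 1 = I/O space, 0 = memory space     */
    bool     is_64bit;      /* memory BARs: bits [2:1] == 10b             */
    bool     prefetchable;  /* memory BARs: bit 3                         */
    uint64_t base;          /* address bits with the flag bits masked off */
};

static struct bar_info decode_bar(uint32_t bar_lo, uint32_t bar_hi) {
    struct bar_info info = {0};
    info.is_io = bar_lo & 0x1;
    if (info.is_io) {
        info.base = bar_lo & ~0x3u;               /* I/O: bits [31:2]        */
    } else {
        info.is_64bit     = ((bar_lo >> 1) & 0x3) == 0x2;
        info.prefetchable = bar_lo & 0x8;
        info.base         = bar_lo & ~0xFu;       /* memory: bits [31:4]     */
        if (info.is_64bit)
            info.base |= (uint64_t)bar_hi << 32;  /* upper half from next BAR */
    }
    return info;
}

int main(void) {
    /* Hypothetical value: 64-bit, prefetchable memory BAR at 0x1_E0000000. */
    struct bar_info b = decode_bar(0xE000000Cu, 0x00000001u);
    printf("io=%d 64bit=%d prefetch=%d base=0x%llx\n",
           b.is_io, b.is_64bit, b.prefetchable,
           (unsigned long long)b.base);
    return 0;
}
```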
Command Codes and Transaction Types
In the PCI bus protocol, bus commands are encoded on the C/BE[3:0]# lines during the address phase to specify the type of transaction a master device intends to perform.[11] These four-bit encodings allow for 16 possible commands, though some are reserved or specific to extensions. The primary commands include Interrupt Acknowledge (0000), Special Cycle (0001), I/O Read (0010), I/O Write (0011), Memory Read (0110), Memory Write (0111), Configuration Read (1010), and Configuration Write (1011), together with memory-oriented variants such as Memory Read Multiple (1100), Dual Address Cycle (1101), Memory Read Line (1110), and Memory Write and Invalidate (1111).[11] The table lists each command, and a compact encoding summary follows the table.
| Command | Encoding (C/BE[3:0]#) | Description |
|---|---|---|
| Interrupt Acknowledge | 0000 | Master reads interrupt vector from an interrupting device; implicitly addressed to interrupt controller.[11] |
| Special Cycle | 0001 | Broadcast message to all agents on the bus, without a target response; used for system-wide signals like shutdown.[11] |
| I/O Read | 0010 | Master reads from I/O space; supports single or burst transfers, non-posted to ensure completion acknowledgment.[11] |
| I/O Write | 0011 | Master writes to I/O space; non-posted, requiring target acknowledgment before completion.[11] |
| Reserved | 0100, 0101 | Not used in standard PCI.[11] |
| Memory Read | 0110 | Master reads from memory space; supports single or burst transfers, targeting specific address spaces like system or expansion ROM.[11] |
| Memory Write | 0111 | Master writes to memory space; posted, allowing the master to proceed without waiting for target acknowledgment to improve performance.[11] |
| Reserved | 1000, 1001 | Not used in standard PCI.[11] |
| Configuration Read | 1010 | Master reads from a device's configuration space for initialization; uses Type 0 or Type 1 addressing.[11] |
| Configuration Write | 1011 | Master writes to a device's configuration space; non-posted.[11] |
| Memory Read Multiple | 1100 | Optimized memory read supporting cache-line bursts across multiple cache lines.[11] |
| Dual Address Cycle | 1101 | Precedes a 64-bit address transaction for 64-bit addressing support.[11] |
| Memory Read Line | 1110 | Memory read optimized for filling a full cache line in a burst.[11] |
| Memory Write and Invalidate | 1111 | Memory write that transfers complete cache lines, allowing caching agents to invalidate the line rather than write it back.[11] |
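The encodings in the table can be collected into a small set of constants for reference. The identifier names below are illustrative and not drawn from any particular header or codebase.

```c
/* The 4-bit bus command encodings from the table above, as driven on
 * C/BE[3:0]# during the address phase of a transaction. */
enum pci_bus_command {
    PCI_CMD_INTERRUPT_ACK     = 0x0,
    PCI_CMD_SPECIAL_CYCLE     = 0x1,
    PCI_CMD_IO_READ           = 0x2,
    PCI_CMD_IO_WRITE          = 0x3,
    /* 0x4 and 0x5 are reserved */
    PCI_CMD_MEMORY_READ       = 0x6,
    PCI_CMD_MEMORY_WRITE      = 0x7,
    /* 0x8 and 0x9 are reserved */
    PCI_CMD_CONFIG_READ       = 0xA,
    PCI_CMD_CONFIG_WRITE      = 0xB,
    PCI_CMD_MEMORY_READ_MULT  = 0xC,
    PCI_CMD_DUAL_ADDR_CYCLE   = 0xD,
    PCI_CMD_MEMORY_READ_LINE  = 0xE,
    PCI_CMD_MEMORY_WRITE_INV  = 0xF
};
```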