Accelerated Graphics Port
The Accelerated Graphics Port (AGP) is a high-speed, point-to-point interface standard developed by Intel for connecting graphics accelerators directly to a computer's motherboard, enabling faster data transfer between the graphics card and system memory compared to the shared PCI bus.[1] Introduced in 1996 to address the growing demands of 3D graphics and video applications, AGP allowed graphics cards to access main memory more efficiently, reducing the need for expensive local video memory while supporting pipelined operations and sideband addressing for near-optimal bus utilization.[2] Its initial specification (version 1.0) operated at a 66 MHz clock with 1× (266 MB/s) and 2× (533 MB/s) transfer modes using 3.3 V signaling, providing up to four times the bandwidth of PCI.[1] Subsequent revisions expanded AGP's capabilities to keep pace with advancing graphics hardware. Version 2.0, released in 1998, introduced 4× mode (1.07 GB/s bandwidth) and 1.5 V signaling for better efficiency, along with features like Fast Write transactions and enhanced flow control to support deeper pipelining up to 256 requests.[3] The final major update, AGP 3.0 in 2002 (also known as AGP8X), doubled the peak bandwidth to 2.1 GB/s in 8× mode through source-synchronous strobing and 0.8 V signaling, adding isochronous data transfer for low-latency applications and dynamic bus inversion to reduce noise, while maintaining backward compatibility with prior versions via a universal connector.[4] AGP became the dominant graphics interface for consumer PCs from the late 1990s through the early 2000s, powering widespread adoption in gaming and multimedia systems until it was progressively supplanted by PCI Express (PCIe) starting around 2004. 
PCIe offered scalable, higher-bandwidth lanes (initially up to 4 GB/s per direction on a x16 link) and multi-device support, rendering AGP obsolete by 2008 as motherboard manufacturers shifted entirely to the new standard.[5] Despite its decline, AGP's innovations in dedicated graphics interconnects laid foundational groundwork for modern GPU architectures.
History
Development and Introduction
The Accelerated Graphics Port (AGP) was developed by Intel in 1996 as a high-speed point-to-point interface designed specifically for attaching dedicated graphics cards directly to the motherboard, aiming to optimize performance for graphics-intensive applications.[1] This approach addressed the limitations of shared bus architectures like PCI, which constrained graphics data transfer rates during the emerging era of 3D acceleration.[6] The initial AGP specification, Revision 1.0, was released by Intel on July 31, 1996, defining a 32-bit bus operating at 66 MHz that supported up to 533 MB/s of bandwidth in 2x mode, with sideband addressing allowing concurrent address and data transfers.[1] Intel led the development independently of the PCI Special Interest Group but collaborated with leading graphics vendors, including NVIDIA and ATI, to ensure broad industry adoption through a royalty-free licensing model.[7] The primary motivation stemmed from the mid-1990s boom in PC gaming and 3D graphics, where applications demanded faster access to system memory for textures and framebuffers to enable efficient rendering without excessive local video memory costs.[6][1] AGP was publicly introduced in March 1997, with the first hardware implementations appearing later that year.[7] Intel's 440LX chipset, launched in August 1997, provided the initial motherboard support for AGP, integrating the interface into Pentium II systems to accelerate 3D workloads.[8] Concurrently, early AGP-compatible graphics cards emerged, such as NVIDIA's Riva 128, released in August 1997, which leveraged the port's bandwidth for integrated 2D/3D acceleration and became a key driver in consumer adoption.[9]
Adoption and Evolution
The Accelerated Graphics Port (AGP) saw rapid integration into consumer PC hardware following its specification release, with Intel's 440BX chipset providing early widespread support starting in early 1998, enabling 100 MHz front-side bus speeds alongside AGP 1x and 2x modes for enhanced graphics bandwidth over PCI. AMD followed with initial third-party chipset support via partners like VIA, but introduced native AGP 2x compatibility in its own AMD-750 chipset for Athlon processors in August 1999, broadening adoption across x86 platforms.[10] By 1998, AGP had become the dominant interface for graphics accelerators in PCs, remaining the standard through 2004 as motherboard vendors like ASUS, Gigabyte, and MSI incorporated it into over 90% of mid-to-high-end boards, facilitating the shift from 2D to 3D-focused computing.[11] AGP's adoption profoundly influenced graphics card development, with NVIDIA's GeForce 256, launched in October 1999 as the first dedicated GPU, leveraging AGP 4x to deliver hardware transform and lighting (T&L) capabilities that offloaded complex 3D calculations from the CPU, supporting higher resolutions like 1024x768 in demanding titles.[12] Similarly, ATI's Radeon 256 series, released in April 2000, utilized AGP to enable advanced 3D rendering pipelines with dual texture units and HyperZ bandwidth optimization, achieving up to 1 gigatexel/second fill rates for smoother gameplay at elevated settings.[13] These cards exemplified AGP's role in enabling features like hardware T&L and anisotropic filtering, which were essential for rendering detailed environments in early 2000s games without bottlenecking system memory. 
The market evolved alongside AGP's versioning, transitioning from 2x modes (533 MB/s bandwidth) in 1998 chipsets to 4x (1.07 GB/s) by 2000 with Intel's i815 and VIA KT133A, and culminating in 8x (2.13 GB/s) support in 2002 via Intel's 848 and AMD's 760MPX, correlating with Microsoft DirectX 8's programmable shaders and titles like Quake III Arena (1999). In Quake III, AGP 4x configurations with GeForce cards yielded up to 30-50% frame rate gains over PCI equivalents at high-quality settings with multiple lights and polygons, underscoring AGP's impact on real-time 3D gaming during its peak. Graphics vendors like NVIDIA and ATI adapted AGP protocols, incorporating fast writes and sideband addressing to optimize 3D pipelines for texture streaming and vertex processing, which became foundational for consumer 3D acceleration through 2004.[14]
Decline and Obsolescence
The emergence of PCI Express (PCIe) in 2004 marked the beginning of AGP's decline, as the new standard offered superior scalability through its lane-based architecture, allowing for configurable bandwidth allocation and support for multiple high-speed devices simultaneously.[15] Unlike AGP's dedicated single-port design, PCIe enabled multi-GPU configurations such as NVIDIA's SLI and ATI's CrossFire, which required parallel high-bandwidth connections that AGP could not accommodate.[16] NVIDIA accelerated the transition by adopting PCIe in its GeForce 6 series, launched in April 2004, while Intel integrated PCIe support into its 9xx chipset series later that year, providing motherboard manufacturers with a unified interface for graphics and other peripherals.[17][18] AGP's final major advancements occurred around 2003–2004, with the release of AGP 8x cards like NVIDIA's GeForce FX 5950 and ATI's Radeon 9800 XT, after which manufacturers shifted high-end development toward PCIe.[19] By 2006, mainstream personal computers had largely phased out AGP slots in favor of PCIe, with NVIDIA's GeForce 7800 GS representing one of the last viable AGP options before the interface vanished from new motherboards.
Several factors contributed to AGP's obsolescence, including bandwidth saturation at its AGP 8x maximum of 2.1 GB/s, which proved insufficient for the escalating data demands of increasingly complex graphics workloads by the mid-2000s.[19] Additionally, AGP's single-slot architecture inherently lacked support for multi-GPU setups, limiting scalability as gaming and professional applications increasingly relied on parallel processing.[5] Rising power and heat issues further exacerbated the problem, as high-performance GPUs exceeded AGP's standard power delivery limits of approximately 50 W—extended to 110 W only via the optional AGP Pro specification—necessitating auxiliary power connectors that PCIe integrated more efficiently through its 75 W per-slot capability and standardized auxiliary supplies.[20] Today, AGP persists primarily in legacy and niche applications, such as retro computing enthusiasts building period-accurate systems for early 2000s gaming, where compatible cards like the Radeon X800 remain sought after.[21] In server repurposing, AGP slots have been creatively adapted for non-graphics roles; for instance, a 2024 project demonstrated an AGP-to-PCI adapter on a Socket 7 motherboard, enabling the use of 66 MHz PCI network cards and SATA controllers for NAS storage expansion, leveraging AGP's dedicated bridge for improved performance over standard PCI.[22] As of 2025, no active development or new standards for AGP exist, solidifying its status as a legacy technology confined to hobbyist and archival contexts.
Design and Advantages
Improvements over PCI
The Accelerated Graphics Port (AGP) addressed key limitations of the Peripheral Component Interconnect (PCI) bus for graphics-intensive applications by providing significantly higher bandwidth dedicated to the graphics accelerator. While the standard PCI bus offered a theoretical peak bandwidth of 133 MB/s on a 32-bit interface at 33 MHz, AGP 1x achieved 266 MB/s at 66 MHz through optimized signaling, effectively doubling the available throughput without requiring the graphics card to compete with other peripherals for bus resources.[1] Later versions scaled this further, with AGP 8x reaching 2.1 GB/s while maintaining backward compatibility, allowing sustained high-speed transfers for complex 3D workloads that PCI could not support efficiently.[19] A core architectural improvement was AGP's point-to-point design, which created a dedicated channel between the graphics card and the system's memory controller, in contrast to PCI's shared multi-device bus topology. This dedication eliminated arbitration delays and contention from other PCI devices, increasing data transfer throughput for graphics operations by up to an order of magnitude in pipelined scenarios.[1] AGP's support for deep pipelining—allowing up to 256 outstanding memory requests—further minimized latency by enabling the graphics accelerator to issue multiple commands before receiving responses, a capability limited in PCI's simpler transaction model.[1] AGP also introduced optimizations for memory access tailored to graphics needs, including optional sideband addressing via an 8-bit parallel port that separated address and data signals to increase efficiency during texture fetches. 
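The figures above all follow from the same arithmetic: peak bandwidth is clock rate × bus width × transfers per clock. A quick sketch of that calculation, using the nominal 33.33 MHz and 66.66 MHz clocks:

```python
def peak_bandwidth_mb_s(clock_mhz: float, bus_bits: int, transfers_per_clock: int) -> float:
    """Theoretical peak bandwidth in MB/s: clock rate * bus width in bytes * transfers per clock."""
    return clock_mhz * (bus_bits // 8) * transfers_per_clock

pci    = peak_bandwidth_mb_s(33.33, 32, 1)  # shared PCI bus: ~133 MB/s
agp_1x = peak_bandwidth_mb_s(66.66, 32, 1)  # AGP 1x: ~266 MB/s, double PCI
agp_2x = peak_bandwidth_mb_s(66.66, 32, 2)  # AGP 2x: ~533 MB/s (both clock edges)
agp_8x = peak_bandwidth_mb_s(66.66, 32, 8)  # AGP 8x: ~2133 MB/s, i.e. ~2.1 GB/s
```

At the real 66.66 MHz clock, AGP 1x works out to the commonly quoted 266 MB/s, exactly double PCI's 133 MB/s, and 8x to roughly 2.1 GB/s.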
The Graphics Address Remapping Table (GART) enabled the graphics card to treat scattered system memory pages as a contiguous address space, facilitating direct access to main memory for textures and vertex data without the overhead of frequent local frame buffer swaps required on PCI.[1] These features collectively accelerated texture mapping and geometry processing by shifting more data handling to cost-effective system RAM.[23] In practical terms, these enhancements translated to faster frame buffer updates and improved 3D rendering performance. By isolating graphics traffic, AGP ensured consistent performance for emerging high-resolution and accelerated rendering applications, marking a pivotal advancement in consumer graphics hardware integration.
Key Architectural Features
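AGP's GART presents scattered 4 KB pages of system RAM as one contiguous graphics aperture, which amounts to a simple page-table lookup. A minimal model of that translation (the class layout and the addresses are illustrative, not the actual table format):

```python
PAGE_SIZE = 4096  # the AGP GART remaps 4 KB pages

class Gart:
    """Minimal model of GART translation: a contiguous graphics aperture
    backed by scattered physical pages of system RAM."""
    def __init__(self, aperture_base: int, physical_pages: list[int]):
        self.aperture_base = aperture_base
        self.entries = physical_pages  # entry i -> physical base of aperture page i

    def translate(self, aperture_addr: int) -> int:
        """Map an address inside the aperture to its real physical address."""
        offset = aperture_addr - self.aperture_base
        page_index, page_offset = divmod(offset, PAGE_SIZE)
        return self.entries[page_index] + page_offset

# Three scattered pages appear to the graphics card as one contiguous 12 KB region:
gart = Gart(0xE000_0000, [0x1234_5000, 0x00AB_C000, 0x7FFF_F000])
assert gart.translate(0xE000_0000 + 4097) == 0x00AB_C001
```

One table entry covers each 4 KB page, so, for example, a 64 MB aperture needs 16,384 entries; real implementations walk the table in hardware and cache recent entries.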
The Accelerated Graphics Port (AGP) employs a point-to-point bus architecture optimized for graphics accelerators, featuring a 32-bit multiplexed address/data (AD) bus operating at 66 MHz, which supports pipelined transactions to achieve high efficiency in data transfers between the graphics card and system memory.[1] This structure includes an optional 8-bit sideband address (SBA[7:0]) port that enables parallel address transmission, reducing latency compared to fully multiplexed in-band operations.[3] The design's dedicated nature provides superior access for graphics workloads over shared PCI buses, though detailed contrasts are covered elsewhere.[1] Central to AGP's graphics acceleration is the Graphics Address Remapping Table (GART), a memory-based table that maps scattered 4 KB pages of system RAM into a contiguous 32-bit physical address space accessible by the graphics processing unit (GPU) in AGP 1.0, with 64-bit support in later extensions, facilitating efficient texture and vertex data handling without excessive CPU intervention.[1] AGP also incorporates a high-priority queue in AGP 1.0 and 2.0 for low-latency delivery of graphics data such as display refreshes through split transactions, with isochronous data transfer added in AGP 3.0 for real-time applications.[3][4] These features prioritize bandwidth for graphics-specific operations, supporting prefetchable memory regions for sequential access patterns common in rendering.[4] Signaling in AGP operates in two primary modes to minimize overhead: in-band addressing uses the PIPE# signal to frame requests and commands on the AD bus, while sideband addressing leverages the SBA port for concurrent address delivery, which is optional for masters but required for targets in AGP 1.0 and 2.0 and becomes mandatory in AGP 3.0.[1] Source-synchronous strobes (e.g., AD_STB[1:0] and SB_STB) synchronize data capture in higher transfer modes, enhancing reliability over the 66 MHz clock.[3] AGP's electrical
interface uses 3.3 V signaling in its initial version for compatibility with PCI environments, evolving to 1.5 V in subsequent revisions to support faster modes while maintaining backward compatibility via detection pins like TYPEDET#.[1] The 3.0 specification further introduces 0.8 V signaling with dynamic bus inversion for 8x transfers, reducing power and electromagnetic interference in universal slots that tolerate both 1.5 V and legacy 3.3 V through dedicated power pins (VDDQ).[4] This progression ensures robust, low-voltage operation tailored to graphics-intensive point-to-point connections.[3]
Versions and Extensions
Standard AGP Versions
The Accelerated Graphics Port (AGP) evolved through three primary official versions, each building on the previous to increase bandwidth and efficiency for graphics data transfers between the CPU and graphics accelerator. AGP 1.0, released in 1996, established the foundational interface with support for 1x and 2x transfer modes operating at a 66 MHz clock rate and 3.3V signaling.[24][1] In 1x mode, it provided a peak bandwidth of 266 MB/s, while 2x mode doubled this to 533 MB/s by transferring data on both rising and falling edges of the clock; these rates were calculated as effective throughput approximating (66 MHz × 32 bits × mode multiplier) / 8 bits per byte, enabling basic pipelining for up to 256 outstanding memory requests to reduce latency in graphics operations.[1] AGP 2.0, introduced in May 1998, extended the specification by adding a 4x mode, achieving up to 1.06 GB/s bandwidth while maintaining backward compatibility with prior modes at either 3.3V or 1.5V signaling (with 4x requiring 1.5V for signal integrity).[3][25] This version introduced the Fast Write protocol, allowing direct CPU-to-graphics card memory transfers without intermediate buffering in system memory, which improved efficiency for texture and vertex data uploads in 2x and 4x modes using PCI-like memory write commands with block flow control.[3] Chipsets such as the Intel 815 family supported AGP 2.0, providing 1x, 2x, and 4x modes at 1.5V/3.3V for enhanced graphics performance in early-2000s systems.[26] AGP 3.0, finalized in August 2002, further doubled the maximum speed with an 8x mode delivering 2.1 GB/s bandwidth at a 66 MHz clock, using a 1.5V power supply with 0.8V peak-to-peak signaling to minimize noise and power consumption.[4][19] Bandwidth for 8x was derived similarly as (66 MHz × 32 bits × 8) / 8 ≈ 2.1 GB/s, supporting up to 533 MT/s transfers, while incorporating improved error handling through features like Dynamic Bus Inversion (DBI) to reduce switching noise and
isochronous error detection in the NISTAT register for better reliability in high-throughput scenarios.[4] This version was backed by chipsets like the Intel 875P, which implemented 1x/4x/8x AGP support for demanding graphics applications before the transition to PCI Express.[27]

| Version | Release Year | Max Mode | Peak Bandwidth | Key Signaling Voltage | Notable Features |
|---|---|---|---|---|---|
| 1.0 | 1996 | 2x | 533 MB/s | 3.3V | Basic pipelining (up to 256 requests) |
| 2.0 | 1998 | 4x | 1.06 GB/s | 1.5V (for 4x) | Fast Write protocol |
| 3.0 | 2002 | 8x | 2.1 GB/s | 1.5V (0.8V swing) | DBI, improved error codes |
Official Extensions
The Accelerated Graphics Port (AGP) received official extensions from Intel to address limitations in power delivery and memory addressing for high-end and professional graphics applications. One key enhancement was AGP Pro, introduced in 1998 and finalized in specifications released in April 1999.[28] This extension lengthened the standard AGP connector to support higher power requirements, enabling graphics cards with greater thermal design power (TDP) suitable for demanding workloads.[25] AGP Pro incorporated auxiliary power connectors, including an optional 8-pin and 4-pin configuration, to deliver up to 110 W total power—combining 25 W from the base AGP slot with additional 12 V and 3.3 V rails from the extension.[29] It also defined mechanical standards, such as I/O bracket designs for two- or three-slot cards and a minimum 200 linear feet per minute (LFM) airflow for cooling, ensuring reliability in workstation environments.[28] Backward compatibility was maintained, allowing standard AGP cards to function in AGP Pro slots, though AGP Pro cards required the extended connector and were incompatible with legacy slots.[29] These features targeted professional use cases like CAD and 3D modeling, where high-TDP GPUs needed stable power without relying solely on the motherboard slot.[24] Another official extension involved 64-bit addressing support, integrated into the AGP 3.0 specification released in August 2002, with development beginning around 2001.[4] This enhancement extended the effective address bus to 64 bits via the Graphics Address Remapping Table (GART), allowing access to memory beyond the 4 GB limit of 32-bit systems through features like 64-bit GART entries and the OVER4G bit in AGP status and command registers.[4] The 64-bit mode doubled the addressable space for framebuffers without altering transfer speeds, relying on Dual Address Cycle (DAC) mechanisms from the PCI specification for compatibility.[4] Implemented in select workstation and server
chipsets, it facilitated extended memory mapping for large datasets in graphics-intensive server applications.[4] For instance, AGP 3.0's 8x mode could leverage this addressing for improved performance in high-resolution rendering, though the core protocol remained unchanged from prior versions.[4]
Unofficial Variations
Several manufacturers developed non-standard adaptations of the AGP interface to extend compatibility in systems lacking dedicated AGP slots, often integrating the protocol internally or via bridges to PCI or PCIe buses. These variations were not ratified by the official AGP specification and typically sacrificed full feature support for broader hardware integration.[30] One common unofficial variation was the internal AGP interface, exemplified by Silicon Integrated Systems (SiS) chipsets such as the SiS 630 northbridge. This implementation used the AGP protocol internally to connect an integrated graphics core directly to the memory controller, eliminating the need for an external AGP slot and reducing motherboard complexity. Known as Ultra-AGP, it provided up to 2 GB/s memory bandwidth via an internal AGP 4x implementation while supporting features like pipelined transactions for onboard 3D acceleration, though it inherently limited upgradability since discrete AGP cards could not be added. A later iteration, Ultra-AGP II, increased maximum bandwidth to 3.2 GB/s in compatible SiS controllers.[30][31] PCI-based AGP ports emerged through bridges like VIA Technologies' VT8601 Apollo ProMedia chipset, which incorporated an internal AGP-to-PCI bridge to enable AGP graphics operation over a PCI bus. This allowed AGP cards to function in systems without native AGP support by converting AGP signals to PCI transactions, supporting AGP 1.0 compliance with pipelined split-transaction bursts up to 533 MB/s. However, the bridge provided only partial feature implementation, such as basic GART support via a 16-entry TLB, but lacked higher-speed modes like AGP 4x or 8x.[32] PCIe-based AGP ports relied on emulation bridges to insert AGP cards into PCIe slots, with products like Albatron's ATOP adapter released in 2005 using NVIDIA's BR02 chip to translate AGP 8x signals to PCIe x16. 
Motherboard vendors such as DFI and Asus incorporated similar bridges in 2005-2006 boards (e.g., DFI's nForce4-based LanParty series with ULi southbridge extensions), allowing legacy AGP cards in transitioning PCIe systems. These adapters emulated AGP protocol over PCIe lanes but achieved only incomplete AGP 3.0 compliance, often capping at AGP 4x speeds.[33][34] These unofficial variations shared key limitations, including reduced effective bandwidth due to bridge overhead—e.g., VT8601's 533 MB/s ceiling versus AGP 8x's 2.1 GB/s—and compatibility issues with advanced features like Fast Writes or sideband addressing, leading to performance degradation of up to 20-30% in graphics-intensive tasks. Additionally, not all AGP cards functioned reliably, as emulation often failed to handle proprietary signaling, resulting in instability or non-detection in some configurations.[32][30]
Technical Protocol
Command and Request Mechanisms
The Accelerated Graphics Port (AGP) employs distinct mechanisms for initiating data transfers between the graphics accelerator and system memory, primarily through command codes and request packets transmitted via either in-band or sideband addressing. These mechanisms support pipelined operations, allowing multiple outstanding requests to enhance throughput without blocking the bus. Commands and requests are framed using status signals ST[2:0], where the value "111" denotes the request phase on the AD bus.[1][3] AGP command codes are 4-bit encodings (CCCC) that specify the type of operation, transmitted on the C/BE[3:0]# signals for in-band requests or as part of the sideband address cycle. Common codes include 0000 for standard reads (transferring sequential Q-words starting at the specified address), 0100 for writes, 1100 for fence operations (ensuring all prior requests complete before subsequent ones), and 1010 for flush commands (to make pending writes visible to the system). In AGP 3.0, additional isochronous codes were introduced, such as 0011 for isochronous reads and 0111 for fenced isochronous writes, to support low-latency multimedia transfers. These codes are sent during the ST[2:0]="111" state to initiate transactions targeting system memory. No-ops are handled via sideband patterns such as all 1s on SBA[7:0].[1][3][4] In-band AGP requests utilize the PIPE# signal to multiplex addressing and control information directly over the main 32-bit AD bus and 4-bit C/BE# lines, enabling compatibility with PCI infrastructure while adding AGP-specific framing. When PIPE# is asserted low by the master (typically the graphics device), it enqueues one request per rising clock edge, with the address (29 bits, aligned to Q-words) and 3-bit length field (LLL, specifying 1 to 8 Q-words or 8 to 64 bytes) placed on AD[31:3] and AD[2:0], respectively, alongside the command code on C/BE#. 
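The field layout just described can be made concrete by packing a request into the 32-bit AD word and the 4-bit C/BE# code. This is an illustrative sketch, not a register-accurate model: the command constants follow the codes listed above, and it assumes the common encoding in which LLL stores the Q-word count minus one.

```python
# Command codes (CCCC) listed in the text above
CMD_READ  = 0b0000   # standard read
CMD_WRITE = 0b0100   # write
CMD_FLUSH = 0b1010   # flush pending writes
CMD_FENCE = 0b1100   # fence: complete prior requests before later ones

def encode_inband_request(address: int, qwords: int, command: int) -> tuple[int, int]:
    """Pack one in-band AGP request: address on AD[31:3], length field LLL
    on AD[2:0], command code on C/BE[3:0]# (driven while PIPE# is asserted)."""
    assert address % 8 == 0, "requests are Q-word (8-byte) aligned"
    assert 1 <= qwords <= 8, "LLL encodes 1 to 8 Q-words"
    ad_bus = (address & 0xFFFF_FFF8) | (qwords - 1)  # AD[31:3] | LLL
    return ad_bus, command & 0b1111

# A 4-Q-word (32-byte) read at 0x00100040:
ad, cbe = encode_inband_request(0x0010_0040, qwords=4, command=CMD_READ)
assert ad == 0x0010_0043 and cbe == CMD_READ
```

The same address, length, and command fields travel over SBA[7:0] instead when sideband addressing is in use, as described next.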
This method avoids the need for a separate address bus but limits throughput in higher modes due to contention with data transfers; it supports up to 256 outstanding requests, source-throttled by available pipeline slots. In-band addressing was deprecated in AGP 3.0, with PIPE# repurposed for other functions.[1][3][4] Sideband AGP requests, in contrast, employ a dedicated 8-bit parallel bus SBA[7:0] to transmit address and command information independently of the AD bus, reducing contention and enabling higher bandwidth in 2x, 4x, and 8x modes. The master drives SBA[7:0] as outputs to the target (system logic), with requests structured in up to four sequential types per transaction: Type 1 (lower address bits [14:3] and LLL length), Type 2 (command code CCCC and mid-address [23:15]), Type 3 (upper address [35:24]), and optional Type 4 (extended address for >64 GB). These are strobed via SB_STB (and SB_STB# in 4x/8x modes) at source-synchronous rates, starting after a synchronization cycle where SBA[7:0] = FEh. Sideband addressing is optional in AGP 1.0 and 2.0 but mandatory in AGP 3.0, supporting pipelining without AD bus involvement.[1][3][4] The AGP request format bounds the pipelining of multiple outstanding requests via the RQ field in the AGPSTAT register (values from 1 to 256), which indicates the target's maximum queue depth and prevents overflow. Data routing is embedded in the request, directing transfers from the graphics device to system memory (or vice versa) through the Graphics Address Remapping Table (GART) for aperture-mapped access, with priority sub-queues (high/low, in AGP 1.0/2.0) for read/write operations to optimize latency. For example, a read request routes data back to the GPU after completion, while writes push texture or command data directly to shared memory.[1][3][4]
Response Handling
In the AGP protocol, transaction completion, flow, and status are controlled through dedicated signals exchanged between the master and the target. The RBF# signal, asserted by the master (the graphics device), indicates that its read buffer is full, stalling the return of low-priority read data to prevent overflow while allowing high-priority data to proceed uninterrupted (in AGP 1.0/2.0).[3] This flow control mechanism ensures efficient pipelining by requiring the master to have at least 16 bytes of buffering available (in 2x mode for transfers ≤16 bytes) before deasserting RBF# alongside GNT#.[3] Similarly, the STOP# signal enables the target to abort a transaction, such as during a disconnect or target-abort in Fast Write operations, prompting the master to deassert REQ# for at least two clock cycles unless it is the final data phase.[3] DEVSEL# is deasserted during native AGP transactions and is primarily used in PCI compatibility modes to indicate device selection and transaction acceptance, with timing constraints that prevent master-aborts if decoded within the specified slow decode period; a turnaround cycle is required if bus ownership changes following its use.[3] Error handling in AGP emphasizes reliability for graphics workloads, where transient data reduces the need for extensive reporting. 
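This RBF# handshake can be pictured as a buffer-occupancy check on the master's side. A toy model (the 16-byte minimum comes from the specification text above; the buffer capacity is an arbitrary illustrative value):

```python
class ReadBufferFlowControl:
    """Toy model of RBF# flow control: RBF# stays deasserted only while the
    master can still buffer a low-priority read return (16 bytes here)."""
    MIN_FREE = 16  # bytes the master must be able to accept

    def __init__(self, capacity: int):
        self.capacity = capacity  # illustrative buffer size, not from the spec
        self.used = 0

    @property
    def rbf_asserted(self) -> bool:
        """RBF# asserted -> low-priority read data must be held back."""
        return self.capacity - self.used < self.MIN_FREE

    def deliver_low_priority(self, nbytes: int) -> bool:
        """Target attempts to return low-priority read data."""
        if self.rbf_asserted:
            return False  # stalled; high-priority data would still proceed
        self.used += nbytes
        return True

    def drain(self, nbytes: int) -> None:
        """Graphics core consumes buffered data, freeing space."""
        self.used = max(0, self.used - nbytes)

buf = ReadBufferFlowControl(capacity=32)
assert buf.deliver_low_priority(16)  # accepted: 16 bytes still free afterward
assert buf.deliver_low_priority(16)  # accepted: exactly 16 bytes were free
assert buf.rbf_asserted              # buffer now full, RBF# asserted
```

Once the graphics core drains the buffer back below the threshold, RBF# deasserts and low-priority read returns resume.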
AGP does not independently detect or report parity errors (PERR#) during native transactions; parity handling falls back on the underlying PCI protocol, which can trigger retries, while the separate WBF# signal blocks Fast Write initiations when the write buffer is full.[3] Fence commands provide pipeline synchronization by establishing boundaries in the master's access stream, ensuring that all preceding low-priority reads complete before subsequent writes, without consuming pipeline slots; high-priority requests may bypass these fences to maintain performance (in AGP 1.0/2.0).[3] In AGP 3.0, fences further enforce ordering for isochronous transfers, guaranteeing that writes before a fence are globally visible before those after, often using shared memory buffers for additional synchronization.[4] Completion of transactions occurs through data phases that match responses to original requests using identifiers like ST[2:0] encodings, which distinguish types such as low-priority reads (000) or high-priority reads (001) in AGP 1.0/2.0, supporting split-transaction pipelining with depths up to 256 entries.[3] This matching enables out-of-order processing while ensuring correct reassembly, with TRDY# and IRDY# handshakes at throttle points allowing either master or target to pause transfers by deasserting them two clocks in advance.[3] Burst transfers are completed in blocks of up to 256 bytes, aligned to 8-byte boundaries; in 4x mode, this equates to 16 bytes per clock, with isochronous payloads scalable to 32, 64, 128, or 256 bytes without block-level stalling in AGP 3.0.[4] The AGP state machine governs these responses across defined phases to maintain protocol integrity. 
In the Idle state, the bus is free with no outstanding requests, transmitting NOPs (all 1s at low voltage) to preserve signaling.[4] The Address phase enqueues requests via PIPE# or Sideband Address (SBA) ports, latching GNT# and ST[2:0] for one clock.[3] During the Data phase, actual transfers occur with handshake signals, supporting 128-byte blocks in 8x mode for uninterrupted isochronous data.[4] Turnaround phases insert dead clocks for bus ownership transitions or protocol shifts, such as from AGP to PCI, ensuring clean signal propagation without contention.[3]
Hardware Aspects
Connector Pinout
The Accelerated Graphics Port (AGP) connector is a specialized edge connector measuring approximately 1.5 inches (38 mm) in length, featuring 66 edge fingers per side for a total of 132 electrical contacts, designed to accommodate add-in cards up to 1.57 mm thick in a right-angle orientation on the motherboard.[3] It incorporates mechanical keying to ensure voltage compatibility: 3.3V-only slots have a plastic key between pins A22-A25 and B22-B25, 1.5V-only slots between A42-A45 and B42-B45, and universal slots have keying in the 3.3V position (A22-A25/B22-B25) but no key in the 1.5V position, allowing support for both 3.3V and 1.5V signaling environments through the TYPEDET# signal for automatic detection.[3] This design maintains backward compatibility across AGP versions while preventing insertion of mismatched cards, with the universal variant enabling operation in all modes from 1x to 8x.[4] Key signals on the AGP connector include the multiplexed 32-bit address/data bus AD[31:0], which handles both address and data transfers in a time-division manner; the 4-bit command/byte enable signals C/BE[3:0]#, used to specify transaction types and byte lane enables; the optional 8-bit sideband address bus SBA[7:0], providing an independent channel for queuing additional requests to supplement the main bus; and the PIPE# signal, which selects between standard PCI-like addressing and pipelined AGP modes when asserted low.[1] These signals are distributed across the connector's three rows (A, B, and C) to optimize signal integrity, with strobe signals like AD_STB[1:0] and SB_STB ensuring source-synchronous timing for high-speed transfers in 2x and 4x modes.[3] Power and ground pins are strategically placed for stable delivery and noise reduction, with multiple +3.3V pins (labeled VCC3.3 or Vddq3.3) supplying the primary I/O voltage up to 1A per pin in 3.3V configurations, +1.5V pins (Vddq1.5) for low-voltage modes limited to 1A total, and legacy +5V pins for compatibility 
with early peripherals, though rarely used in core signaling.[1] Ground (GND) pins are densely distributed, typically one per two signal pins, to provide low-impedance return paths and minimize crosstalk, especially critical in 8x AGP 3.0's 0.8V signaling environment.[4] For AGP Pro implementations, the connector extends on both ends with additional pins dedicated to auxiliary power: up to 7.6A on +3.3V and 9.2A on +12V (VCC12), enabling higher total power budgets up to 110W for demanding graphics cards without relying solely on the main slot.[29] The pin assignments are organized into rows A (signals toward the card's leading edge), B (middle row), and C (trailing edge), with some pins reserved or no-connect (N/C) for future use. Below is a summarized table of representative assignments, highlighting key signals, power, and grounds; full specifications vary slightly by version (e.g., AGP 3.0 reuses some pins for DBI signals).[3]

| Row | Pin | Signal | Description |
|---|---|---|---|
| A | 1 | Vddq3.3/Vddq1.5 | Primary I/O power (3.3V or 1.5V) |
| A | 2 | GND | Ground |
| A | 3 | AD31 | Address/Data bus MSB |
| A | 11 | C/BE3# | Command/Byte Enable MSB |
| A | 15 | SBA7 | Sideband Address MSB |
| A | 42 | PIPE# | Pipelined mode select |
| A | 59 | AD0 | Address/Data bus LSB |
| A | 62 | CLK | 66 MHz clock |
| B | 1 | GND | Ground |
| B | 2 | +12V | Auxiliary power (AGP Pro extension) |
| B | 5 | INTA# | Interrupt A |
| B | 11 | RBF# | Read Buffer Full |
| B | 14 | SB_STB | Sideband strobe |
| C | 1 | +3.3V Aux | Auxiliary power (AGP Pro) |
| C | 2 | GND | Ground |
| C | 8 | GNT# | Bus Grant |
| C | 22 | FRAME# | Transaction frame (inverted in 3.0) |
| C | 35 | TRDY# | Target Ready |
| C | 66 | GND | Ground (trailing) |
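The voltage keying described at the top of this section can be expressed as a simple compatibility check. A sketch, with slot and card categories taken from the keying rules above:

```python
def card_fits_slot(card_voltages: set[str], slot: str) -> bool:
    """Keying check: a card physically fits only if the slot's key positions
    admit at least one signaling voltage the card supports."""
    slot_voltages = {
        "3.3V": {"3.3V"},               # key in the A22-A25/B22-B25 position
        "1.5V": {"1.5V"},               # key in the A42-A45/B42-B45 position
        "universal": {"3.3V", "1.5V"},  # accepts cards of either voltage
    }[slot]
    return bool(card_voltages & slot_voltages)

assert card_fits_slot({"1.5V"}, "universal")   # e.g., an AGP 4x/8x card
assert not card_fits_slot({"3.3V"}, "1.5V")    # 3.3V-only card is keyed out
```

Electrically, TYPEDET# then tells the motherboard which VDDQ level to drive for a card that does fit.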