
Graphics card

A graphics card, also known as a video card, is an expansion card inserted into a computer's motherboard that generates a feed of output images to a display device such as a monitor, offloading graphics rendering tasks from the central processing unit (CPU) to accelerate visual processing. It contains a specialized electronic circuit called a graphics processing unit (GPU), which is a single-chip processor designed to rapidly manipulate memory and perform parallel computations for creating 2D or 3D graphics, video, and animations. The core function of a graphics card is to handle mathematically intensive operations such as texture mapping, shading, and polygon transformations, enabling high-frame-rate rendering for applications such as gaming, 3D modeling, and scientific visualization. Key components include the GPU chip itself, which features thousands of smaller processing cores optimized for parallel tasks; video RAM (VRAM), such as high-bandwidth GDDR memory, for storing image data; and supporting elements such as voltage regulator modules (VRMs), cooling fans or heatsinks, and output ports like HDMI or DisplayPort. Graphics cards come in two main types: integrated GPUs, which are built into the motherboard chipset (or CPU in some designs) and share system memory for basic tasks; and discrete GPUs, standalone cards with dedicated VRAM that provide superior performance for demanding workloads. Historically, graphics cards evolved from simple frame buffers in the 1980s, which relied heavily on CPU assistance for wireframe rendering, to sophisticated hardware in the 1990s with the introduction of 3D acceleration chips like the 3dfx Voodoo series, marking the shift toward dedicated pipelines for rasterization and lighting. The term "GPU" was coined by NVIDIA in 1999 with the GeForce 256, the first card to integrate a complete graphics pipeline, including transform and lighting, on a single chip, paving the way for programmable shaders in the early 2000s and unified architectures by the mid-2000s that extended GPUs beyond graphics to general-purpose computing (GPGPU). Today, graphics cards power not only entertainment but also artificial intelligence, scientific computing, and data-center workloads, with recent advancements like NVIDIA's Blackwell architecture in 2025 enhancing AI-driven features such as neural rendering and ray tracing for more realistic visuals.

Types

Discrete Graphics Cards

A discrete graphics card is a standalone expansion card consisting of a separate printed circuit board (PCB) that houses a dedicated graphics processing unit (GPU), its own video memory (VRAM), and specialized power delivery components, enabling high-performance rendering for demanding visual and computational workloads. Unlike integrated solutions, these cards operate independently of the central processing unit (CPU), offloading complex graphics tasks such as rasterization, ray tracing, and shading to achieve superior speed and efficiency. This dedicated architecture allows for greater processing bandwidth and memory isolation, making discrete cards essential for applications requiring real-time visual fidelity. The primary advantages of discrete graphics cards include significantly higher computational power, often exceeding integrated options by orders of magnitude in graphics-intensive scenarios, along with support for advanced customizable cooling systems like multi-fan designs or liquid cooling to manage heat output. Additionally, their modular design facilitates easy upgradability, permitting users to enhance graphics performance without replacing the CPU, motherboard, or other system components, which extends the lifespan of a PC build. These benefits come at the cost of higher power consumption and physical space requirements, but they enable tailored configurations for peak performance. Discrete graphics cards excel in use cases demanding intensive graphics processing, such as high-end gaming rigs for immersive experiences with ray tracing, professional workstations for 8K rendering and visual effects, and AI training setups leveraging parallel compute capabilities for model development. Representative examples include NVIDIA's RTX 50 series, such as the RTX 5090, which delivers over 100 teraflops of compute for next-generation gaming and AI workloads as of 2025, and AMD's RX 9000 series, like the RX 9070 XT, offering 16GB of GDDR6 memory for high-fidelity visuals in professional simulations. These cards provide a stark contrast to integrated graphics processors, which function as a lower-power alternative suited for basic display and light tasks. Installation of a graphics card typically involves inserting the card into a compatible PCIe x16 slot on the motherboard, securing it with screws, and connecting supplemental power cables from the power supply unit if the card's power draw exceeds the slot's 75 W provision. Following hardware setup, users must download and install manufacturer-specific drivers—such as NVIDIA's Game Ready Drivers or AMD's Adrenalin Software—to ensure full feature support and OS compatibility across Windows, Linux, or other platforms. Proper driver installation is crucial for optimizing performance and enabling driver-level features for seamless integration with the system.
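As a quick post-installation check, the sketch below (assuming a Linux system with the standard lspci utility, and treating nvidia-smi as optional) lists the PCIe display controllers the operating system can see and reports whether NVIDIA's driver stack responds; the helper function names are illustrative, not part of any vendor tool.

```python
# Minimal post-installation sanity check for a discrete GPU on Linux.
# Assumes the pciutils package (lspci) is installed; nvidia-smi is only
# present when NVIDIA's driver is installed, so its absence is treated
# as "driver not loaded" rather than an error.
import shutil
import subprocess

def detected_gpus() -> list[str]:
    """Return lspci lines describing VGA/3D controllers (the installed GPUs)."""
    out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines()
            if "VGA compatible controller" in line or "3D controller" in line]

def nvidia_driver_ok() -> bool:
    """True if nvidia-smi exists and reports at least one GPU."""
    if shutil.which("nvidia-smi") is None:
        return False
    result = subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True)
    return result.returncode == 0 and "GPU" in result.stdout

if __name__ == "__main__":
    for gpu in detected_gpus():
        print("Detected:", gpu)
    print("NVIDIA driver responding:", nvidia_driver_ok())
```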

Integrated Graphics Processors

Integrated graphics processors (iGPUs) are graphics processing units embedded directly into the central processing unit (CPU) die or integrated as part of the motherboard chipset, enabling visual output without requiring a separate graphics card. Prominent examples include Intel's UHD Graphics series, found in Core processors, and AMD's Radeon Graphics, integrated into Ryzen APUs such as those based on the Vega or RDNA architectures. These solutions are designed for general-purpose computing, providing essential rendering capabilities for operating systems, video playback, and basic applications. The primary advantages of iGPUs lie in their cost-effectiveness and energy efficiency, as they eliminate the need for additional hardware, reducing overall system expenses and power consumption—particularly beneficial for laptops and budget desktops. Their tight integration with the CPU allows for faster data sharing and simpler thermal management, contributing to compact designs in mobile devices. However, limitations include reliance on shared system RAM for memory allocation, which can lead to performance bottlenecks during intensive tasks, and inherently lower computational power compared to discrete GPUs for complex rendering. The evolution of iGPUs began in the late 1990s with basic 2D acceleration and rudimentary 3D support in chipsets, such as Intel's 810 platform released in 1999, which introduced integrated rendering pipelines for entry-level visuals. By the early 2010s, on-die integration advanced significantly, with AMD's Llano APUs in 2011 and Intel's Sandy Bridge processors marking the shift to unified CPU-GPU architectures for improved efficiency. Modern developments, as of 2025, enable support for hardware video decoding, hardware-accelerated encoding, and light gaming, exemplified by Intel's Arc-based iGPUs in Core Ultra series processors like Lunar Lake, which leverage Xe architecture for enhanced ray tracing and AI upscaling. In terms of performance, contemporary iGPUs deliver playable frame rates in 1080p scenarios, typically achieving 30-60 fps in popular titles at low to medium settings, though they fall short of discrete GPUs for high-end workloads requiring sustained high resolutions or complex effects.

Historical Development

Early Innovations

The development of graphics cards began in the early 1980s with the introduction of the Color Graphics Adapter (CGA) in 1981, which marked the first standard for color graphics on personal computers, supporting a 320x200 graphics mode with 4 colors displayed simultaneously from a 16-color palette. This adapter utilized a frame buffer—a dedicated memory area storing pixel data for the display—to enable basic bitmapped graphics, fundamentally shifting from text-only displays to visual computing. In 1982, the Hercules Graphics Card emerged as a third-party innovation, providing high-resolution monochrome graphics at 720x348 while maintaining compatibility with IBM's Monochrome Display Adapter (MDA), thus addressing the need for sharper text and simple graphics in professional applications without color. These early cards relied on scan converters to transform vector or outline data into raster images stored in the frame buffer, a process essential for rendering on cathode-ray tube (CRT) monitors. The rise of PC gaming and computer-aided design (CAD) software in the 1980s and 1990s drove demand for enhanced graphics capabilities, as mid-1980s game titles and early CAD tools required better resolution and color support for immersive experiences and precise modeling. By the mid-1990s, this momentum led to multimedia accelerators like the S3 ViRGE (Virtual Reality Graphics Engine), released in 1995, which was among the first consumer-oriented chips to integrate 2D acceleration, basic 3D rendering, and video playback support, featuring a 64-bit architecture for smoother motion handling. The same year saw the debut of early application programming interfaces (APIs) like DirectX 1.0 from Microsoft, providing developers with standardized tools for accessing graphics hardware in Windows environments, thereby facilitating the transition from software-rendered to hardware-assisted graphics. Breakthroughs in 3D acceleration defined the late 1990s, with 3dfx's Voodoo Graphics card launching in November 1996 as a dedicated 3D-only accelerator that offloaded polygon rendering and texture mapping from the CPU, dramatically improving frame rates in games like Quake through its Glide API. Building on this, NVIDIA's RIVA 128 in 1997 introduced a unified architecture combining high-performance 2D and 3D processing on a single chip with a 128-bit memory bus, enabling seamless handling of resolutions up to 1024x768 while supporting Direct3D, which broadened accessibility for both gaming and professional visualization. These innovations laid the groundwork for frame buffers to evolve into larger video RAM pools, optimizing scan conversion for real-time 3D scenes and fueling the PC's emergence as a viable platform for graphics-intensive applications.

Modern Evolution

The modern evolution of graphics cards, beginning in the early 2000s, marked a shift toward programmable and versatile architectures that extended beyond fixed-function rendering pipelines. NVIDIA's GeForce 3, released in 2001, introduced the first consumer-level programmable vertex and pixel shaders, enabling developers to customize shading effects for more realistic visuals in games and applications. This innovation laid the groundwork for greater flexibility in graphics processing, allowing for dynamic lighting and texture manipulation that previous fixed pipelines could not achieve. By the mid-2000s, the industry transitioned to unified shader architectures, where a single pool of processors could handle vertex, geometry, and pixel tasks interchangeably, improving efficiency and scalability. NVIDIA pioneered this with the G80 architecture in the GeForce 8800 series launched in 2006, which supported DirectX 10 and unified processing cores for balanced workload distribution. Concurrently, AMD's acquisition of ATI Technologies in October 2006 for $5.4 billion consolidated graphics expertise, paving the way for ATI's evolution into AMD's Radeon lineup and fostering competition in unified designs. AMD followed with its TeraScale architecture in the Radeon HD 2000 series in 2007, adopting a similar unified approach to enhance performance in high-definition gaming. Entering the 2010s, advancements focused on compute capabilities and memory enhancements to support emerging workloads like general-purpose GPU (GPGPU) computing. NVIDIA's introduction of CUDA in 2006 with the G80 enabled parallel programming for non-graphics tasks, such as scientific simulations, while the Khronos Group's OpenCL standard in 2009 provided cross-vendor support, allowing AMD and others to leverage GPUs for compute. Hardware tessellation units, debuted in DirectX 11-compatible GPUs around 2009-2010, dynamically subdivided polygons for detailed surfaces in real time, with NVIDIA's Fermi architecture (GeForce GTX 400 series) and AMD's Evergreen family (Radeon HD 5000 series) leading early implementations. Video RAM capacities expanded significantly, progressing from GDDR5 in the early 2010s to GDDR6 by 2018, offering up to 50% higher bandwidth for gaming and compute applications. The 2020s brought integration of real-time ray tracing and artificial intelligence, transforming graphics cards into hybrid compute engines. NVIDIA's RTX 20-series, launched in September 2018, incorporated dedicated RT cores for real-time ray tracing, simulating accurate light interactions, alongside tensor cores for AI-accelerated upscaling via Deep Learning Super Sampling (DLSS). AMD entered the fray with its RDNA 2 architecture in the Radeon RX 6000 series in 2020, adding ray accelerators for hardware-accelerated ray tracing to compete in photorealistic rendering. DLSS evolved rapidly, reaching version 4 by 2025 with multi-frame generation and enhanced super resolution powered by fifth-generation tensor cores, enabling up to 8x performance uplifts in ray-traced games on RTX 50-series GPUs. Key trends included adoption of PCIe 4.0 interfaces starting with AMD's Radeon RX 5700 series in 2019 for doubled bandwidth over PCIe 3.0, followed by PCIe 5.0 support in consumer GPUs starting with NVIDIA's GeForce RTX 50 series in 2025, building on platforms like Intel's Alder Lake that introduced PCIe 5.0 slots in 2021, though full utilization awaited higher-bandwidth needs. Amid the cryptocurrency mining boom from 2017 to 2022, which strained GPU supplies due to Ethereum's proof-of-work demands, manufacturers emphasized energy-efficient designs, reducing power per transistor via 7nm and smaller processes to balance performance and sustainability. By 2025, NVIDIA held approximately 90% of the discrete GPU market, driven by its Ada Lovelace and Blackwell architectures tailored for AI and gaming workloads.

Physical Design

Form Factors and Dimensions

Graphics cards are designed in various form factors to accommodate different PC sizes and configurations, primarily defined by their slot occupancy, length, height, and thickness. Single-slot designs occupy one expansion slot on the motherboard and are typically compact, featuring a single fan or passive heatsink, making them suitable for slim or office-oriented builds. Dual-slot cards, the most common for gaming and workstation applications, span two slots and support larger heatsinks with two or three fans for improved thermal performance. High-end models often extend to three or four slots to house massive coolers, enabling better heat dissipation in demanding workloads. These form factors ensure compatibility with standard motherboards, which provide multiple PCIe slots for installation. Low-profile variants, limited to about 69mm in height, fit small form factor (SFF) PCs and often use half-height brackets for constrained cases. For multi-GPU setups like legacy SLI configurations, specialized brackets align cards physically and maintain spacing, preventing interference while supporting parallel operation in compatible systems. Overall lengths vary significantly; mid-range cards measure approximately 250-320mm, while 2025 flagships like the RTX 5090 Founders Edition reach 304mm, with partner models exceeding 350mm to incorporate expansive cooling arrays. A key structural challenge in larger cards is GPU sag, where the weight of heavy coolers—often exceeding 1kg in high-end designs—causes the card to bend under gravity, potentially stressing the PCIe slot over time. This issue became prevalent with the rise of heavier multi-slot cards in the 2010s, as thicker heatsinks and denser components increased mass. Solutions include adjustable support brackets that prop the card from below, distributing weight evenly and preserving PCIe connector integrity without impeding airflow. These brackets, often made of aluminum or steel, attach to the case frame and have been widely adopted for cards over 300mm long. Typical dimensions for a mid-range graphics card, such as the RTX 5070, are around 242mm in length, 112mm in height, and 40mm in thickness (dual-slot), influencing case selection by requiring at least 250mm of clearance in the GPU mounting area. Larger dimensions in high-end models can restrict airflow within the chassis, as extended coolers may block adjacent fans or radiators, necessitating cases with optimized ventilation paths. For instance, cards over 300mm often demand mid-tower or full-tower cases to maintain adequate airflow. Recent trends emphasize adaptability across device types. In laptops, thinner designs use Mobile PCI Express Module (MXM) standards, with modules measuring 82mm x 70mm or 82mm x 105mm, enabling upgradable graphics in compact chassis while integrating cooling for sustained performance. For servers, modular form factors like NVIDIA's MGX platform allow customizable GPU integration into rackmount systems, supporting up to eight cards in scalable configurations without fixed desktop constraints. These evolutions prioritize fitment and serviceability while addressing heat dissipation through integrated cooling structures.
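To illustrate how these dimensions drive case selection, the following sketch compares a card's published length, height, and slot count against a case's stated GPU clearance; the data classes and the 10 mm margin are assumptions for the example, not figures from any manufacturer.

```python
# Illustrative fitment check for a graphics card against a case, using the
# kinds of dimensions discussed above. Substitute values from spec sheets.
from dataclasses import dataclass

@dataclass
class CardDimensions:
    length_mm: float
    height_mm: float
    slots: int          # expansion slots occupied (thickness)

@dataclass
class CaseClearance:
    gpu_length_mm: float      # maximum supported card length
    gpu_height_mm: float
    expansion_slots: int

def fits(card: CardDimensions, case: CaseClearance, margin_mm: float = 10.0) -> bool:
    """True if the card fits with a small margin for cables and airflow."""
    return (card.length_mm + margin_mm <= case.gpu_length_mm
            and card.height_mm <= case.gpu_height_mm
            and card.slots <= case.expansion_slots)

# Example: a 304 mm triple-slot card in a mid-tower rated for 360 mm cards.
print(fits(CardDimensions(304, 137, 3), CaseClearance(360, 160, 7)))  # True
```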

Cooling Systems

Graphics cards generate significant heat due to high power draw from the GPU and other components, necessitating effective cooling to maintain performance and longevity. Cooling systems for graphics cards primarily fall into three categories: passive, air-based, and liquid-based. Passive cooling relies on natural convection and radiation without moving parts, typically used in low-power integrated or entry-level cards where thermal design power (TDP) remains below 75W, allowing operation without fans for silent performance. Air cooling, the most common for graphics cards, employs heatsinks with fins, heat pipes, and fans to dissipate heat; these systems dominate consumer GPUs due to their balance of cost and efficacy. Liquid cooling, often implemented via all-in-one (AIO) loops or custom setups, circulates coolant through a block on the GPU die and a radiator with fans, excelling in high-TDP scenarios exceeding 300W by providing superior heat transfer. Key components in these systems include heat pipes, which use phase-change principles to transport heat from the GPU die to fins via evaporating and condensing fluid; vapor chambers, flat heat pipes that spread heat evenly across a larger area for uniform cooling; thermal pads that conduct heat from memory chips and power circuitry to the heatsink; and copper baseplates in modern 2025 models for direct contact and high thermal conductivity. For instance, NVIDIA's Blackwell architecture GPUs, such as the RTX 5090, feature advanced vapor chambers and multiple heat pipes designed for high thermal loads, improving cooling efficiency over predecessors. Thermal challenges arise from junction temperatures reaching up to 90°C on the GPU die and 110°C at hotspot sensors under load, where exceeding these limits triggers throttling to reduce clock speeds and prevent damage, particularly in high-TDP cards. Blower-style air coolers, which exhaust hot air directly out the case via a single radial fan, suit multi-GPU setups by avoiding heat recirculation but generate more noise; in contrast, open-air designs with multiple axial fans offer quieter operation and 10-15°C better cooling in well-ventilated cases, though they may raise ambient temperatures. Innovations address these issues through undervolting, which lowers voltage to cut power consumption and heat output by up to 20% without performance loss, sustaining boost clocks; integrated RGB lighting on fans for aesthetic appeal without compromising airflow; and improved bearings like fluid dynamic bearings in 2025 fans for durability. Efficient 2025 GPUs, such as NVIDIA's RTX 5090, maintain core temperatures around 70°C under sustained load with these systems, minimizing throttling.
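The fan-curve behavior described above can be sketched as a simple interpolation between temperature/duty points, as below; the curve values are purely illustrative and not taken from any shipping card's firmware.

```python
# A toy fan-curve model of the kind a card's firmware or tuning software
# applies: fan duty rises with GPU temperature between configurable points.
def fan_duty(temp_c: float, curve=((40, 30), (60, 45), (75, 70), (85, 100))) -> float:
    """Linearly interpolate fan duty (%) from a (temperature, duty) curve."""
    if temp_c <= curve[0][0]:
        return curve[0][1]
    for (t0, d0), (t1, d1) in zip(curve, curve[1:]):
        if temp_c <= t1:
            return d0 + (d1 - d0) * (temp_c - t0) / (t1 - t0)
    return curve[-1][1]          # above the last point, run fans at maximum

for t in (35, 55, 70, 83, 95):
    print(f"{t} °C -> {fan_duty(t):.0f}% fan")
```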

Power Requirements

Graphics cards vary significantly in their power consumption, measured as thermal design power (TDP), which represents the maximum heat output and thus the electrical power draw under typical loads. Entry-level discrete graphics cards often have a TDP as low as 75 W, sufficient for basic tasks and light gaming when powered solely through the PCIe slot. In contrast, high-end models in 2025, such as the RTX 5090, reach TDPs of 575 W to support demanding workloads like ray-traced gaming and AI acceleration. To deliver this power beyond the standard 75 W provided by the PCIe slot, graphics cards use auxiliary connectors. The 6-pin PCIe connector supplies up to 75 W, commonly found on mid-range cards from earlier generations. The 8-pin variant doubles this to 150 W, enabling higher performance in modern setups. Introduced in 2022 as part of the PCIe 5.0 power standard, the 12VHPWR (12 Volt High Power) 16-pin connector supports up to 600 W through a single cable, essential for flagship cards like the RTX 5090, which may use one such connector or equivalents like four 8-pin cables via adapters. RTX 50 series cards utilize the revised 12V-2x6 connector, an improved version of 12VHPWR with enhanced sense pins for better safety and detection, reducing melting risks. Integrating a high-TDP graphics card requires a robust power supply unit (PSU) to ensure stability. NVIDIA recommends at least a 1000 W PSU for systems with the RTX 5090, with higher wattage advised for configurations with high-end CPUs to account for total system draw. This power consumption generates substantial heat, which cooling systems must dissipate effectively. Modern graphics cards exhibit power trends influenced by dynamic boosting, where consumption spikes transiently during peak loads to achieve higher clock speeds. NVIDIA's GPU Boost technology monitors power and thermal limits, potentially throttling clocks if exceeded, leading to brief surges that can approach or surpass the TDP. Users can tune these limits via software tools like NVIDIA-SMI, which allows setting custom power limits to balance performance and efficiency, or third-party applications such as MSI Afterburner for granular control. Safety considerations are paramount with high-power connectors like 12VHPWR, which include built-in protection mechanisms to prevent damage from faults. However, post-2022 incidents revealed risks of connector melting due to improper seating or bending, often from poor connector engagement causing partial contact and localized overheating. Manufacturers now emphasize secure installation and native cabling over adapters to mitigate these issues, with revised 12V-2x6 variants improving sense pins for better detection.
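The arithmetic behind connector budgets and PSU sizing can be sketched as follows; the connector limits mirror the figures quoted above, while the 20% headroom rule and the example system numbers are assumptions for illustration.

```python
# Rough power-budget check combining the PCIe slot's 75 W with auxiliary
# connectors. The card and PSU numbers below are examples, not recommendations.
CONNECTOR_LIMITS_W = {"slot": 75, "6pin": 75, "8pin": 150, "12vhpwr": 600}

def max_deliverable_power(connectors: list[str]) -> int:
    """Sum the rated limits of the slot plus each auxiliary connector."""
    return CONNECTOR_LIMITS_W["slot"] + sum(CONNECTOR_LIMITS_W[c] for c in connectors)

def psu_headroom_ok(card_tdp_w: int, rest_of_system_w: int, psu_w: int,
                    headroom: float = 0.2) -> bool:
    """Require ~20% headroom over the combined steady-state draw for transient spikes."""
    return psu_w >= (card_tdp_w + rest_of_system_w) * (1 + headroom)

print(max_deliverable_power(["12vhpwr"]))   # 675 W available to a flagship card
print(psu_headroom_ok(575, 250, 1000))      # True: 1000 W >= (575 + 250) * 1.2 = 990 W
```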

Core Components

Graphics Processing Unit

The graphics processing unit (GPU) serves as the computational heart of a graphics card, specialized for the parallel processing tasks inherent to rendering complex visuals. Modern GPUs employ highly parallel architectures designed to handle massive workloads simultaneously, featuring thousands of smaller processing cores that operate in unison. In NVIDIA architectures, such as the Blackwell series introduced in 2025, the fundamental building block is the streaming multiprocessor (SM), which integrates multiple CUDA cores for executing floating-point operations, along with dedicated units for specialized tasks. Similarly, AMD's RDNA 4 architecture, powering the Radeon RX 9000 series in 2025, organizes processing around compute units (CUs), each containing 64 stream processors optimized for graphics workloads, with configurations scaling up to 64 CUs in high-end models like the RX 9070 XT. These architectures enable GPUs to process vertices, fragments, and pixels in parallel, far surpassing the capabilities of general-purpose CPUs for graphics-intensive applications. A key evolution in GPU design since 2018 has been the integration of dedicated ray tracing cores, first introduced by NVIDIA in the Turing architecture to accelerate ray tracing simulations for realistic lighting, shadows, and reflections. These RT cores handle the computationally intensive acceleration-structure traversals and ray-triangle intersections, offloading work from the main shader cores and enabling hybrid rendering pipelines that combine traditional rasterization with ray-traced effects. In 2025 flagships like NVIDIA's GeForce RTX 5090, CUDA core counts exceed 21,000, while AMD equivalents feature over 4,000 stream processors across their CUs, with boost clock speeds typically ranging from 2.0 to 3.0 GHz to balance performance and power efficiency. This scale allows high-end GPUs in 2025 to deliver over 100 TFLOPS of FP32 compute performance, while mid-range models achieve around 30 TFLOPS, establishing benchmarks for smooth rendering in gaming and professional visualization. The rendering pipeline within a GPU encompasses stages like rasterization, which converts primitives into fragments; texturing, which applies surface details; and pixel shading, which computes final colors and effects for each fragment. Prior to 2001, these stages relied on fixed-function hardware, limiting flexibility to predefined operations set by the manufacturer. The shift to programmable pipelines began post-2001 with NVIDIA's GeForce 3 and ATI's Radeon 8500, introducing vertex and pixel shaders that allowed developers to write custom code for these stages, transforming GPUs into versatile programmable processors. By 2025, these pipelines are fully programmable, supporting advanced techniques like variable-rate shading to optimize performance by varying computation per pixel based on visibility. Contemporary GPUs are fabricated using advanced processes, with NVIDIA's Blackwell GPUs on TSMC's custom 4N node and AMD's RDNA 4 on TSMC's 4nm-class node, enabling denser integration for higher efficiency. Die sizes for 2025 flagships typically range from 350 to 750 mm², accommodating the expanded core arrays and specialized hardware while managing power density challenges. For instance, AMD's Navi 48 die measures approximately 390 mm², supporting efficient scaling across market segments. This integration with high-bandwidth video memory ensures seamless data flow to the processing cores, minimizing bottlenecks in memory-intensive rendering tasks.
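A rough way to relate core counts and clocks to the TFLOPS figures cited above is the usual fused multiply-add estimate (two floating-point operations per core per clock), sketched below with illustrative numbers in line with a 2025 flagship.

```python
# Back-of-the-envelope FP32 throughput estimate: each shader core performs
# one fused multiply-add (2 floating-point operations) per clock cycle.
def fp32_tflops(cores: int, boost_clock_ghz: float, flops_per_core_per_clock: int = 2) -> float:
    return cores * boost_clock_ghz * flops_per_core_per_clock / 1000.0

# ~21,760 cores at ~2.4 GHz -> roughly 104 TFLOPS, consistent with a 2025 flagship.
print(f"{fp32_tflops(21760, 2.41):.1f} TFLOPS")
```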

Video Memory

Video memory, commonly referred to as VRAM, is the dedicated random-access memory integrated into graphics cards to store and quickly access graphical data during rendering processes. It serves as a high-speed buffer separate from the system's main RAM, enabling the graphics processing unit (GPU) to handle large datasets without relying on slower system memory transfers. This separation is crucial for maintaining performance in graphics-intensive tasks, where data locality reduces latency and improves throughput. Modern graphics cards primarily use two main types of video memory: GDDR (Graphics Double Data Rate) variants and HBM (High Bandwidth Memory). GDDR6X, introduced in 2020 by Micron in collaboration with NVIDIA for the GeForce RTX 30 series, employs PAM4 signaling to achieve higher data rates than standard GDDR6, reaching up to 21 Gbps per pin. HBM3, standardized by JEDEC in 2022 and first deployed in high-end GPUs like NVIDIA's H100, uses stacked memory dies connected via through-silicon vias (TSVs) for ultra-high bandwidth in compute-focused applications. Capacities have scaled significantly, starting from 8 GB in mid-range consumer cards to over 48 GB in professional models by 2025, such as the Radeon Pro W7900 with 48 GB GDDR6. High-end configurations, like NVIDIA's RTX 4090 with 24 GB GDDR6X, support demanding workloads including 4K gaming and AI training. Bandwidth is a key performance metric for video memory, determined by the memory type, clock speed, and bus width. High-end cards often feature a 384-bit memory bus, enabling bandwidth exceeding 700 GB/s; for instance, the RTX 4090 achieves 1,008 GB/s with GDDR6X at 21 Gbps. Professional cards frequently incorporate Error-Correcting Code (ECC) support in their GDDR memory to detect and correct data corruption, essential for reliability in scientific simulations and data centers, as seen in AMD's Radeon Pro series. VRAM plays a pivotal role in graphics rendering by storing textures, frame buffers, and Z-buffers, which hold depth information for occlusion culling. Textures, which define surface details, can consume substantial VRAM due to their high resolution and mipmapping chains. Frame buffers capture rendered pixels for each frame, while Z-buffers manage scene depth to prevent overdraw. Exhaustion of VRAM forces the GPU to swap data with system RAM, leading to performance degradation such as stuttering in games, where frame times spike due to increased latency. The memory controller, integrated into the GPU die, manages data flow between the VRAM modules and processing cores, handling addressing, error correction, and refresh cycles to optimize access patterns. Users can overclock VRAM using software tools like MSI Afterburner, which adjusts memory clocks beyond factory settings for potential gains, though this risks instability without adequate cooling. Historically, graphics memory evolved from standard DRAM to specialized GDDR types for higher speeds and efficiency, addressing the growing bandwidth demands of GPUs. Recent trends emphasize stacked architectures like HBM for AI and high-performance computing, where massive parallelism requires terabytes-per-second bandwidth to avoid bottlenecks in training large models. By 2025, HBM3 and emerging GDDR7 continue this shift, prioritizing density and power efficiency for data-center GPUs.
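The bandwidth figures above follow directly from bus width and per-pin data rate, as the short sketch below shows; the 384-bit/21 Gbps case reproduces the ~1 TB/s number cited for GDDR6X, while the second call is an illustrative mid-range configuration.

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin data rate).
def memory_bandwidth_gbps(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Return peak bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

print(memory_bandwidth_gbps(384, 21))   # 1008.0 GB/s, matching GDDR6X on a 384-bit bus
print(memory_bandwidth_gbps(256, 20))   # 640.0 GB/s for a narrower configuration
```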

Firmware

The firmware of a graphics card, known as the video BIOS (VBIOS), consists of low-level software embedded in the card's non-volatile memory that initializes the graphics processing unit (GPU) and associated hardware during system startup. This firmware executes before the operating system loads, ensuring the GPU is configured for basic operation and providing essential structures for subsequent handoff. For NVIDIA GPUs, the VBIOS includes the BIOS Information Table (BIT), a structured set of pointers to initialization scripts, performance parameters, and hardware-specific configurations that guide the boot process. Similarly, AMD GPUs rely on comparable firmware structures to achieve initial hardware readiness. VBIOS is stored in an EEPROM (Electrically Erasable Programmable Read-Only Memory) chip directly on the graphics card, allowing for reprogramming while maintaining data persistence without power. During boot, it performs the power-on self-test (POST) to verify GPU functionality, programs initial clock frequencies via phase-locked loop (PLL) tables, and establishes fan control curves based on temperature thresholds to prevent overheating. For power management, VBIOS defines performance states (P-states), such as NVIDIA's P0 for maximum performance or lower states for efficiency, including associated clock ranges and voltage levels; AMD equivalents use PowerPlay tables to set engine and memory clocks at startup. It also supports reading Extended Display Identification Data (EDID) from connected monitors via the Display Data Channel (DDC) to identify display capabilities like resolutions and refresh rates, enabling proper output configuration. Updating VBIOS involves flashing a new image using vendor tools, such as NVIDIA's nvflash utility or AMD's ATIFlash, often integrated with OEM software like the ASUS VBIOS Flash Tool, to address bugs, improve compatibility, or adjust power limits. However, the process carries significant risks, including power interruptions or incompatible files that can brick the card by corrupting the EEPROM, rendering it non-functional until recovery via external programmers. Following vulnerabilities in the 2010s that exposed firmware to tampering, modern implementations incorporate digital signing and Secure Boot mechanisms; NVIDIA GPUs, for example, use a hardware root of trust to verify signatures on firmware images, preventing unauthorized modifications and integrating with UEFI for chain-of-trust validation during boot. OEM customizations tailor VBIOS variants to specific platforms, with desktop versions optimized for higher power delivery and cooling headroom, while laptop editions incorporate stricter thermal profiles, reduced power states, and hybrid graphics integration to align with mobile constraints like battery life and shared chassis heat. These differences ensure compatibility but limit cross-platform flashing without risking instability. The VBIOS briefly interacts with OS drivers post-initialization to transfer control, enabling advanced runtime features.
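As a concrete illustration of the EDID data read over DDC, the sketch below validates the fixed 8-byte EDID header and decodes the three-letter manufacturer ID packed into bytes 8-9; the sample block is fabricated for the example and uses Dell's well-known "DEL" PNP code.

```python
# Minimal EDID sanity check: verify the fixed header of the 128-byte block
# and decode the manufacturer ID stored as three 5-bit letters in bytes 8-9.
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def manufacturer_id(edid: bytes) -> str:
    if edid[:8] != EDID_HEADER:
        raise ValueError("not a valid EDID block")
    word = (edid[8] << 8) | edid[9]
    letters = [(word >> shift) & 0x1F for shift in (10, 5, 0)]
    return "".join(chr(ord("A") - 1 + code) for code in letters)

# Example: bytes 0x10 0xAC encode the three-letter PNP ID "DEL".
sample = EDID_HEADER + bytes([0x10, 0xAC]) + bytes(118)
print(manufacturer_id(sample))  # DEL
```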

Display Output Hardware

Display output hardware in graphics cards encompasses the specialized chips and circuits responsible for encoding and converting digital video signals from the GPU into formats suitable for transmission to displays. These components handle the final stages of signal preparation, ensuring compatibility with various output standards while maintaining image integrity. Historically, this hardware included analog conversion mechanisms, but contemporary designs emphasize all-digital signal paths to support high-resolution, multi-display setups. The random-access-memory digital-to-analog converter (RAMDAC) was a core element in early display output hardware, functioning to translate digital pixel data stored in video memory into analog voltage levels for CRT and early LCD monitors. By accessing a programmable color lookup table in RAM, the RAMDAC generated precise analog signals for the red, green, and blue channels, enabling resolutions up to 2048x1536 at 75 Hz with clock speeds reaching 400 MHz in high-end implementations during the late 1990s and early 2000s. It played a crucial role in VGA and early DVI-I outputs, where analog components were required for legacy compatibility. As digital interfaces proliferated, RAMDACs became largely obsolete in consumer cards by the early 2010s, supplanted by fully digital pipelines that eliminated the need for analog conversion. The transition was driven by the adoption of standards like DVI-D and DisplayPort, which transmit uncompressed video digitally without signal degradation over distance. Modern GPUs retain minimal analog support only for niche VGA ports via integrated low-speed DACs, but primary outputs rely on digital encoders. For digital outputs, Transition-Minimized Differential Signaling (TMDS) encoders and transmitters form the backbone of DVI and HDMI connectivity, serializing parallel RGB data into high-speed differential pairs while minimizing electromagnetic interference. These encoders apply 8b/10b encoding to convert 24-bit (8 bits per channel) video data into 30-bit streams, with serialization at up to 10 times the pixel clock—enabling support for high resolutions at 60 Hz with 36-bit deep color or higher in HDMI 1.3 and beyond. Integrated within the GPU's display engine, they handle DC balancing and channel alignment for reliable transmission over DVI and HDMI ports. Content protection is integral to these digital encoders through High-bandwidth Digital Content Protection (HDCP), which applies AES-128 in counter mode to video streams before TMDS encoding, preventing unauthorized copying of premium audiovisual material. HDCP authentication occurs between the graphics card (as transmitter) and the display (as receiver), generating a 128-bit session key exchanged during the handshake; the cipher then XORs the key stream with pixel data in 24-bit blocks across the three TMDS channels. This ensures compliance for 4K and 8K content delivery, with re-authentication triggered by link errors detected through error-correcting codes in data islands. Multi-monitor configurations leverage the display output hardware's ability to drive multiple independent streams, with daisy-chaining via Multi-Stream Transport (MST) in DisplayPort enabling up to 4 native displays and extending to 6-8 total through chained hubs on 2025-era cards like NVIDIA's RTX 50 series. The hardware manages bandwidth allocation across streams, supporting simultaneous outputs while synchronizing timings to prevent tearing. This scalability is vital for professional workflows, where the GPU's display controller pipelines parallel signal generation without taxing the core rendering units. Display scalers within the output hardware perform real-time resolution upscaling and format adaptation, interpolating lower-resolution content to match native display panels—using algorithms such as bilinear or Lanczos filtering to upscale 1080p content to 4K—while converting color spaces like RGB to YCbCr for efficient transmission over bandwidth-limited links. These circuits apply matrix transformations to separate luminance (Y) from chrominance (CbCr), reducing data volume by subsampling the chroma channels (e.g., 4:2:2 format) without perceptible loss in perceived quality. Hardware acceleration ensures low-latency processing, often integrated with the TMDS encoder for seamless pipeline operation in video playback and gaming scenarios.
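The luma/chroma separation mentioned above can be illustrated with the BT.709 conversion below; this is a simplified floating-point sketch, whereas real display hardware applies the equivalent fixed-point matrix to integer pixel values.

```python
# Illustrative RGB -> YCbCr conversion using BT.709 luma coefficients, the
# kind of matrix transform a display scaler applies before chroma subsampling.
# Values are normalized to [0, 1].
def rgb_to_ycbcr_bt709(r: float, g: float, b: float) -> tuple[float, float, float]:
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556   # scaled so Cb spans roughly [-0.5, 0.5]
    cr = (r - y) / 1.5748
    return y, cb, cr

# Pure white keeps full luma and zero chroma; saturated red pushes Cr to its maximum.
print(rgb_to_ycbcr_bt709(1.0, 1.0, 1.0))   # (1.0, 0.0, 0.0)
print(rgb_to_ycbcr_bt709(1.0, 0.0, 0.0))   # (0.2126, ~-0.115, 0.5)
```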

Connectivity

Host Bus Interfaces

Host bus interfaces connect graphics cards to the motherboard, enabling data transfer between the GPU and the CPU, system memory, and other components. These interfaces have evolved to support increasing demands driven by advancements in graphics processing and computational workloads. Early standards like PCI and AGP laid the foundation for dedicated graphics acceleration, while modern PCIe dominates due to its scalability and performance. The Peripheral Component Interconnect (PCI) bus, introduced in June 1992 by Intel and managed by the PCI Special Interest Group (PCI-SIG), served as the initial standard for graphics cards, providing a shared 32-bit bus at 33 MHz for up to 133 MB/s of bandwidth. PCI allowed graphics adapters to integrate with general-purpose slots but suffered from bandwidth limitations for 3D graphics tasks. To address this, Intel developed the Accelerated Graphics Port (AGP) in 1996 as a dedicated interface for video cards, offering point-to-point connectivity to main memory with bandwidth of 266 MB/s (1x mode) in AGP 1.0, increasing to 533 MB/s (2x) and 1.07 GB/s (4x) in AGP 2.0, specifically targeting 3D graphics acceleration. AGP improved latency and texture data access compared to PCI, becoming the standard for consumer graphics cards through the early 2000s. The PCI Express (PCIe) interface, introduced by PCI-SIG in 2003 with version 1.0, replaced PCI and AGP by using serial lanes for higher throughput and full-duplex communication. Each subsequent version has doubled the data rate per lane while maintaining backward compatibility. PCIe 2.0 (2007) reached 5 GT/s, PCIe 3.0 (2010) 8 GT/s, PCIe 4.0 (2017) 16 GT/s, and PCIe 5.0 (2019 specification, with updates through 2022) 32 GT/s. Graphics cards typically use x16 configurations, providing up to 64 GB/s of bandwidth per direction in PCIe 5.0, sufficient for high-resolution gaming and compute workloads.
PCIe Version | Release Year | Data Rate per Lane (GT/s) | x16 Bandwidth (GB/s, bidirectional)
1.0 | 2003 | 2.5 | ~8
2.0 | 2007 | 5.0 | ~16
3.0 | 2010 | 8.0 | ~32
4.0 | 2017 | 16.0 | ~64
5.0 | 2019 | 32.0 | ~128
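The table's throughput figures can be reproduced from each generation's signaling rate and line-code efficiency, as in the sketch below (which reports per-direction bandwidth, i.e. half the bidirectional values above).

```python
# Usable x16 throughput from per-lane signaling rate and line-code efficiency
# (8b/10b for PCIe 1.0/2.0, 128b/130b from PCIe 3.0 onward).
GENERATIONS = {            # version: (GT/s per lane, encoding efficiency)
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
    "5.0": (32.0, 128 / 130),
}

def x16_bandwidth_gbytes(version: str, lanes: int = 16) -> float:
    """Usable one-direction bandwidth in GB/s for the given lane count."""
    gt_per_s, efficiency = GENERATIONS[version]
    return gt_per_s * efficiency * lanes / 8   # 8 bits per byte

for v in GENERATIONS:
    print(f"PCIe {v} x16: ~{x16_bandwidth_gbytes(v):.0f} GB/s per direction")
```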
Power delivery via the PCIe slot is standardized at 75 W through the 12 V and 3.3 V rails, with higher-power graphics cards requiring auxiliary connectors like 6-pin (up to 75 W additional) or 8-pin (up to 150 W) from the power supply. For systems without internal PCIe slots, such as laptops, Thunderbolt serves as an alternative for external GPUs (eGPUs). External graphics enclosures appeared as early as Alienware's Graphics Amplifier in late 2014; Thunderbolt 3 and later versions provide PCIe tunneling over USB-C, supporting up to 40 Gbps bandwidth for portable graphics acceleration. Another external option is OCuLink, a direct PCIe connection using small form-factor connectors, providing up to PCIe 4.0 x4 (64 Gbps) bandwidth for eGPUs in desktop and portable setups since 2024. Looking ahead, PCIe 6.0, finalized in 2022, doubles bandwidth to 64 GT/s per lane (up to 256 GB/s for x16) using PAM4 signaling, with products launching in 2025 for data-center and high-end consumer applications (as of November 2025). In modular systems, the Open Compute Project's OCP Accelerator Module (OAM), specified since 2019, offers a standardized form factor for integrating GPUs and accelerators with up to 700 W TDP and flexible interconnects like PCIe or Ethernet.

Display Interfaces

Display interfaces on graphics cards provide the physical connectors and protocol standards for transmitting video signals from the GPU to external displays, monitors, or other output devices. These interfaces have evolved from analog to digital technologies to support higher resolutions, refresh rates, and advanced features like high dynamic range (HDR) imaging. Analog interfaces, once dominant, have largely been supplanted by digital ones due to limitations in signal quality over distance and support for modern content.

Analog Interfaces

The Video Graphics Array (VGA) interface, introduced by IBM in 1987, uses a DE-15 (D-subminiature 15-pin) connector and transmits analog RGB video signals. It supports resolutions from 640×480 at 60 Hz (its namesake VGA mode) up to 2048×1536 at 75 Hz, depending on cable quality and signal integrity. However, VGA has been fading from graphics cards since the early 2010s, as digital interfaces offer superior image quality without the susceptibility to noise and signal degradation.

Digital Interfaces

Digital interfaces transmit uncompressed or compressed video data via serial links, enabling higher bandwidth and features like embedded audio and content protection. The Digital Visual Interface (DVI), developed by the Digital Display Working Group in 1999, uses the TMDS (Transition-Minimized Differential Signaling) protocol for its digital links. Single-link DVI supports up to 3.96 Gbps (165 MHz pixel clock), sufficient for resolutions like 1920×1200 at 60 Hz. Dual-link DVI doubles this to 7.92 Gbps (330 MHz pixel clock), handling up to 2560×1600 at 60 Hz. DVI remains on some legacy cards but is increasingly rare on new graphics hardware. High-Definition Multimedia Interface (HDMI), first standardized in 2002, integrates video, audio, and control signals over a single cable. HDMI 2.1, released in 2017, provides up to 48 Gbps bandwidth via Fixed Rate Link (FRL) signaling, supporting 8K at 60 Hz with 4:4:4 chroma and HDR. It includes Audio Return Channel (ARC) for bidirectional audio and enhanced eARC for uncompressed formats like Dolby TrueHD. HDMI 2.0, its predecessor from 2013, offers 18 Gbps bandwidth, enabling 4K at 60 Hz with 4:4:4 chroma. DisplayPort (DP), developed by VESA since 2006, employs a packetized protocol for scalable bandwidth. DP 2.0, released in 2019 and updated to 2.1 in 2022, delivers up to 80 Gbps (UHBR20 mode with four 20 Gbps lanes), supporting 16K (15360×8640) at 60 Hz using Display Stream Compression (DSC). It natively includes adaptive sync technologies like VESA Adaptive-Sync (the basis for AMD FreeSync) for tear-free gaming. Earlier versions like DP 1.4 provide 32.4 Gbps for 8K at 60 Hz.
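Whether a given video mode fits an interface's data rate can be estimated from resolution, refresh rate, and bit depth plus a blanking allowance, as sketched below; the flat 20% blanking overhead is a simplification of the actual video timing formulas.

```python
# Rough check of whether a video mode fits within an interface's data rate:
# uncompressed bandwidth ~= width x height x refresh x bits-per-pixel,
# inflated ~20% for blanking intervals.
def mode_bandwidth_gbps(width: int, height: int, refresh_hz: int,
                        bits_per_pixel: int = 24, blanking_overhead: float = 0.2) -> float:
    bits_per_second = width * height * refresh_hz * bits_per_pixel * (1 + blanking_overhead)
    return bits_per_second / 1e9

print(f"{mode_bandwidth_gbps(3840, 2160, 60):.1f} Gbps")      # ~14.3: fits HDMI 2.0's 18 Gbps
print(f"{mode_bandwidth_gbps(3840, 2160, 120):.1f} Gbps")     # ~28.7: needs HDMI 2.1 or compression
print(f"{mode_bandwidth_gbps(7680, 4320, 60, 30):.1f} Gbps")  # ~71.7: 8K60 10-bit needs DP 2.x or DSC
```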

Other Interfaces

USB-C with DisplayPort Alt Mode, standardized by VESA in 2014, repurposes the USB-C connector for video output by tunneling DisplayPort signals alongside USB data and power delivery. It supports full DP bandwidth (up to 80 Gbps in DP 2.0/2.1 configurations) over passive cables up to 2 meters, enabling single-cable solutions for 8K video and multi-monitor setups. Video-In Video-Out (VIVO), a legacy feature on select high-end graphics cards from the late 1990s to the mid-2000s, uses a 9-pin or 10-pin mini-DIN connector for analog TV signal capture and output. It handles S-Video (separated luminance/chrominance) and composite video (combined signal), typically supporting NTSC/PAL standards up to 720×480 or 720×576 resolutions for video capture and broadcast applications. VIVO has been discontinued on modern cards due to the rise of digital capture methods.

Key Features

Multi-Stream Transport (MST), introduced in DisplayPort 1.2 (2010), allows a single cable to carry up to 63 independent audio/video streams, enabling daisy-chaining of multiple displays (e.g., several QHD monitors at 60 Hz from one DP 1.4 port). This is particularly useful for professional multi-monitor setups. High Dynamic Range (HDR) support enhances contrast and color by transmitting metadata for dynamic tone mapping. It became available in HDMI 2.0a (April 2015) and DisplayPort 1.4 (March 2016), requiring at least 18 Gbps bandwidth for 4K HDR at 60 Hz with 10-bit color. Both interfaces now support HDR10 and Dolby Vision in later revisions.

Compatibility and Limitations

Adapters like DVI-to-HDMI or DP-to-VGA convert signals but may reduce bandwidth or require active electronics for digital-to-analog conversion, limiting resolutions (e.g., VGA adapters cap at 1920×1200). Bandwidth constraints affect high-end use; for instance, HDMI 2.0's 18 Gbps supports 4K@60Hz but not 4K@120Hz without compression, while DP 2.0's higher throughput avoids such bottlenecks. All modern interfaces include HDCP for protected content.
Interface | Max Bandwidth | Example Max Resolution | Key Features
VGA | Analog (variable MHz clock) | 2048×1536@75Hz | DE-15 connector, no audio
DVI (Dual-Link) | 7.92 Gbps | 2560×1600@60Hz | TMDS signaling, optional analog
HDMI 2.1 | 48 Gbps | 8K@60Hz (4:4:4) | eARC, Dynamic HDR, audio/video
DisplayPort 2.0 | 80 Gbps | 16K@60Hz (with DSC) | MST, Adaptive-Sync, USB-C tunneling
USB-C Alt Mode | Up to 80 Gbps (DP 2.0) | 8K@60Hz | Power delivery, USB data
VIVO | Analog (NTSC/PAL) | 720×480@60Hz | S-Video/composite I/O

Performance and Scaling

Multi-GPU Configurations

Multi-GPU configurations enable the combination of multiple graphics cards to increase rendering performance, primarily through specialized interconnects and rendering modes that distribute workloads across GPUs. These setups require compatible hardware, such as motherboards with multiple PCIe slots and bridging connectors, along with software support from drivers and applications. NVIDIA's Scalable Link Interface (SLI), introduced in 2004, linked multiple GPUs using a high-speed bridge to divide rendering tasks. It supported up to four cards in configurations like 2x2 or 4x1 setups. However, SLI support for new profiles ended on January 1, 2021, with no updates for newer games, and bridge connectors were removed starting with the RTX 40 series, limiting it to legacy RTX 20 series and older hardware. Key modes included Alternate Frame Rendering (AFR), where GPUs alternate complete frames for balanced load distribution, and Split Frame Rendering (SFR), which divides the screen into regions for each GPU with dynamic balancing to handle uneven scenes. AMD's CrossFire, launched in 2005, provided a similar multi-GPU solution for Radeon cards, supporting two to four identical GPUs from the same series via a bridge or direct PCIe connection. The brand was retired in 2017, with ongoing support limited to DirectX 11 applications on older hardware; DirectX 12 uses mGPU branding but relies on inconsistent developer profiles. It employed modes such as Alternate Frame Rendering (AFR), where GPUs render successive frames, and Split Frame Rendering (SFR) for partitioning the frame, with driver profiles optimizing compatibility for DirectX 9-12 and OpenGL applications. In modern systems, NVIDIA's NVLink offers a high-bandwidth interconnect exceeding 1.8 TB/s bidirectional throughput, primarily for compute workloads like AI training rather than gaming, enabling seamless scaling across dozens of GPUs in data-center environments. Traditional SLI and CrossFire have declined in gaming due to CPU bottlenecks, where single-threaded game engines limit scaling efficiency, and inconsistent developer support for multi-GPU parallelism in APIs like DirectX 12. Historical performance scaling in multi-GPU setups using SLI or CrossFire typically yielded 1.5x to 1.8x gains over a single GPU in supported titles at high resolutions (pre-2021), though results varied by game and mode. However, issues like micro-stuttering—brief frame time inconsistencies from asynchronous rendering—can degrade smoothness, particularly in AFR modes without explicit application optimization. With the discontinuation of hardware-based multi-GPU for gaming, such scaling is no longer viable in modern titles. As of 2025, software-based alternatives have emerged for dual-GPU setups, such as Lossless Scaling, a tool that uses a secondary GPU for AI-driven frame generation and upscaling to boost frame rates without requiring bridged SLI or CrossFire support. This approach can achieve 2x-3x effective improvements in various games by offloading tasks, though it depends on the secondary GPU's capabilities and may introduce minor latency. Alternatives to bridged multi-GPU include integrated multi-GPU (mGPU) in APUs, where the CPU's integrated graphics pairs with a discrete GPU via AMD Dual Graphics for modest performance boosts in light workloads. External GPUs (eGPUs) connected via Thunderbolt 3/4 allow multi-GPU extension from laptops, though bandwidth limitations cap scaling to around 1.3x-1.5x compared to internal setups.
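The scaling and micro-stutter behavior described above can be illustrated with a toy frame-time model; the millisecond values below are invented for the example and do not represent measurements of any specific SLI or CrossFire setup.

```python
# Toy model of why AFR scaling rarely doubles perceived smoothness: two GPUs
# alternating frames raise average throughput, but uneven frame delivery
# (micro-stutter) means perceived pacing is governed by the longer gaps.
def effective_fps(frame_times_ms: list[float]) -> float:
    """Throughput FPS from a list of frame times."""
    return 1000.0 * len(frame_times_ms) / sum(frame_times_ms)

single_gpu = [16.7] * 8         # steady ~60 FPS pacing
afr_dual   = [9.0, 13.0] * 4    # alternating GPUs, uneven pacing

print(f"single GPU: {effective_fps(single_gpu):.0f} FPS (even 16.7 ms pacing)")
print(f"AFR pair:   {effective_fps(afr_dual):.0f} FPS average")
print(f"AFR pacing limited by {max(afr_dual)} ms gaps (~{1000 / max(afr_dual):.0f} FPS feel)")
```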

Graphics APIs

Graphics APIs serve as standardized software interfaces that enable applications to harness the computational power of graphics processing units (GPUs) for tasks such as 2D and 3D rendering, video processing, and general-purpose computation. These APIs abstract the underlying hardware complexities, allowing developers to issue commands for rendering pipelines, shader execution, and resource management while optimizing for performance across diverse GPU architectures. Primarily developed by industry consortia or vendors, they emphasize efficiency in modern real-time applications like gaming and virtual reality, where low-latency access to GPU resources is critical. Microsoft's DirectX represents a suite of APIs tailored for Windows platforms, with Direct3D serving as the core API for graphics. DirectX 11, released in 2009, introduced tessellation and compute shaders to enhance geometric detail and parallelism in rendering. Subsequent versions advanced further: DirectX 12, launched in 2015, reduced CPU overhead by enabling explicit control over GPU resources, such as command lists and descriptor heaps, for better multi-threading in complex scenes. DirectX 12 Ultimate, announced in 2020, consolidates advanced features including DirectX Raytracing (DXR), which was first introduced in 2018 to support hardware-accelerated ray tracing for realistic lighting, reflections, and shadows in real-time environments. DXR integrates ray generation and hit shaders, allowing developers to trace rays against acceleration structures for efficient light path simulation without fully replacing rasterization. Vulkan, developed by the Khronos Group and released in 2016, is an open-standard, cross-platform API designed for high-efficiency access to GPUs in both desktop and mobile environments. It addresses limitations of prior APIs by providing low-overhead abstractions, where developers manually manage synchronization, memory allocation, and command buffers to minimize driver intervention and CPU bottlenecks. This explicit model supports multi-threaded command recording, enabling scalable performance on multi-core systems for demanding applications. Vulkan's SPIR-V intermediate language facilitates pre-compiled shaders, reducing runtime compilation overhead and ensuring portability across vendors like NVIDIA, AMD, and Intel. OpenGL, initially released in 1992 by Silicon Graphics and now maintained by the Khronos Group, established the foundation for cross-platform 3D graphics programming with its state-machine-based interface for rendering primitives, textures, and lighting. As a legacy API, it has been deprioritized in favor of Vulkan for future innovations, though OpenGL remains in active maintenance with new extensions added as of 2025, such as GL_EXT_mesh_shader for enhanced gaming support; the core specification has seen no major versions since OpenGL 4.6 in 2017. It continues to see use in specialized fields like computer-aided design (CAD) software, where its simplicity suits precise modeling and visualization without the complexity of modern low-level APIs. For general-purpose GPU (GPGPU) computing, vendor-specific platforms extend graphics hardware to non-graphics workloads like scientific simulations and machine learning. NVIDIA's CUDA, introduced in 2006, is a parallel computing platform that allows developers to program GPUs using C/C++ extensions for tasks such as matrix operations and data-parallel algorithms, leveraging thousands of CUDA cores for high-throughput execution. AMD's ROCm, an open-source stack launched in 2016, provides analogous functionality for AMD GPUs, including runtime libraries, debuggers, and the HIP language for portable kernel code, supporting deployment in high-performance computing (HPC) environments. A key differentiator among these APIs is driver overhead, defined as the CPU cycles spent on driver translation of high-level calls to GPU instructions. Legacy APIs like OpenGL and DirectX 11 incur higher overhead due to implicit state management and automatic error checking, potentially limiting frame rates in CPU-bound scenarios. In contrast, low-overhead designs in Vulkan and DirectX 12 shift more responsibility to applications—such as explicit barrier synchronization—yielding up to 20-30% better performance in multi-threaded rendering benchmarks on modern hardware. This efficiency is crucial for real-time 3D applications, where reduced latency enables smoother experiences. Driver models underpin API implementation on operating systems, with Microsoft's Windows Display Driver Model (WDDM), introduced in 2006 with Windows Vista, providing the architecture for GPU integration. WDDM enables virtualized video memory, preemption for responsive scheduling, and timeout detection for stability, allowing multiple applications to share GPU resources without crashes. Evolving through versions like WDDM 2.0 (2015) for multi-monitor efficiency, it supports the low-overhead paradigms of DirectX 12 by handling context switching in user-mode drivers.
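The overhead difference can be illustrated with a toy cost model: total per-frame CPU time as draw calls times per-call driver cost, optionally spread across threads recording command lists in parallel. The microsecond costs and call counts below are assumptions for illustration, not measured driver figures.

```python
# Toy cost model for why low-overhead APIs matter: per-frame CPU time is
# (driver cost per call) x (number of draw calls), divided across worker
# threads when the API allows parallel command recording.
def cpu_frame_time_ms(draw_calls: int, cost_per_call_us: float, threads: int = 1) -> float:
    return draw_calls * cost_per_call_us / threads / 1000.0

calls = 10_000
legacy   = cpu_frame_time_ms(calls, cost_per_call_us=25.0, threads=1)   # implicit, single-threaded driver
explicit = cpu_frame_time_ms(calls, cost_per_call_us=8.0, threads=4)    # explicit API, parallel recording

print(f"legacy-style API:   {legacy:.1f} ms of CPU work per frame")     # 250 ms -> CPU-bound
print(f"explicit-style API: {explicit:.1f} ms of CPU work per frame")   # 20 ms -> fits a 60 FPS budget
```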

Applications and Industry

Major Manufacturers

NVIDIA Corporation, founded on April 5, 1993, by Jensen Huang, Chris Malachowsky, and Curtis Priem, pioneered accelerated computing with a focus on 3D graphics for gaming and multimedia applications. The company invented the graphics processing unit (GPU) in 1999, which revolutionized PC gaming and laid the foundation for its GeForce and RTX series, renowned for high-performance rendering and ray tracing technologies central to gaming and AI workloads. By the second quarter of 2025, NVIDIA held approximately 94% of the discrete GPU market share, underscoring its dominance in standalone graphics cards for consumer and professional use. Advanced Micro Devices (AMD) entered the graphics market through its $5.4 billion acquisition of ATI Technologies in 2006, integrating ATI's expertise into its portfolio and enabling advancements in CPU-GPU convergence. Post-acquisition, AMD continued the Radeon brand with series like the Radeon HD 2000 and subsequent generations, emphasizing versatile graphics solutions for gaming and computing. A key differentiator is AMD's commitment to open-source drivers, such as the AMDGPU kernel module for Linux, which supports hardware from the Graphics Core Next (GCN) architecture onward, and open-source Vulkan drivers for broad compatibility. This approach fosters developer accessibility and integration in open ecosystems, including AMD's Accelerated Processing Units (APUs) that blend CPU and GPU capabilities for efficient hybrid computing. Intel entered the discrete graphics market with the launch of its Arc series in March 2022, initially targeting mobile devices before expanding to desktops and workstations in the second quarter of that year. The lineup, based on the Xe architecture, emphasizes scalability across integrated and discrete configurations, supporting features like hardware ray tracing, AI acceleration, and high-resolution displays to complement Intel's ecosystem. This hybrid focus aims to streamline graphics performance in laptops and desktops, bridging embedded solutions with standalone cards for broader market penetration. Add-in-board (AIB) partners such as ASUS, MSI, and Gigabyte play a crucial role in the graphics card ecosystem by customizing reference designs from NVIDIA, AMD, and Intel with enhanced cooling solutions and factory overclocking capabilities. These partners develop proprietary heatsinks, fans, and vapor chambers—often spanning multiple expansion slots—to manage thermal loads in high-end models like the RTX 4090, enabling quieter operation and sustained performance under heavy loads. Their innovations allow for factory-overclocked variants that exceed stock specifications, catering to enthusiasts while maintaining compatibility with original chip architectures. The graphics industry has evolved from independent hardware pioneers like 3dfx Interactive, founded in 1994 and known for its Voodoo Graphics accelerator that popularized 3D acceleration in PCs, to a fabless model dominated by design-focused firms. NVIDIA acquired 3dfx's assets in 2000, and the company filed for bankruptcy in 2002, marking the decline of in-house fabrication among smaller players. Similarly, ATI Technologies, which began producing graphics chips in 1986, operated as an integrated manufacturer until its 2006 acquisition by AMD shifted it toward a fabless strategy. Today, major manufacturers like NVIDIA and AMD design GPUs but outsource production to foundries such as TSMC, prioritizing architecture design and software ecosystems over physical fabrication. The graphics card market, encompassing GPUs for consumer and professional use, is estimated at USD 82.68 billion in 2025, reflecting robust growth fueled by escalating demand in gaming and AI applications.
This expansion is propelled by the need for high-performance rendering in sectors such as video gaming, which accounts for a substantial share of shipments, and AI-driven data centers, where GPUs accelerate training and inference workloads. Key market trends include lingering effects from the 2021 supply shortages, exacerbated by cryptocurrency mining demand that absorbed up to 25% of GPU shipments in early quarters, leading to inflated prices and reduced availability for gamers and professionals. Post-2022, mining's influence has waned significantly due to Ethereum's shift to proof-of-stake, resulting in declining GPU utilization for mining and a stabilization of supply chains. However, by 2025, renewed GPU scarcity has emerged due to surging demand from artificial intelligence applications, with NVIDIA prioritizing production for enterprises and AI labs over consumer and gaming markets, leading to potential delays and reduced availability for mid-to-high-end gaming GPUs. Additionally, sustainability initiatives are gaining traction, with manufacturers prioritizing energy-efficient designs that reduce thermal design power (TDP) to align with global environmental standards and lower operational costs. For instance, leading firms aim for substantial efficiency gains, such as 30-fold improvements in AI server accelerators by 2025 compared to 2020 baselines. Graphics cards find diverse applications across industries, with gaming representing a primary driver through real-time rendering for immersive experiences in titles like competitive multiplayer games. In professional realms, they enable complex visual effects (VFX) in film and television production, accelerating ray tracing and particle simulations for studios handling high-resolution content. Scientific simulations, including climate modeling and molecular dynamics, leverage GPU parallelism to handle vast datasets far beyond CPU capabilities. Consumer graphics cards typically range from $200 for entry-level models suitable for basic gaming to over $2,000 for high-end variants offering advanced ray tracing and AI enhancements, while enterprise-grade options command premiums of 20-50% for certified reliability in professional environments. Looking ahead, integration of dedicated AI capabilities in graphics cards is poised to expand their role in on-device inference for edge and mobile applications, enhancing real-time processing without cloud dependency.

  21. [21]
    Computer Video Card History
    Sep 12, 2023 · 1981. IBM developed its first two video cards, the MDA (Monochrome Display Adapter) and CGA (Color Graphics Adapter), in 1981. The MDA had 4 ...Missing: sources | Show results with:sources
  22. [22]
    [PDF] Overview of Graphics Systems - Texas Computer Science
    Aug 8, 2003 · Early raster-scan computer systems ... For characters that are defined as outlines, the shapes are scan converted into the frame buffer.
  23. [23]
    The History of the GPU - Steps to Invention | springerprofessional.de
    The first competitive graphics card using the IBM specification was the Hercules Graphics Card (HGC) in 1982. That board helped establish the PC as a ...
  24. [24]
    Graphics Cards - DOS Days
    Arguably the lowest grade of early PC colour graphics was IBM's Color Graphics Adapter (CGA). Introduced in 1981 it was the first to be able to display colour, ...Missing: history sources
  25. [25]
    Famous Graphics Chips: S3 ViRGE - IEEE Computer Society
    Aug 27, 2020 · S3 responded to the demand and in 1995 introduced the S3 Virtual Reality Graphics Engine (ViRGE) graphics chipset; one of the first 2D/3D ...
  26. [26]
    How DirectX defined PC gaming... with help from a shotgun-toting ...
    Jul 27, 2020 · All thanks to the DirectX APIs. The release of DOOM95 was hugely important for Microsoft as not only was DOOM estimated to be installed on ...
  27. [27]
    Famous Graphics Chips 3Dfx's Voodoo - IEEE Computer Society
    Jun 5, 2019 · 3Dfx released its Voodoo Graphics chipset in November 1996. Voodoo was a 3D-only add-in board (AIB) that required an external VGA chip.
  28. [28]
  29. [29]
    GeForce RTX: A Whole New Way To Experience Games - NVIDIA
    Sep 26, 2018 · NVIDIA pioneered programmable shading with the introduction of the world's first programmable GPU, GeForce 3, in 2001. And over the course ...
  30. [30]
    Press Release - SEC.gov
    Oct 25, 2006 · The value of the ATI acquisition of approximately $5.4 billion is based upon the closing stock price of AMD common stock on October 24, 2006 of ...
  31. [31]
    NVIDIA DLSS 4 Technology
    DLSS 4, brings new Multi Frame Generation and enhanced Super Resolution, powered by GeForce RTX™ 50 Series GPUs and fifth-generation Tensor Cores.
  32. [32]
  33. [33]
    Graphics Card Form Factors Explained! - Overclockers UK
    Sep 15, 2022 · GPU form factor depends on two things, length and width. The length of a GPU can be determined by how many fans it has, ranging from zero to three.
  34. [34]
  35. [35]
    A Comprehensive Guide to GPU Length Compatibility - darkFlash
    Feb 13, 2025 · From the NVIDIA RTX 40 & 30 series to the AMD Radeon 7000 series, the longest flagship GPUs range from 350mm to 360mm. This means a case must ...
  36. [36]
    New GeForce RTX 50 Series Graphics Cards & Laptops ... - NVIDIA
    Jan 6, 2025 · For the GeForce RTX 5090, our engineers have created an unbelievable 2-slot, 304mm long x 137mm high x 2-slot wide, SFF-Ready ... At CES 2025 ...Missing: dimensions | Show results with:dimensions
  37. [37]
    How do I avoid GPU sag?
    ### GPU Sag Summary
  38. [38]
    NVIDIA GeForce RTX 5070 Family Graphics Cards
    ### Summary of RTX 5070 Series Dimensions and Form Factors
  39. [39]
    MGX Platform for Modular Server Design - NVIDIA
    NVIDIA MGX, a modular reference design that can be used for a wide variety of use cases, from remote visualization to supercomputing at the edge.
  40. [40]
    GeForce RTX 3080 Family of Graphics Cards - NVIDIA
    The hybrid vapor chamber includes a heat pipe heat-sink that efficiently distributes heat and allows air to flow through. The short PCB and high-density ...
  41. [41]
    GPU Cooling: How to Keep Your Graphics Card Cool
    Jan 13, 2025 · Passive GPU cooling is a type that does not use any moving parts such as fans or pumps. Instead, it relies on natural convection and radiative ...
  42. [42]
    Up Your Game - What Graphics Card Cooling Options are There?
    Nov 26, 2020 · Unlike traditional air coolers, liquid cooling works by pumping liquid through a radiator and then moving that cooled liquid across the GPU to ...
  43. [43]
    [PDF] AMD Instinct MI355X Platform
    While an air-cooled option is available, the direct liquid cooling option enables AMD Instinct MI355X GPUs to consume up to 1400W. Liquid cooling helps reduce ...
  44. [44]
    Some RTX 3080, RTX A6000 GPUs Are Prone to Vapor Chamber ...
    Oct 14, 2023 · Vapor chambers effectively manage high heat loads, ensuring uniform heat distribution and maintaining optimal operating temperatures of chips.
  45. [45]
    [PDF] NVIDIA ADA CRAFT
    Optimized vapor chamber and heat pipes: o Increased the number of heat pipes extending to the east fan from 4 to 6 to increase temperature uniformity on ...Missing: systems | Show results with:systems
  46. [46]
    Safe GPU Temperature Range: What is a Normal GPU Temp?
    Apr 25, 2022 · Modern NVIDIA GPUs should stay below 85C under full load to be safe, although many can exceed this by a few degrees before hitting their max temperatures.
  47. [47]
    RX 5700 XT MSI Gaming X Junction temperature 110 C
    May 18, 2020 · It is running as per AMD's specification. Operating at up to 110C Junction Temperature during typical gaming usage is expected and within spec.<|separator|>
  48. [48]
    Video Card Cooling: Blower-Style vs. Open-Air Shootout
    Jan 29, 2015 · A blower-style cooler will be much more predictable, performing within its design parameters as long as it has access to outside air.<|separator|>
  49. [49]
    3 reasons why I undervolt every Nvidia GPU I own - XDA Developers
    Sep 5, 2025 · So, if you're thermally limited, undervolting can actually improve your frame rates a bit, especially the 1% and 0.1% lows. And even if you aren ...
  50. [50]
    Does undervolting cause performance loss? | TechPowerUp Forums
    Jun 11, 2021 · It's actually increasing performance in most cases. Lower power consumption = lower heat = no thermal throttling = increased performance.
  51. [51]
    GeForce RTX 5090 Graphics Cards - NVIDIA
    The GeForce RTX 5090 is powered by the NVIDIA Blackwell architecture and equipped with 32 GB of super-fast GDDR7 memory, so you can do it all. Starting at $1999.
  52. [52]
    RTX 5060: Price, Specs, Benchmarks, and Comparison (2025)
    Sep 3, 2025 · Official specifications of the RTX 5060 · GPU : Ada Lovelace Refresh (AD107) · Video memory : 8 GB GDDR7 · Memory bus : 128-bit · TDP : 150 W · Boost ...Rtx 5060: Price, Specs... · Rtx 5060 Price: How Much... · Rtx 5060 Vs Rtx 3070: The...<|separator|>
  53. [53]
    What Is GPU Auxiliary Power? 12VHPWR, 8-Pin, 6-Pin Connectors ...
    Sep 27, 2025 · A 6-pin auxiliary power connector provides up to 75 watts. Combined with the 75 W provided by the PCI Express slot itself, a total of 150 W can ...
  54. [54]
    PCIe Power Pin Layout: Best Practices - Free Online PCB CAD Library
    Jun 27, 2024 · As shown above, the 6-pin PCIe connectors provide two 12 V sources for DC power, while the 8-pin connectors deliver three 12 V ports for power ...
  55. [55]
    What is the 12VHPWR cable, and why was it introduced?
    Jul 16, 2025 · The 12VHPWR cable is a 16-pin high-power cable (12+4 pins) that can deliver up to 600W of power to modern graphics cards.
  56. [56]
    Recommended PSU Table | GPU Power Requirements
    ### Summary of Recommended PSU Wattage for High-End GPUs and Multi-GPU Setups
  57. [57]
    [PDF] NVIDIA GPU BOOST FOR TESLA
    Nov 11, 2014 · If during the run the board starts exceeding power/thermal limit, the power monitoring algorithm may lower the GPU clock for a brief period as a ...<|control11|><|separator|>
  58. [58]
    How to set Nvidia GPU Power Limit (nvidia-smi)?
    Aug 26, 2018 · To query your power limit: sudo nvidia-smi -q -d POWER And to set it sudo nvidia-smi -pl (base power limit+11) And add that to a shell script that runs at ...
  59. [59]
    How to Change NVIDIA GPU Power Limit Settings?
    Feb 26, 2025 · By using tools like MSI Afterburner, you can fine-tune your GPU's power consumption to match your performance needs, whether aiming for ...
  60. [60]
    Users Encounter Melting Issue With Nvidia's RTX 4090 12VHPWR ...
    Oct 28, 2022 · The adapter cable from Nvidia can suffer overheating issues due to its poor build quality, according to tests from journalists and reviewers.
  61. [61]
    12VHPWR is a Dumpster Fire | Investigation into Contradicting ...
    Mar 24, 2025 · This was hinted at as early as August 2022, when NVIDIA made that PCI-SIG presentation about 12VHPWR cables melting when pulled at sharp angles.
  62. [62]
  63. [63]
    [PDF] NVIDIA RTX BLACKWELL GPU ARCHITECTURE
    The NVIDIA Streaming Multiprocessor (SM) is a core component of the NVIDIA GPU architecture, ... Architecture, 2025). AI is embedded into parts of the ...
  64. [64]
    AMD Unveils Next-Generation AMD RDNA™ 4 Architecture with the ...
    Feb 28, 2025 · Unified AMD RDNA™ 4 Compute Units – Features up to 64 advanced AMD RDNA™ 4 compute units delivering up to 40% higher gaming performance than ...
  65. [65]
    NVIDIA Turing Architecture In-Depth | NVIDIA Technical Blog
    Sep 14, 2018 · The introduction of Tensor Cores into Turing-based GeForce gaming GPUs makes it possible to bring real-time deep learning to gaming applications ...Turing Tu102 Gpu · Turing Memory Architecture... · Turing Rt Cores
  66. [66]
    NVIDIA GeForce RTX 5090 Specs: Everything You Need to Know
    Mar 8, 2025 · CUDA Cores: 21,760 cores, ideal for high-performance graphics and AI workloads. L2 Cache: Expanded to 98MB, up from 73MB on the RTX 4090 ...Missing: count | Show results with:count
  67. [67]
    Radeon™ RX 9070 XT - AMD
    Compute Units: 64. Boost Frequency: Up to 2970 MHz. Game Frequency: 2400 MHz. Ray Accelerators: 64. AI Accelerators: 128. Peak Pixel Fill-Rate: Up to 380.2 GP/s.
  68. [68]
  69. [69]
    [PDF] History and Evolution of GPU Architecture
    In 2001,. NVIDIA released the GeForce 3 which gave programmers the ability to program parts of the previously non-programmable pipeline [1]. Instead of ...
  70. [70]
    Evolution of the Graphics Pipeline: a History in Code (part 2)
    Aug 28, 2013 · ATI's introduction of R200 GPU (commercially branded as Radeon 8500) in October 2001 marked the introduction of the first truly programmable ...
  71. [71]
    AMD's first RDNA 4 GPU die pictured: Navi 48 around 390 mm2 ...
    Jan 8, 2025 · TSMC's 4nm-class manufacturing technologies (e.g., N4P) belong to the same process development kit as the foundry's 5nm-class fabrication ...
  72. [72]
    GeForce RTX 50 series - Wikipedia
    Announced at CES 2025, it debuted with the release of the RTX 5080 and RTX 5090 in January 2025. It is based on Nvidia's Blackwell architecture featuring Nvidia ...GeForce RTX 40 series · Deep Learning Super Sampling · Unified shader model<|separator|>
  73. [73]
    GDDR6 vs HBM - Different GPU Memory Types | Exxact Blog
    Feb 29, 2024 · GDDR6 is the most recent memory standard for GPUs with a peak per-pin data rate of 16Gb/s and a max memory bus width of 384-bits. Found in the ...
  74. [74]
    Introducing Low-Level GPU Virtual Memory Management
    Apr 15, 2020 · The new CUDA 10.2 virtual memory management functions enable more efficient dynamic data structures and better control of GPU memory usage in ...
  75. [75]
    World's Fastest Discrete Graphics Memory From Micron Powers ...
    Sep 1, 2020 · Working with visual computing technology leader NVIDIA, Micron debuted GDDR6X in the new NVIDIA® GeForce RTX™ 3090 and GeForce RTX 3080 ...Missing: introduction | Show results with:introduction
  76. [76]
    Micron Reveals GDDR6X Details: The Future of Memory, or a ...
    Sep 6, 2020 · Micron's GDDR6X is the industry's first mass-produced memory that uses four-level pulse amplitude modulation signaling, or PAM4.
  77. [77]
    NVIDIA Hopper Architecture In-Depth | NVIDIA Technical Blog
    Mar 22, 2022 · The H100 SXM5 GPU supports 80 GB of HBM3 memory, delivering over 3 TB/sec of memory bandwidth, effectively a 2x increase over the memory ...Missing: AMD | Show results with:AMD
  78. [78]
    SK hynix at NVIDIA GTC 2022: Demonstrating the World's Fastest ...
    Mar 30, 2022 · SK hynix's HBM3 uses over 8,000 TSVs per stack (i.e. over 100,000 TSVs in a 12-Hi stack) and can feature up to 12-Hi stack, which is an upgrade ...
  79. [79]
    AMD's Radeon Pro W7900 Gets RDNA 3, 48GB, 12K Support
    Apr 13, 2023 · The range-topping Radeon Pro W7900 is based on a full-fat Navi 31 GPU with 6144 stream processors (96 compute units) and 384-bit 48GB GDDR6 ECC memory ...
  80. [80]
    NVIDIA GeForce RTX 4090 Specs - GPU Database - TechPowerUp
    The RTX 4090 has 16384 cores, 24GB GDDR6X memory, 384-bit bus, 2235 MHz base clock (2520 MHz boost), 1x HDMI 2.1, 3x DisplayPort 1.4a, and 450W power draw.
  81. [81]
    how much video memory is used by framebuffer? - Khronos Forums
    Dec 31, 2003 · A 1024x768x32-bit framebuffer takes 3MB * 3 (front, back, and z-buffer), or 9MB of room. So the framebuffer will take up no less than 9MB of room, possibly ...Missing: processing | Show results with:processing
  82. [82]
    Mipmap levels and video memory - OpenGL - Khronos Forums
    Feb 6, 2004 · It's common knowledge nowadays that all consumer cards store the whole mipmapping chain in video memory as soon as the texture is used at all.Missing: buffer | Show results with:buffer
  83. [83]
    can a low-wattage PSU cause stutter in games? - PC Gamer Forums
    Oct 8, 2023 · VRAM Usage: Monitor your GPU's VRAM usage. If you're running out of VRAM, it can lead to stuttering. PSU Capacity: While 550W is generally ...Missing: exhaustion | Show results with:exhaustion
  84. [84]
    How to Overclock Your Graphics Card | Tom's Hardware
    Apr 25, 2023 · Our guide for how to overclock your graphics card covers the software you need to use, the various ways you can overclock, and the expected ...
  85. [85]
  86. [86]
    2023 IRDS Systems and Architectures
    Integrated memory modules with high bandwidth memory (HBM) or graphics DDR (GDDR) are usually used to increase memory to processor bandwidth. Typical ...
  87. [87]
    Nvidia's RTX Blackwell workstation GPU spotted with 96GB GDDR7
    Jan 23, 2025 · This mode allows two 32-bit memory ICs to be driven by one 32-bit memory controller by sharing the address and command bus while reducing the ...Missing: ECC | Show results with:ECC
  88. [88]
    [PDF] Section 10: Computer Graphics and RAM-DACs - Analog Devices
    The update rate is typically the gateing specification used in determining if a DAC is fast enough to be used to drive a monitor of a given resolution. Figure.Missing: RAMDAC | Show results with:RAMDAC
  89. [89]
    [PDF] CMOS Monolithic 256318 Color Palette RAM-DAC ADV476
    It is typically the pixel clock rate of the video system. PCLK should be driven by a dedicated TTL buffer. P0–P7 Pixel select inputs (TTL compatible). These ...Missing: cards | Show results with:cards
  90. [90]
    GPU Dictionary: Understanding GPU & Video Card Specs
    Jan 27, 2012 · Since analog signals are rapidly becoming obsolete and deprecated, RAMDAC has become a standardized, uninteresting component of modern GPUs.
  91. [91]
  92. [92]
    None
    Summary of each segment:
  93. [93]
    RGB to YCrCb Color-Space Converter - AMD
    The AMD RGB to YCrCb Color Space Conversion LogiCORE is an optimized hardware block for converting RGB video data to the YCrCb color space.<|separator|>
  94. [94]
    Understanding Color Space Conversions in Display | Synopsys Blog
    Sep 20, 2020 · Transformation between YCbCr to RGB generally occurs within the DTV after it receives a YCbCr encoded picture. Conversion between YCbCr color ...
  95. [95]
    Frequently Asked Questions - PCI-SIG
    Formed in June 1992, PCI-SIG effectively places ownership and management of the PCI specifications in the hands of the developer community. PCI-SIG works to ...
  96. [96]
    A Deep Dive into the Evolution of PCIe - KingSpec
    Jun 24, 2024 · Emergence of PCI Bus. In June 1992, Intel invented an interface standard called Peripheral Component Interconnect, abbreviated as PCI. The ...
  97. [97]
    [PDF] Accelerated Graphics Port Interface Specification
    Jul 31, 1996 · This is the Accelerated Graphics Port (AGP) Interface Specification, Revision 1.0, from Intel, dated July 31, 1996. Intel may have related ...
  98. [98]
    What is AGP(Accelerated Graphics Port)? - GeeksforGeeks
    Feb 15, 2023 · History. The AGP was developed by Intel in the year 1996 and was launched in Socket 7 Intel P5 Pentium and Slot 1 P6 Pentium II processors.
  99. [99]
    Specifications - PCI-SIG
    PCI-SIG specifications define standards driving the industry-wide compatibility of peripheral component interconnects.PCI Express Specification · PCI Express 6.0 Specification · Ordering Information
  100. [100]
    PCI Express 6.0 Specification
    The PCIe 6.0 specification doubles the bandwidth and power efficiency of the PCIe 5.0 specification (32.0 GT/s), while continuing to meet industry demand for a ...
  101. [101]
    What is PCIe? Understanding PCIe Slots, Cards and Lanes
    PCIe 5.0: · Released: 2019 · Max Bandwidth per Lane: 4 GB/s · Max Bandwidth x16 Slot: 64 GB/s · Doubled the data rate per lane compared to PCIe 4.0, reaching ...
  102. [102]
    PCI-Express (PCIe*) Add-in Card Connectors (Recommended) - 2.1
    Sep 13, 2023 · The PCIe* CEM Specification defines different connectors based on the power used by the Add-in Card which can range from 75 watts up to 600 watts.Missing: 75W | Show results with:75W
  103. [103]
    Alienware Graphics Amplifier Review – Faster than Thunderbolt 3 ...
    Jun 16, 2017 · The Alienware Graphics Amplifier (AGA) was one of the very first production external graphics enclosures. It was introduced in late 2014 as ...
  104. [104]
    PCIe 6.0 devices on track for 2025 launch | PCWorld
    Jun 11, 2025 · The final 1.0 specification of PCIe 6 was actually announced in 2022, and the SIG has actually moved on to PCI Express 6.3—even though we haven' ...
  105. [105]
    Open Accelerator Infrastructure - Open Compute Project
    The scope of this OAI subgroup is to define below 9 schedules for physical modules include logical aspects such as electrical, mechanical, thermal, management, ...
  106. [106]
    [PDF] DisplayPort Technical Overview - VESA
    Jan 10, 2011 · Two connector types: Standard DisplayPort connector (USB size). Mini DisplayPort connector (introduced by Apple). Cable adapter, and adapter ...
  107. [107]
    Video Signal Interfaces - RGB Spectrum
    Dual-link connections can support a maximum pixel rate of 330 MHz, and resolutions up to 3840x2400 although the more common resolution is 2560x1600. Dual-link ...Missing: VIVO | Show results with:VIVO
  108. [108]
    HDMI 2.2 Specification Overview
    ### HDMI 2.1 Specifications Summary
  109. [109]
  110. [110]
    VESA Publishes DisplayPort™ 2.0 Video Standard Enabling ...
    Jun 26, 2019 · VESA Publishes DisplayPort™ 2.0 Video Standard Enabling Support for Beyond-8K Resolutions, Higher Refresh Rates for 4K/HDR and Virtual Reality ...
  111. [111]
    VESA Releases Updated DisplayPort™ Alt Mode Spec to Bring ...
    Apr 29, 2020 · With DisplayPort Alt Mode, the USB-C connector can transmit up to 80 Gigabits per second (Gbps) of DisplayPort video data utilizing all four ...
  112. [112]
    Video Card I/O Ports and Interfaces - NeweggBusiness
    DisplayPort is designed to replace Digital Visual Interface (DVI) and Video Graphics Array (VGA). DisplayPort can also provide the same functionality as HDMI. S ...
  113. [113]
    Device Control Block 4.x Specification - Index of / - NVIDIA
    Dec 8, 2014 · The CVBS (Composite) signal will always follow the B/Pb signal on the 7-pin HDTV component dongle (because the B/Pb connector is labeled for use ...
  114. [114]
  115. [115]
    VESA Publishes DisplayPort™ Standard Version 1.4
    Mar 1, 2016 · Its Multi-Stream Transport (MST) capability enables high-resolution support of multiple monitors on a single display interface. In September ...
  116. [116]
    Page Not Found
    **Summary:**
  117. [117]
    SLI - NVIDIA Docs
    Scalable Link Interface (SLI) is a multi-GPU configuration that offers increased rendering performance by dividing the workload across multiple GPUs.
  118. [118]
    How to Configure Discrete Graphics Cards to Run In AMD CrossFire ...
    The motherboard must be AMD CrossFire™ certified with at least two PCIe x16 slots available, running at a minimum of PCIe x8 speed. Please check with the ...
  119. [119]
    Review: NVIDIA's SLI - An Introduction - Graphics - HEXUS.net
    Nov 22, 2004 · While we didn't cover the initial SLI press release for various reasons, it's my job to bring you an introduction to NVIDIA's SLI today, ahead ...
  120. [120]
    AMD Phasing Out CrossFire Brand With DX 12 Adoption, Favors ...
    Sep 25, 2017 · The CrossFire branding has been an AMD staple for years now, even before it was even AMD - it was introduced to the market by ATI on 2005, as a ...
  121. [121]
    [PDF] AMD CrossFire guide for Direct3D® 11 applications
    In CrossFire multiple GPUs appear to the programmer as a single device and the CrossFire driver employs a technique called Alternate Frame Rendering (AFR). In ...Missing: CFR | Show results with:CFR
  122. [122]
    NVLink & NVSwitch: Fastest HPC Data Center Platform | NVIDIA
    NVLink is a 1.8TB/s bidirectional, direct GPU-to-GPU interconnect that scales multi-GPU input and output (IO) within a server. The NVIDIA NVLink Switch chips ...Maximize System Throughput... · Raise Reasoning Throughput... · Nvidia Nvlink Fusion
  123. [123]
    connecting multiple eGPUS to a Thunderbolt 4 PC using a ... - eGPU.io
    Apr 26, 2023 · A configuration of three GPUs in Thunderbolt 3 enclosures connected to a Thunderbolt 4 hub that is connected to a Thunderbolt 4 port of a PC with a 12th ...Dual eGPU enclosure? | Thunderbolt & USB4 Enclosures - eGPU.iomake 2 thunderbolt work in parallel? - eGPU.ioMore results from egpu.io
  124. [124]
    Announcing DirectX 12 Ultimate - Microsoft Developer Blogs
    Mar 19, 2020 · DX12 Ultimate is the result of continual investment in the DirectX 12 platform made over the last five years to ensure that Xbox and Windows 10 ...
  125. [125]
    Direct3D 12 Raytracing - Win32 apps - Microsoft Learn
    Dec 30, 2021 · This article provides a listing of the documentation that is available for Direct3D raytracing. C++ raytracing referenceMissing: Ultimate 2018
  126. [126]
    [PDF] vulkan-overview.pdf - The Khronos Group
    generation cross-platform GPU API. Including an unprecedented level of ... - Faster performance, lower overhead, less latency. • Portable. - Cloud ...
  127. [127]
  128. [128]
    [PDF] OpenGL 4.6 (Core Profile) - May 5, 2022 - Khronos Registry
    May 1, 2025 · Khronos grants a con- ditional copyright license to use and reproduce the unmodified specification for any purpose, without fee or royalty, ...
  129. [129]
    About CUDA | NVIDIA Developer
    The CUDA compute platform extends from the 1000s of general purpose compute processors featured in our GPU's compute architecture.
  130. [130]
    AMD ROCm™ Software
    AMD ROCm™ is an open software stack including drivers, development tools, and APIs that enable GPU programming from low-level kernel to end-user applications.Discover ROCm for AI · AMD Infinity Hub · What's New in ROCm 7
  131. [131]
    A Comparison of Modern Graphics APIs - Alain Galvan
    Jan 30, 2021 · Modern graphics APIs like Vulkan, DirectX 12, Metal, and WebGPU are converging, while OpenGL's design differs greatly. DirectX 11 is closer to ...
  132. [132]
    WDDM Overview - Windows drivers - Microsoft Learn
    Jul 12, 2025 · The Windows Display Driver Model (WDDM) is the graphics display driver architecture for Windows. WDDM was introduced in Windows Vista (WDDM 1.0)
  133. [133]
    Our History: Innovations Over the Years - NVIDIA
    Founded on April 5, 1993, by Jensen Huang, Chris Malachowsky, and Curtis Priem, with a vision to bring 3D graphics to the gaming and multimedia markets.
  134. [134]
    [PDF] NVIDIA in Brief
    Founded in 1993, NVIDIA is the world leader in accelerated computing. Our invention of the GPU in 1999 sparked the growth of the PC gaming market,.
  135. [135]
    NVIDIA Discrete GPU Market Share Dominance Expands to 94 ...
    Sep 3, 2025 · NVIDIA Discrete GPU Market Share Dominance Expands to 94%, Notes Report. by. AleksandarK. Sep 3rd, 2025 04:05 Discuss (237 Comments).
  136. [136]
  137. [137]
    AMD GPU: History of Computer Graphics - geekom
    AMD's entry into GPUs came with its acquisition of ATI in 2006. ATI had developed the Rage series of embedded GPUs, from 1996 through 2000. These provided an ...
  138. [138]
    AMDGPU - ArchWiki
    Oct 28, 2025 · AMDGPU is the open source graphics driver for AMD Radeon graphics cards since the Graphics Core Next family.
  139. [139]
    AMD Open Source Driver for Vulkan - GPUOpen
    The AMD Open Source Driver for Vulkan is designed to support a wide range of AMD GPUs: Radeon™ RX 7900/7600 Series; Radeon™ RX 6900/6800/6700/6600/6500 ...
  140. [140]
    Intel's Discrete Mobile Graphics Family Arrives
    Mar 30, 2022 · On March 30, 2022, Intel launched Intel Arc A-series graphics for laptops.
  141. [141]
    Intel's first discrete Arc desktop GPUs are coming in Q2 2022
    Feb 17, 2022 · Arc desktop GPUs will still have a bit of a wait ahead of them: Intel says that they won't be arriving until sometime in Q2, while workstation ...<|separator|>
  142. [142]
    [PDF] Intel Launches Arc A-Series Discrete Graphics Family for Mobile
    These are the first discrete GPUs to arrive from the Intel Arc A-Series graphics portfolio that will span laptops, desktops and workstations this year.
  143. [143]
    4 GPU AIB features that are (and aren't) worth paying extra for
    Jul 8, 2025 · Because of this beefier cooling solution, your card will stay a lot quieter under full load as well, but some GPUs will just run hotter than ...
  144. [144]
    Board partners reveal custom-cooled RTX 4090 and RTX 4080 ...
    Sep 21, 2022 · Almost every AIB partner has shared at least one RTX 4090/4080 graphics card. That includes MSI, Asus, Gigabyte, Zotac, Inno3D, Gainward, PNY, Palit, Galax, ...
  145. [145]
    3Dfx History: The GPU's Great Turning Point? - Tedium
    Feb 14, 2018 · Initially, 3Dfx and its Voodoo technology were focused intently on the arcades, and the company's big debut came at the 1996 edition of the ...
  146. [146]
    The 30 Year History of AMD Graphics, In Pictures | Tom's Hardware
    Aug 19, 2017 · From the ATI Wonder in 1986 to the AMD Radeon RX in 2016, we take a look at the evolution of AMD graphics.
  147. [147]
    Graphics Processing Unit (GPU) Market Size and Share
    Jun 18, 2025 · The graphics processing unit market size stands at USD 82.68 billion in 2025 and is forecast to reach USD 352.55 billion by 2030, delivering a 33.65% CAGR.
  148. [148]
    Graphics Processing Unit (GPU) Market Dynamics Report 2025-2033
    Sep 10, 2025 · Gaming, artificial intelligence, and data center demand are driving the GPU market's robust expansion in North America, Europe, Asia-Pacific ...
  149. [149]
    25% of the GPUs sold in the first part of 2021 went to crypto miners
    Jun 16, 2021 · The GPU shortage and the inflated prices for graphics cards had no impact on the industry's shipment volume during 2021. In fact, the amount of ...
  150. [150]
    A look at the trends driving down GPU prices - The Block
    May 5, 2022 · Prices for GPUs have been steadily declining for the past few months, as demand from Etherium miners seems to be dwindling.
  151. [151]
    Data Center Sustainability - AMD
    Our goal is to deliver a 30x increase in energy efficiency for AMD processors and accelerators powering servers for AI-training and HPC from 2020-2025.
  152. [152]
    What are GPUs Useful For? 9 Key Applications and Emerging Trends
    Mar 3, 2025 · 9 GPU applications in 2025 · 1. Gaming · 2. AI and machine learning · 3. Scientific computing · 4. Cryptocurrency and blockchain · 5. Video editing ...Missing: eSports | Show results with:eSports
  153. [153]
    GPU Applications: What Are the Main Applications for GPUs?
    Jul 21, 2025 · Beyond gaming, GPUs are popular for professional 3D graphics rendering. Artists and designers in fields like architecture, animation, and visual ...Missing: eSports | Show results with:eSports
  154. [154]
    GPU prices and availability (Q3 2025): How much are GPUs today?
    Aug 7, 2025 · Graphics cards are expensive again, but are there any good deals to be had? Here's how much GPUs cost in the third quarter of 2025.
  155. [155]
    Nvidia's AI chip boom is creating shortages that could hike prices for gadgets
    CNBC article from December 2, 2025, detailing Nvidia's shift to AI infrastructure causing component shortages and impacting consumer markets.
  156. [156]
    AMD, Nvidia Consider Cuts to Gaming GPUs Due to AI Memory Shortages
    TechPowerUp report from November 18, 2025, discussing production cuts for gaming GPUs to prioritize AI-related components.
  157. [157]
    NVIDIA GPU Shortages in 2025: The AI Boom and Enterprise Focus Analysis
    LinkedIn analysis from August 13, 2025, examining Nvidia's pre-sold AI chips through late 2025 and resulting shortages for other sectors.