
GeForce 256

The GeForce 256 is a graphics processing unit (GPU) developed by NVIDIA Corporation. Announced on August 31, 1999, and released on October 11, 1999, it was marketed as the world's first GPU, introducing hardware-accelerated transform and lighting (T&L) for 3D graphics in PC games. Built on the NV10 chip fabricated using TSMC's 220 nm process with 17 million transistors, it featured a 120 MHz core clock, four pixel pipelines, and support for up to 32 MB of memory (initially SDR SDRAM, later DDR variants), delivering peak performance of 15 million polygons per second and a fill rate of 480 million pixels per second. This single-chip solution offloaded complex geometry processing from the CPU, enabling smoother frame rates, higher polygon counts, and more realistic visuals in games, while supporting the DirectX 7.0 and OpenGL standards. As the inaugural product in NVIDIA's GeForce lineup, the GeForce 256 targeted mid-to-high-end consumer cards via the AGP 4x interface, with reference designs priced at $249 for the SDR model and subsequent DDR versions offering improved memory bandwidth of 4.8 GB/s through a 128-bit bus.

Development and Release

Announcement and Launch

NVIDIA announced the GeForce 256 on August 31, 1999, introducing it as a groundbreaking advancement in graphics processing technology. The chip, codenamed NV10, served as the direct successor to the RIVA TNT2, building on NVIDIA's prior RIVA series to deliver enhanced performance for PC gaming. The announcement marked the debut of NVIDIA's GeForce brand, emphasizing a shift toward more integrated graphics solutions. Marketed aggressively as "the world's first GPU," the GeForce 256 highlighted its integrated hardware transform and lighting (T&L) engine, which offloaded geometry calculations from the CPU to enable richer visual experiences in games. NVIDIA claimed the card could process a minimum of 10 million polygons per second, a significant leap that promised smoother rendering of detailed environments without bottlenecking the host processor. The design incorporated innovations like the QuadPipe rendering engine for efficient pixel processing, positioning the GeForce 256 as a foundational step in dedicated graphics acceleration. The SDR version of the GeForce 256 launched on October 11, 1999, at an initial MSRP of $249, making it accessible to enthusiasts seeking high-end performance. A DDR variant followed on December 23, 1999, offering improved memory bandwidth for demanding applications at a higher price of around $300. Initial availability came through key add-in-board (AIB) partners, notably Creative Labs, which released models under its 3D Blaster Annihilator Pro branding to leverage strong retail and OEM distribution channels. These partnerships ensured broad market reach, with Creative positioned as a leading supplier for GeForce 256-based boards.

Variants and Production

The GeForce 256 was produced in two primary consumer variants: the initial SDR model equipped with SDR SDRAM and the subsequent DDR model utilizing DDR SDRAM from Hyundai Electronics (now SK Hynix). Both variants shared the same core architecture but differed in memory technology to address bandwidth limitations in the original design, with the DDR version offering improved performance for demanding applications. Fabrication of the GeForce 256 occurred on TSMC's 220 nm process, resulting in a compact die measuring 139 mm² and containing 17 million transistors. This manufacturing approach enabled NVIDIA to integrate transform and lighting hardware directly onto the graphics chip, a key innovation for the era. Standard memory configurations across variants provided 32 MB of RAM, paired with an AGP 4x interface for compatibility with contemporary motherboards. A professional variant, the Quadro, leveraged the identical NV10 chip but featured specialized drivers optimized for CAD and digital content creation tasks, such as enhanced precision in line rendering and support for professional software certifications. Production of the GeForce 256 series concluded around 2000, with manufacturing runs curtailed following the introduction of its successor, the GeForce 2, which improved performance and expanded market adoption.

Architecture

Core Design and Innovations

The GeForce 256, powered by NVIDIA's NV10 chip, represented a pivotal advancement in graphics hardware by integrating transform, lighting, triangle setup/clipping, and video acceleration into a single chip, which NVIDIA designated as the world's first graphics processing unit (GPU). This unified design eliminated the need for separate hardware components typically required in prior graphics solutions, enabling more efficient handling of complex rendering tasks. The NV10, fabricated on a 220 nm process, incorporated 17 million transistors to support these multifaceted operations, marking a shift toward specialized processors optimized for visual computation. A cornerstone of the GeForce 256's design was its integrated transform and lighting (T&L) engine, which offloaded geometry tasks, such as coordinate transformations and lighting calculations, from the CPU to dedicated units on the GPU. This separation allowed the transform engine to handle coordinate conversions and the lighting engine to compute illumination effects independently, maximizing throughput for complex scenes. By processing up to 10 million polygons per second, the T&L unit significantly reduced CPU bottlenecks, enabling smoother performance in demanding applications. The NV10's QuadPipe architecture further enhanced rendering efficiency through four parallel pixel pipelines, each capable of processing pixel filling and dual texturing operations simultaneously. This setup, often described as a 256-bit rendering engine due to the four combined 64-bit pipelines, allowed for advanced effects like multi-texturing without sacrificing speed, supporting up to two textures per clock cycle per pipeline. The parallel structure improved overall pixel throughput, making it particularly effective for fill-rate-intensive scenes in 3D games. In alignment with DirectX 7.0 specifications, the GeForce 256 provided full hardware support for T&L and multi-texturing, facilitating techniques such as detail texturing and light mapping in a single pass.
The chip also featured dedicated hardware for cube environment mapping, which used six orthogonal faces to simulate realistic reflections, and per-pixel lighting approximations via environment-mapped methods, enhancing surface realism without additional geometry. These innovations in the NV10's die layout underscored NVIDIA's focus on programmable-like flexibility through register combiners, laying groundwork for future shader evolution.
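To make the T&L offload concrete, the sketch below (illustrative Python, not NVIDIA code; the function names and row-major matrix layout are assumptions) mirrors the two stages the fixed-function engines performed in hardware: transforming a vertex by a 4x4 matrix, then evaluating a clamped Lambertian lighting term.

```python
def transform_vertex(mvp, v):
    """Apply a 4x4 row-major transform matrix to a vertex (x, y, z),
    treating it as homogeneous (x, y, z, 1) and dividing by w."""
    vec = (v[0], v[1], v[2], 1.0)
    out = [sum(mvp[r][c] * vec[c] for c in range(4)) for r in range(4)]
    w = out[3]
    return (out[0] / w, out[1] / w, out[2] / w)

def diffuse_light(normal, light_dir, intensity=1.0):
    """Clamped Lambertian term: max(N . L, 0) * light intensity."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(n_dot_l, 0.0) * intensity

# An identity transform leaves the vertex unchanged.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(transform_vertex(identity, (1.0, 2.0, 3.0)))  # (1.0, 2.0, 3.0)

# A surface facing straight up, lit from directly above, gets full intensity.
print(diffuse_light((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))  # 1.0
```

Running both stages per vertex on dedicated hardware is what freed the host CPU for game logic and physics.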

Rendering Pipeline Features

The GeForce 256's rendering pipeline consisted of four parallel pixel pipelines, each capable of generating one pixel per clock cycle, enabling efficient handling of 32-bit color rendering with integrated Z-buffering for depth testing, alpha blending for translucent effects, and support for ordered-grid supersampling modes to reduce edge aliasing. These pipelines delivered up to 480 million bilinear-filtered pixels per second, with each pipeline incorporating texture units for multi-texturing operations. A key innovation in the pipeline was hardware support for advanced bump mapping techniques, including dot-product (Dot3) bump mapping and environment-mapped bump mapping, implemented via programmable register combiners. Dot-product bump mapping used per-pixel normal maps to compute lighting interactions through dot-product operations (e.g., light vector · surface normal), allowing realistic diffuse and specular effects without altering geometry; this was achieved in a single pass using the combiners' signed arithmetic in the [-1, 1] range, with scales like 2.0 and biases like -0.5 used to expand unsigned texture values into signed normals. Environment-mapped bump mapping extended this by perturbing reflection vectors with bump normals to sample cube environment maps, producing dynamic specular highlights on irregular surfaces via general combiner stages that performed dot-product arithmetic and blending. The integration of transform and lighting (T&L) upstream fed these operations with world-space normals and tangents for accurate per-pixel calculations. The pipeline also included multimedia acceleration, notably hardware motion compensation for MPEG-2 video decoding, which offloaded inverse discrete cosine transform (IDCT) and motion vector processing to enable smooth DVD playback at full frame rates without CPU intervention. Texture handling in the pipeline introduced S3TC (DXT) texture compression support, reducing memory bandwidth demands by compressing 4x4 texel blocks lossily at ratios up to 6:1 while maintaining real-time decompression and filtering.
Despite these advances, the pipeline lacked hardware vertex shaders, relying instead on fixed-function T&L for geometry processing; programmable vertex shading was introduced later with the GeForce 3.
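The signed-range remapping behind Dot3 bump mapping can be illustrated with a short sketch (hypothetical Python mirroring the bias-0.5, scale-2.0 expansion the register combiners applied to unsigned texel values; the function names are assumptions):

```python
def expand_normal(rgb):
    """Remap 8-bit texel channels (0..255) into signed normal components
    in [-1, 1], via the combiner-style (x - 0.5) * 2.0 expansion."""
    return tuple(((c / 255.0) - 0.5) * 2.0 for c in rgb)

def dot3(a, b):
    """Per-pixel dot product, the core operation of Dot3 bump mapping."""
    return sum(x * y for x, y in zip(a, b))

# A texel of (128, 128, 255) encodes a normal pointing straight out of the
# surface; lit head-on, the clamped N . L term reaches its maximum.
normal = expand_normal((128, 128, 255))
light = (0.0, 0.0, 1.0)
intensity = max(dot3(normal, light), 0.0)
print(round(intensity, 3))  # 1.0
```

Evaluating this dot product per pixel, rather than per vertex, is what let bumpy surfaces react to light without any extra geometry.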

Specifications

Hardware Components

The GeForce 256 utilized the NV10 as its primary GPU core, fabricated by TSMC on a 220 nm CMOS process with a die area of 139 mm² and 17 million transistors. This chipset represented NVIDIA's initial foray into integrated transform and lighting (T&L) hardware, central to the card's overall design. Memory configurations for the GeForce 256 included 32 MB of either SDR SDRAM or DDR SDRAM, connected via a 128-bit memory interface; production variants offered both types to cater to different performance needs. The card connected to the host system through an AGP 4x interface, providing up to 1.066 GB/s of theoretical bandwidth. Video outputs on the GeForce 256 typically featured a 15-pin VGA connector, with some board implementations including a DVI port for digital displays. Power requirements were modest, with a thermal design power (TDP) of around 13 W, and reference designs required no additional power connectors beyond the AGP slot. Several board partners produced custom GeForce 256 implementations, including ELSA, Guillemot, ASUSTeK, Creative Labs, and VisionTek, which often varied in cooling solutions and output configurations while retaining the core NV10 chipset.

Clock Speeds and Performance

The GeForce 256 was available in two primary memory configurations: SDR and DDR variants. The SDR variant featured a core clock speed of 120 MHz and a memory clock of 166 MHz, delivering a memory bandwidth of 2.656 GB/s. In contrast, the DDR variant operated at the same 120 MHz core clock, with a memory clock of 150 MHz yielding an effective data rate of 300 MHz due to double-data-rate technology, resulting in a bandwidth of 4.8 GB/s. These clock speeds contributed to key performance metrics enabled by the card's quad-pipeline architecture. The fill rate reached 480 megapixels per second and 480 megatexels per second at the 120 MHz core clock. Additionally, the integrated transform and lighting (T&L) engine supported polygon throughput of up to 15 million polygons per second. Memory bandwidth for these variants can be calculated using the formula: bandwidth = (memory clock × bus width × multiplier) / 8, where the bus width is 128 bits and the multiplier is 1 for SDR or 2 for DDR; for example, the DDR configuration yields (150 MHz × 128 bits × 2) / 8 = 4.8 GB/s.
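The formula above can be checked directly. This brief sketch (illustrative Python, with an assumed function name) reproduces both variants' bandwidth and the quad-pipeline fill rate:

```python
def memory_bandwidth_gbs(mem_clock_mhz, bus_width_bits, data_rate=1):
    """bandwidth = (memory clock x bus width x data-rate multiplier) / 8,
    in GB/s (decimal gigabytes, matching the figures in the text)."""
    return mem_clock_mhz * 1e6 * bus_width_bits * data_rate / 8 / 1e9

print(memory_bandwidth_gbs(166, 128))               # SDR: 2.656 GB/s
print(memory_bandwidth_gbs(150, 128, data_rate=2))  # DDR: 4.8 GB/s

# Fill rate: four pixel pipelines, one pixel each per 120 MHz core clock.
print(4 * 120, "megapixels/s")  # 480 megapixels/s
```

The 1.8x bandwidth advantage of the DDR model is what relieved the SDR design's memory bottleneck at high resolutions.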

Software and Driver Support

Initial Driver Releases

The initial drivers for the GeForce 256 were part of NVIDIA's Detonator series, with early versions emerging in late 1999 to support the card's launch on Windows 9x and Windows NT operating systems. Subsequent beta releases, such as version 5.13 in early 2000, provided foundational software support for the card's hardware transform and lighting (T&L) capabilities, marking a shift toward hardware-accelerated geometry processing. However, adoption was limited initially, as most 1999 games relied on DirectX 6 or earlier APIs that did not leverage T&L, requiring software fallback modes that reduced the performance benefits. The drivers ensured compliance with the DirectX 7 and OpenGL 1.2 standards, enabling features like hardware T&L for compatible applications and improving overall 3D rendering efficiency when supported. A key aspect of the early ecosystem was the unified driver architecture shared between consumer GeForce products and professional Quadro cards, both based on the NV10 chip, allowing a single codebase with tailored configurations for gaming and workstation use. Subsequent updates in the Detonator series addressed performance and stability issues; for example, version 6.50, released in December 2000, included optimizations for AGP interfaces and fixes for display artifacts, enhancing compatibility on various motherboards. These early drivers demonstrated strong compatibility with pioneering titles that exploited T&L, in which the GeForce 256 delivered smoother frame rates and advanced effects like realistic reflections compared to prior generations.

End of Support and Legacy Compatibility

NVIDIA ceased official driver development for the GeForce 256 after releasing version 71.84 on March 11, 2005, for the Windows 98 and Me operating systems. For Windows 2000 and XP, the final driver was version 71.89, released on April 14, 2005, which continued to include support for the NV10-based GeForce 256 among older hardware. These releases marked the end of new features and optimizations, with subsequent drivers like version 77.72 in June 2005 dropping explicit support for the GeForce 256. The GeForce 256 received no official driver support for Windows Vista or subsequent operating systems, as NVIDIA's Vista-compatible drivers began with the GeForce 6 series and required the Windows Display Driver Model (WDDM), which the NV10 chip did not support. This incompatibility extended to Vista's Aero graphical interface, which demanded DirectX 9 hardware acceleration and Shader Model 2.0 capabilities beyond the GeForce 256's DirectX 7 compliance. Game compatibility waned by around 2006, with titles like Half-Life 2 (released in 2004) representing one of the last major releases playable on the card at minimum settings, as it requires DirectX 8.1 but supports a DirectX 7 rendering path for older hardware. In legacy environments, the GeForce 256 remains functional on 32-bit Windows installations using the final official drivers. Community-developed unofficial drivers and modifications, such as those adapting older ForceWare releases, have enabled limited operation on newer operating systems, though stability and feature support are inconsistent. As of 2025, NVIDIA provides no new official updates for the GeForce 256, confining its use to preserved systems. For retro gaming, emulation solutions or virtual machines running period-appropriate operating systems offer viable alternatives for running era-specific software without the native hardware. The card's hardware architecture imposes fundamental limitations, rendering it incompatible with modern APIs such as DirectX 10 and beyond, which require programmable shaders and unified architectures introduced in later generations like the GeForce 8 series.

Comparisons and Competitors

Performance Benchmarks

The GeForce 256 delivered substantial performance gains over contemporary graphics cards in key benchmarks, establishing it as a leader in consumer 3D graphics at launch. In synthetic tests like 3DMark 2000, it outperformed the 3dfx Voodoo3 3500 by up to 50%, with representative scores of approximately 5,200 points for the GeForce 256 compared to 3,500 for the Voodoo3 on similar systems. In gaming workloads, the card showed a 30-40% uplift over NVIDIA's prior RIVA TNT2 Ultra, particularly in Quake III Arena at 1024x768 resolution, where it achieved around 65-75 FPS versus 55 FPS for the TNT2 Ultra under high-quality settings. The onboard transform and lighting (T&L) engine further highlighted its strengths, providing roughly a 2x speedup in geometry-intensive scenes such as those in Microsoft's DirectX 7 samples. The DDR memory variant amplified these advantages, yielding about 25% higher frame rates than the SDR model in texture-bound scenarios, such as high-resolution rendering tests. Despite these hardware capabilities, initial drivers introduced software bottlenecks that constrained peak performance, with optimizations in later 2000 driver updates unlocking fuller potential.

Market Rivals

The primary competitors to the GeForce 256 in the late-1999 consumer graphics market included offerings from 3dfx, ATI, Matrox, and S3, each emphasizing different strengths in 3D acceleration while grappling with the shift toward integrated transform and lighting (T&L) hardware. These rivals relied heavily on CPU-assisted processing for complex rendering tasks, contrasting with the GeForce 256's dedicated T&L engine, which offloaded geometry computations to the GPU for improved efficiency in games and professional applications. The 3dfx Voodoo3, released in April 1999, delivered strong multitexturing performance at 16-bit color depths, achieving fill rates up to 333 megapixels per second with 16 MB of SDRAM. It excelled in Glide-optimized setups but lacked both 32-bit color rendering and T&L, resulting in lower polygon throughput compared to the GeForce 256 (typically 8 million polygons per second versus the latter's 10 million) and dependency on the host CPU for transformations. This positioned the Voodoo3 as a solid choice for Glide-optimized games but less versatile for emerging DirectX 7 titles. NVIDIA's competitive edge contributed to 3dfx's bankruptcy in December 2000, after which NVIDIA acquired its patents and technology in 2001. ATI's Rage 128, launched in August 1998, and its dual-chip Rage Fury MAXX variant from early 2000, offered partial T&L acceleration through software assistance, paired with AGP 2x support and up to 32 MB of SDRAM for competitive 2D/3D integration. Priced aggressively at around $149 for the Fury Pro model, it emphasized multimedia capabilities like hardware-accelerated DVD playback and video encoding, outperforming the GeForce 256 SDR at higher resolutions in some scenarios due to its efficient memory architecture. However, its 3D features lagged behind the GeForce 256 in geometry throughput and overall polygon rates, with weaker video output quality limiting appeal for gaming enthusiasts.
Matrox's G400, introduced in March 1999, prioritized productivity with DualHead technology for simultaneous displays across two monitors, supporting up to 32 MB of SDRAM and environment-mapped bump mapping (EMBM) for enhanced visual effects. It delivered superior image quality and outperformed the Voodoo3 and NVIDIA's prior RIVA TNT2 in select benchmarks, thanks to its fast core clock and multitexturing pipeline. Yet its 3D acceleration trailed the GeForce 256, particularly in OpenGL performance, and without native T&L its polygon rates were capped at around 8 million per second, hindering complex scene handling. S3's Savage 2000, arriving in late 1999, boldly claimed full GPU status with integrated T&L (though the implementation was later revealed to be defective) and a quad-texture engine capable of processing four textures per clock cycle, bundled with 32 MB of SDRAM, an AGP 4x interface, and S3TC compression. It achieved about 80% of the GeForce 256's speed in real-world tests at lower resolutions, benefiting from affordability in the budget segment. Driver instability and the flawed T&L implementation undermined its potential, contributing to S3's acquisition by VIA in 2000, after which focus shifted to integrated chipsets rather than discrete cards. By 2000, NVIDIA captured a significant portion of the GPU market, propelled by the GeForce 256's T&L advantage, which reduced the CPU bottlenecks inherent in rivals' CPU-dependent designs and accelerated adoption in gaming PCs. This edge, evident in benchmarks where the GeForce often doubled polygon rates over competitors like the Voodoo3, solidified NVIDIA's leadership amid 3dfx's declining share and ATI's multimedia pivot.

Legacy and Impact

Technological Influence

The GeForce 256 pioneered hardware transform and lighting (T&L), offloading geometry processing from the CPU to dedicated GPU units, which significantly reduced computational demands on the host and enabled more complex scenes in real-time rendering. This innovation influenced the evolution of graphics APIs, including DirectX 8's introduction of programmable vertex shaders, by establishing T&L as a foundational hardware capability that successors like the GeForce 2 (2000) built upon with enhanced multi-texturing and improved pipeline efficiency. By integrating these features into a single-chip solution, the GeForce 256 set a precedent for GPUs handling both transformation and rasterization, paving the way for more sophisticated shader models in subsequent hardware generations. NVIDIA's introduction of the term "GPU" with the GeForce 256 marked a pivotal shift in industry terminology, redefining graphics accelerators as unified processing units rather than mere video cards, and accelerated the market transition from fixed-function to programmable pipelines. This change encouraged developers to leverage GPU hardware for advanced effects, as the card's fixed-function design, while not fully programmable, exposed the limitations of CPU-bound rendering and spurred innovations in pipeline flexibility seen in later products. Its QuadPipe rendering engine exemplified early efforts toward parallelized fixed-function stages, influencing the broader adoption of modular GPU designs. The GeForce 256's early embrace of parallel-processing concepts laid groundwork for the convergence of graphics and compute workloads, with its multiple rendering pipelines providing a blueprint for the massively parallel architectures that underpin modern frameworks like CUDA. By processing up to 10 million polygons per second, it demonstrated scalable geometry throughput that reduced CPU bottlenecks in applications, allowing developers to build games with richer environments and dynamic lighting without overwhelming system resources.
NVIDIA's 2024 25th-anniversary recognition underscored this legacy, highlighting how the card's innovations transformed graphics pipelines and foreshadowed AI's reliance on GPU compute for tasks like deep learning.

Cultural and Collectible Significance

The GeForce 256 occupies an iconic place in gaming history, often central to early PC builds designed to run demanding titles like Quake III Arena with enhanced visual fidelity. Its pioneering role as the first consumer GPU is recognized in historical timelines, including that of the Computer History Museum, exemplifying the shift toward dedicated graphics processing in personal computing. Amid the retro gaming revival, the GeForce 256 remains popular in authentic setups for experiencing Windows 98-era software, allowing enthusiasts to recreate the original look and feel of period-specific games. Community discussions on hardware sites highlight overclocking practices, with users reporting stable boosts to around 135-150 MHz on well-cooled variants to improve frame rates in period classics. In collectible markets, mint-condition SDR and DDR models of the GeForce 256 typically sell for $100 to $300 on platforms like eBay as of 2025, reflecting demand from vintage hardware collectors. Rare professional variants, such as those rebranded under the Quadro line, command higher premiums, often surpassing $400 due to their scarcity and workstation heritage. In March 2025, a retrospective video showcased a rare Quadro edition, further highlighting its appeal among collectors. The GeForce 256's cultural footprint extends to modern media, featured prominently in Digital Foundry's 2024 retrospective marking its 25th anniversary, which examines its transformative effects on visual computing. Additionally, it contributed to the nascent esports scene by enabling high-performance rendering in early competitive multiplayer games like Quake III Arena, helping establish PC gaming as a viable platform for organized tournaments.
