
GeForce 3 series

The GeForce 3 series is a line of graphics processing units (GPUs) developed by NVIDIA Corporation, launched on February 27, 2001, and recognized as the industry's first programmable GPU, enabling advanced rendering effects through vertex and pixel shaders compliant with DirectX 8.0. Built on the Kelvin architecture using a 150 nm manufacturing process with 57 million transistors, the original GeForce 3 model featured a 200 MHz core clock, 64 MB of DDR memory on a 128-bit bus running at 230 MHz (460 MHz effective), eight texture mapping units (TMUs), four render output units (ROPs), and support for OpenGL 1.3 (later drivers exposed up to OpenGL 1.5), multisample anti-aliasing, and hardware transform and lighting (T&L). In October 2001, NVIDIA refreshed the lineup with the mid-range GeForce 3 Ti 200, clocked at 175 MHz core and 200 MHz memory (400 MHz effective) for a launch price of around $149, offering performance roughly 5-15% below the original while retaining all key features. The high-end GeForce 3 Ti 500 followed in October 2001, with a boosted 240 MHz core clock and 250 MHz memory (500 MHz effective), delivering superior performance for demanding games and applications of the era at a premium price point. The series powered consumer graphics cards from board partners such as ASUS and MSI, as well as OEM and professional variants, and earned accolades including an Editors' Choice Award from Macworld and Game Developer Magazine's Front Line award for innovation. Positioned as a high-end solution with a launch MSRP of $499 for the base model, the GeForce 3 series marked a pivotal advancement in 3D graphics, bridging fixed-function pipelines to programmable shaders and influencing future GPU designs.

Development and Release

Announcement and Launch

NVIDIA officially announced the GeForce 3 series on February 21, 2001, during a keynote presentation at the Macworld Conference & Expo in Tokyo, where Apple CEO Steve Jobs showcased the GPU powering real-time demos of Pixar's Luxo Jr. lamp and id Software's Doom 3 engine. The event marked the first public reveal of the series, emphasizing its programmable shader capabilities as a breakthrough in consumer graphics processing. The announcement highlighted NVIDIA's collaboration with Apple, positioning the GeForce 3 for early integration into Macintosh systems as a build-to-order option. The GeForce 3 series launched on February 27, 2001, with retail availability beginning in March 2001 through major board partners and OEMs. NVIDIA's production codename for the core GPU was NV20, built on the Kelvin architecture. The initial model carried a manufacturer's suggested retail price (MSRP) of $499, reflecting its positioning as a high-end solution at the time. Early adoption was supported by partnerships with prominent OEMs such as Dell and Gateway, who integrated the GeForce 3 into their PC systems, alongside add-in card manufacturers like ELSA producing reference designs such as the Gladiac 920. These collaborations ensured broad market entry, with NVIDIA shipping the GPU to top PC and graphics board OEMs shortly after launch. The series' debut set the stage for subsequent variants, including the Ti 500 model introduced in October 2001 at an MSRP of $349.

Design Goals

The GeForce 3 series was engineered to achieve full compliance with Microsoft Direct3D 8.0, becoming the first consumer-grade graphics processing unit (GPU) to incorporate programmable vertex and pixel shaders, enabling developers to implement custom shading effects previously limited to high-end workstations. This design choice stemmed from NVIDIA's collaboration with Microsoft on key DirectX 8 technologies, aiming to enable more sophisticated rendering techniques such as procedural textures and dynamic lighting in games. Building on the fixed-function pipeline of the preceding GeForce 2, the GeForce 3 sought to deliver a more adaptable and future-proof rendering pipeline tailored for the rise of shader-centric applications and games. Engineering motivations included enhancing overall flexibility by distributing complex algorithms between the CPU and GPU, while targeting 2-5 times the performance of the GeForce 2 through optimizations like Z occlusion culling to reduce rendering overhead. Development efforts, which built directly on GeForce 2 advancements, focused on DirectX 8 compatibility alongside full support for prior DirectX 6 and 7 feature sets, positioning the series as a bridge to programmable graphics in mainstream consumer hardware. A primary engineering challenge was integrating the new programmable shader units while maintaining cost-effective die sizes, culminating in a design of 57 million transistors fabricated on a 150 nm process node to balance enhanced capabilities with power efficiency and manufacturability. This approach allowed NVIDIA to avoid disproportionate increases in chip complexity compared to the GeForce 2's 25 million transistors on a 180 nm process, ensuring viability for both high-end and emerging mainstream variants. Strategically, the GeForce 3 aimed to drive annual performance improvements of 2-3 times and migrate advanced features like full-scene anti-aliasing and high-resolution rendering to broader markets, solidifying NVIDIA's leadership in visual quality innovations amid intensifying competition from rivals like ATI.

Architecture

Kelvin Microarchitecture

The Kelvin microarchitecture, codenamed NV20, served as the foundational design for NVIDIA's GeForce 3 series graphics processing units, marking a significant evolution from the preceding Celsius architecture by introducing programmable capabilities to consumer GPUs. The microarchitecture was fabricated on a 150 nm process at TSMC, resulting in a die size of 128 mm² and an integration of 57 million transistors, which enabled enhanced computational density for vertex and pixel processing while maintaining power efficiency for its era. The core structure adopted a quad-pipeline configuration with 4 pixel pipelines, each with 2 texture mapping units for a total of 8 TMUs, alongside 4 render output units (ROPs) to handle final pixel operations. Complementing this was a single programmable vertex shader unit, capable of executing up to 128 instructions per shader program, facilitating advanced geometry transformations. Central to the Kelvin design's efficiency was its memory subsystem, embodied in NVIDIA's Lightspeed Memory Architecture (LMA), a suite of optimizations including a crossbar-based memory controller to mitigate bandwidth bottlenecks in texture fetching and frame buffer access. LMA featured an integrated 128-bit DDR SDRAM memory controller, supporting memory clocks of 230 MHz (460 MHz effective) on the base model, which improved data throughput by dynamically allocating channels and reducing latency in multi-texture scenarios without requiring external chips. This architecture allowed for seamless handling of high-resolution textures and complex effects, contributing to the overall balance between rendering performance and memory utilization. Clock speeds in the series varied by model, with the base GeForce 3 operating at a core frequency of 200 MHz, while premium variants such as the GeForce 3 Ti 500 accelerated to 240 MHz to boost fill rates up to 960 megapixels per second. The host interface supported AGP 4x for high-bandwidth communication, delivering up to 1.06 GB/s transfer rates, with backward compatibility with earlier AGP modes for broader system integration. These elements collectively positioned Kelvin as a versatile foundation for DirectX 8.0-compliant shader programming, enabling developers to customize vertex and pixel effects within the hardware's constraints.
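The headline bandwidth and fill-rate figures follow arithmetically from the clocks and unit counts above; as a quick check (using the base model's 230 MHz memory clock and the Ti 500's 240 MHz core):

```latex
% Memory bandwidth: DDR transfers twice per clock over a 128-bit (16-byte) bus
\mathrm{BW}_{\mathrm{mem}} = 230\,\mathrm{MHz} \times 2 \times 16\,\mathrm{B} = 7.36\ \mathrm{GB/s}
% Pixel fill rate: one pixel per ROP per clock, 4 ROPs at the Ti 500's 240 MHz
\mathrm{Fill}_{\mathrm{pixel}} = 240\,\mathrm{MHz} \times 4 = 960\ \mathrm{Mpixel/s}
% AGP 4x host bandwidth: 66 MHz base clock, 4 transfers per clock, 32-bit wide
\mathrm{BW}_{\mathrm{AGP}} = 66\,\mathrm{MHz} \times 4 \times 4\,\mathrm{B} \approx 1.06\ \mathrm{GB/s}
```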

Rendering Pipeline

The GeForce 3 series, based on NVIDIA's Kelvin microarchitecture, introduced the first consumer-grade GPU to support Shader Model 1.1 as defined in DirectX 8.0, enabling programmable vertex and pixel shading for enhanced rendering flexibility. This marked a shift from purely fixed-function pipelines to one incorporating limited programmability, allowing developers to customize vertex transformations and per-pixel operations beyond traditional fixed-function transform, lighting, and texture blending. Vertex shaders operated on 4-component vectors (position, normals, colors, and texture coordinates), supporting up to 128 arithmetic instructions per shader program, while pixel shaders were more constrained, limited to up to 8 arithmetic instructions and 4 texture address instructions. The rendering pipeline began with a transform and lighting (T&L) unit, which could operate in fixed-function mode for compatibility or invoke programmable vertex shading for custom effects such as procedural deformation or advanced lighting models. Following vertex processing, the pipeline proceeded to triangle setup and rasterization, generating fragments for each pixel covered by triangles or other primitives. These fragments then entered the pixel shading stage, where programmable shaders applied operations like dependent texture reads and dot products to compute final colors. The instruction set for both shader types included basic arithmetic operations such as multiply-accumulate (MAD), 3-component dot product (DP3), and multiplies (MUL), alongside texture lookup (TEX) instructions for sampling from up to four textures per rendering pass. To ensure backward compatibility, the pipeline provided fixed-function fallbacks aligned with DirectX 7 specifications, allowing legacy applications to render without shader support by defaulting to hardware-accelerated T&L and multi-texturing without programmability. However, pixel shaders faced significant limitations, restricted to simple, linear sequences of operations without conditional branching or loops, which confined them to effects like basic per-pixel lighting or texture blending rather than complex procedural shading. Vertex shaders, while more capable, also lacked dynamic flow control, emphasizing deterministic execution for consistent throughput across geometry. This design prioritized real-time efficiency on early hardware, laying foundational concepts for the subsequent evolution of shader models in graphics APIs.
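To make the pixel-shader constraints concrete, the sketch below models in plain Python the kind of per-pixel diffuse lighting that fits comfortably within a ps.1.1 budget: one texture fetch, one DP3, and a couple of MULs. Real shaders of the era were written in DirectX 8 assembly; the function names here are hypothetical and purely illustrative.

```python
# Illustrative model of a ps.1.1-style diffuse lighting computation.
# ps.1.1 stores vectors in [0,1]-range textures, so inputs are expanded
# to [-1,1] first (the _bx2 source modifier in DirectX 8 assembly).

def saturate(x):
    """Clamp to [0, 1], matching the _sat result modifier."""
    return max(0.0, min(1.0, x))

def dp3(a, b):
    """3-component dot product (the DP3 instruction)."""
    return sum(x * y for x, y in zip(a[:3], b[:3]))

def ps11_diffuse(normal_sample, light_dir, albedo_sample, light_color):
    n = [2.0 * c - 1.0 for c in normal_sample]   # expand normal map texel
    l = [2.0 * c - 1.0 for c in light_dir]       # expand light vector
    n_dot_l = saturate(dp3(n, l))                # DP3 with saturate
    # Two MULs: modulate albedo by N.L, then by the light color.
    return [saturate(a * n_dot_l * c)
            for a, c in zip(albedo_sample, light_color)]

# One pixel: normal pointing straight at the light, mid-gray albedo.
print(ps11_diffuse([0.5, 0.5, 1.0], [0.5, 0.5, 1.0],
                   [0.5, 0.5, 0.5], [1.0, 1.0, 1.0]))
# -> [0.5, 0.5, 0.5]
```

The whole computation is four straight-line arithmetic steps plus a texture fetch, well under the 8-arithmetic-instruction ceiling; anything requiring a branch or loop simply could not be expressed.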

Products and Specifications

Model Lineup

The GeForce 3 series lineup included three primary consumer desktop models based on the NV20 processor, each targeting a different performance tier within NVIDIA's strategy to maintain market leadership in 2001. The base model, GeForce 3 (NV20), served as the high-end flagship upon its release in February 2001, introducing programmable shader capabilities to consumer graphics cards. This was followed by mid-range and high-end refreshes later in the year to address competitive pressures and extend the series' viability. The mid-range GeForce 3 Ti 200 (NV20 Ti200) launched in October 2001, offering a more accessible entry point into the series' advanced features while maintaining compatibility with AGP 4x interfaces. The high-end GeForce 3 Ti 500 (NV20 Ti500) arrived in the same month, positioned as the pinnacle of the lineup with higher clocks for demanding applications. OEM variants of the GeForce 3 were integrated directly into pre-built systems by manufacturers such as Dell and Compaq, allowing for customized implementations in consumer PCs without standalone retail availability. Unlike the neighboring GeForce 2 and GeForce 4 generations, the series had no mobile (Go) derivative. Additionally, a derivative chip known as the NV2A, customized from the GeForce 3 design, powered the Microsoft Xbox console with a 233 MHz core clock and 200 MHz DDR memory configuration. Production of the GeForce 3 series was phased out by mid-2002, supplanted by the GeForce 4 lineup as NVIDIA shifted to refined architectures for next-generation performance.
| Model | Chip Variant | Release Date | Market Segment |
|---|---|---|---|
| GeForce 3 | NV20 | February 2001 | High-end |
| GeForce 3 Ti 200 | NV20 Ti200 | October 2001 | Mid-range |
| GeForce 3 Ti 500 | NV20 Ti500 | October 2001 | High-end |

Technical Specifications

The GeForce 3 series GPUs were fabricated on a 150 nm process node as part of the Kelvin microarchitecture. The series comprises the base GeForce 3, the mid-range GeForce 3 Ti 200, and the high-end GeForce 3 Ti 500, each with distinct clock speeds and performance characteristics while sharing a 128-bit memory interface and DDR memory. Power draw across the models typically ranged from 30 to 40 W depending on board implementation, and no auxiliary power connectors were required.
| Model | Core Clock | Memory Clock (DDR) | Bus Width | Bandwidth | Standard VRAM | Pixel Fill Rate | Texture Fill Rate |
|---|---|---|---|---|---|---|---|
| GeForce 3 | 200 MHz | 230 MHz | 128-bit | 7.36 GB/s | 64 MB | 800 MPixel/s | 1.60 GTexel/s |
| GeForce 3 Ti 200 | 175 MHz | 200 MHz | 128-bit | 6.40 GB/s | 64 MB | 700 MPixel/s | 1.40 GTexel/s |
| GeForce 3 Ti 500 | 240 MHz | 250 MHz | 128-bit | 8.00 GB/s | 64 MB | 960 MPixel/s | 1.92 GTexel/s |
These specifications reflect reference designs, with some partner boards offering up to 128 MB VRAM.
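The derived columns are straightforward functions of the clocks and the chip's fixed unit counts (4 ROPs, 8 TMUs, 128-bit DDR bus, per the architecture section); a short Python sketch reproduces them:

```python
# Recompute the derived columns of the spec table from the base clocks.
MODELS = {
    "GeForce 3":        (200, 230),   # (core MHz, memory MHz)
    "GeForce 3 Ti 200": (175, 200),
    "GeForce 3 Ti 500": (240, 250),
}
ROPS, TMUS, BUS_BYTES = 4, 8, 128 // 8

for name, (core_mhz, mem_mhz) in MODELS.items():
    bandwidth = mem_mhz * 2 * BUS_BYTES / 1000   # GB/s (DDR doubles the clock)
    pixel_fill = core_mhz * ROPS                 # MPixel/s (one pixel per ROP)
    texel_fill = core_mhz * TMUS / 1000          # GTexel/s (one texel per TMU)
    print(f"{name}: {bandwidth:.2f} GB/s, "
          f"{pixel_fill} MPixel/s, {texel_fill:.2f} GTexel/s")
```

Running it yields exactly the table's figures (e.g., 7.36 GB/s, 800 MPixel/s, and 1.60 GTexel/s for the base GeForce 3).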

Features and Performance

Graphics Capabilities

The GeForce 3 series introduced significant advancements in anti-aliasing, enabling multisample anti-aliasing (MSAA) at 2x and 4x rates to smooth jagged edges by sampling multiple points per pixel during the rasterization process. This hardware-accelerated approach reduced aliasing artifacts more efficiently than earlier supersampling methods, providing higher image quality without prohibitive performance penalties in supported applications. Complementing MSAA, the series featured Quincunx anti-aliasing, a specialized 2-sample mode that employed a five-tap filtering pattern (sampling at the pixel center and four corners) to deliver visual quality approaching 4x MSAA while incurring only the performance cost of the 2x mode, making it suitable for real-time gaming at higher resolutions. In texture filtering, the GeForce 3 supported anisotropic filtering at up to an 8:1 ratio, which preserved texture detail and sharpness for surfaces viewed at steep angles, such as distant ground or walls, far surpassing bilinear or trilinear methods in reducing blurring and moiré patterns. The architecture also enabled cube environment mapping with mipmapped textures for realistic per-pixel reflections and refractions, enhancing environmental interactions in scenes with water or metallic surfaces. Bump mapping was advanced through per-pixel operations in the pixel shader, allowing for detailed normal-based shading that simulated surface irregularities without additional geometry, as demonstrated in effects like true reflective bump mapping. Vertex skinning for skeletal animation was facilitated by hardware transform and lighting (T&L) combined with programmable vertex shaders, supporting matrix blending with up to four weights per vertex to deform meshes smoothly (see the sketch below); however, the 128-instruction limit per vertex program restricted complex operations, such as extensive multi-bone influences or numerous light calculations, potentially bottlenecking scenes with polygon counts far beyond typical 2001-era game demands. The GeForce 3 provided full hardware support for DirectX 8.0, including its vertex and pixel shader models (VS 1.1 and PS 1.1), and OpenGL 1.3, encompassing extensions for multitexturing, cube maps, and compressed textures to ensure compatibility with contemporary 3D applications and games.
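The four-weight matrix blending described above is simple to state mathematically: a skinned position is the weighted sum of the vertex transformed by each influencing bone, v' = Σᵢ wᵢ(Mᵢv). The Python sketch below illustrates the operation the vertex shader performed in hardware; the 3x4 affine bone matrices and function names are illustrative assumptions, not vendor code.

```python
# Illustrative four-weight matrix-palette skinning: v' = sum_i w_i * (M_i @ v).

def transform(m, v):
    """Apply a 3x4 affine bone matrix to a position (x, y, z, implicit w=1)."""
    x, y, z = v
    return tuple(r[0] * x + r[1] * y + r[2] * z + r[3] for r in m)

def skin_vertex(position, bones, weights):
    """Blend up to four bone transforms; weights should sum to 1."""
    out = [0.0, 0.0, 0.0]
    for m, w in zip(bones, weights):
        p = transform(m, position)
        out = [o + w * c for o, c in zip(out, p)]
    return tuple(out)

identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
shifted  = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0]]   # translate +2 in x
# A vertex influenced 50/50 by two bones lands halfway between their results.
print(skin_vertex((1.0, 0.0, 0.0), [identity, shifted], [0.5, 0.5]))
# -> (2.0, 0.0, 0.0)
```

Each bone adds a matrix transform plus a weighted accumulate, which is why the 128-instruction ceiling bounded how many bones and lights a single vertex program could combine.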

Benchmark Results

The GeForce 3 Ti 500 demonstrated modest improvements in DirectX 7-class titles over its predecessor, the GeForce 2 Ultra, achieving approximately 10-15% higher frame rates at higher resolutions and quality settings. This uplift was attributed to enhanced memory bandwidth efficiency and fill rate, allowing smoother performance in fill-rate-limited scenarios that did not use shaders. In early shader-enabled titles such as AquaNox, which leveraged vertex shaders for dynamic geometry deformation, the GeForce 3 series delivered up to 30% performance gains compared to non-shader hardware, enabling more complex environmental effects without significant CPU overhead. Anti-aliasing performance was a strength, with the GeForce 3 Ti 500 maintaining playable frame rates above 60 FPS at 1024x768 with 4x full-scene anti-aliasing (FSAA) enabled, outperforming the competing Radeon 8500 in similar conditions due to a more efficient multisampling implementation. Synthetic benchmarks further highlighted its capabilities, with the Ti 500 scoring approximately 8100 points in 3DMark 2001, reflecting strong DirectX 8 compliance and transform-and-lighting throughput. Regarding power efficiency, the GeForce 3 series consumed power comparable to the GeForce 2 Ultra at around 30-35 W under load, but reviews noted increased heat output from the denser 150 nm process and higher transistor count, often requiring improved cooling solutions for sustained operation.

Market Position and Legacy

Competitive Positioning

The GeForce 3 series was targeted at the high-end gaming PC segment, positioning itself as a substantial upgrade from the GeForce 2 GTS by delivering the first consumer-accessible programmable shaders and DirectX 8 compliance. This shift appealed to enthusiasts seeking enhanced realism in games, with NVIDIA marketing the series as a bridge to future programmable rendering paradigms. Against its primary competitor, ATI's Radeon 8500 launched later in 2001, the GeForce 3 was marketed on superior anti-aliasing modes like Quincunx and a high-quality anisotropic filtering implementation, which provided sharper image quality despite a greater performance overhead compared to ATI's offerings. However, the Radeon 8500 edged ahead in raw fill rate thanks to its higher core clock and efficiency-focused HyperZ technology. NVIDIA countered by emphasizing the GeForce 3's vertex and pixel shaders as key to future-proofing, enabling developer innovations that fixed-function hardware could not match. The series contributed to NVIDIA maintaining a dominant but declining market share in the discrete GPU sector in 2001 (from 66% in Q1 to 53% in Q2), bolstered by strong sales amid competition from ATI. Pricing followed a premium strategy at launch, with the base GeForce 3 starting at $599 in February before aggressive cuts reduced it to $399 by April and Titanium variants to around $349 by November, broadening accessibility without eroding the high-end perception. Overall, the GeForce 3's launch accelerated industry adoption of shader-based rendering standards, setting precedents for programmable GPUs under DirectX 8 and influencing long-term architectural evolution across competitors.

Driver Support and Discontinuation

The GeForce 3 series received its initial official drivers under NVIDIA's Detonator branding, with version 21.81 released in September 2001, providing support for operating systems from Windows 9x/NT through the new Windows XP and introducing optimizations for the programmable shading features of the Kelvin architecture. These early drivers focused on stability and basic DirectX 8 compatibility, enabling the hardware's vertex and pixel shader capabilities in consumer applications for the first time. Subsequent updates in the Detonator XP line, such as version 23.11 in late 2001, included targeted improvements for DirectX 8 performance, enhancing shader execution efficiency in games utilizing programmable pipelines. Throughout 2002 and 2003, NVIDIA released several driver iterations that addressed key issues, including bug fixes for anti-aliasing (AA) artifacts and rendering glitches reported on GeForce 3 cards in demanding titles. These updates, which culminated in the transition to ForceWare branding in late 2003, also incorporated optimizations to better leverage the hardware's four pixel pipelines, resulting in smoother performance in shader-heavy workloads without requiring hardware overclocks. By late 2003, drivers in the ForceWare 53.xx series refined these enhancements, prioritizing compatibility with emerging DirectX 9 titles while maintaining backward support for the series. Official driver support for the GeForce 3 series concluded with ForceWare 81.98 in December 2005 for Windows 9x/ME and 93.71 in November 2006 for Windows 2000/XP, marking the end of WHQL-certified updates. No official drivers were provided for Windows Vista or later versions, including Windows 7, due to the architecture's incompatibility with the new WDDM display model and DirectX 10 requirements, rendering the hardware unsupported on these platforms. The discontinuation of driver development stemmed from NVIDIA's shift toward the GeForce 4 series in 2002 and subsequent architectures, as the aging 150 nm NV20 chip proved inadequate for evolving APIs and efficiency standards by 2006. For legacy users, unofficial tools like NVIDIA Inspector have enabled continued tweaks through driver profile modifications and registry edits, allowing adjustments such as custom resolutions on late XP-era drivers, though without official patches or performance guarantees. This community-driven support highlights the series' enduring niche in retro gaming setups, but underscores the hardware's obsolescence for contemporary workloads.
