
Nvidia RTX

Nvidia RTX is a visual computing platform developed by Nvidia Corporation, introduced in 2018 with the launch of the GeForce RTX 20 Series graphics processing units (GPUs), enabling real-time ray tracing and AI-accelerated rendering for enhanced realism in gaming, content creation, and professional applications. At its core, RTX technology simulates the physical properties of light through ray tracing, a rendering method that traces rays of light to model realistic reflections, refractions, and shadows in real time, powered by specialized RT Cores integrated into RTX GPUs. This breakthrough, announced on August 20, 2018, with pre-orders starting that day and availability in September, marked the first consumer-grade implementation of hardware-accelerated ray tracing, replacing traditional rasterization approximations with physically based simulations. Complementing ray tracing, RTX incorporates Tensor Cores for deep learning-based features, most notably Deep Learning Super Sampling (DLSS), an AI-driven suite of neural rendering technologies that upscales lower-resolution images to higher resolutions while boosting frame rates by up to 8x and reducing latency, thereby maintaining visual fidelity without sacrificing performance. Introduced alongside the RTX platform, DLSS has evolved through several versions, with DLSS 4, launched in early 2025, adding Multi Frame Generation for even greater performance gains in supported games and applications. The RTX platform has expanded across multiple GPU generations, including the Ampere-based RTX 30 Series (2020), the Ada Lovelace-based RTX 40 Series (2022), and the latest Blackwell-based RTX 50 Series (2025), powering over 870 games and applications with ray tracing and DLSS support as of November 2025. Beyond gaming, RTX enables professional workflows in fields such as design, engineering, and visualization through RTX-enabled software, delivering photorealistic previews and simulations.

History

Origins and Development

Nvidia's exploration of ray tracing technology originated in the mid-2000s, coinciding with the launch of CUDA in 2006, which enabled general-purpose computing on GPUs and opened pathways for accelerated graphics rendering. This foundational work built on earlier GPU advancements to tackle the computational intensity of ray tracing, a technique for simulating realistic light behavior that had long been limited to offline rendering in film and design. By leveraging GPU parallelism, Nvidia researchers began experimenting with software-based ray tracing implementations to push toward interactive applications. A pivotal early milestone was the 2009 release of the OptiX ray tracing engine, a programmable framework designed specifically for GPUs to facilitate high-performance, general-purpose ray tracing. OptiX served as a key precursor to RTX by abstracting complex ray generation, traversal, and intersection tasks into a flexible API, allowing developers to create custom shaders for diverse use cases like physically based rendering without managing low-level GPU details. This engine demonstrated the potential for GPU-accelerated ray tracing in interactive scenarios, influencing subsequent internal efforts to integrate it more deeply into hardware. Nvidia's development of RTX was shaped by extensive collaborations with academic institutions and industry leaders in computer graphics, focusing on advancing physically based rendering techniques to achieve real-time performance. These partnerships, including joint research presented at graphics conferences, emphasized stochastic sampling, denoising algorithms, and light transport models to overcome the performance bottlenecks of traditional rasterization. Such efforts highlighted the growing need for ray tracing in immersive environments, where accurate simulation of reflections, shadows, and global illumination enhances realism without prohibitive computational costs.
In the mid-2010s, Nvidia pursued internal prototypes that explored hardware-accelerated ray tracing, setting the stage for the Turing architecture's integration of dedicated RT cores. CEO Jensen Huang championed RTX as a fundamental shift in graphics platforms, emphasizing during key announcements its potential to blend ray tracing with AI for cinematic-quality rendering in real time. By 2018, Nvidia's overall R&D investment exceeded $1.5 billion annually, with substantial resources allocated to the AI and ray tracing innovations that underpinned these prototypes.

Launch and Architectural Evolution

NVIDIA unveiled the RTX 20 series graphics cards at Gamescom 2018, marking the debut of dedicated RT and Tensor cores for real-time ray tracing and AI acceleration in consumer GPUs. Powered by the Turing architecture, the initial lineup included the RTX 2080 Ti, RTX 2080, and RTX 2070, with pre-orders starting immediately and availability from September and October 2018. This launch introduced the RTX platform, enabling hybrid rendering that combined traditional rasterization with ray-traced effects, though it faced initial skepticism from developers regarding the viability of real-time ray tracing in performance-sensitive gaming scenarios. The RTX architecture evolved with the Ampere-based GeForce RTX 30 series, announced on September 1, 2020, which delivered up to twice the performance of the prior generation through second-generation RT cores and third-generation Tensor cores, emphasizing improved efficiency for ray tracing and AI tasks. Subsequent advancements came with the Ada Lovelace architecture in the RTX 40 series, revealed on September 20, 2022, in a livestream event, introducing DLSS 3 with frame generation powered by fourth-generation Tensor cores and an Optical Flow Accelerator. This generation expanded RTX capabilities into more accessible price points, enhancing neural rendering for broader adoption in gaming and creative workflows. In 2025, Nvidia announced the Blackwell architecture powering the RTX 50 series at CES on January 6, featuring neural shading innovations and up to 92 billion transistors in the flagship RTX 5090 for unprecedented AI-driven graphics performance. This progression integrated RTX technologies into laptops starting in 2021, enabling portable ray tracing and AI acceleration. As of November 2025, over 870 games and applications supported RTX features, reflecting widespread adoption, while DLSS 4 reached more than 175 titles as of August 2025, underscoring the platform's growth from niche innovation to mainstream standard.

Architecture and Components

RT Cores

RT Cores are specialized hardware accelerators integrated into RTX GPUs, dedicated to expediting key computations in ray tracing pipelines, including bounding volume hierarchy (BVH) traversal, ray-triangle intersection testing, and collaboration with denoising processes to produce noise-free images in real time. These units offload intensive geometric operations from general-purpose cores, enabling efficient simulation of light interactions for realistic reflections, shadows, and global illumination. The first-generation RT Cores debuted in the Turing architecture in 2018, marking NVIDIA's entry into hardware-accelerated real-time ray tracing by dramatically speeding up BVH traversal and intersection calculations compared to software-based approaches on prior GPUs. Subsequent iterations built on this foundation: second-generation in Ampere (2020), third-generation in Ada Lovelace (2022), and fourth-generation in Blackwell (2025). The fourth-generation RT Cores in Blackwell introduce enhancements tailored for "Mega Geometry" workloads, featuring specialized cluster engines that accelerate BVH construction by up to 100x, allowing GPUs to handle vastly more complex scenes with billions of triangles without proportional performance degradation. Performance of RT Cores is often measured in giga rays per second (GR/s), quantifying the rate of ray casting and intersection tests, or in tera floating-point operations per second (TFLOPS) for ray tracing-specific operations. A conceptual model for ray throughput can be expressed as \text{GigaRays/s} = \text{RT Core count} \times \text{Clock speed in GHz} \times \text{Efficiency factor}, where the efficiency factor accounts for architectural improvements in traversal and intersection throughput, typically ranging from 0.5 to 1.0 based on workload complexity. For instance, first-generation RT Cores in the RTX 2080 Ti achieved over 10 GR/s at typical boost clocks.
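The conceptual throughput model above reduces to straightforward arithmetic. The sketch below is illustrative only, not NVIDIA's methodology: the core counts and clock speeds are hypothetical values, and the efficiency factor is simply the scalar from the formula.

```python
def estimated_gigarays_per_s(rt_core_count: int, clock_ghz: float,
                             efficiency: float) -> float:
    """Conceptual model from the text:
    GigaRays/s = RT Core count x clock (GHz) x efficiency factor."""
    if not 0.0 < efficiency <= 1.0:
        raise ValueError("efficiency factor expected in (0, 1]")
    return rt_core_count * clock_ghz * efficiency

# Hypothetical configurations; results are model units, not measured specs.
for cores, ghz in [(36, 1.7), (68, 1.55)]:
    lo = estimated_gigarays_per_s(cores, ghz, 0.5)
    hi = estimated_gigarays_per_s(cores, ghz, 1.0)
    print(f"{cores} RT Cores @ {ghz} GHz: {lo:.1f}-{hi:.1f}")
```

Because the efficiency factor folds together traversal, intersection, and scheduling effects, such a model is useful only for comparing configurations within one architecture, not across generations.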
In the Blackwell-based GeForce RTX 5090, fourth-generation RT Cores deliver up to 317.5 RT TFLOPS, representing a significant leap that supports up to 2x faster ray-triangle intersections over third-generation cores. RT Cores integrate seamlessly with industry-standard APIs to enable developers to leverage their capabilities in applications. They are fully supported by Microsoft's DirectX Raytracing (DXR) API, which allows shaders to invoke hardware-accelerated ray tracing calls for hybrid rendering pipelines. Similarly, the Vulkan Ray Tracing extension provides cross-platform access to RT Core acceleration, including acceleration structure APIs for efficient BVH management. This compatibility ensures RT Cores contribute to ray tracing in games and professional tools without requiring custom low-level programming.

Tensor Cores

Tensor Cores are specialized hardware units integrated into Nvidia GPUs, designed to accelerate matrix multiply-accumulate (MMA) operations using mixed-precision arithmetic, which is fundamental to deep learning inference and training workloads. These cores perform high-throughput computations on matrices, enabling efficient processing of neural network layers by supporting lower-precision formats like FP16 (half-precision floating-point) paired with FP32 accumulation in their initial implementation, reducing computational overhead while maintaining accuracy. Introduced in the Volta architecture in 2017, Tensor Cores marked Nvidia's entry into dedicated AI acceleration hardware, with each core capable of executing a 4x4x4 MMA operation per clock cycle. In the context of RTX GPUs, Tensor Cores were optimized starting with the Turing architecture in the RTX 20 series (2018), representing the second generation with added support for integer precisions such as INT8 and INT4, alongside continued FP16 capabilities, to broaden applicability in inference tasks. The third generation in the Ampere-based RTX 30 series (2020) introduced TF32 precision for training and structural sparsity acceleration, allowing up to 2x throughput gains by skipping zero-valued computations in sparse matrices common in trained neural networks. Advancing to the fourth generation in the Ada Lovelace architecture of the RTX 40 series (2022), these cores added FP8 support and enhanced sparsity handling, doubling performance for formats like FP16, BF16, TF32, and INT8 compared to prior generations. For instance, the RTX 4090 achieves approximately 660 TFLOPS in FP16 Tensor performance, scaling to 1,321 TFLOPS with sparsity enabled, illustrating the hardware's efficiency in AI matrix operations. The fifth-generation Tensor Cores in the Blackwell architecture, powering the RTX 50 series (2025), further evolve this lineage by incorporating FP4 precision for even greater inference throughput, alongside optimized support for emerging AI models.
This generation enables the RTX 5090 to deliver up to 3,352 trillion AI operations per second (TOPS), representing a significant leap in raw computational capacity for AI tasks. At the hardware level, Tensor Cores facilitate applications such as neural upscaling and super-resolution by accelerating the convolutional and transformer-based operations inherent to these techniques, providing the foundational compute for real-time image enhancement without relying on general-purpose CUDA cores. A key conceptual framework for Tensor Core throughput is given by the equation \text{Throughput} = \text{Number of Tensor Cores} \times \text{Sparsity Factor} \times \text{Precision Efficiency}, where the sparsity factor (typically up to 2x for structured sparsity) accounts for skipped operations, and precision efficiency reflects the operations per cycle for a given data format (e.g., higher for FP8/FP4 than for FP16). This model underscores how architectural advancements combine to achieve peta-scale performance in RTX GPUs, as seen in the RTX 4090's effective 1.3 PFLOPS under sparsity-optimized FP16 workloads.
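As a toy illustration of this throughput model (not a vendor formula), the relative scaling from sparsity and precision can be computed directly. The core count and per-format efficiency values below are hypothetical, expressed relative to dense FP16:

```python
def tensor_throughput(num_cores: int, sparsity_factor: float,
                      precision_efficiency: float) -> float:
    """Throughput = cores x sparsity factor x precision efficiency
    (model units, per the conceptual equation in the text)."""
    return num_cores * sparsity_factor * precision_efficiency

CORES = 512                       # hypothetical Tensor Core count
EFF = {"FP16": 1.0, "FP8": 2.0}   # assumed ops/cycle relative to FP16

dense_fp16 = tensor_throughput(CORES, 1.0, EFF["FP16"])
sparse_fp8 = tensor_throughput(CORES, 2.0, EFF["FP8"])
print(sparse_fp8 / dense_fp16)    # → 4.0  (2x sparsity x 2x precision)
```

The multiplicative structure explains why generational gains compound: halving precision and enabling structured sparsity each roughly double the model's throughput independently.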

Supporting Hardware Elements

The foundational elements of Nvidia RTX GPUs include CUDA cores, which handle general-purpose rasterization, shading, and compute tasks beyond specialized ray tracing and AI acceleration. These streaming multiprocessor-based cores have scaled significantly across generations to support high-throughput graphics and AI workloads. For instance, the GeForce RTX 3080 features 8,704 CUDA cores, while the RTX 4090 increases this to 16,384, and the RTX 5090 further expands to 21,760 cores, enabling enhanced parallel execution for complex scenes and workloads. Memory subsystems in RTX GPUs form a critical hierarchy, providing high-speed access to textures, frame buffers, and compute data. Early RTX generations, such as the 20 series, utilized GDDR6, evolving to GDDR6X in high-end 30 and 40 series cards for improved bandwidth and efficiency. The 50 series, based on Blackwell, adopts GDDR7 in high-end models like the RTX 5090, which includes 32 GB on a 512-bit bus, delivering up to 1,792 GB/s of bandwidth, calculated as bandwidth in GB/s = (memory bus width in bits × effective data rate in GT/s) / 8. This progression supports the data demands of advanced rendering pipelines without relying on datacenter-specific HBM3e variants. Interconnects and power delivery enable multi-GPU configurations and sustained performance in RTX setups. PCIe integration has advanced to Gen 5.0 x16 in the 50 series, offering up to 128 GB/s bidirectional bandwidth for single-GPU systems, while NVLink remains available in professional RTX variants for high-speed multi-GPU linking in compute-intensive environments, though consumer models prioritize PCIe for compatibility. Power advancements accommodate rising thermal design power (TDP), with the RTX 5090 rated at 575 W, requiring robust 1,000 W+ system supplies and improved power delivery to maintain stability under load.
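The bandwidth formula above can be verified in a few lines. Note that the 28 GT/s effective data rate used here is inferred from the quoted 512-bit bus and 1,792 GB/s figures rather than stated in the text:

```python
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth: (bus width in bits x effective rate in GT/s) / 8,
    dividing by 8 to convert bits per transfer into bytes."""
    return bus_width_bits * data_rate_gtps / 8

# RTX 5090 per the text: 512-bit bus; 28 GT/s effective GDDR7 rate (inferred).
print(mem_bandwidth_gbs(512, 28.0))  # → 1792.0
```

The same relation recovers earlier generations' figures when their bus widths and effective rates are substituted, which makes it a handy sanity check when reading spec sheets.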
Die size and density reflect architectural maturation, with Turing's flagship TU102 at 754 mm² and 18.6 billion transistors, and Blackwell's GB202 at approximately 750 mm² while achieving 92.2 billion transistors through TSMC's 4NP process, boosting overall compute and transistor density. These elements collectively underpin RTX performance by providing scalable compute, rapid memory access, and reliable interconnectivity that complement the specialized cores.

Core Technologies

Real-Time Ray Tracing

Real-time ray tracing simulates the physical behavior of light by tracing rays from the camera through a scene, calculating interactions such as reflections, refractions, shadows, and global illumination to produce highly realistic rendering effects. Primary rays originate from the camera and intersect scene geometry, while secondary rays are spawned from those intersection points to model bounced light paths, enabling effects like indirect lighting that traditional rasterization approximates with less accuracy. This approach, rooted in offline rendering techniques, allows for performance exceeding 60 frames per second in interactive applications when accelerated by dedicated hardware. Nvidia RTX introduces hardware-accelerated bounding volume hierarchies (BVH) to optimize ray-scene intersection computations, reducing the average time complexity per ray from linear O(n), where n is the number of scene objects, to logarithmic O(log n) by pruning non-intersecting branches of the tree structure. This acceleration enables efficient traversal for complex scenes, supporting advanced features like full path tracing in titles such as Cyberpunk 2077, where multiple ray bounces simulate comprehensive global illumination without prohibitive computational overhead. By handling BVH operations in specialized hardware, RTX GPUs achieve up to 10x faster ray tracing compared to software-based methods on prior architectures. Performance in real-time ray tracing involves managing a "ray budget" to balance visual quality and frame rates, often limiting rays to 1-3 samples per pixel (SPP) at 1080p resolution to maintain 30-60 FPS on high-end hardware. Higher SPP values improve accuracy in simulating light paths but sharply increase compute demands, requiring trade-offs such as hybrid rasterization-ray tracing or selective ray casting for non-critical effects. These constraints ensure playable performance while delivering noticeable enhancements in lighting realism.
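The O(log n) pruning behavior of BVH traversal can be seen in a small CPU sketch. This is a toy median-split BVH over axis-aligned boxes with the standard slab intersection test, not NVIDIA's hardware algorithm; counting visited nodes shows how a ray that hits one box touches only a logarithmic slice of the tree rather than testing all n objects.

```python
import math
from typing import List, Tuple

Box = Tuple[Tuple[float, float, float], Tuple[float, float, float]]  # (min, max)

def slab_hit(box: Box, origin, inv_dir) -> bool:
    # Slab method: the ray hits the box iff its per-axis entry/exit
    # parameter intervals all overlap.
    tmin, tmax = 0.0, math.inf
    for a in range(3):
        t1 = (box[0][a] - origin[a]) * inv_dir[a]
        t2 = (box[1][a] - origin[a]) * inv_dir[a]
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

def union(a: Box, b: Box) -> Box:
    return (tuple(min(a[0][i], b[0][i]) for i in range(3)),
            tuple(max(a[1][i], b[1][i]) for i in range(3)))

class BVH:
    def __init__(self, boxes: List[Box]):
        if len(boxes) == 1:
            self.box, self.kids = boxes[0], []
        else:
            mid = len(boxes) // 2          # median split (boxes pre-sorted on x)
            self.kids = [BVH(boxes[:mid]), BVH(boxes[mid:])]
            self.box = union(self.kids[0].box, self.kids[1].box)

    def count_hits(self, origin, inv_dir, visited) -> int:
        visited[0] += 1                    # one box test per visited node
        if not slab_hit(self.box, origin, inv_dir):
            return 0                       # whole subtree pruned
        if not self.kids:
            return 1
        return sum(k.count_hits(origin, inv_dir, visited) for k in self.kids)

# Scene: 1,024 unit boxes in a row along x; ray fired along +y through x = 3.5.
boxes = [((float(i), 0.0, 0.0), (i + 1.0, 1.0, 1.0)) for i in range(1024)]
bvh = BVH(boxes)
origin = (3.5, -1.0, 0.5)
direction = (1e-9, 1.0, 1e-9)              # tiny epsilons avoid division by zero
inv_dir = tuple(1.0 / d for d in direction)
visited = [0]
hits = bvh.count_hits(origin, inv_dir, visited)
print(hits, visited[0])  # 1 hit; about 2*log2(1024)+1 = 21 nodes visited, not 1024
```

A brute-force loop would run the slab test 1,024 times for the same ray; the hierarchy discards half of the remaining scene at each level, which is the property RT Cores accelerate in hardware.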
By 2025, ray tracing has seen widespread adoption, with over 800 games and applications supporting RTX technologies, including ray-traced effects in major titles. A seminal example is Minecraft with RTX, launched in 2020, which integrates real-time ray tracing to add dynamic lighting and shadows to its block-based worlds, demonstrating accessibility for broad audiences. This growth reflects the maturation of ray tracing from offline rendering to a standard feature in interactive entertainment.

AI Acceleration and DLSS

Deep Learning Super Sampling (DLSS) is an AI-driven technology developed by Nvidia that leverages Tensor Cores in RTX GPUs to perform real-time upscaling and image reconstruction, rendering games at lower internal resolutions and reconstructing higher-quality images to enhance performance while maintaining visual fidelity. Introduced with the RTX 20 series in 2018, DLSS 1.0 relied on per-game training of convolutional neural networks for super-resolution, but it faced limitations in generalization and artifacting. DLSS 2.0, released in 2020, shifted to a more efficient, temporally stable AI model using motion vectors and depth buffers, enabling broader adoption without game-specific training. Subsequent iterations built on this foundation: DLSS 3 in 2022 added frame generation for RTX 40 series GPUs, while DLSS 4.0, launched in 2025 for the RTX 50 series, introduced Multi Frame Generation alongside refined super resolution models powered by fifth-generation Tensor Cores. These advancements allow DLSS to upscale from substantially lower internal resolutions with minimal quality loss, often outperforming native rendering in demanding ray-traced scenarios. A key component of AI acceleration in RTX is Frame Generation, which uses optical flow analysis and AI inference to insert interpolated frames between traditionally rendered ones, significantly boosting frame rates without requiring additional GPU compute for full scenes. In DLSS 3, available on RTX 40 series GPUs, this feature can effectively double frame rates in supported titles by generating one additional frame per rendered frame, with the overall performance scaling as effective FPS ≈ native FPS × (1 + generation factor), where the factor typically equals 1 for the standard implementation.
DLSS 4.0 extends this to Multi Frame Generation, capable of producing up to three AI-generated frames per rendered frame on RTX 50 series hardware, potentially quadrupling frame rates in optimized games while integrating with NVIDIA Reflex to mitigate the added latency from the post-process. For instance, in titles like Cyberpunk 2077 with full ray tracing, Frame Generation enables playable 4K performance on mid-range RTX cards that would otherwise struggle below 60 FPS. Beyond DLSS, RTX GPUs incorporate other AI-accelerated features to optimize user experience. NVIDIA Reflex employs AI-guided synchronization between CPU and GPU to reduce system latency by up to 50% in competitive games, dynamically adjusting render queues to minimize input lag without sacrificing frame rates. Similarly, NVIDIA Broadcast utilizes Tensor Cores for AI effects in streaming and video calls, including noise removal that eliminates background sounds from microphones, virtual backgrounds, and auto-framing to enhance production quality. These tools run efficiently on RTX hardware, processing audio and video streams at low overhead. By November 2025, DLSS 4.0 supports over 175 games and applications, including major titles like Indiana Jones and the Great Circle, demonstrating its widespread integration across PC gaming. In fidelity comparisons, DLSS consistently delivers superior image quality over alternatives like AMD FidelityFX Super Resolution (FSR), with fewer artifacts in motion and better temporal stability due to its AI-based approach, particularly in quality modes at high resolutions.
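The frame generation scaling rule above reduces to simple arithmetic; a short sketch (illustrative only, ignoring the small runtime cost of the generation pass itself):

```python
def effective_fps(native_fps: float, generated_per_rendered: int) -> float:
    """effective FPS ≈ native FPS x (1 + generation factor), per the text.
    Factor 1 models DLSS 3 Frame Generation (one AI frame per rendered
    frame); up to 3 models DLSS 4 Multi Frame Generation."""
    return native_fps * (1 + generated_per_rendered)

print(effective_fps(45, 1))  # → 90   (doubled)
print(effective_fps(45, 3))  # → 180  (quadrupled)
```

Because generated frames do not sample new input, latency reduction still depends on the native frame rate, which is why pairing frame generation with Reflex matters in practice.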

Neural Rendering Innovations

Neural rendering innovations in NVIDIA RTX represent a shift toward AI-driven graphics generation, leveraging the Blackwell architecture's enhanced Tensor Cores to integrate neural networks directly into the rendering pipeline for unprecedented efficiency and realism. These advancements build on RTX's foundational capabilities, enabling developers to create dynamic, photorealistic visuals that adapt in real time without relying solely on traditional rasterization or ray tracing. By 2025, NVIDIA's focus has shifted to neural techniques that generate content procedurally, reducing computational overhead while scaling to complex environments. Neural Shading emerges as a cornerstone of this evolution, allowing AI foundation models to generate materials and textures on the fly within shaders. Announced at GDC 2025 in collaboration with Microsoft, Neural Shading support integrates into DirectX 12 via an Agility SDK Preview released in April 2025, granting developers access to RTX Tensor Cores for training and deploying compact neural networks directly in graphics pipelines. This technology enables procedural creation of detailed surfaces, such as weathered metals or organic fabrics, by inferring properties from sparse input data, significantly streamlining asset authoring for games and simulations. RTX Mega Geometry further amplifies scene complexity through accelerated bounding volume hierarchy (BVH) construction, facilitating ray tracing of massive geometric datasets. Introduced in 2025 as part of NVIDIA's RTX Kit, it clusters and dynamically updates intricate geometry, enabling up to 100 times more triangles in scenes compared to prior generations while minimizing CPU and memory demands. Demonstrated in Unreal Engine 5.6 tech demos, such as the RTX Bonsai showcase, Mega Geometry achieves substantial performance uplifts in BVH builds for cluster-based systems, allowing immersive worlds with billions of polygons to render fluidly on RTX hardware.
In the realm of digital humans and simulations, RTX powers AI-driven avatars capable of photorealistic facial expressions and interactions, as showcased in GDC 2025 demos of updated NVIDIA ACE technology. ACE microservices facilitate these avatars by combining neural rendering with generative AI for multilingual speech and expressive animations, producing lifelike responses in gaming and virtual environments. For autonomous vehicle (AV) applications, the NVIDIA Cosmos platform leverages world foundation models to accelerate simulations, integrating neural reconstruction for high-fidelity data generation from real-world AV recordings and enabling scalable testing of physical scenarios. The computational demands of these neural passes scale with the number of network operations multiplied by scene complexity, underscoring the need for RTX's optimized hardware to maintain performance. Efficiency gains from these innovations are evident in RTX 50 Series laptops, which incorporate Blackwell Max-Q technologies to deliver up to 40% longer battery life during neural rendering workloads, balancing high-fidelity outputs with power constraints in mobile scenarios.

Software and Ecosystem

Development Tools and APIs

NVIDIA OptiX is a proprietary ray tracing engine designed for GPU-accelerated rendering, providing developers with a flexible framework to implement ray tracing pipelines optimized for RTX hardware. First introduced in 2009 and evolving significantly with version 7.0, which introduced support for hardware-accelerated ray tracing via RT Cores, OptiX has integrated denoising capabilities to reduce noise in ray-traced images efficiently. The latest version, OptiX 9.0, released in February 2025, adds features such as clusters for accelerating BVH builds on massive dynamic geometry, cooperative vectors for embedding AI workflows within ray tracing kernels, and full support for GeForce RTX 50 series GPUs based on the Blackwell architecture. OptiX integrates seamlessly with CUDA, allowing developers to leverage GPU compute capabilities for custom ray generation and intersection programs. For cross-platform development, NVIDIA supports industry-standard ray tracing extensions including DirectX Raytracing (DXR) and Vulkan Ray Tracing. DXR, introduced by Microsoft in 2018, enables real-time ray tracing in DirectX 12 applications and is accelerated on RTX GPUs through dedicated RT Core hardware for bounding volume hierarchy (BVH) traversal and ray-triangle intersections. The API has progressed to version 1.2 as of March 2025, offering up to 2.3x performance improvements via features like shader execution reordering, with NVIDIA providing driver support across GeForce RTX series GPUs. Similarly, Vulkan Ray Tracing extensions, ratified by the Khronos Group in 2020 and built upon NVIDIA's initial VK_NV_ray_tracing proposal, allow developers to integrate ray tracing into Vulkan-based applications with RTX acceleration for ray queries and ray tracing pipeline stages. These extensions support shader domains for ray generation, intersection, and closest-hit, enabling efficient hybrid rendering workflows on platforms like Windows and Linux. NVIDIA Omniverse serves as a collaborative platform for building and simulating RTX-enabled 3D workflows, centered around the Universal Scene Description (OpenUSD) framework developed with Pixar.
Launched in 2020, Omniverse provides APIs, SDKs, and services for integrating OpenUSD with RTX rendering technologies, facilitating real-time collaboration in virtual production and simulation environments. By 2025, enhancements include the Kit SDK 108.0 released in August, which improves rendering quality and performance for physical AI development, and the Kit SDK 109.0 released in November as a targeted feature branch focusing on critical platform updates, alongside new libraries like NuRec for ray-traced 3D Gaussian splatting to accelerate AI-driven scene reconstruction. These updates enable developers to create scalable, USD-native applications with GPU-accelerated RTX simulations for industries such as manufacturing and robotics. Complementing these, the RTX AI Toolkit, released in June 2024, equips Windows developers with a suite of tools and SDKs to customize, optimize, and deploy AI models on RTX PCs and cloud infrastructure. It includes components for model customization, inference acceleration via Tensor Cores, and integration with frameworks like TensorRT, streamlining the development of AI-enhanced RTX applications such as neural rendering and generative tools. By 2025, over 100 professional applications across creative, engineering, and scientific domains have been accelerated by RTX technologies through these tools and APIs, enabling faster rendering, AI processing, and simulation workflows.

Applications and User Tools

NVIDIA RTX technologies power a range of consumer applications that enhance gaming, content creation, and AI interactions on personal computers. These tools leverage RT Cores for ray tracing, Tensor Cores for AI acceleration, and supporting hardware to deliver immersive experiences without requiring developer-level expertise. By integrating directly with RTX GPUs, they enable users to access advanced features like real-time rendering and neural processing in everyday scenarios. RTX Remix is an open-source modding platform designed for remastering classic DirectX 8 and 9 games with modern graphics enhancements, including ray tracing, DLSS, and AI-upscaled assets. It allows users to inject path-traced lighting, physically based rendering, and neural textures into legacy titles, transforming them into high-fidelity experiences playable on RTX hardware. In 2025, NVIDIA launched a $50,000 RTX Remix Mod Contest to encourage community creations, with winners announced at Gamescom, including mods like Painkiller RTX that showcase full ray-traced visuals. This initiative has fostered over a dozen notable remasters, emphasizing accessibility for hobbyist modders through tools like the Omniverse-based runtime. Chat with RTX provides a privacy-focused, local AI chatbot that runs large language models (LLMs) on RTX GPUs, allowing users to query personal documents, notes, and images without cloud dependency. Released on February 13, 2024, as a free tech demo, it supports open-source models like Mistral and Llama 2, accelerated by Tensor Cores for offline inference and retrieval-augmented generation. Users can customize the chatbot for tasks such as summarizing files or answering context-specific questions, ensuring data remains on-device for enhanced security and speed on RTX 30-series or newer cards. NVIDIA Broadcast enhances streaming and video calls with AI-driven effects, including noise removal, virtual backgrounds, eye contact correction, and Studio Voice for real-time audio enhancement.
Integrated as a universal plugin for popular streaming and conferencing apps, it utilizes Tensor Cores to process audio and video feeds in real time, reducing background interference and improving production quality for gamers and creators. The app, updated in early 2025, added features like Virtual Key Light for simulated illumination, making professional-grade streaming accessible on RTX-equipped PCs. The NVIDIA App, formerly known as GeForce Experience and RTX Experience, serves as a central hub for game optimization, driver updates, and performance tuning on RTX systems. It automatically scans and adjusts in-game settings for optimal frame rates and visuals based on hardware capabilities, while features like Project G-Assist use AI to provide voice-activated tweaks for power efficiency and gameplay enhancements. For creative workflows, Studio Drivers, optimized branches of the main driver suite, deliver stability and peak performance in applications like Adobe Premiere Pro, tested specifically for content creators to minimize crashes during rendering and editing. RTX features extend to cloud and ecosystem integrations, with DLSS 4 now supported in over 175 games and apps as of August 2025, enabling AI-upscaled rendering and multi-frame generation for smoother gameplay across titles like A Plague Tale: Requiem and PRAGMATA. GeForce NOW, NVIDIA's cloud gaming service, incorporated RTX 50-series (Blackwell) performance in September 2025, allowing Ultimate members to stream ray-traced games at high resolutions with low latency on non-RTX devices. These tools collectively democratize RTX capabilities, bridging local hardware acceleration with seamless user interfaces for gaming and creation.

Graphics Cards

20 Series (Turing)

The RTX 20 series, codenamed Turing, represented Nvidia's first consumer graphics cards to integrate dedicated RT Cores for real-time ray tracing and Tensor Cores for AI acceleration, launching the RTX brand. Announced on August 20, 2018, at Gamescom in Cologne, Germany, the initial models (RTX 2080 Ti, RTX 2080, and RTX 2070) began shipping on September 20, 2018, with the RTX 2060 following on January 15, 2019. Launch prices started at $499 MSRP for the RTX 2070, $699 for the RTX 2080, $999 for the RTX 2080 Ti, and $349 for the RTX 2060, though Founders Edition variants reached up to $1,199 for the top model. All cards utilized GDDR6 memory, with configurations ranging from 6 GB on the RTX 2060 to 11 GB on the RTX 2080 Ti. These GPUs featured 1,920 to 4,352 CUDA cores, marking the introduction of first-generation RT Cores (30 to 68 per card) and Tensor Cores (240 to 544 per card) to handle ray-triangle intersection tests and matrix multiply-accumulate operations, respectively. Ray tracing throughput ranged from 7 to 14 GigaRays per second across the lineup, enabling hardware-accelerated ray tracing at playable frame rates in supported titles. The flagship TU102 die, built on TSMC's 12 nm process with 18.6 billion transistors, delivered peak performance of up to 14 TFLOPS FP32, setting the scale for the series' computational capabilities. The RTX 20 series pioneered real-time ray tracing in mainstream gaming, with Battlefield V (released November 20, 2018) as the first title to leverage RT Cores for dynamic reflections and shadows at 60 FPS on high-end models like the RTX 2080 Ti. This integration demonstrated the potential for photorealistic lighting without prohibitive performance costs, influencing subsequent game development pipelines. Mobile variants, including the RTX 2060 through 2080 Max-Q editions, debuted in laptops starting January 29, 2019, extending these features to portable devices with power-optimized designs.

30 Series (Ampere)

The Nvidia RTX 30 series, based on the Ampere architecture, was announced on September 1, 2020, and launched progressively through 2021, targeting gamers, creators, and professionals with enhanced rasterization, ray tracing, and AI capabilities. The lineup marked a significant generational leap, offering up to an 82% average performance uplift over the previous RTX 20 series across various workloads, driven by increased core counts and higher memory bandwidth. The series played a pivotal role in the GPU market from 2020 to 2022, powering high-end gaming at 4K resolutions and accelerating AI tasks amid booming demand for PC gaming and streaming.

The RTX 30 series encompassed a range of GPUs, from the entry-level RTX 3050 to the RTX 3090 Ti, with suggested prices spanning $249 to $1,999 at launch. Memory configurations varied from 8 GB of GDDR6 on the RTX 3050 to 24 GB of GDDR6X on the RTX 3090, enabling support for demanding applications like 8K gaming and VR development. Key models included the mid-range RTX 3060 and RTX 3070 for 1440p gaming, and the high-end RTX 3080 and RTX 3090 for 4K and beyond, with the Ti variants providing incremental boosts in clock speeds and power delivery. Core specifications highlighted the series' scalability, featuring 2,560 to 10,752 CUDA cores across models, second-generation RT cores for ray tracing, and third-generation Tensor cores for AI tasks. The RTX 3090, for instance, integrated roughly 28 billion transistors on the GA102 die, delivering up to 35.6 TFLOPS of FP32 performance while supporting PCIe 4.0 for faster data transfer. These advancements enabled smoother real-time ray tracing in games and more efficient AI upscaling.

A major software advance accompanying the RTX 30 series was DLSS 2.0, which leveraged Tensor cores to reconstruct higher-resolution frames from lower-resolution input using deep learning, significantly improving frame rates without sacrificing visual quality and building on the foundational DLSS of the prior generation. However, the series faced global supply shortages during 2020 and 2021 due to pandemic-related manufacturing constraints and surging demand, including from cryptocurrency miners, leading to inflated prices and limited availability that extended into 2022. Despite these challenges, the RTX 30 series solidified Nvidia's dominance in the discrete GPU market, capturing over 80% share during its peak.
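
The FP32 figures quoted for these cards follow from a simple relation: peak FP32 throughput = CUDA cores × 2 FLOPs per clock (one fused multiply-add) × boost clock. A quick sketch using commonly published RTX 3090 figures (10,496 CUDA cores, 1.695 GHz boost; these specific numbers are assumptions from public spec sheets):

```python
# Back-of-the-envelope peak FP32 throughput: each CUDA core retires one
# fused multiply-add (2 FLOPs) per clock at the boost frequency.

def peak_fp32_tflops(cuda_cores, boost_clock_ghz):
    # cores * 2 FLOPs/clock * clock in GHz gives GFLOPS; divide for TFLOPS
    return cuda_cores * 2 * boost_clock_ghz / 1000.0

# RTX 3090 (GA102): 10,496 CUDA cores at a 1.695 GHz boost clock.
print(round(peak_fp32_tflops(10496, 1.695), 1))  # 35.6
```

The same arithmetic recovers the figures quoted elsewhere in this article once each card's core count and boost clock are plugged in; real-world throughput is lower, since games rarely issue a fused multiply-add on every core every clock.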

40 Series (Ada Lovelace)

The Nvidia RTX 40 series graphics processing units (GPUs), codenamed Ada Lovelace, represent NVIDIA's high-end consumer graphics lineup released starting in late 2022, emphasizing advancements in AI-driven rendering, ray tracing, and power efficiency. Built on TSMC's custom 4N process node, these GPUs integrate third-generation RT cores for hardware-accelerated ray tracing and fourth-generation Tensor cores for AI acceleration, enabling features like DLSS 3, whose exclusive Frame Generation technology uses an optical flow accelerator and AI to insert additional frames for up to 4x performance gains in supported games compared to brute-force rendering. The series spans both desktop and laptop variants, powering everything from entry-level gaming to professional workstations, with over 170 laptop designs certified by early 2023.

Key models in the RTX 40 series range from the RTX 4050 (mobile-only) to the flagship RTX 4090, with desktop options starting at the RTX 4060; launch prices varied from $299 for the desktop RTX 4060 to $1,599 for the RTX 4090, while mobile configurations influenced laptop pricing starting around $800 for RTX 4050-equipped systems. Memory configurations utilize GDDR6 or GDDR6X across 6 GB (RTX 4050 mobile) to 24 GB (RTX 4090), paired with CUDA core counts scaling from 3,072 in the RTX 4060 to 16,384 in the RTX 4090, supporting high-bandwidth applications like 4K gaming and AI workloads. The RTX 4090, featuring the AD102 die with 76.3 billion transistors, delivers 1.3 petaFLOPS of AI inference performance via its Tensor cores at FP8 precision, marking a substantial leap for generative AI and neural rendering tasks. In 2024, NVIDIA expanded the lineup with SUPER variants (RTX 4070 SUPER, 4070 Ti SUPER, and 4080 SUPER), offering refined core counts and memory for better value without altering the underlying architecture.
Performance benchmarks highlight the series' ray tracing capabilities, achieving up to 4x faster frame rates in ray-traced scenarios over the RTX 30 series when leveraging DLSS 3, driven by doubled RT core throughput and hardware acceleration for optical flow and motion vector generation. For instance, the RTX 4090 renders demanding titles like Cyberpunk 2077 at over 100 FPS at 4K with full ray tracing and DLSS enabled, compared to sub-30 FPS on the RTX 3090 without such optimizations. Laptop integrations benefit from Ada Lovelace's efficiency gains, delivering up to 2x performance per watt at low power levels (e.g., 35 W for RTX 4070 laptop GPUs) and 40% better streaming efficiency from the new AV1 encoder relative to prior generations, enabling thinner designs and longer battery life in creator workflows. These advancements position the RTX 40 series as a bridge to neural rendering, extending AI-based upscaling and acceleration into non-gaming applications.
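
Frame Generation synthesizes new frames between rendered ones using motion vectors and the hardware optical flow field. As a toy stand-in only, a naive pixel-wise blend rather than anything resembling the real algorithm, its effect on frame count can be sketched as:

```python
# Toy illustration of frame generation's effect on frame pacing. This is
# NOT the DLSS 3 algorithm (which uses motion vectors and optical flow);
# we simply blend adjacent rendered "frames" (flat lists of intensities).

def blend_frames(frame_a, frame_b, t=0.5):
    """Interpolate pixel-wise between two frames at fraction t."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

def with_frame_generation(rendered_frames):
    """Insert one synthesized frame between each rendered pair (~2x fps)."""
    out = []
    for a, b in zip(rendered_frames, rendered_frames[1:]):
        out.append(a)                 # the real, rendered frame
        out.append(blend_frames(a, b))  # the synthesized in-between frame
    out.append(rendered_frames[-1])
    return out

rendered = [[0, 0, 0], [10, 20, 30], [20, 40, 60]]
smooth = with_frame_generation(rendered)
print(len(rendered), "->", len(smooth))  # 3 -> 5
print(smooth[1])                         # [5.0, 10.0, 15.0]
```

The design point worth noting: interpolated frames are never simulated by the game engine, which is why frame generation raises displayed frame rate without reducing input latency; Nvidia pairs it with Reflex to compensate.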

50 Series (Blackwell)

The RTX 50 series, based on the Blackwell architecture, represents a significant advancement in consumer graphics processing units, emphasizing AI-driven enhancements for gaming, content creation, and generative AI applications. Announced at CES 2025, the series introduces fifth-generation Tensor Cores and fourth-generation RT Cores, delivering up to 3,352 AI TOPS in the flagship model for accelerated neural rendering and real-time ray tracing. With a focus on transforming PCs into AI powerhouses, the RTX 50 series supports generative AI tools for local content creation and model inference, delivering approximately twice the performance of the preceding RTX 40 series in ray-traced workloads when paired with updated software features.

The lineup spans entry-level to high-end models, including the RTX 5050, RTX 5060, RTX 5060 Ti, RTX 5070, RTX 5070 Ti, RTX 5080, and RTX 5090, with suggested retail prices ranging from $249 for the RTX 5050 to $1,999 for the RTX 5090. Memory configurations vary from 8 GB of GDDR6 in lower-tier cards like the RTX 5050 to 32 GB of GDDR7 in the RTX 5090, providing ample bandwidth for high-resolution textures and AI model inference. CUDA core counts scale from 2,560 in the RTX 5050 to 21,760 in the RTX 5090, built on dies such as the 92.2-billion-transistor GB202 for the top model. These GPUs prioritize power efficiency and AI acceleration, with board power ranging from roughly 130 W in entry-level cards to 575 W in the RTX 5090.

Key innovations in the RTX 50 series include DLSS 4, which leverages AI Multi Frame Generation to boost frame rates by up to 8x in supported games while maintaining visual fidelity through advanced upscaling and frame interpolation. Complementing this is neural shading, a technology that integrates small neural networks directly into programmable shaders, enabling developers to create AI-accelerated effects such as neural materials, volumes, and lighting for more realistic rendering without traditional compute overhead. These features, powered by the Blackwell architecture's enhanced Tensor Cores, extend beyond gaming to professional workflows, including real-time AI denoising and generative content creation in creative applications. The RTX 50 series launched progressively throughout 2025: the RTX 5080 and RTX 5090 became available on January 30, followed by the RTX 5070 family in February, with lower-end models like the RTX 5050 completing the lineup in June. By November 2025, all variants are widely available, though high-demand models like the RTX 5090 have seen prices exceed MSRP due to supply constraints. This release schedule underscores Nvidia's strategy to rapidly deploy Blackwell's AI capabilities to consumers, fostering an ecosystem of AI PCs optimized for both gaming and creation.
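
One way to see how an "up to 8x" figure can arise (this decomposition is an illustration, not Nvidia's published derivation): performance-mode upscaling that renders roughly a quarter of the output pixels can approximately double frame rates in GPU-bound scenes, and Multi Frame Generation can emit up to three AI frames per rendered frame, quadrupling frame output.

```python
# Illustrative arithmetic only: how upscaling and multi-frame generation
# could compound to an 8x effective frame-rate multiplier. The 2.0x
# upscaling speedup is an assumed round number, not a measured value.

def effective_fps_multiplier(upscaling_speedup, ai_frames_per_rendered):
    # Each rendered frame yields itself plus the AI-generated frames.
    return upscaling_speedup * (1 + ai_frames_per_rendered)

print(effective_fps_multiplier(2.0, 3))  # 8.0
```

As with DLSS 3 Frame Generation, the generated frames do not reflect new game-engine input, so the multiplier applies to displayed smoothness rather than responsiveness.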
