
Texture filtering

Texture filtering is a fundamental technique in computer graphics that determines the color value for each pixel when mapping a 2D texture onto a 3D surface projected onto a 2D screen, by blending or interpolating between discrete texture elements known as texels to produce smooth, artifact-free visuals. This process addresses common rendering issues such as aliasing (jagged edges) during minification—when multiple texels map to a single pixel—and blurring or blockiness during magnification—when a single texel covers multiple pixels—ensuring higher-quality rendering in real-time applications like video games and simulations. Developed from early efforts in the late 1970s and early 1980s to mitigate sampling artifacts in textured rendering, texture filtering has evolved with graphics hardware to support efficient computation on GPUs. The primary methods of texture filtering include nearest-point sampling, which simply selects the color of the closest texel without blending, offering speed but prone to blocky results; bilinear filtering, which computes a weighted average of the four nearest texels for smoother transitions; and trilinear filtering, which extends bilinear filtering by interpolating between two mipmap levels (precomputed lower-resolution texture versions) to handle minification more effectively. Anisotropic filtering represents an advanced variant, accounting for texture distortion on angled or grazing surfaces by sampling more texels along the direction of elongation, significantly reducing blurring in oblique views at a higher computational cost. These techniques are implemented in graphics APIs such as OpenGL and Direct3D, where developers can select modes based on performance versus quality trade-offs. Beyond basic interpolation, modern texture filtering incorporates GPU-accelerated advanced methods such as subpixel filtering with Gaussian kernels for magnification and quasi-optimal prefiltering that averages samples over pixel areas, enhancing applications in real-time rendering, image processing, and visualization. Mipmapping, a complementary technique, prefilters textures into a pyramid of resolutions to optimize sampling and reduce aliasing, making it essential for efficient real-time graphics. As of 2025, advancements continue with machine-learning-based and adaptive techniques for improved efficiency and quality. Overall, texture filtering balances visual fidelity with hardware constraints, remaining a cornerstone of high-quality real-time rendering since its foundational implementations in the early 1980s.

Background and Motivation

Texture Mapping Basics

Texture mapping is a fundamental technique in computer graphics that involves projecting a two-dimensional image, referred to as a texture, onto the surface of a three-dimensional model, typically composed of polygons, by associating texture coordinates with points on the surface. The texture itself consists of discrete elements known as texels, which serve as the atomic units analogous to pixels in a standard raster image, storing color or other attribute values that contribute to the surface's appearance. Texture coordinates, commonly denoted as (u, v), are normalized values in the range [0, 1] that parameterize the texture space and map directly to positions on the texture image, with (0, 0) corresponding to one corner and (1, 1) to the opposite corner. These coordinates are typically assigned to the vertices of polygonal surfaces during modeling. In the rendering pipeline, they are interpolated across the surface to determine the texture location for each rendered fragment, with the interpolation performed in a perspective-correct manner to account for the projection from 3D world space to 2D screen space, ensuring undistorted mapping as surfaces recede from the viewer. The concept of texture mapping originated in the 1970s, pioneered by Edwin Catmull in his 1974 PhD thesis at the University of Utah, where he introduced it as a method to add surface detail to curved surfaces rendered via subdivision algorithms, extending early rasterization and shading techniques toward more realistic imagery. This innovation allowed for efficient application of detailed patterns without increasing geometric complexity, marking a significant advancement in computer-generated image realism. In the basic sampling process, the interpolated (u, v) coordinates for a given screen pixel or fragment are used to fetch the corresponding texel color(s) from the texture, which are then combined—often directly or via simple averaging—to compute the final color contribution for that fragment in the rendered image. This direct sampling approach forms the foundation of texture application but can introduce visual discrepancies due to mismatches between texel density and screen pixel density, necessitating filtering techniques to achieve smooth results.
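
As a concrete illustration of this direct sampling step, the following C++ sketch maps a fragment's interpolated (u, v) pair to a single texel fetch; the Texture2D type, its clamp addressing, and the RGBA8 layout are assumptions made for the example rather than details of any particular API.

```cpp
// Minimal sketch of a (u, v) -> texel fetch; names and layout are illustrative.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Texture2D {
    int width = 0, height = 0;
    std::vector<uint32_t> texels;          // row-major RGBA8 texels

    // Map normalized (u, v) in [0, 1] to a texel index and return its value.
    uint32_t fetch(float u, float v) const {
        // Clamp addressing; real samplers also offer wrap and mirror modes.
        int x = std::clamp(static_cast<int>(u * width),  0, width  - 1);
        int y = std::clamp(static_cast<int>(v * height), 0, height - 1);
        return texels[y * width + x];
    }
};

int main() {
    Texture2D tex{4, 4, std::vector<uint32_t>(16)};
    for (int i = 0; i < 16; ++i) tex.texels[i] = 0xFF000000u | i; // dummy colors
    // A fragment whose interpolated coordinates are (0.6, 0.3) reads one texel.
    std::printf("texel = 0x%08X\n", (unsigned)tex.fetch(0.6f, 0.3f));
}
```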

Sampling Challenges and Artifacts

In texture mapping, two primary sampling scenarios arise: minification and magnification. Minification occurs when a single screen pixel corresponds to multiple texels in the texture, leading to undersampling of the texture's detail and potential loss of high-frequency information. Conversely, magnification happens when one texel spans multiple screen pixels, resulting in oversampling and typically causing blurring rather than aliasing, though both can degrade image quality if not addressed. These mismatches introduce various aliasing artifacts, where high-frequency details are incorrectly represented as lower-frequency patterns due to insufficient sampling. Spatial aliasing manifests as jagged edges or blocky appearances in static images, while temporal aliasing produces shimmering or flickering effects during motion, such as crawling edges on moving objects. Moiré patterns emerge from the interference between repetitive texture elements and the pixel sampling grid, creating wavy or unwanted geometric distortions, particularly in fine patterns like checkerboards or stripes. The Nyquist-Shannon sampling theorem underpins these issues, stating that to accurately reconstruct a signal without aliasing, it must be sampled at a rate at least twice that of its highest frequency component. In texture sampling, this implies that high-frequency details like sharp edges or periodic motifs require at least two samples per cycle; falling below this threshold folds high frequencies into lower ones, producing artifacts. For example, distant textures may appear blocky with missing details, while animated scenes exhibit shimmering as camera or object motion varies the sampling rate across frames. Naive approaches to mitigate these problems, such as supersampling by rendering at higher resolutions and downsampling, incur significant performance costs, as the computational expense scales with the number of additional samples needed—often requiring averaging over thousands of texels for large footprints near horizons or silhouettes. This makes real-time applications impractical without more efficient strategies.
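
The Nyquist argument above can be turned into a simple minification check: with texture-coordinate derivatives expressed in texel units, more than one texel of travel per pixel step leaves fewer than the required two samples per cycle of the finest representable detail (one cycle spans two texels), so aliasing becomes possible. The sketch below is illustrative only, and the function and parameter names are invented for it.

```cpp
// Illustrative check for aliasing risk from screen-space derivatives of
// texel-space coordinates (du_dx etc. are in texels per pixel).
#include <algorithm>
#include <cmath>
#include <cstdio>

bool risksAliasing(float du_dx, float dv_dx, float du_dy, float dv_dy) {
    float footprintX = std::sqrt(du_dx * du_dx + dv_dx * dv_dx);
    float footprintY = std::sqrt(du_dy * du_dy + dv_dy * dv_dy);
    float texelsPerPixel = std::max(footprintX, footprintY);
    // More than one texel between adjacent pixel samples means the finest
    // texture detail (one cycle per two texels) is sampled below the Nyquist
    // rate, so high frequencies can fold back as aliasing.
    return texelsPerPixel > 1.0f;
}

int main() {
    // A distant, receding floor: texture coordinates change fast per pixel.
    std::printf("aliasing risk: %d\n", risksAliasing(3.2f, 0.1f, 0.0f, 5.7f));
}
```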

Mipmapping Fundamentals

Mipmap Pyramid Construction

A mipmap chain, also known as a mipmap pyramid, consists of a series of precomputed textures derived from an original base texture, where each subsequent level is reduced to half the resolution in each dimension of the previous level. The base level (level 0) retains the full resolution of the original texture, level 1 is one-quarter the area (half the width and height for 2D textures), level 2 is one-sixteenth, and so on, with level k having dimensions scaled by 1/2^k until reaching a 1x1 (or equivalent minimal size) level. Mipmapping was introduced by Lance Williams in 1983 in his paper "Pyramidal parametrics." This hierarchical structure enables efficient sampling at varying distances by selecting appropriate levels during rendering. Mipmap levels are typically generated through downsampling algorithms that filter the base texture to produce lower-resolution versions, with the box filter being the standard and simplest method. In a box filter for 2D textures, each texel in a given level is computed by averaging the values of a 2x2 block of texels from the preceding higher-resolution level, ensuring uniform weighting across the samples. For 3D textures (volume mipmaps), the box filter extends to averaging 2x2x2 (eight) neighboring texels. More advanced filters, such as Gaussian filters, can be applied instead to achieve smoother transitions and reduced aliasing by using weighted kernels that emphasize central texels more heavily, though they increase computational cost during generation. The total storage for a complete mipmap pyramid approximates 1.33 times (precisely 4/3) the size of the base texture, arising from summing the level areas: 1 + 1/4 + 1/16 + ..., which approaches 1 / (1 - 1/4) = 4/3. For 3D mipmaps, the factor is 8/7 ≈ 1.14 times the base volume due to the 1/8 scaling per level. This overhead is managed efficiently in graphics memory, as the pyramid enables bandwidth and performance savings during runtime sampling. Graphics APIs provide automated mipmap generation to simplify pyramid construction, often using hardware-accelerated box filtering applied iteratively from the base level. In OpenGL (version 3.0 and later), the glGenerateMipmap function computes all levels from the bound base texture for targets like GL_TEXTURE_2D, GL_TEXTURE_3D, and GL_TEXTURE_2D_ARRAY, replacing lower levels with filtered reductions while preserving the base. Similarly, DirectX 11's ID3D11DeviceContext::GenerateMips recursively generates mipmaps from the largest level of a shader resource view, supporting 1D/2D/3D textures and arrays with compatible formats. These functions handle the full chain automatically once invoked after uploading the base texture. For non-power-of-two (NPOT) textures, generation requires careful handling to avoid irregular level sizes, as dimensions are floored or ceiled during halving (e.g., a 127x127 base yields levels such as 63x63 and 31x31). OpenGL has supported NPOT mipmaps since version 2.0 for complete textures, using adjusted box filters or trapezoidal variants to maintain quality without padding to power-of-two sizes. Direct3D similarly accommodates NPOT textures via resource flags like D3D11_RESOURCE_MISC_GENERATE_MIPS. In texture arrays, mipmaps are generated independently for each layer, ensuring the pyramid structure applies per array element without cross-layer mixing.
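
A software version of the box-filtered pyramid described above might look like the following sketch; it uses a single-channel float texture for brevity, and the MipLevel/buildMipChain names are illustrative rather than taken from any API. Clamping the source indices is one simple way to handle the odd dimensions that arise for NPOT sizes.

```cpp
// Sketch of mipmap chain construction with a 2x2 box filter.
#include <algorithm>
#include <cstdio>
#include <vector>

struct MipLevel {
    int width, height;
    std::vector<float> texels;   // row-major, single channel for brevity
};

std::vector<MipLevel> buildMipChain(MipLevel base) {
    std::vector<MipLevel> chain{std::move(base)};
    while (chain.back().width > 1 || chain.back().height > 1) {
        const MipLevel& src = chain.back();
        MipLevel dst{std::max(src.width / 2, 1), std::max(src.height / 2, 1), {}};
        dst.texels.resize(static_cast<size_t>(dst.width) * dst.height);
        for (int y = 0; y < dst.height; ++y) {
            for (int x = 0; x < dst.width; ++x) {
                // Average the 2x2 block of the finer level (box filter);
                // clamping the indices also covers odd (NPOT) dimensions.
                int x0 = 2 * x, y0 = 2 * y;
                int x1 = std::min(x0 + 1, src.width - 1);
                int y1 = std::min(y0 + 1, src.height - 1);
                dst.texels[y * dst.width + x] =
                    0.25f * (src.texels[y0 * src.width + x0] +
                             src.texels[y0 * src.width + x1] +
                             src.texels[y1 * src.width + x0] +
                             src.texels[y1 * src.width + x1]);
            }
        }
        chain.push_back(std::move(dst));
    }
    return chain;
}

int main() {
    MipLevel base{8, 8, std::vector<float>(64, 1.0f)};
    auto chain = buildMipChain(std::move(base));
    std::printf("levels: %zu\n", chain.size());   // 8x8 -> 4 levels: 8, 4, 2, 1
}
```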

Level of Detail Selection

Level of detail (LOD) selection in texture filtering determines the appropriate mipmap level to sample during rendering by estimating the projected size of texels in screen space, thereby balancing detail preservation with aliasing reduction. This process relies on computing the rate at which texture coordinates change across screen pixels, using the partial derivatives of the texture coordinates with respect to screen position (∂u/∂x, ∂u/∂y, ∂v/∂x, ∂v/∂y) to quantify the sampling footprint. The LOD value, denoted as λ, is calculated using the formula: \lambda = \log_2 \left( \rho \right) where ρ represents the minification factor derived from the magnitude of the screen-space texture derivatives. Specifically, for a 2D texture, ρ is computed as the maximum of the magnitudes along the x and y screen directions: \rho = \max \left( \sqrt{ \left( \frac{\partial u}{\partial x} \right)^2 + \left( \frac{\partial v}{\partial x} \right)^2 }, \sqrt{ \left( \frac{\partial u}{\partial y} \right)^2 + \left( \frac{\partial v}{\partial y} \right)^2 } \right) These derivatives are evaluated at the fragment's screen position, reflecting how rapidly the texture coordinates vary over a pixel, which indicates whether the texture is being minified (ρ > 1, selecting higher LOD levels for smaller details) or magnified (ρ < 1, favoring lower LOD levels). LOD bias provides a user-adjustable offset to refine this selection, added to the base λ as λ' = λ + bias, where negative values sharpen the result by favoring higher-resolution mipmaps (e.g., -0.5 for crisper details at distance) and positive values introduce blurring to reduce shimmer. This bias can be set per texture via parameters such as TEXTURE_LOD_BIAS in graphics APIs, allowing artists to tune visual quality without altering the underlying pyramid structure. In shader-based rendering, perspective-correct interpolation ensures accurate LOD computation: the rasterizer interpolates u/w, v/w, and 1/w linearly in screen space and divides per fragment to recover (u, v), preventing distortions in 3D scenes where surfaces recede into the distance; the derivatives are then computed on these perspective-corrected coordinates to maintain correct sampling rates across the view frustum. To handle edge cases, the final LOD is clamped between user-defined minimum and maximum values (e.g., TEXTURE_MIN_LOD and TEXTURE_MAX_LOD, often ranging from 0 to the pyramid's depth), preventing over-minification that could select excessively low-resolution levels, or extrapolation beyond the base level 0, which would otherwise cause invalid sampling or unnecessary magnification artifacts.
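
A minimal sketch of this selection logic, assuming the derivatives are already expressed in texel units and using invented parameter names for the bias and clamp range:

```cpp
// Sketch of LOD computation: rho from screen-space derivatives of texel-space
// coordinates, log2 to get lambda, then bias and clamping.
#include <algorithm>
#include <cmath>
#include <cstdio>

float computeLod(float du_dx, float dv_dx, float du_dy, float dv_dy,
                 float lodBias, float minLod, float maxLod) {
    float rho = std::max(std::sqrt(du_dx * du_dx + dv_dx * dv_dx),
                         std::sqrt(du_dy * du_dy + dv_dy * dv_dy));
    float lambda = std::log2(std::max(rho, 1e-8f));   // rho < 1 gives negative lambda
    return std::clamp(lambda + lodBias, minLod, maxLod);
}

int main() {
    // Texture shrinks by roughly 4x per pixel horizontally: expect lambda near 2.
    std::printf("lod = %.2f\n", computeLod(4.0f, 0.0f, 0.0f, 3.0f, 0.0f, 0.0f, 10.0f));
}
```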

Core Isotropic Filtering Methods

Nearest-Neighbor Interpolation

Nearest-neighbor interpolation, also known as point sampling, is the simplest form of texture filtering in computer graphics, where the color value of the texel closest to the computed texture coordinates (UV position) is directly selected without any blending or averaging. This method selects the texel whose center lies closest to the texture coordinates (typically by scaling the normalized coordinates into texel space and truncating to an integer index), making it computationally inexpensive as it requires only a single texel access per fragment. When combined with mipmapping, nearest-neighbor interpolation selects the nearest texel from the most appropriate level of detail (LOD) in the mipmap pyramid to address minification artifacts. In OpenGL, this is implemented via the GL_NEAREST_MIPMAP_NEAREST parameter, which first chooses the mipmap level closest to the pixel size and then applies nearest-neighbor sampling within that level, helping to reduce aliasing during texture minification. The primary advantages of nearest-neighbor interpolation include its zero additional computational cost beyond the basic texture fetch, enabling exact texel access and high performance in real-time rendering scenarios. It is particularly suitable for applications requiring unfiltered magnification, where a sharp, blocky appearance is desired, such as in pixel art rendering to preserve the original pixelated style. However, this method produces noticeable disadvantages, including blocky artifacts during texture magnification and severe aliasing, such as moiré patterns, under minification without mipmaps, leading to visually distracting results. In terms of implementation, nearest-neighbor interpolation involves a direct memory fetch from the GPU's texture unit, specified in APIs like OpenGL using the GL_NEAREST mode for both the magnification and minification filters. This hardware-accelerated operation ensures efficient processing with no interpolation overhead, making it the cheapest filtering mode available in most graphics pipelines.
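
In software terms, nearest-mipmap-nearest sampling reduces to rounding the LOD to pick one level and truncating the scaled coordinates to pick one texel, as in this illustrative sketch (the Level type and sampleNearest name are stand-ins, not an actual API):

```cpp
// Software sketch of GL_NEAREST_MIPMAP_NEAREST-style sampling: one level, one texel.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Level { int w, h; std::vector<float> texels; };

float sampleNearest(const std::vector<Level>& mips, float u, float v, float lod) {
    int li = std::clamp(static_cast<int>(std::lround(lod)), 0,
                        static_cast<int>(mips.size()) - 1);
    const Level& L = mips[li];
    int x = std::clamp(static_cast<int>(u * L.w), 0, L.w - 1);
    int y = std::clamp(static_cast<int>(v * L.h), 0, L.h - 1);
    return L.texels[y * L.w + x];          // one fetch, no blending
}

int main() {
    std::vector<Level> mips{{2, 2, {0, 1, 2, 3}}, {1, 1, {1.5f}}};
    std::printf("%.1f\n", sampleNearest(mips, 0.9f, 0.1f, 0.2f)); // level 0, texel (1, 0)
}
```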

Bilinear Filtering

Bilinear filtering performs 2D linear interpolation across the four texels nearest to a given texture coordinate, yielding smoother results than point sampling for isotropic texture access. The process begins by determining the integer and fractional components of the texture coordinates in the u and v directions, with the fractional offsets s and t ranging from 0 to 1. These offsets are used to compute bilinear weights for blending the four surrounding texels, denoted as c_{00}, c_{10}, c_{01}, and c_{11}. The resulting color value is given by the formula: \text{color} = (1-s)(1-t) c_{00} + s(1-t) c_{10} + (1-s) t c_{01} + s t c_{11} This interpolation can be implemented separably by first performing horizontal linear interpolation and then vertical, or directly via the four-term weighted sum. Bilinear filtering is applied within a single mipmap level, selected based on the level of detail (LOD), and supports both texture magnification—where screen pixels are larger than texels—and minification—where multiple texels map to a single pixel. In terms of quality, bilinear filtering effectively reduces the blocky artifacts associated with nearest-neighbor sampling but introduces blurring of high-frequency details and can still produce aliasing artifacts during severe minification, as it assumes uniform texel contribution without accounting for perspective distortion. All modern GPUs provide hardware support for bilinear filtering through the GL_LINEAR parameter in OpenGL, which is a core feature of the API and enables efficient 2x2 texel interpolation during rendering. Common variants include using bilinear filtering exclusively for magnification by setting GL_TEXTURE_MAG_FILTER to GL_LINEAR while applying GL_NEAREST for minification to preserve sharpness in distant views, or pairing it with nearest-neighbor selection in performance-critical applications to minimize computational overhead. This method forms the foundation for extensions like trilinear filtering, which blends bilinear results across adjacent mipmap levels.
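
The weighted sum above translates directly into code; the sketch below assumes texel centers at half-integer positions and clamp addressing at the borders, with illustrative type and function names:

```cpp
// Sketch of bilinear weighting on a single mip level.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Level { int w, h; std::vector<float> texels; };

float texel(const Level& L, int x, int y) {
    x = std::clamp(x, 0, L.w - 1);
    y = std::clamp(y, 0, L.h - 1);
    return L.texels[y * L.w + x];
}

float sampleBilinear(const Level& L, float u, float v) {
    // Shift by half a texel so weights are measured between texel centers.
    float tu = u * L.w - 0.5f, tv = v * L.h - 0.5f;
    int x0 = static_cast<int>(std::floor(tu)), y0 = static_cast<int>(std::floor(tv));
    float s = tu - x0, t = tv - y0;        // fractional offsets in [0, 1)
    return (1 - s) * (1 - t) * texel(L, x0,     y0)
         +      s  * (1 - t) * texel(L, x0 + 1, y0)
         + (1 - s) *      t  * texel(L, x0,     y0 + 1)
         +      s  *      t  * texel(L, x0 + 1, y0 + 1);
}

int main() {
    Level L{2, 2, {0.0f, 1.0f, 2.0f, 3.0f}};
    std::printf("%.2f\n", sampleBilinear(L, 0.5f, 0.5f));   // center: average = 1.50
}
```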

Trilinear Filtering

Trilinear filtering extends bilinear filtering by incorporating interpolation across mipmap levels to achieve smoother transitions during texture minification. It selects the two mipmap levels closest to the computed level of detail (LOD), applies bilinear interpolation within each level to sample the texture, and then performs linear interpolation between these two bilinearly filtered results based on the fractional portion δ of the LOD value. The resulting color is computed as \text{color} = (1 - \delta) \cdot \text{bilinear}_n + \delta \cdot \text{bilinear}_{n+1}, where n is the integer LOD and bilinear operations use the nearest texels in each mipmap. This method eliminates abrupt "popping" artifacts that occur in discrete mipmap selection, where sudden switches between levels cause visible quality jumps, particularly under camera motion or object movement. By blending samples from adjacent mipmaps, trilinear filtering provides continuous LOD transitions, resulting in smoother visual quality and reduced aliasing in dynamic scenes. The computational cost of trilinear filtering is approximately twice that of bilinear filtering alone, as it requires performing two bilinear interpolations (eight texel samples total) plus an additional linear blend. This increased overhead arises from the dual mipmap accesses and extra arithmetic operations, making it more demanding on GPU texture units. In graphics APIs, trilinear filtering is implemented via the minification filter mode GL_LINEAR_MIPMAP_LINEAR in OpenGL and equivalent settings in DirectX, which automatically handle the mipmap selection and blending during texture sampling. Despite its advantages, trilinear filtering assumes an isotropic (circular) pixel footprint in texture space, leading to over-blurring of details when textures are viewed obliquely, where the actual footprint elongates. This limitation can cause softened or indistinct features on angled surfaces, such as distant ground planes or walls.
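
A sketch of the blend described above, reusing a small bilinear helper and assuming the chain holds at least two levels; names are illustrative:

```cpp
// Sketch of trilinear blending between two adjacent mip levels.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Level { int w, h; std::vector<float> texels; };

float bilinear(const Level& L, float u, float v) {
    float tu = u * L.w - 0.5f, tv = v * L.h - 0.5f;
    int x0 = (int)std::floor(tu), y0 = (int)std::floor(tv);
    float s = tu - x0, t = tv - y0;
    auto tx = [&](int x, int y) {
        return L.texels[std::clamp(y, 0, L.h - 1) * L.w + std::clamp(x, 0, L.w - 1)];
    };
    return (1 - s) * (1 - t) * tx(x0, y0) + s * (1 - t) * tx(x0 + 1, y0)
         + (1 - s) * t * tx(x0, y0 + 1)   + s * t * tx(x0 + 1, y0 + 1);
}

float sampleTrilinear(const std::vector<Level>& mips, float u, float v, float lod) {
    // Assumes the chain has at least two levels.
    lod = std::clamp(lod, 0.0f, float(mips.size() - 1));
    int n = std::min(int(lod), int(mips.size()) - 2);   // lower of the two levels
    float delta = lod - float(n);                       // fractional part of the LOD
    return (1 - delta) * bilinear(mips[n], u, v) + delta * bilinear(mips[n + 1], u, v);
}

int main() {
    std::vector<Level> mips{{2, 2, {0, 2, 4, 6}}, {1, 1, {5.0f}}};
    // Blends level 0 (3.0 at center) and level 1 (5.0) halfway -> 4.00.
    std::printf("%.2f\n", sampleTrilinear(mips, 0.5f, 0.5f, 0.5f));
}
```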

Anisotropic and Advanced Filtering

Anisotropic Filtering Techniques

In texture mapping, surfaces viewed at grazing angles cause texels to project as elongated ellipses in screen space rather than isotropic squares, resulting in directional blurring where detail is lost along the major axis of elongation while potentially preserving sharpness along the minor axis. This anisotropy problem arises from the perspective transformation, which unevenly distorts texture coordinates based on the view angle relative to the surface normal. A foundational approach to mitigate this is elliptical weighted average (EWA) filtering, introduced by Ned Greene and Paul Heckbert in 1986. EWA models the pixel footprint in texture space as an ellipse derived from the Jacobian of the texture mapping, then samples multiple points along the ellipse's major and minor axes. These samples are weighted by a Gaussian kernel centered on the ellipse to reconstruct the filtered texel value, ensuring anti-aliased results that adapt to the degree of anisotropy without over-blurring perpendicular directions. This method provides high-quality filtering for complex projections, though its computational demands limited early adoption to software implementations. Ripmaps, proposed by Heckbert in 1989, extend traditional mipmapping by constructing independent pyramid chains along the u and v texture axes, enabling separate resolution selection for each direction as an approximation of anisotropic filtering. This structure allows bilinear or trilinear sampling within each directional chain, better matching the elliptical footprint than a single isotropic level and reducing aliasing on angled surfaces by tailoring detail to the dominant stretching direction, though it increases memory usage compared to standard mipmaps and is not used in typical modern GPU hardware implementations, which favor multi-sampled approaches. Contemporary GPUs integrate anisotropic filtering directly into the texture sampling units, with NVIDIA's implementation supporting up to 16x anisotropy that dynamically computes and applies multiple offset samples along the principal axis. AMD offers comparable modes, often up to 16x, configurable via drivers for automatic application across rendering pipelines. These hardware variants typically build on trilinear filtering as a baseline, enhancing oblique texture clarity with minimal developer intervention. The performance overhead of anisotropic filtering stems from increased sample counts, ranging from 4x to 16x more than isotropic methods depending on the anisotropy factor and hardware setting. Level-of-detail selection occurs per axis to optimize this, using formulas such as \lambda_u = \log_2 \left( \left| \frac{\partial u}{\partial x} \right| + \left| \frac{\partial u}{\partial y} \right| \right) and analogously for \lambda_v, which guide mipmap level choice and the number of linear samples along the elongated direction. This adaptive sampling preserves detail efficiently but can impact fill rate in bandwidth-limited scenarios.
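
One way to picture hardware-style anisotropic sampling is as "footprint assembly": choose the number of taps from the ratio of the footprint's long and short axes (capped at the configured maximum, e.g. 16x), pick the mip LOD from the short axis, and space the taps along the long axis before averaging their filtered fetches. The sketch below only plans the tap positions, works entirely in texel units, and is an illustration rather than any vendor's actual algorithm:

```cpp
// Illustrative anisotropic tap planning; coordinates and derivatives in texel units.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Tap { float u, v, lod; };

int planAnisoTaps(float u, float v,
                  float du_dx, float dv_dx, float du_dy, float dv_dy,
                  float maxAniso, Tap out[], int maxTaps) {
    float lenX = std::sqrt(du_dx * du_dx + dv_dx * dv_dx);
    float lenY = std::sqrt(du_dy * du_dy + dv_dy * dv_dy);
    float major = std::max(lenX, lenY), minor = std::max(std::min(lenX, lenY), 1e-8f);
    float ratio = std::min(major / minor, maxAniso);         // degree of anisotropy
    int n = std::min(std::max(1, (int)std::ceil(ratio)), maxTaps);
    float lod = std::log2(std::max(major / ratio, 1e-8f));   // LOD from the short axis
    // Direction of the long (major) axis in texture space.
    float axU = (lenX >= lenY) ? du_dx : du_dy;
    float axV = (lenX >= lenY) ? dv_dx : dv_dy;
    for (int i = 0; i < n; ++i) {
        float w = (n == 1) ? 0.0f : (i / float(n - 1) - 0.5f); // spread in [-0.5, 0.5]
        out[i] = {u + w * axU, v + w * axV, lod};
    }
    return n;   // caller averages n trilinear/bilinear fetches at these positions
}

int main() {
    Tap taps[16];
    int n = planAnisoTaps(32.0f, 32.0f, 8.0f, 0.0f, 0.0f, 2.0f, 16.0f, taps, 16);
    std::printf("taps = %d, per-tap lod = %.2f\n", n, taps[0].lod); // 4 taps, lod = 1.00
}
```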

Percentage-Closer Filtering

Percentage-Closer Filtering (PCF) is a fundamental technique in shadow mapping that mitigates aliasing artifacts by performing multiple depth comparisons per pixel, softening hard shadow edges that arise from the discrete resolution of depth textures. In standard shadow mapping, self-shadowing artifacts known as shadow acne occur due to finite precision in depth comparisons, while excessive bias to counteract acne leads to peter-panning, where shadows detach from casters. PCF reduces these issues by averaging multiple samples from the light-view depth map, computing the fraction (percentage) of samples where the stored depth is closer to the light than the current surface depth, thereby producing smoother transitions without requiring additional bias adjustments. Percentage-Closer Soft Shadows (PCSS) extends PCF to generate physically plausible soft shadows for area light sources using depth textures, by dynamically estimating penumbra sizes based on blocker geometry. The algorithm proceeds in three phases for each shaded pixel: first, a blocker search samples the shadow map (typically 36 times using a Poisson disk kernel) within a region scaled by the light radius and receiver distance to find depths closer to the light than the surface, averaging the blocker depths if any are found; second, if blockers exist, the penumbra width is estimated as w_{\text{penumbra}} = \frac{(d_{\text{receiver}} - d_{\text{blocker}}) \cdot w_{\text{light}}}{d_{\text{blocker}}}, where d_{\text{receiver}} and d_{\text{blocker}} are distances from the light, and w_{\text{light}} is the light radius; third, percentage-closer filtering applies a larger kernel (e.g., 64 Poisson disk samples) proportional to the penumbra to average occlusion ratios, simulating soft transitions. Poisson disk sampling ensures evenly distributed, low-discrepancy points that minimize noise and aliasing in the soft shadow penumbra. This approach builds on mipmapping for shadow map level-of-detail selection to handle varying distances efficiently. As an enhancement to explicit multi-sampling in PCF and PCSS, Variance Shadow Maps (VSM) store the first two moments (mean depth \mu and variance \sigma^2) of depth distributions in the shadow map, enabling hardware-accelerated filtering like bilinear interpolation or mipmapping without per-pixel sample enumeration. During shading, VSM uses Chebyshev's inequality to bound the probability that a random depth in the filtered region exceeds the surface depth: P(\tilde{z} \geq z) \leq \frac{\sigma^2}{\sigma^2 + (z - \mu)^2} if z > \mu, providing an upper bound on visibility that approximates soft shadows. This moment-based method avoids the computational cost of dense sampling in PCSS while supporting large kernels for realistic penumbras. Despite their effectiveness, PCF and PCSS suffer from noise artifacts due to sparse sampling, particularly in low-density blocker regions, and bias in penumbra estimates that can over-soften or distort shadow boundaries. VSM mitigates the sampling cost but introduces light bleeding (falsely lit regions) from variance overestimation at edges and requires careful depth scaling to avoid precision loss. These techniques remain prevalent in real-time rendering engines.
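
The core of PCF is an average of binary depth comparisons, and the VSM alternative replaces that loop with the Chebyshev bound quoted above; both are shown in this small, self-contained sketch with illustrative names and a fixed 3x3 kernel:

```cpp
// Minimal PCF sketch plus the Chebyshev visibility bound used by VSM.
#include <algorithm>
#include <cstdio>
#include <vector>

struct DepthMap { int w, h; std::vector<float> depth; };

float pcf3x3(const DepthMap& sm, int x, int y, float receiverDepth, float bias) {
    int lit = 0, total = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int sx = std::clamp(x + dx, 0, sm.w - 1);
            int sy = std::clamp(y + dy, 0, sm.h - 1);
            // Each tap is a binary depth test; the average softens the edge.
            lit += (receiverDepth - bias <= sm.depth[sy * sm.w + sx]) ? 1 : 0;
            ++total;
        }
    return float(lit) / float(total);   // fraction of "closer" comparisons
}

// VSM: upper bound on the probability the surface is visible, from mean/variance.
float chebyshevVisibility(float mean, float variance, float receiverDepth) {
    if (receiverDepth <= mean) return 1.0f;          // fully lit region
    float d = receiverDepth - mean;
    return variance / (variance + d * d);
}

int main() {
    DepthMap sm{3, 3, std::vector<float>(9, 0.4f)};
    sm.depth[4] = 0.9f;                               // one unoccluded tap
    std::printf("pcf = %.2f, vsm = %.2f\n",
                pcf3x3(sm, 1, 1, 0.5f, 0.005f),
                chebyshevVisibility(0.45f, 0.01f, 0.5f));
}
```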

Modern Developments

Integration with Ray Tracing

In ray tracing, texture sampling occurs at arbitrary intersection points along rays, diverging from the structured grid-based approach of rasterization, where level-of-detail (LOD) selection relies on fixed screen-space derivatives. This shift necessitates on-demand LOD computation during ray traversal, as rays can hit surfaces at varying angles and distances without predefined screen-space footprints, potentially leading to aliasing or blurring if not addressed. Techniques such as ray differentials approximate the ray's footprint by propagating partial derivatives of ray origin and direction with respect to image-plane coordinates through reflections and refractions, enabling mipmapped texture lookups with an appropriate LOD. For more precise filtering, methods using ray cones compute coverage within the ray's projected footprint to model angular spread and derive filter kernels that account for surface curvature. Hardware acceleration for texture filtering in ray-traced pipelines is integrated into modern GPUs, with NVIDIA's RTX GPUs providing dedicated RT cores alongside texture units that support bilinear and trilinear sampling directly in ray shaders. Similarly, AMD's RDNA 2 architecture repurposes texture units for ray intersection testing while retaining standard texture sampling capabilities in shaders for LOD computation. API support includes Vulkan's ray tracing extensions (VK_KHR_ray_tracing_pipeline and related extensions), which expose acceleration structures and shader stages allowing texture operations during ray traversal. The integration enhances rendering of complex effects, such as accurate reflections and refractions on curved surfaces, by enabling filtered sampling that adapts to the ray's angular extent and reduces aliasing in specular highlights. In indirect lighting scenarios, it mitigates noise from sparse ray samples by prefiltering environment textures, improving convergence without excessive ray counts. Post-2020 advancements standardize these operations: DirectX Raytracing (DXR) 1.1, released in 2020, introduces inline ray querying from any shader stage, facilitating seamless sampling within hybrid rasterization-ray tracing workflows. Vulkan 1.3, finalized in 2022, ships alongside the finalized cross-vendor ray tracing extensions, with enhanced bindings for resource access enabling efficient sampling strategies across vendors.
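
A ray-cone style LOD computation can be sketched as follows: grow a cone width with distance, widen the footprint at grazing incidence, convert it to texels, and take the base-2 logarithm. The names, the small-angle approximation, and the constants are assumptions for illustration, not a reproduction of any specific paper's method:

```cpp
// Illustrative ray-cone texture LOD estimate at a hit point.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct RayCone {
    float width;        // cone width at the ray origin
    float spreadAngle;  // growth of the width per unit distance (small-angle approx.)
};

// Footprint in texels at the hit, corrected for incidence (grazing hits widen it),
// then converted to a mip LOD.
float rayConeLod(RayCone cone, float hitDistance, float cosIncidence,
                 float texelsPerWorldUnit) {
    float widthAtHit = cone.width + cone.spreadAngle * hitDistance;
    float footprintTexels = widthAtHit * texelsPerWorldUnit /
                            std::max(cosIncidence, 1e-4f);
    return std::log2(std::max(footprintTexels, 1e-8f));
}

int main() {
    RayCone primary{0.0f, 0.001f};                    // roughly one pixel of spread
    float lod = rayConeLod(primary, 50.0f, 0.7f, 64.0f);
    std::printf("lod = %.2f\n", lod);                 // distant, oblique hit -> coarser mip
}
```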

Machine Learning and Adaptive Methods

Machine learning has emerged as a transformative approach in texture filtering since the late 2010s, enabling adaptive techniques that dynamically optimize sampling and reconstruction based on content and performance constraints. These methods leverage neural networks to predict high-quality texture details from lower-resolution inputs, surpassing traditional fixed-kernel filters by incorporating temporal and spatial context for reduced aliasing and improved efficiency. Unlike earlier deterministic algorithms, AI-driven filtering adjusts sample counts in real time, prioritizing complex textures while minimizing computational overhead in rendering pipelines. A prominent example is NVIDIA's Deep Learning Super Sampling (DLSS) 3, released in 2022, which integrates AI-based upscaling with temporal filtering to adaptively handle texture reconstruction. DLSS 3 employs a convolutional neural network trained on high-fidelity game footage to upscale rendered frames, dynamically adjusting reconstruction based on motion vectors and depth buffers for smoother transitions across levels of detail (LODs). This results in enhanced texture sharpness and reduced artifacts, such as shimmering on fine details, while boosting frame rates by up to 4x in supported titles. Subsequent variants, like DLSS 3.5 in 2023, further refine temporal accumulation to preserve texture fidelity during high-motion scenes. In January 2025, NVIDIA introduced DLSS 4 for GeForce RTX 50 Series GPUs, featuring multi-frame generation and enhanced super resolution with improved ray reconstruction for better texture denoising and detail preservation in ray-traced scenes. Neural compression represents another key advancement, allowing on-the-fly generation of mipmaps to cut storage needs without quality loss. In 2021, the NeuMIP framework introduced multi-resolution neural materials, using coordinate-based neural networks to regress material appearances across LODs, effectively creating compact representations that expand during rendering. This approach compresses data by up to 16x compared to traditional mipmapping, enabling filtering of complex surfaces like procedurally generated environments. By learning hierarchical features, NeuMIP avoids the storage overhead of precomputed mipmaps, adapting compression ratios to content complexity. Machine learning-based denoising has also integrated with texture filtering to mitigate noise in advanced rendering, particularly when combined with sparse sampling. Intel's Xe Super Sampling (XeSS), launched in 2022, uses an AI model for temporal super-resolution that denoises textures by analyzing multi-frame data, reducing artifacts from low-sample renders while maintaining edge details. In October 2025, Intel released XeSS 3, expanding multi-frame generation support to all prior XeSS 2 titles and enhancing AI-driven upscaling for improved texture stability on a wider range of GPUs. Similarly, AMD's FidelityFX Super Resolution 3 (FSR 3), introduced in 2023, incorporates optical flow estimation to enhance frame interpolation, aiding denoising of textured elements by predicting motion between frames and filtering out inconsistencies. AMD's FSR 4, launched in 2025, introduces machine learning-based upscaling for further temporal stability and texture quality improvements in over 75 titles. These techniques achieve frame rates exceeding 60 FPS in demanding scenarios, such as ray-traced games, by adaptively blending denoised texture contributions.
Despite these gains, challenges persist in deploying ML for texture filtering, including the need for extensive high-quality datasets to generalize across diverse textures and the introduction of latency from neural inference in real-time pipelines. Training requires millions of paired low- and high-resolution frames, often sourced from offline renders, which can bias models toward specific content or art styles. Additionally, ensuring low-latency execution on GPUs remains critical, as even optimized networks like those in DLSS can add 1-2 ms per frame, necessitating hardware acceleration such as Tensor Cores. These hurdles highlight ongoing research into lightweight models and broader hardware support to extend adoption beyond high-end systems.
