
Reflection mapping

Reflection mapping, also known as environment mapping, is a technique that approximates the appearance of reflective or refractive surfaces by projecting a precomputed image of the surrounding environment onto an object, using vectors derived from surface normals and viewer position to index into the environment map. This method enables efficient simulation of mirror-like reflections without the computational cost of full ray tracing, assuming the environment is distant and static relative to the reflecting object. The technique was first introduced by James F. Blinn and Martin E. Newell in their 1976 paper "Texture and Reflection in Computer Generated Images," where they described using an intensity map projected onto a surrounding sphere to compute reflected light intensities for curved surfaces, providing accurate normals via a subdivision algorithm. Early implementations focused on sphere mapping, which stores the environment as a single texture representing reflections on a spherical mirror, but it suffers from sampling artifacts and inefficient use of texture space due to polar coordinate distortions. A significant advancement came in 1986 with Ned Greene's proposal of cube mapping, which divides the environment into six square images corresponding to the faces of a cube centered at the object, offering more uniform sampling, easier generation, and better support for real-time applications like games and simulations. Other variants include dual paraboloid mapping for hemispherical coverage and more recent methods like screen-space reflections for dynamic scenes, though traditional environment mapping excels in interactive rendering by prefiltering the environment for specular highlights and diffuse interreflections. Despite its efficiency, reflection mapping has limitations, such as its inability to handle occlusions, self-reflections, or parallax effects accurately, often requiring hybrid approaches with ray tracing for high-fidelity results in modern pipelines.

Fundamentals

Definition and Purpose

Reflection mapping, also known as environment mapping, is a technique that simulates the appearance of reflections on glossy or curved surfaces by projecting a precomputed two-dimensional image of the surrounding environment onto the object's surface, using the surface normal and viewer direction to index the appropriate environmental radiance. This method models the incoming light from all directions as if the environment were mapped onto an infinitely large sphere surrounding the object, allowing for the approximation of specular highlights and mirror-like effects without tracing individual light paths. The primary purpose of reflection mapping is to achieve efficient rendering of realistic reflections in real-time applications, such as video games and interactive simulations, by avoiding the high computational cost of exact reflection calculations like ray tracing, which require solving integral equations for light transport across the entire scene. Instead, it provides a local approximation that treats the environment as static and distant, enabling fast texture lookups to enhance visual fidelity for materials exhibiting shiny or reflective properties, such as metals, glass, and water surfaces. This approach significantly reduces rendering time while maintaining perceptual realism, making it suitable for hardware-constrained systems. Unlike global illumination techniques, such as ray tracing, which compute accurate light interactions including multiple bounces and inter-object reflections, reflection mapping simplifies the process by ignoring object-environment occlusions and assuming no parallax or motion in the surroundings relative to the reflective surface. Developed in the 1970s to overcome limitations in early hardware, it laid the groundwork for subsequent advancements in interactive rendering. One common implementation involves cube mapping, where the environment is captured across six orthogonal faces of a cube for uniform sampling.

Basic Principles

Reflection mapping builds upon the foundational concept of texture mapping, which assigns two-dimensional coordinates, often denoted as (u, v), to points on a three-dimensional surface to sample colors or patterns from a pre-defined image, thereby enhancing the visual detail of rendered objects without increasing geometric complexity. In reflection mapping, the "texture" is an environment map—a panoramic representation of the surrounding scene—allowing for the shading of reflective surfaces by approximating how light from the environment interacts with an object. The core workflow of reflection mapping involves three primary steps. First, an environment map is captured or generated, typically as a 360-degree panoramic image of the surroundings, which can be obtained through photography (such as using a mirrored sphere), rendering a simulated scene, or artistic creation to represent the distant environment. Second, for a given point on the object's surface, the reflection vector is computed, which indicates the direction from which incoming light would appear to reflect toward the viewer based on the local surface normal and viewing direction. Third, this reflection vector is used to sample the environment map, retrieving the color and intensity that correspond to the reflected light, which is then applied to shade the surface point. This process enables efficient per-pixel shading during rendering. The reflection vector embodies an idealized model of perfect specular reflection, adhering to the law of reflection where the incident angle equals the reflection angle relative to the surface normal, though adapted for approximate computation in real time without tracing actual rays. One early and simple realization of this approach is sphere mapping, which treats the environment as projected onto an imaginary surrounding sphere centered at the object. Reflection mapping operates under key assumptions to maintain computational efficiency: the environment is static, with the object undergoing minimal translation (though rotation is permissible), and reflections are local to the object without accounting for inter-object bounces or occlusion effects. It accommodates various material types—diffuse for scattered light, specular for mirror-like highlights, and glossy for intermediate roughness—by blending contributions from separate precomputed maps for diffuse and specular reflections, weighted by material properties.
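The three-step workflow can be made concrete with a short sketch. The following Python/NumPy fragment is illustrative only: lookup_diffuse and lookup_specular are hypothetical stand-ins for sampling two precomputed environment maps, and the material weights k_d and k_s are assumed inputs rather than part of any particular renderer.

```python
import numpy as np

def reflect(view_dir, normal):
    """Ideal mirror reflection of the view direction V about the surface
    normal N: R = 2(N.V)N - V, with both vectors normalized first."""
    n = normal / np.linalg.norm(normal)
    v = view_dir / np.linalg.norm(view_dir)
    return 2.0 * np.dot(n, v) * n - v

def shade_point(normal, view_dir, lookup_diffuse, lookup_specular, k_d, k_s):
    """Steps 2-3 of the workflow: compute the reflection vector, sample the
    environment, and blend diffuse and specular contributions."""
    r = reflect(view_dir, normal)
    diffuse = lookup_diffuse(normal)   # irradiance indexed by the normal
    specular = lookup_specular(r)      # radiance indexed by the reflection
    return k_d * diffuse + k_s * specular

# Usage with trivial stand-in lookups (a real renderer samples textures here):
gray = lambda direction: np.full(3, 0.5)
color = shade_point(np.array([0.0, 0.0, 1.0]), np.array([0.2, 0.1, 1.0]),
                    gray, gray, k_d=0.7, k_s=0.3)
```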

Mapping Techniques

Sphere Mapping

Sphere mapping, the foundational technique in reflection mapping, simulates mirror-like reflections by projecting the environment onto the inner surface of a virtual sphere that surrounds the reflecting object. Introduced by Blinn and Newell in 1976, this method models the environment as a distant, static backdrop, ignoring effects like occlusion or self-shadowing to enable efficient computation. The spherical projection assumes the object is at the sphere's center, allowing reflections to be approximated solely based on surface normals and viewer position. The setup uses a single 2D texture to store the environment, captured in an equirectangular (latitude-longitude) format that unwraps the sphere into a rectangular image, with horizontal coordinates representing azimuth and vertical ones representing elevation. For rendering, the reflection vector at each surface point is normalized and transformed into spherical coordinates—specifically, latitude and longitude values—for direct texture sampling, blending the result with diffuse lighting to produce the final color. This approach is computationally lightweight, requiring only a vector normalization and coordinate conversion per fragment. Visually, sphere mapping excels at creating seamless, continuous reflections for far-field scenes, such as skies or large indoor spaces, where the infinite-distance assumption holds. However, it introduces noticeable distortions for nearby objects, as the uniform spherical projection warps angular relationships, compressing equatorial regions and exaggerating polar areas. The equirectangular format exacerbates this with singularities at the poles, where infinite stretching occurs, leading to uneven texel distribution and potential aliasing during sampling. Compared to later cube mapping, which mitigates these issues through orthogonal planar faces for more isotropic sampling, sphere mapping's simplicity made it ideal for hardware-limited systems but limits its use in scenarios with close-range or anisotropic environments.
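As a minimal illustration of the lookup just described, the following Python/NumPy sketch converts a reflection vector to equirectangular (latitude-longitude) coordinates and fetches the nearest texel. The axis conventions and the env_map array layout are assumptions, not a fixed standard.

```python
import numpy as np

def latlong_uv(r):
    """Convert a reflection vector to equirectangular UVs:
    longitude (azimuth) -> u, latitude (elevation) -> v."""
    x, y, z = r / np.linalg.norm(r)
    u = 0.5 + np.arctan2(x, -z) / (2.0 * np.pi)          # azimuth wraps horizontally
    v = 0.5 - np.arcsin(np.clip(y, -1.0, 1.0)) / np.pi   # poles land at v = 0 and v = 1
    return u, v

def sample_latlong(env_map, r):
    """Nearest-texel lookup into an (H, W, 3) panorama; note how texels
    crowd near the poles, the distortion discussed above."""
    u, v = latlong_uv(r)
    h, w, _ = env_map.shape
    return env_map[min(int(v * h), h - 1), min(int(u * w), w - 1)]

env_map = np.random.rand(256, 512, 3)   # stand-in for a captured panorama
print(sample_latlong(env_map, np.array([0.3, 0.5, -0.8])))
```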

Cube Mapping

Cube mapping, a prominent technique in reflection mapping, involves projecting the surrounding environment onto the six faces of an imaginary cube centered at the reflecting object, creating a 360-degree representation akin to a skybox that captures views in all directions. This approach allows for vector-based sampling, where a reflection or view direction is used to query the appropriate cube face and compute the corresponding texture coordinates, enabling accurate simulation of environmental reflections without relying on spherical coordinates. The method was first described in detail by Miller and Hoffman in 1984, who proposed storing reflection maps across six cube faces for efficient lookup in illumination computations. In setup, the cube map is typically constructed from six square textures—either as separate images or arranged in a cross-shaped layout for single-texture binding in graphics APIs like OpenGL—or generated dynamically by rendering the scene from the cube's center toward each face. The reflection vector, computed from the surface normal and incident light or view direction, determines the target face by identifying the axis with the largest absolute component, after which UV coordinates are derived by normalizing the remaining components and scaling and biasing them to the [0,1] range on that face. This process, formalized by Greene in 1986 using 90-degree perspective projections for each face, ensures perspective-correct sampling from the cube's interior. Cube maps support dynamic updates, such as rotating the entire map to simulate object movement relative to the environment, which is particularly useful in interactive applications. Compared to earlier sphere mapping techniques, cube mapping provides uniform sampling across directions with minimal distortion, as each face uses a linear perspective projection rather than the warping inherent in cylindrical or spherical unwraps, resulting in more accurate reflections especially near edges. For filtering, cube maps employ mipmapping to handle multi-resolution levels for distant or low-detail reflections, combined with bilinear filtering within each face to smooth transitions and reduce aliasing on glossy surfaces; trilinear filtering extends this by interpolating between mipmap levels for seamless blending. In practice, this technique is widely adopted in modern game engines, such as Unity and Unreal Engine, where it facilitates real-time reflections on dynamic objects like vehicles or architectural elements by capturing scene cubemaps on the fly.
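The face-selection and scale-and-bias step can be sketched as follows in Python. The per-face sign conventions here follow one common layout (similar to OpenGL's); real APIs define their own orientations, so treat the details as illustrative.

```python
import numpy as np

def cube_lookup(r):
    """Return (face, u, v) for a direction r: pick the axis with the largest
    absolute component, divide the other two by it, remap to [0, 1]."""
    x, y, z = r
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:                 # +X or -X face dominates
        face = '+x' if x > 0 else '-x'
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:                            # +Y or -Y face dominates
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:                                     # +Z or -Z face dominates
        face = '+z' if z > 0 else '-z'
        u, v = (x / az if z > 0 else -x / az), -y / az
    return face, 0.5 * (u + 1.0), 0.5 * (v + 1.0)  # scale and bias to [0, 1]

# Example: a direction dominated by +x selects the +x face.
print(cube_lookup(np.array([1.0, 0.2, -0.3])))
```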

HEALPix and Other Methods

HEALPix, or Hierarchical Equal Area isoLatitude Pixelization, is a spherical pixelization scheme originally developed for analyzing cosmic microwave background radiation data, where it enables efficient discretization and fast analysis of spherical datasets. In computer graphics, it has been adapted for environment mapping to provide isotropic sampling of environments, partitioning the sphere into 12 base regions that subdivide hierarchically into equal-area pixels, each spanning identical solid angles for uniform coverage without polar singularities. This setup facilitates high-fidelity reflections in simulations by mapping a 360-degree environment onto a single rectangular texture, supporting mipmapping and compression while preserving visual details comparable to higher-resolution cubemaps, such as using 90×90 pixels per base quad for approximately 97,200 total pixels versus 98,304 for a 128×128 cubemap. Paraboloid mapping employs dual paraboloid projections to cover the full sphere using two hemispherical textures, projecting directions onto paraboloid surfaces centered at the reflection point for efficient environment sampling (a minimal sketch of this lookup appears after the comparison table below). Introduced as an alternative to cubic methods, it simplifies rendering by requiring only two texture updates instead of six, reducing memory and computational overhead while enabling vertex-shader implementations for real-time applications. This approach balances quality and performance, though it can introduce minor filtering artifacts at the seam between paraboloids. Cylindrical mapping projects the spherical environment onto a cylinder unrolled into a rectangular texture, typically using azimuthal angle for horizontal coordinates and latitude for vertical, making it suitable for panoramic scenes with horizontal dominance. In reflection mapping, it indexes textures based on the reflection vector's cylindrical coordinates, providing a straightforward way to handle 360-degree surroundings like indoor or urban environments, though it suffers from stretching near the poles. HEALPix finds unique applications in astronomical visualizations, where its equal-area partitioning supports accurate rendering of all-sky data, such as star fields or radiation maps, adapted for GPUs to simulate isotropic reflections in scientific simulations. Paraboloid mapping is particularly efficient for mobile rendering, enabling soft shadows and reflections on resource-constrained devices through techniques like concentric spherical representations combined with dual paraboloid updates. Cylindrical mapping excels in panoramic scene rendering, commonly used for immersive environments in VR or architectural visualizations.
| Method | Pros vs. Sphere/Cube | Cons vs. Sphere/Cube |
|---|---|---|
| HEALPix | Uniform solid-angle sampling reduces distortion for high-fidelity isotropic reflections; single texture simplifies management and supports compression. | More complex lookup computation (roughly 20 lines of shader code, hierarchical); less native hardware support than cubemaps. |
| Paraboloid | Fewer textures (2 vs. 6 for cube) lower memory and update costs; efficient for mobile and vertex shading. | Potential seams at the hemisphere join require special filtering; less uniform than cube for full-sphere coverage. |
| Cylindrical | Simple for panoramic horizontals; easy unrolling for 360-degree photos. | Severe polar distortion unlike cube's even distribution; unsuitable for vertical-heavy scenes. |
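For the paraboloid entry above, here is a minimal sketch of the dual paraboloid lookup, assuming the widely used formulation that projects a direction d onto the front or back map via uv = d_xy / (1 + |d_z|); the hemisphere split and remapping are conventions, not a fixed standard.

```python
import numpy as np

def dual_paraboloid_uv(d):
    """Return (which_map, u, v) for a normalized direction d, where the
    'front' map covers the hemisphere with d_z >= 0."""
    d = d / np.linalg.norm(d)
    front = d[2] >= 0.0
    denom = 1.0 + abs(d[2])          # paraboloid projection divisor
    u = 0.5 + 0.5 * d[0] / denom     # remap from [-1, 1] to [0, 1]
    v = 0.5 + 0.5 * d[1] / denom
    return ('front' if front else 'back'), u, v

# Directions near the equator (d_z ~ 0) land near each map's rim, which is
# where the seam artifacts discussed above can appear.
print(dual_paraboloid_uv(np.array([0.7, 0.1, 0.7])))
```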

Mathematical Foundations

Coordinate Systems and Transformations

In reflection mapping, computations occur across multiple coordinate systems to align surface properties with the environment representation. Object space defines local vertex positions and surface normals relative to the reflective object. These are transformed to world space via the model matrix, where the reflection vector is calculated to match the fixed orientation of the environment map, typically assuming infinite, static surroundings. Texture space then parameterizes the environment map itself, such as a 2D texture for spherical mapping or six faces for cubic mapping. The view matrix influences environment orientation by transforming the world-space reflection directions when simulating viewer movement, ensuring consistent reflections across scene rotations, though static maps often bypass per-frame view updates for efficiency.

The core reflection vector \vec{R} is derived in world space from the view direction \vec{I} (from the surface point to the viewer) and the normalized surface normal \vec{N}, following the law of reflection: \vec{R} = 2 (\vec{I} \cdot \vec{N}) \vec{N} - \vec{I}. This vector is then normalized to unit length (\|\vec{R}\| = 1) to represent a pure direction for environment sampling, avoiding scale distortions in subsequent mappings. The formula ensures that the angle of incidence equals the angle of reflection about the normal, tracing the path light would take if perfectly mirrored off the surface.

Transformations from the normalized \vec{R} = (x, y, z) to texture coordinates vary by mapping type. For sphere mapping, the coordinates are computed as u = \frac{1}{2} + \frac{x}{2\sqrt{x^2 + y^2 + (z+1)^2}} and v = \frac{1}{2} + \frac{y}{2\sqrt{x^2 + y^2 + (z+1)^2}}. This projection approximates reflections on a spherical mirror but introduces distortions at the texture edges. For cube mapping, the face is selected by the coordinate component with the maximum absolute value (e.g., if |x| is largest, select the \pm x face based on sign); the remaining two components are divided by the absolute value of the dominant component, then scaled and biased to [0,1] for u and v on that face (noting potential sign adjustments for orientation, e.g., flipping y on some faces), providing uniform sampling without polar singularities.

To incorporate realism, the reflection intensity is often modulated by the Fresnel effect, which boosts reflections at grazing angles (\theta near 90° between \vec{I} and \vec{N}). Schlick's approximation computes the Fresnel factor as F = F_0 + (1 - F_0)(1 - \cos\theta)^5, where F_0 is the material's base Fresnel reflectivity (e.g., 0.04 for dielectrics). This blends reflection with diffuse or refraction terms per pixel. Edge cases, such as near-grazing incidence causing infinite recursion in recursive reflections or singularities in sphere-map projections (at \vec{R} = (0, 0, -1) in the formula above), are mitigated by clamping \vec{R} components to avoid invalid samples or by layering multiple maps with alpha blending for controlled depth.
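The formulas above can be checked with a short numerical sketch. This Python/NumPy fragment follows the text's conventions (\vec{I} pointing from the surface toward the viewer, F_0 = 0.04 for dielectrics); the function names are illustrative, not from any graphics API.

```python
import numpy as np

def reflection_vector(i, n):
    """R = 2(I.N)N - I, with I pointing from the surface toward the viewer."""
    i = i / np.linalg.norm(i)
    n = n / np.linalg.norm(n)
    return 2.0 * np.dot(i, n) * n - i

def sphere_map_uv(r):
    """u, v = 1/2 + (x, y) / (2 sqrt(x^2 + y^2 + (z+1)^2)).
    Singular at r = (0, 0, -1), the edge case noted above."""
    x, y, z = r
    m = 2.0 * np.sqrt(x * x + y * y + (z + 1.0) ** 2)
    return 0.5 + x / m, 0.5 + y / m

def schlick_fresnel(cos_theta, f0=0.04):
    """F = F0 + (1 - F0)(1 - cos(theta))^5."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

n = np.array([0.0, 0.0, 1.0])
i = np.array([0.6, 0.0, 0.8])        # unit-length viewer direction
r = reflection_vector(i, n)          # -> (-0.6, 0.0, 0.8), mirrored about N
print(sphere_map_uv(r), schlick_fresnel(np.dot(i, n)))
```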

Texture Sampling and Interpolation

In reflection mapping, once the reflection vector \vec{R} has been transformed into the appropriate coordinate system, texture sampling retrieves the color from the environment map using these coordinates as indices. For cube maps, the process begins by identifying the dominant axis of \vec{R}, which determines the face to sample: the face corresponding to the component with the largest absolute value (e.g., positive or negative x, y, or z). The UV coordinates on that face are then computed by projecting \vec{R} onto the plane perpendicular to the dominant axis, normalizing by dividing the other two components by the absolute value of the dominant component, and scaling to the range [0, 1] (with orientation adjustments such as flipping the v coordinate on certain faces). This projection ensures the direction \vec{R} maps to a point on the selected face's texture. Interpolation is essential for smooth transitions and to mitigate artifacts during sampling. Bilinear filtering blends the four nearest texels on the selected face based on the fractional parts of the UV coordinates, providing continuous color values across the surface. For enhanced quality, especially in scenes with varying distances or resolutions, trilinear filtering extends this by linearly blending between two mipmap levels, incorporating level-of-detail (LOD) selection to average texels appropriately and reduce aliasing from high-frequency details. Mipmap chains are precomputed for the environment map, with the LOD typically computed automatically by the GPU based on the partial derivatives of the reflection vector across the pixel footprint; in physically-based rendering contexts, an explicit mip level may be selected based on surface roughness to simulate blurred reflections. Advanced techniques address specific challenges in reflection quality. Anisotropic filtering samples multiple texels along the direction of greatest variation in the reflection footprint, compensating for stretching effects common in off-axis views or curved surfaces, thereby preserving detail without excessive blurring. Seams at cube face edges, where adjacent faces meet, can introduce visible discontinuities; these are handled through edge blending during preprocessing or runtime adjustments, such as averaging texels across boundaries with a falloff function to ensure seamless transitions. Hardware support in modern graphics APIs, like seamless cube map filtering in OpenGL and Direct3D, automates much of this by treating the cube map as a continuous surface during filtering. Performance considerations favor GPU shader implementation, where sampling occurs per fragment in rasterization pipelines. Texture units handle face selection, interpolation, and filtering efficiently using dedicated hardware, enabling high-throughput operations even for complex scenes with dynamic reflections. This integration minimizes CPU overhead and supports vectorized computations for millions of samples per frame.
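As an illustration of the bilinear step just described, this sketch spells out the four-texel weighting on a single face stored as a NumPy array. The half-texel centering and edge clamping are common conventions rather than requirements of any specific API; real GPUs perform this in fixed-function texture units.

```python
import numpy as np

def bilinear_sample(face, u, v):
    """Blend the four texels nearest (u, v) in [0, 1]^2 on one (H, W, 3) face,
    weighted by the fractional parts of the texel coordinates."""
    h, w, _ = face.shape
    # Map UV to continuous texel coordinates, offset so texel centers sit
    # at half-integer positions.
    fx, fy = u * w - 0.5, v * h - 0.5
    x0, y0 = int(np.floor(fx)), int(np.floor(fy))
    tx, ty = fx - x0, fy - y0                  # fractional parts = blend weights
    # Clamp neighbors to the face edge (one source of seams at face boundaries).
    xs = np.clip([x0, x0 + 1], 0, w - 1)
    ys = np.clip([y0, y0 + 1], 0, h - 1)
    c00, c10 = face[ys[0], xs[0]], face[ys[0], xs[1]]
    c01, c11 = face[ys[1], xs[0]], face[ys[1], xs[1]]
    top = (1 - tx) * c00 + tx * c10
    bottom = (1 - tx) * c01 + tx * c11
    return (1 - ty) * top + ty * bottom

face = np.random.rand(64, 64, 3)
print(bilinear_sample(face, 0.37, 0.81))
```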

Historical Development

Origins and Early Techniques

Reflection mapping emerged as a key technique in computer graphics during the 1970s, driven by the need to simulate realistic reflections on curved surfaces amid the severe hardware limitations of the era, where full ray tracing was computationally prohibitive even for offline rendering. James F. Blinn and Martin E. Newell developed the method while at the University of Utah, introducing it in their seminal 1976 paper "Texture and Reflection in Computer Generated Images," published in Communications of the ACM. The approach approximated reflections by precomputing an "environment map"—a representation of the distant surroundings viewed from the object's center—and using surface reflection vectors to sample colors from this map, thereby decoupling reflection calculations from the full scene geometry.

The prototype implementation relied on sphere mapping, the earliest form of environment mapping, which projects the surrounding scene onto the interior of an imaginary sphere enclosing the object and stores it as a texture using spherical coordinates (latitude and longitude). To prevent empty or undefined pixels in the texture—a common issue with forward projection due to uneven sampling density—Blinn and Newell employed an inverse mapping strategy: rays are traced outward from the sphere's center through evenly distributed texture coordinates to sample the environment, then the colors are assigned back to those coordinates, ensuring complete and uniform coverage without gaps. This innovation minimized rendering-time needs and handled the spherical distortion efficiently on limited 1970s hardware. The technique was first demonstrated in 1976, featuring examples like the Utah teapot rendered with reflections of a synthetic room environment generated via a custom paint program.

In the 1980s, reflection mapping gained traction for practical applications as graphics hardware evolved modestly but remained constrained, prompting refinements for use in animations and film. Blinn, having joined NASA's Jet Propulsion Laboratory (JPL) in 1978, contributed to visualizations of the Voyager planetary encounters using various rendering innovations. Concurrently, independent advancements at institutions such as MAGI and the New York Institute of Technology extended the method to photographically captured environments; Gene Miller and Ken Perlin, along with Michael Chou and Lance Williams, created real-world reflection maps around 1982–1983 using omnidirectional photos from mirrored spheres (gazing balls) and Christmas ornaments, enabling more convincing composites of computer-generated elements with live footage. These photo-based techniques were highlighted in Williams' 1983 paper "Pyramidal Parametrics," which demonstrated a reflection-mapped robot arm, and further detailed in Gene Miller and C. Robert Hoffman's 1984 SIGGRAPH course notes on illumination mapping.

Early adoption extended to commercial software for film CGI, where reflection mapping enhanced environmental integration without ray tracing overhead. Lucasfilm's Computer Graphics Division used environment mapping to render the translucent stained-glass knight in "Young Sherlock Holmes" (1985), the first feature film with a fully CGI character, approximating its reflections to integrate with the live-action scene. Ned Greene's 1986 paper "Environment Mapping and Other Applications of World Projections" in IEEE Computer Graphics and Applications formalized extensions like pre-filtering with summed-area tables to mitigate aliasing during texture minification, influencing hardware implementations. By 1987, the method was routinely showcased at graphics conferences, including panels on rendering and texturing, solidifying its role in bridging conceptual realism with feasible computation.

Evolution and Modern Adoption

The 1990s marked a significant advancement in reflection mapping with the introduction of cube mapping, which addressed the distortions inherent in earlier sphere mapping techniques by providing a more uniform representation of surrounding environments through six orthogonal faces. This method was formalized in the OpenGL ARB_texture_cube_map extension, approved in 1998, enabling hardware-accelerated cube map texturing with dedicated reflection modes for generating environment reflections. The extension, supported by vendors like NVIDIA and SGI, facilitated seamless integration into graphics pipelines, improving accuracy for curved surfaces and distant reflections without the pinching artifacts of spherical projections. In the 2000s, reflection mapping evolved with GPU acceleration through programmable shaders, particularly during the DirectX 9 era around 2002–2003, which allowed dynamic computation of reflection vectors in real-time applications. A key milestone was the use of dynamic cube maps in video games, exemplified by Valve's Half-Life 2 in 2004, where pre-computed and runtime-generated cube maps captured local environment reflections for metallic and glossy surfaces, enhancing immersion in dynamic scenes like vehicle mirrors and water. This integration with the Source engine demonstrated cube mapping's scalability for moving cameras, blending static environment maps with per-object updates to simulate realistic specular highlights. Graphics researcher Paul Heckbert's foundational work on texture mapping fundamentals, including surveys from the mid-1980s onward, influenced these developments by establishing efficient warping and filtering techniques that underpinned shader-based implementations. Post-2010, reflection mapping advanced with high-dynamic-range (HDR) environment maps, which captured a wider range of light intensities to enable more realistic image-based lighting in real-time rendering. These HDR cube maps became integral to physically based rendering (PBR) workflows, simulating energy-conserving light interactions for materials with varying roughness and metalness. In game engines like Unity 5 (released 2015) and Unreal Engine 4 (2014 onward), PBR systems incorporated reflection mapping via image-based lighting (IBL), using pre-baked or streamed HDR maps to approximate diffuse and specular reflections efficiently. As of 2025, hybrid approaches combining traditional reflection mapping with ray tracing have gained traction in real-time engines, particularly on ray-tracing-capable hardware, where hardware-accelerated ray tracing augments cube maps for accurate primary reflections in complex scenes. This evolution, seen in Unreal Engine 5-based titles, balances performance by using rasterized cube maps for distant approximations and ray-traced bounces for close-up details, reducing artifacts in dynamic environments like mirrors and glass.

Applications and Limitations

Practical Uses in Computer Graphics

Reflection mapping is widely employed in video games to simulate realistic reflections on dynamic surfaces such as vehicle bodies and water, enabling efficient real-time rendering. In the Forza Horizon series, cube mapping is utilized to generate environment reflections on cars, capturing surrounding scenery like foliage and buildings to enhance visual fidelity without full ray tracing. Dynamic reflection probes, which update environment maps at key scene locations, are commonly integrated into game engines like Unity for indoor environments, providing localized reflections on metallic objects or glass to maintain performance in complex levels. In film and visual effects (VFX) production, precomputed reflection maps are applied in software such as Autodesk Maya to approximate reflections on metallic and glossy surfaces, significantly reducing offline rendering times compared to full ray tracing. These maps, often derived from high-dynamic-range images (HDRIs) of real-world environments, contribute to photorealistic imagery in sci-fi films, where they simulate light interactions on spacecraft hulls or robotic elements during compositing workflows. Pioneering techniques in this domain, including image-based lighting with reflection maps, have been foundational for capturing and relighting subjects in VFX pipelines. Architectural visualization leverages reflection mapping to render environment captures on building materials like glass facades and polished floors, facilitating immersive VR walkthroughs where users assess material realism in context. In tools such as Unreal Engine, reflection capture actors generate cubemaps from surrounding geometry, allowing designers to preview how interior spaces interact with exterior lighting in virtual tours of proposed structures. In scientific simulations, reflection mapping variants like HEALPix enable accurate all-sky environment representations for astronomy applications, such as rendering starfields and survey data on spherical displays or virtual observatories. Modern graphics pipelines often combine reflection mapping with screen-space reflections (SSR) in hybrid approaches to balance quality and efficiency, where precomputed cubemaps handle distant environments while SSR resolves near-field details like dynamic objects. This integration, as seen in techniques blending environment maps with local ray tracing, optimizes reflection accuracy in real-time applications without excessive computational overhead.

Advantages, Drawbacks, and Alternatives

Reflection mapping offers several key advantages in real-time rendering. It achieves high performance through constant-time per-fragment computations, primarily via efficient texture lookups that impose minimal overhead on modern GPUs, enabling seamless integration into pipelines without significant framerate drops. This efficiency makes it scalable across hardware tiers, from mobile devices to high-end systems, where it can run with very low additional processing time per frame in typical setups. Additionally, its straightforward implementation allows for easy adoption in existing rendering workflows, supporting approximate glossy and specular effects with broad compatibility. Despite these benefits, reflection mapping has notable drawbacks that limit its realism. It is restricted to static environments, incapable of capturing dynamic object-to-object reflections or interactions within the scene, which results in inconsistent visuals during motion. Common artifacts include blurring on curved surfaces due to improper mipmap handling and visible seams at face boundaries, particularly in cube maps, leading to lower perceived visual quality compared to more accurate methods—mean opinion scores (MOS) for environment mapping approximations average around 1.24-1.76 on a 5-point scale in VR evaluations, versus 3.19-3.67 for ray-traced references. These issues render it outdated for photorealistic applications, where diverging pictorial cues like distorted perspectives exacerbate immersion breaks. Several alternatives address these limitations, often at the trade-off of increased computational cost. Screen-space reflections (SSR) enable dynamic reflections in real-time scenes by tracing rays within the current view frustum, capturing moving objects visible on-screen, though the technique fails to reflect off-screen geometry and suffers from similar artifacts in occluded areas. Full ray tracing, such as Monte Carlo path tracing, provides high-fidelity accuracy for both local and global reflections by simulating light paths explicitly, but incurs substantial overhead—typically 1-2 ms per frame for global illumination components on high-end GPUs like the RTX 2080 Ti, compared to negligible costs for reflection mapping. Hybrid approaches, including cascaded or dynamic cubemaps, combine environment mapping for distant/static elements with localized ray tracing or SSR for nearby dynamics, balancing performance and quality; for instance, layered environment maps can reduce ray queries by up to 25x while approximating parallax. Reflection mapping remains ideal for approximate glossy effects in applications where performance is paramount, such as games and VR, but developers should transition to ray tracing for offline rendering or next-generation scenarios demanding physical accuracy, especially as hardware advances mitigate the significantly higher computational cost of ray tracing compared to traditional methods for simulating reflections, such as screen-space approximations.

    Jun 3, 2019 · RTX Global Illumination (RTX GI) creates changing, realistic rendering for games by computing diffuse lighting with ray tracing.Missing: comparison | Show results with:comparison