
Normal mapping

Normal mapping is a technique in 3D computer graphics used to add the appearance of complex surface details, such as bumps, grooves, and scratches, to 3D models by encoding perturbations to surface normals in a texture map, thereby simulating realistic lighting interactions without increasing the model's polygon count. This method enhances visual fidelity in real-time rendering applications like video games and animations by modifying how light reflects off surfaces during shading calculations. The core principle of normal mapping involves storing vector data representing normal directions in an RGB texture, where the red, green, and blue channels correspond to the X, Y, and Z components of the normal vectors, typically expressed in tangent space for efficient reuse across different model orientations. In tangent space, the Z-axis aligns with the surface normal, the X-axis follows the U texture direction, and the Y-axis follows the V direction, allowing the map to perturb the interpolated vertex normals in the fragment shader for per-pixel lighting effects. This approach builds on earlier concepts, such as the bump mapping introduced by James F. Blinn in 1978, but normal mapping specifically uses precomputed normals from high-resolution meshes to create these textures, as first demonstrated in the 1996 paper by Venkat Krishnamurthy and Marc Levoy, which described fitting smooth surfaces to dense meshes and deriving bump maps from their normals. Subsequent advancements, including tangent-space normal mapping for real-time rendering, were detailed in the 1999 SIGGRAPH paper by Wolfgang Heidrich and Hans-Peter Seidel, enabling efficient implementation on graphics hardware using techniques like multi-texturing and pixel shaders to achieve realistic shading models such as Blinn-Phong at interactive frame rates. Key benefits include reduced computational overhead compared to geometric subdivision, lower memory demands through compressed texture formats, and seamless integration with other material properties like diffuse and specular maps in rendering pipelines. Normal mapping has become a standard in modern 3D graphics, powering detailed visuals in industries from video games to film while maintaining performance on consumer hardware.

Overview

Definition and Purpose

Normal mapping is a technique in 3D computer graphics that simulates intricate surface details on 3D models by storing precomputed normal vectors—directions perpendicular to the surface—in a specialized texture called a normal map, which perturbs the interpolated surface normals during the shading process to alter lighting computations without modifying the underlying geometry. These normal vectors are typically encoded in the RGB channels of the texture, where red, green, and blue components represent the x, y, and z coordinates, respectively, often normalized to fit the [0,1] range for storage. The primary purpose of normal mapping is to enhance the perceived geometric complexity of low-polygon models, enabling the simulation of fine details such as bumps, scratches, wrinkles, or other microstructures that affect how light interacts with the surface, thereby approximating the lighting and shadowing behavior of much higher-resolution geometry while maintaining computational efficiency suitable for real-time rendering applications like video games. This approach allows artists and developers to achieve visually rich scenes without the performance overhead of additional polygons, focusing detail where it is most noticeable—on the surface normals that influence shading rather than the model's silhouette. In its basic workflow, a vector is sampled from the normal map based on the model's texture coordinates (UVs) at each fragment or pixel during rendering; this sampled vector is then transformed from its stored space (often tangent space, relative to the surface) to a common space like model or world space to align with the light and view directions; finally, the perturbed normal is plugged into a lighting model, such as the Lambertian model for diffuse reflection or the Phong model for specular highlights, to compute the final color contribution. For instance, applying a normal map encoding brick patterns to a flat plane can create the illusion of protruding bricks, where incident light produces realistic shading in the mortar grooves and highlights on the raised surfaces, all derived from the modified normals rather than actual geometry.
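As a worked instance of this encoding (following the remapping described above rather than any specific tool's convention), each normal component in [-1, 1] is stored as n' = \frac{n + 1}{2} and decoded at shading time as n = 2n' - 1; for example, the unperturbed tangent-space normal (0, 0, 1) maps to RGB (0.5, 0.5, 1.0), which is why flat regions of a normal map appear as a uniform light blue.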

Advantages Over Traditional Methods

Normal mapping offers significant performance benefits in real-time rendering by reducing the required polygon count compared to traditional geometric methods like subdivision or displacement mapping, allowing complex scenes to run efficiently on consumer hardware without excessive vertex processing. This technique simulates fine surface details through texture-based normal perturbations, enabling the illusion of intricate detail on low-resolution meshes while keeping computational costs low, as it relies on per-pixel shading rather than increased geometric complexity. In contrast to bump mapping, which approximates surface perturbations using scalar height values and can lead to less accurate lighting, normal mapping encodes full 3D normal vectors in RGB channels for more precise interactions with light sources. However, normal mapping has notable limitations: it does not modify the underlying geometry, so silhouettes remain unchanged and self-shadowing is not inherently supported, unlike true displacement methods that alter vertex positions. Additionally, it can produce artifacts such as inconsistent lighting or "swimming" effects at grazing angles due to filtering challenges, and it requires precomputed normal maps, limiting dynamic adaptability.
Technique | Detail Representation | Computational Cost | Geometry Impact | Key Advantages | Key Limitations
Normal Mapping | RGB-encoded normals for per-pixel lighting | Low (per-pixel texture lookup) | None (illusion only) | High performance; detailed lighting without polygon increase | No silhouette change; no self-shadowing; grazing-angle artifacts
Bump Mapping | Grayscale height values for normal perturbation | Low (similar to normal mapping) | None | Simple implementation; fast | Less accurate normals; limited to basic bumps
Displacement Mapping | Vertex displacement from height map | High (tessellation required) | Alters geometry | True geometry; correct silhouettes and shadows | Expensive; increases vertex count and GPU load

Background Concepts

Surface Normals in 3D Graphics

In 3D computer graphics, a surface normal is defined as a unit vector perpendicular to a surface at a specific point, representing the orientation of that surface relative to incident light and the viewer. This vector plays a crucial role in lighting calculations, where it determines the amount of diffuse and specular reflection by quantifying the angle between the surface and light rays. For instance, in the Phong illumination model, the normal influences both the diffuse component, based on the cosine of the angle between the normal and light direction, and the specular component, involving the reflection vector. For polygonal surfaces, such as triangles in a mesh, the face normal is computed using the cross product of two edge vectors originating from one vertex. Given vertices v_0, v_1, and v_2, the normal \vec{n} is derived as: \vec{n} = \frac{(v_1 - v_0) \times (v_2 - v_0)}{\|(v_1 - v_0) \times (v_2 - v_0)\|} This ensures \vec{n} is a unit vector pointing outward from the surface, assuming a counterclockwise vertex ordering. Vertex normals, used for smoother shading, are often averaged from adjacent face normals to approximate the underlying curved surface. To achieve realistic shading on curved or approximated surfaces, normals are interpolated across faces during rasterization. In Gouraud shading, lighting intensities are computed at vertices and linearly interpolated across the polygon interior, which can miss specular highlights but is computationally efficient. Conversely, Phong shading interpolates the normals themselves between vertices before applying the lighting model at each fragment, preserving sharp highlights and better simulating smooth surfaces at the cost of higher computation. Within the rendering pipeline, surface normals are integral to per-fragment lighting, particularly through the dot product with the normalized light direction \vec{l}, yielding the incident diffuse intensity as I = \max(0, \vec{n} \cdot \vec{l}). This clamped cosine term models Lambertian reflectance, where I = 0 indicates the light is behind the surface, preventing negative contributions. Such computations enable realistic illumination effects essential for rendering convincing 3D scenes.
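The following GLSL-style sketch illustrates these two operations, computing a triangle's face normal from its vertices and evaluating the clamped Lambertian term; the function names are illustrative rather than taken from any particular engine:
// Face normal of a triangle with counterclockwise winding.
vec3 faceNormal(vec3 v0, vec3 v1, vec3 v2) {
    return normalize(cross(v1 - v0, v2 - v0));
}
// Clamped Lambertian diffuse term max(0, n . l); zero when the light is behind the surface.
float lambertDiffuse(vec3 n, vec3 lightDir) {
    return max(dot(normalize(n), normalize(lightDir)), 0.0);
}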

Texture Coordinates and UV Mapping

Texture coordinates, often denoted as UV coordinates, provide a 2D parameterization of a 3D model's surface, where each vertex is assigned a pair of values (u, v) typically ranging from 0 to 1, effectively unwrapping the geometry onto a plane for texture application. This mapping allows textures, including normal maps, to be projected onto the surface by associating points in texture space with corresponding locations on the model. UV coordinates can be generated through various methods depending on the model type and application needs. For parametric surfaces like cylinders or spheres, UVs are often created procedurally during the modeling process, using mathematical parameterizations that align the coordinates with the surface's natural shape. Manual unwrapping involves artist-driven tools in 3D modeling software to cut and flatten the mesh, optimizing the layout to reduce irregularities. Automatic generation, exemplified by lightmap UVs in engines like Unity, employs algorithms to produce secondary UV sets (e.g., Mesh.uv2) that prioritize minimal overlap, wide margins between UV islands, and low distortion in angles and scales relative to the original mesh. Challenges in UV mapping include seams and distortion, which can lead to visible artifacts or uneven texture application. Seams arise at the edges where the 3D surface is cut during unwrapping, potentially causing discontinuities if adjacent UV islands do not align perfectly in the texture. Distortion occurs when the 2D projection stretches or compresses areas disproportionately to the 3D surface, such as in polar regions of spherical mappings. Mitigation strategies involve adding padding (wide margins) around UV islands to prevent bleeding during filtering or mipmapping, and selecting projection methods like cylindrical unwrapping that better preserve uniformity for elongated surfaces. In the context of normal mapping, UV coordinates play a crucial role by determining the precise location in the 2D normal map from which to sample the RGB-encoded normal vectors for each surface fragment. This sampling, typically performed using interpolated UVs within a fragment shader (e.g., via texture lookup functions), ensures that the perturbed normals align correctly with the underlying geometry, enhancing surface detail without additional polygons.

Technical Details

Coordinate Spaces Involved

In 3D computer graphics, normal mapping involves several coordinate spaces to handle the transformation and application of surface normals accurately for lighting calculations. Object space, also known as model space, is local to the individual model and defines positions and normals relative to the object's own origin and axes. World space provides a global frame of reference where all objects in the scene are positioned, allowing for consistent interactions like illumination from fixed light sources. View space, or camera space, is relative to the observer's viewpoint, often used in the rendering pipeline for camera-relative transformations. Tangent space, however, is a per-surface coordinate system aligned with the local surface orientation at each fragment, enabling efficient storage and perturbation of normals without recomputing the entire mesh. Multiple coordinate spaces are necessary because normals must transform non-uniformly to maintain perpendicularity to the surface under deformations or non-uniform scaling; this requires the inverse transpose of the transformation matrix, unlike positions, which use the standard matrix directly. In contrast to object-space normals, which are fixed to the model and expensive to update for animations, tangent-space normals allow for seamless mapping across deformed surfaces by remaining relative to the local geometry. Tangent space is defined by a local orthonormal basis consisting of the tangent vector (T), bitangent vector (B), and surface normal (N), forming the TBN matrix. Normal maps store perturbation vectors in this tangent space, where the RGB channels represent offsets along the T (X, red), B (Y, green), and N (Z, blue) axes, typically with the blue channel dominant to simulate small surface variations. This representation compresses data efficiently since perturbations are relative to the surface rather than absolute directions. The transformation pipeline for normal mapping begins in texture space, where UV coordinates sample the normal map to retrieve the tangent-space normal. This normal is then transformed via the TBN matrix to world space, where it aligns with global light and view vectors for per-fragment lighting computations. This process ensures that the perturbed normal accurately reflects local surface details in the scene's absolute orientation.
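A brief GLSL-style sketch of the position versus normal transforms mentioned above; in practice the normal matrix is usually precomputed once per object on the CPU, and the variable names here are illustrative:
// Positions use the model matrix directly; normals use its inverse-transpose
// so they stay perpendicular to the surface under non-uniform scaling or shear.
mat3 normalMatrix = transpose(inverse(mat3(model)));
vec3 worldNormal = normalize(normalMatrix * objectNormal);
vec4 worldPosition = model * vec4(objectPosition, 1.0);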

Tangent Space Computation

In normal mapping, the tangent space at each vertex is defined by an orthonormal basis consisting of the surface normal \vec{N}, the tangent vector \vec{T}, and the bitangent vector \vec{B}, collectively forming the tangent-bitangent-normal (TBN) matrix used to transform normals from texture space to object space. The computation begins with deriving \vec{T} and \vec{B} from the partial derivatives of the vertex position \vec{p} with respect to the texture coordinates u and v, approximated discretely using edge vectors in the triangle mesh. For a given triangle with vertices \vec{P}_1, \vec{P}_2, \vec{P}_3 and corresponding UV coordinates (u_1, v_1), (u_2, v_2), (u_3, v_3), the edge vectors and UV deltas are defined as \Delta \vec{P}_1 = \vec{P}_2 - \vec{P}_1, \Delta \vec{P}_2 = \vec{P}_3 - \vec{P}_1, \Delta u_1 = u_2 - u_1, \Delta v_1 = v_2 - v_1, \Delta u_2 = u_3 - u_1, and \Delta v_2 = v_3 - v_1. These satisfy the linear system \begin{pmatrix} \Delta \vec{P}_1 \\ \Delta \vec{P}_2 \end{pmatrix} = \begin{pmatrix} \Delta u_1 & \Delta v_1 \\ \Delta u_2 & \Delta v_2 \end{pmatrix} \begin{pmatrix} \vec{T} \\ \vec{B} \end{pmatrix}, which is solved for the unnormalized \vec{T} and \vec{B} by inverting the 2x2 UV matrix, provided it is non-singular (i.e., the texture coordinates are not degenerate). This process is repeated for all triangles adjacent to a vertex, with the resulting \vec{T} and \vec{B} vectors averaged across those faces to obtain a smoothed per-vertex basis, ensuring continuity where texture seams are absent. To ensure orthonormality, the averaged \vec{T} is orthogonalized against \vec{N} using Gram-Schmidt: \vec{T}' = \vec{T} - (\vec{N} \cdot \vec{T}) \vec{N}, followed by normalization. The bitangent is then computed as \vec{B} = m (\vec{N} \times \vec{T}'), where m = \pm 1 accounts for handedness to match the coordinate system (left-handed or right-handed), often determined by the sign of the scalar triple product or the determinant of the TBN matrix. This handedness factor m is typically stored in the w-component of the tangent vector to avoid passing an additional attribute. In rendering pipelines, the per-vertex TBN basis is interpolated across the surface during rasterization, allowing per-pixel reconstruction in the fragment shader for accurate normal perturbation.
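The per-triangle solve and the Gram-Schmidt step can be sketched in GLSL-style code as follows; the same arithmetic is commonly run on the CPU during asset import, and the function names are illustrative:
// Unnormalized tangent and bitangent for one triangle, obtained by inverting the
// 2x2 UV matrix in the system dP = [du dv] * [T; B].
void triangleTangent(vec3 p1, vec3 p2, vec3 p3,
                     vec2 uv1, vec2 uv2, vec2 uv3,
                     out vec3 T, out vec3 B) {
    vec3 dP1 = p2 - p1, dP2 = p3 - p1;
    vec2 dUV1 = uv2 - uv1, dUV2 = uv3 - uv1;
    float r = 1.0 / (dUV1.x * dUV2.y - dUV1.y * dUV2.x); // reciprocal of the UV determinant
    T = (dP1 * dUV2.y - dP2 * dUV1.y) * r;
    B = (dP2 * dUV1.x - dP1 * dUV2.x) * r;
}
// Gram-Schmidt orthogonalization against the vertex normal, with handedness in w.
vec4 orthonormalTangent(vec3 N, vec3 T, vec3 B) {
    vec3 Tp = normalize(T - N * dot(N, T));
    float m = (dot(cross(N, Tp), B) < 0.0) ? -1.0 : 1.0;
    return vec4(Tp, m); // the bitangent is later rebuilt as B = m * cross(N, T)
}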

Normal Map Generation and Application

Normal maps are generated by transferring surface details from a high-polygon (high-poly) model to a low-polygon (low-poly) model through a process known as baking. This enables the representation of fine detail on simpler meshes without increasing polygon counts. The primary method involves ray tracing, where rays are projected orthogonally from each point on the low-poly model's surface toward the high-poly model; the intersection point determines the local surface orientation, and the difference between this orientation and the low-poly normal forms the tangent-space perturbation stored in the map. An alternative approach uses normal differences, computing the difference between corresponding vertices or points on the high- and low-poly models to derive the normal perturbation directly. Specialized tools automate this process: xNormal employs ray tracing to cast rays from the low-poly mesh and capture high-poly details, supporting standard mesh formats for input. Similarly, Substance Painter uses integrated Substance Bakers to generate tangent-space normal maps via ray-based projection, with adjustable parameters like ray distance to control the projection range and avoid self-intersections. Once generated, normal maps are stored as RGB textures where the channels encode the X, Y, and Z components of the surface normal vector in tangent space. Each component, ranging from -1 to 1, is linearly remapped to the [0,1] interval for storage (i.e., n' = \frac{n + 1}{2}), resulting in a characteristic purple-blue appearance for flat surfaces due to the neutral tangent-space normal (0, 0, 1) mapping to RGB (0.5, 0.5, 1.0). In the common Z-up convention, the blue channel defaults to 1.0 for unperturbed areas, emphasizing outward-facing normals. For efficient storage in real-time applications, compressed formats like DXT5 (BC3) are applied, often with swizzling—placing the X component in the alpha channel—to leverage the format's higher-fidelity alpha block and minimize artifacts in correlated channels like red and green. In the rendering pipeline, normal maps are applied during the fragment shading stage to perturb interpolated vertex normals per pixel. UV coordinates sample the map, unpack the RGB values to a tangent-space vector via \vec{t} = 2 \times \text{sampled} - 1, and transform it to world space using the tangent-bitangent-normal (TBN) matrix, which orients the perturbation relative to the surface. The resulting vector often replaces the geometry normal (or blends with it), is renormalized to unit length, and feeds into lighting equations, such as the Lambertian diffuse term. This integration occurs after the vertex shader passes the TBN matrix to the fragment shader but before specular or ambient computations. The following GLSL pseudocode illustrates a basic implementation in the fragment shader:
vec3 tangentNormal = texture(normalMap, uv).xyz * 2.0 - 1.0;
vec3 worldNormal = normalize(TBN * tangentNormal);
float diffuse = max(dot(worldNormal, normalize(lightDir)), 0.0);
Here, TBN is the 3x3 matrix from vertex attributes, uv are texture coordinates, and lightDir is the world-space light direction; the diffuse factor scales the base color for illumination. Despite these techniques, artifacts can occur during application. Mirroring seams appear at UV edges where islands are flipped, as tangent directions reverse inconsistently across the seam, causing lighting discontinuities; fixes include separating mirrored UVs by a small margin (2-4 pixels) during layout or enforcing consistent tangent basis computation between baking and rendering tools. MIP-mapping issues stem from bilinear filtering averaging non-linear normal vectors, leading to shortened vectors and darkened silhouettes at distance; mitigation involves custom MIP generation using derivative-based filters (e.g., Sobel operators on height-derived normals) to preserve vector lengths and directions across levels.
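For context on where the TBN matrix used above typically originates, the following vertex-shader sketch builds it from per-vertex attributes, with the handedness sign stored in the tangent's w component as described earlier; the attribute locations and uniform names are assumptions rather than conventions of any specific engine:
#version 330 core
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec4 inTangent;   // xyz = tangent, w = handedness (+1 or -1)
layout(location = 3) in vec2 inUV;
uniform mat4 modelViewProjection;
uniform mat3 normalMatrix;                // inverse-transpose of the model matrix
out mat3 TBN;
out vec2 uv;
void main() {
    vec3 N = normalize(normalMatrix * inNormal);
    vec3 T = normalize(normalMatrix * inTangent.xyz);
    vec3 B = inTangent.w * cross(N, T);   // rebuild bitangent with the stored handedness
    TBN = mat3(T, B, N);                  // columns map tangent space into world space
    uv = inUV;
    gl_Position = modelViewProjection * vec4(inPosition, 1.0);
}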

Historical Development

Origins in the 1970s and 1980s

The foundational concepts for normal mapping emerged in the late 1970s through pioneering work in computer graphics aimed at simulating fine surface details via perturbations to surface normals, a technique initially termed bump mapping. This innovation addressed the challenge of rendering realistic textures and irregularities on low-polygon models, avoiding the high computational cost of explicit geometric subdivision. Early experiments focused on modulating normal vectors to alter shading without changing the underlying surface geometry, setting the stage for more advanced mapping methods. A key milestone was James F. Blinn's 1978 SIGGRAPH paper, "Simulation of Wrinkled Surfaces," which formalized bump mapping as a method to perturb surface normals using a procedural texturing function. Blinn described how to compute a modified normal by adding a small perturbation vector derived from the partial derivatives of a height field or texture function, applied before illumination calculations. This approach enabled the simulation of high-frequency details like fabric weaves or skin pores, with examples including a wrinkled donut and embossed lettering, demonstrating significant efficiency gains. The technique relied on analytic perturbation functions rather than discrete texture maps, emphasizing conceptual perturbation over precomputed storage. In the early 1980s, these perturbation ideas influenced reflection models that explicitly incorporated normal distributions to model material roughness. Robert L. Cook and Kenneth E. Torrance's 1981 SIGGRAPH paper, "A Reflectance Model for Computer Graphics," introduced a microfacet-based approach where specular highlights arise from oriented facets with varying normals, using a distribution function to represent microsurface orientations. This model extended normal modulation to physically grounded bidirectional reflectance, provided a foundation for later extensions allowing anisotropic effects like brushed metal by tilting the effective normal distribution, and bridged Blinn's shading perturbations and comprehensive surface simulation in rendering pipelines. These foundational ideas evolved in the 1990s into normal mapping techniques that stored precomputed normal perturbations in discrete texture maps. A seminal contribution was the 1996 SIGGRAPH paper by Venkat Krishnamurthy and Marc Levoy, which described an algorithm for fitting smooth surfaces to dense polygon meshes of arbitrary topology and deriving normal maps from the normals of the original high-resolution meshes to simulate fine details on the fitted low-resolution surfaces.

Adoption in Real-Time Rendering

Further advancements toward real-time implementation came in 1999 with the paper "Realistic, Hardware-Accelerated Shading and Lighting" by Wolfgang Heidrich and Hans-Peter Seidel, which detailed tangent-space normal mapping techniques for efficient hardware rendering using multi-texturing and early programmable shading to achieve realistic effects like Blinn-Phong specular shading. The adoption of normal mapping in real-time rendering accelerated in the early 2000s, driven by advancements in graphics hardware that supported multi-texturing and programmable shaders. NVIDIA's GeForce 3 GPU, released in 2001, introduced four texture units and basic vertex and pixel programmability, enabling efficient implementation of tangent-space normal mapping through DOT3 operations without requiring fully general pixel shaders. This breakthrough allowed developers to fetch and perturb normals per pixel in hardware, marking a shift from software-based approximations to hardware-accelerated techniques. Key milestones in the mid-2000s solidified normal mapping as a standard in interactive applications. id Software's Doom 3, released in 2004 and powered by the id Tech 4 engine, was an early major adopter, extensively using normal maps alongside dynamic per-pixel lighting to enhance surface detail on low-polygon models. Similarly, Valve's Half-Life 2, also launched in 2004, integrated normal mapping via its Source engine, employing a radiosity normal mapping technique to combine precomputed lightmaps with per-pixel bump effects for efficient, high-fidelity lighting. Concurrently, the release of DirectX 9 in 2002 and OpenGL 2.0 in 2004 provided robust shader frameworks—pixel shader model 2.0 and GLSL, respectively—that facilitated tangent-space normal computations in vertex and fragment stages, broadening accessibility across platforms. Post-2010 developments integrated normal mapping into physically based rendering (PBR) workflows, enhancing realism in modern engines. Unreal Engine 5, building on PBR foundations introduced in Unreal Engine 4 around 2014, employs normal maps as a core component of material systems, where they contribute to microfacet-based specular reflections and diffuse shading under energy-conserving lighting models. For resource-constrained environments like mobile rendering, compressed formats such as BC5 (also known as ATI2 or 3Dc) became prevalent, storing the X and Y normal components in two compressed channels totaling 8 bits per pixel with hardware decompression, reducing memory usage by up to 75% compared to uncompressed RGBA textures while preserving detail. This era also witnessed an industry-wide transition from offline rendering paradigms—exemplified by Pixar's RenderMan, which historically prioritized ray-traced accuracy over speed—to real-time systems capable of approximating similar fidelity. Post-2010 APIs like Vulkan (introduced in 2016) further streamlined this shift by exposing low-level GPU control for shader-based normal mapping, allowing efficient pipeline integration with compute shaders for tasks like tangent-basis computation and supporting cross-platform applications without the overhead of higher-level abstractions.

Applications and Extensions

Use in Video Games

Normal mapping plays a crucial role in video game development by enabling high-fidelity surface details on low-polygon models, thereby optimizing performance on resource-constrained hardware such as the PlayStation 3. In Uncharted 2: Among Thieves (2009), developers utilized normal maps to capture intricate details from high-resolution sculpt meshes, applying them to low-poly game meshes for character skin and fabrics. This approach allowed for visually rich assets with only 246 joints per main character, reducing polygon counts and memory usage while maintaining smooth gameplay at 30 frames per second on PS3 hardware. Integration into major game engines streamlines the use of normal maps within development workflows. In Unity, normal maps are imported by placing the texture asset in the project folder and setting the Texture Type to "Normal Map" in the import settings, which automatically processes the RGB channels for tangent-space normals and enables compatibility with the Standard Shader for materials. Similarly, in Unreal Engine, textures are imported via the Content Browser, with compression settings adjusted to "Normalmap (DXT5 or BC5)" in the Texture Editor to ensure proper handling of the green channel for bitangent information, facilitating seamless application in materials. Level-of-detail (LOD) systems in both engines further enhance efficiency by switching between mesh variants at runtime; for instance, Unity's LOD Group component can assign higher-resolution normal maps to close-range LODs and lower-resolution or disabled variants for distant ones, balancing detail and draw calls. Optimizations specific to rendering in games include pre-baking the tangent-bitangent-normal (TBN) basis into vertex attributes stored in vertex buffers, avoiding per-fragment basis computations and reducing overhead. This technique, common in modern pipelines, transforms normal map samples directly in the fragment shader using the interpolated basis, improving throughput on GPUs. For mobile platforms, 3-component normal maps (RGB encoding XYZ directions) are often compressed using formats like BC5, which dedicates channels efficiently to red and green while deriving blue, minimizing memory footprint and bandwidth—critical for maintaining 60 frames per second on devices with limited VRAM. In practice, normal mapping has been pivotal in landmark titles for enhancing visual depth. The Witcher 3: Wild Hunt (2015) rendered its immersive open world at 1080p on consoles with dynamic lighting, benefiting from normal mapping in environmental rendering. More recently, in the 2020s, Cyberpunk 2077 (2020) integrates normal mapping within its physically based rendering (PBR) materials, where tangent-space normals refine shading under ray-traced lighting and reflections, achieving photorealistic surfaces on high-end PCs at high resolutions with 50-90 frames per second using DLSS as of 2020. These examples illustrate normal mapping's evolution from console-era efficiency to hybrid rendering paradigms.
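Because two-channel formats such as BC5 store only the X and Y components, the Z component is reconstructed in the shader from the unit-length constraint; a minimal GLSL sketch of this decode, with illustrative names, follows:
// Decode a two-channel (BC5-style) tangent-space normal: X and Y come from the
// red and green channels, and Z is rebuilt from x^2 + y^2 + z^2 = 1.
vec3 decodeTwoChannelNormal(sampler2D normalMapXY, vec2 uv) {
    vec2 xy = texture(normalMapXY, uv).rg * 2.0 - 1.0;
    float z = sqrt(max(1.0 - dot(xy, xy), 0.0)); // clamp guards against filtering error
    return normalize(vec3(xy, z));
}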

Implementations in Other Media

In film and visual effects (VFX) production, normal mapping is widely utilized in offline rendering workflows to enhance the fidelity of complex assets, such as creature skin and organic surfaces, without requiring excessive geometric subdivision. Tools like Autodesk Maya integrated with the Arnold renderer support normal mapping by perturbing interpolated surface normals using RGB textures, enabling photorealistic shading in high-budget productions where computational constraints are less stringent than in real-time applications. This approach allows artists to bake intricate details from high-poly sculpts into tangent-space normal maps, which are then applied during ray-traced rendering to simulate micro-surface variations efficiently. Architectural visualization leverages normal mapping in real-time engines like Unreal Engine to create immersive walkthroughs of building interiors and exteriors, where it adds realistic surface texture to common building materials. In these pipelines, normal maps are frequently combined with parallax occlusion mapping (POM) to simulate depth on flat geometry, enhancing the illusion of three-dimensionality for elements like wall panels or flooring without increasing polygon counts, which is crucial for smooth navigation in interactive presentations. Twinmotion, an Unreal Engine-based tool tailored for architectural workflows, incorporates normal mapping within its material systems to boost depth and realism in scene renders. In virtual reality (VR) and augmented reality (AR) applications, normal mapping is used for surface detailing on mobile and standalone devices, such as those in the Meta Quest ecosystem, to deliver immersive environments under strict performance budgets. However, Meta's rendering guidelines recommend parallax mapping over normal mapping, as the latter can appear flat due to the lack of binocular disparity in stereoscopic viewing and is better suited for providing lighting-based depth cues when supplemented by parallax techniques to mitigate artifacts from head movement. Detail maps, often combined with parallax for repeated textures like foliage or terrain in VR scenes, help maintain visual consistency across varying distances while adhering to hardware limits on texture resolution and shader complexity. Beyond traditional texturing, normal mapping integrates with displacement mapping in non-real-time pipelines to balance geometric deformation for macro-scale features—like wrinkles on skin or cracks in stone—with fine-scale normal perturbations for micro-details, a common practice in VFX and film rendering where offline renderers can handle the added complexity. Recent advancements include AI-generated normal maps using diffusion models; for example, Stable Diffusion variants augmented with ControlNet (introduced in 2023) allow conditioning on input images or sketches to produce plausible normal maps, streamlining asset creation in creative workflows.
