
Parallax occlusion mapping

Parallax occlusion mapping (POM) is a technique that enhances the perceived depth and detail of textured surfaces by simulating displacement and self-occlusion using a height map, without altering the underlying geometry or increasing polygon counts. In this method, implemented in pixel shaders, a virtual ray is cast from the viewer's perspective through layers of the height map in tangent space, iteratively sampling depths to detect intersections and offset texture coordinates accordingly, which creates a perspective-correct illusion of bumps, grooves, and protrusions. This approach builds directly on earlier parallax mapping variants by interpolating between sampled depth layers to improve accuracy, particularly at steep surface angles, while remaining efficient on consumer GPUs.

The technique was first introduced in 2004 by Zoe Brawley and Natalya Tatarchuk as an advancement in self-shadowing, perspective-correct bump mapping via reverse height map tracing, with subsequent refinements presented in 2006 to incorporate approximate soft shadows and dynamic lighting for more realistic surface rendering. These developments addressed limitations in prior methods such as basic bump mapping, which lacks true parallax, and simple parallax mapping, which often fails to handle occlusions or steep displacements without artifacts.

A typical POM implementation divides the height field into a fixed number of layers (e.g., 16–32) for ray marching, balancing visual fidelity with performance, and supports adaptive sampling to reduce artifacts at grazing view angles. Compared to more computationally intensive alternatives such as tessellation-based displacement mapping or full ray tracing, POM offers significant advantages in scalability and integration into game engines, enabling high-detail surfaces such as brick walls, cobblestones, or fabrics in real-time applications while requiring only a height map in addition to the usual normal and diffuse maps. It has become a staple in modern rendering pipelines, including those of engines such as Unreal Engine and Unity, for achieving convincing surface detail without hardware demands beyond Shader Model 3.0 capabilities.

Overview

Definition and Purpose

Parallax occlusion mapping (POM) is a texture-based rendering technique in computer graphics that simulates self-occlusion and parallax effects on flat polygonal surfaces, creating the illusion of geometric depth without requiring additional polygons. The method encodes surface details into height maps, displacement textures that represent the relative height of surface features, allowing view-dependent perturbations that mimic three-dimensional structure on two-dimensional geometry.

The primary purpose of POM in real-time rendering is to enhance the visual complexity and realism of materials such as bricks, stone, or fabrics by approximating height fields from these maps to handle occlusions that vary with the viewer's position. It achieves this by integrating depth cues and self-shadowing, which provide more convincing surface interactions under dynamic lighting than earlier approaches, while maintaining performance suitable for interactive applications such as video games. As a simpler precursor, bump mapping perturbs surface normals to simulate lighting variations but lacks the geometric parallax and occlusion handling of POM.

POM emerged in the mid-2000s as a bridge between traditional texturing methods and computationally intensive geometric displacement, with its foundational description appearing in 2004 as an extension of parallax mapping techniques. This development addressed key limitations of prior approaches by enabling efficient, per-pixel simulation of detailed surfaces that respond realistically to motion and viewpoint changes.

Core Concept

Parallax in computer graphics refers to the apparent relative displacement of surface features when observed from different viewpoints, a perceptual cue that humans use to infer depth in three-dimensional scenes. Parallax occlusion mapping (POM) builds on this principle to generate convincing illusions of surface relief on otherwise flat polygons by dynamically offsetting texture sampling to simulate how closer elements occlude farther ones based on the observer's position. This approach mimics real-world visual parallax without the computational expense of actual geometry displacement, providing enhanced depth perception through perspective-dependent shifts in rendered details.

At the heart of POM is the use of height maps: grayscale textures that encode relative surface elevations, where lighter pixels represent raised features and darker ones indicate depressions. These maps guide the calculation of per-pixel offsets in texture coordinates, effectively "lifting" or "sinking" sampled colors to create the appearance of varied elevation. By combining the view direction with height values, the technique ensures that offsets vary naturally with camera movement, reinforcing the parallax effect and avoiding the flat, static look of simpler methods.

The intuition for occlusion and self-shadowing in POM lies in simulating ray casting from the viewer's eye through the virtual height field: imaginary rays step along the surface until they intersect a raised feature, determining the visible point and blocking light from reaching shadowed areas behind it. Raised elements thus hide the regions behind them, and peaks cast shadows into valleys, heightening realism by respecting geometric self-interactions in the perceived depth. For example, grooves etched into a rendered surface deepen and shift parallax-wise as the viewpoint changes, making the texture feel embedded in a tangible, multi-layered material rather than painted on. POM is especially valuable for adding intricate detail to static meshes in real-time rendering scenarios such as video games, where it boosts immersion without inflating polygon counts.

Historical Development

Origins in Research

Parallax occlusion mapping emerged in the early 2000s amid broader efforts to enhance real-time rendering of surface details through heightfield-based techniques, addressing limitations in traditional bump and normal mapping by incorporating occlusion and motion-parallax effects. This development built directly on foundational work in relief texture mapping, introduced by Manuel M. Oliveira, Gary Bishop, and David McAllister in 2000, which extended standard texture mapping to support 3-D surface details and view-dependent motion parallax via pre-warped height-augmented textures processed in two passes. Shortly thereafter, Tomomichi Kaneko and colleagues proposed parallax mapping in 2001, a per-pixel method that offsets texture coordinates along the view ray according to height values to simulate depth without geometric modifications, though it did not fully account for self-occlusions. These approaches laid the groundwork for more sophisticated approximations of displacement mapping in rasterization pipelines, drawing inspiration from ray tracing principles to enable efficient per-fragment depth simulation on early programmable GPUs.

The core concept of parallax occlusion mapping was formalized in 2004 by Zoe Brawley and Natalya Tatarchuk, who introduced it as an advancement over basic offset-mapping and bump-mapping methods by integrating a linear search through the heightfield to resolve occlusions and self-shadowing accurately. Their technique, titled "Parallax Occlusion Mapping: Self-Shadowing, Perspective-Correct Bump Mapping Using Reverse Height Map Tracing," employed reverse tracing from the viewer through the height map to find visible surface points, significantly improving realism and accuracy compared to prior non-occluding offsets. This innovation was particularly influential for its balance of visual fidelity and computational efficiency, adapting ray-tracing-like intersection tests to fragment shaders without requiring geometry amplification. Early prototypes demonstrated its efficacy in rendering detailed surfaces like bricks and cobblestones, outperforming simple parallax mapping in handling steep angles and inter-penetrations while maintaining interactive frame rates on commodity hardware.
Subsequent refinements in academic venues further solidified parallax occlusion mapping's role in graphics research. In 2006, Natalya Tatarchuk extended the method with dynamic lighting support and approximate soft shadows via adaptive sampling and level-of-detail adjustments, presented at the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games. This work highlighted the technique's adaptability to GPU rasterization, incorporating binary-search optimizations for ray-height intersections to reduce the required steps from dozens to a handful, thus enhancing performance for complex scenes. Experiments in SIGGRAPH proceedings and related venues showcased its superior illusion of geometry over simpler mapping variants, with qualitative evaluations emphasizing reduced artifacts at grazing view angles and improved depth cues, establishing it as a high-impact method for real-time applications.

Adoption in Industry

Parallax occlusion mapping, first detailed in research from ATI's demo team in the mid-2000s, saw its initial commercial application in video games shortly thereafter. A pivotal milestone occurred with the 2007 release of Crysis, where CryEngine 2 employed POM for terrain effects to simulate detailed surface relief without additional geometry, marking one of the earliest high-profile uses in a major title. Broader industry adoption accelerated alongside the introduction of DirectX 10 in 2006, DirectX 11 in 2009, and OpenGL 3.0 in 2008, as these APIs enabled efficient per-pixel ray marching on consumer GPUs through advanced shader models. ATI (acquired by AMD in 2006) and NVIDIA contributed significantly to POM's optimization, with ATI pioneering GPU-accelerated implementations via HLSL and GLSL shaders in its developer resources, while NVIDIA documented related techniques in its GPU Gems series for real-time rendering pipelines. Major game engines integrated POM through shader libraries, beginning with Unreal Engine 3 around 2008 via a DirectX 10 renderer that supported custom POM shaders for enhanced material detail. Unity followed in the mid-2010s, with developers leveraging Unity 5's surface shaders for POM effects around 2015, prior to official support in Shader Graph. By 2015, advancements in mobile graphics brought POM to lower-end hardware, facilitated by OpenGL ES 3.0's shader capabilities (released in 2012), enabling its deployment in mobile titles without prohibitive performance costs.

Technical Details

Algorithm Fundamentals

Parallax occlusion mapping relies on three key input textures: a diffuse or color map that defines the surface's base appearance, a normal map in tangent space that provides per-pixel surface normals for accurate lighting, and a height (or depth) map, usually an 8-bit grayscale texture whose pixel intensities encode relative depth values from 0 (the highest, undisplaced reference surface) to 1 (the lowest, deepest valleys). These inputs allow the algorithm to simulate geometric detail without modifying the underlying mesh geometry.

At its core, the algorithm processes each fragment in the pixel shader by first computing the view direction in tangent space, which aligns the sampling with the texture's local coordinate frame. This direction guides an iterative offsetting of the fragment's texture coordinates based on depths sampled from the height map, effectively tracing a ray to find the visible surface point and handling self-occlusion. The process approximates the intersection of this ray with the height field, enabling perspective-correct displacement that varies with the viewing angle.

The ray marching direction in texture space is computed as \Delta uv = \frac{view_{xy}}{view_z} \cdot scale, where view is the tangent-space view direction and scale is a tunable parameter that controls the maximum displacement depth. This offset is then divided by the number of steps to determine the increment per iteration, starting from the original texture coordinates as the initial position on the base surface. Subsequent linear steps along this direction sample the height map to resolve occlusions.

To approximate the intersection point efficiently without excessive sampling, the algorithm uses a linear search along the ray with an adaptive number of steps, typically ranging from 8 to 50 and adjusted based on the angle between the surface normal and view direction (more steps at grazing angles, where the parallax offset is largest). This balances accuracy and performance, approximating ray-height field intersections with piecewise linear segments while avoiding artifacts in steep or complex height fields.
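The texture-space increment above can be sketched as a small CPU-side reference. This is a hypothetical helper, not engine code; the parameter names and the 0.05 default scale are illustrative:

```python
def marching_increment(view_ts, height_scale=0.05, steps=16):
    """Per-step texture-space increment for the POM ray march.

    view_ts: tangent-space view direction (x, y, z) with z > 0 toward the
    viewer. Dividing by view.z stretches the offset at grazing angles,
    implementing delta_uv = view.xy / view.z * scale from the text.
    """
    vx, vy, vz = view_ts
    du = vx / vz * height_scale               # total parallax shift in u
    dv = vy / vz * height_scale               # total parallax shift in v
    return (du / steps, dv / steps)           # increment applied per step
```

Note that as view.z approaches zero (a nearly edge-on view), the increment grows without bound, which is why practical shaders clamp the scale or fade the effect at extreme grazing angles.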

Ray Marching Process

The ray marching process in parallax occlusion mapping simulates depth by tracing a view ray through a virtual height field in texture space, iteratively sampling the height map to detect the first point of intersection with the displaced surface. This occurs per fragment in the fragment shader, beginning with the projection of the view ray onto the surface plane, where the ray direction is transformed into tangent-space coordinates for local surface calculations. The process approximates ray-height field intersections using a linear stepping method along the ray's path in texture space, enabling perspective-correct occlusion without additional geometry.

The core iteration proceeds in discrete steps, starting at the original texture coordinates on the reference surface and advancing away from the viewer into the height field. For each step i, the current texture coordinates are offset along the projected direction, and the height map is sampled to obtain depth_{sampled}, a value normalized between 0 and 1 (higher values indicating deeper positions below the reference plane). The corresponding surface height is h_i = 1 - depth_{sampled}, while the ray's own height starts at 1.0 and decreases by a fixed step size \delta = 1/n per step, with n being the number of steps, giving d_i = 1 - i \cdot \delta. If h_i < d_i, the ray is still above the surface and the march continues to the next step; otherwise, an intersection is detected, indicating occlusion by the virtual geometry.

To balance visual quality and performance, the process is limited to an adaptive maximum of 8–50 iterations per pixel, with the exact count adjusted based on the viewing angle (more steps near grazing angles, where the larger parallax offsets would otherwise cause artifacts). Early termination occurs upon detecting an intersection, avoiding unnecessary further sampling and leveraging dynamic flow control in modern shaders for efficiency.
If the maximum number of steps is reached without an intersection, the ray is considered to pass above the surface, and the original texture coordinates are used. Upon intersection, the final coordinates are interpolated between the last two steps for sub-step precision, ensuring smooth transitions and accurate texturing. Let h_{prev} and h_{curr} be the surface heights sampled at the previous and current steps, with corresponding ray heights d_{prev} and d_{curr}. The interpolation factor is t = \frac{d_{prev} - h_{prev}}{(d_{prev} - h_{prev}) + (h_{curr} - d_{curr})}, and the UV offset is linearly blended as finalOffset = prevOffset + t \cdot (currOffset - prevOffset). These coordinates are then used to sample the diffuse, normal, and other maps, providing the basis for subsequent lighting computations.
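Putting the stepping loop and the final interpolation together, a minimal CPU reference of the march might look like the following Python sketch. It assumes the depth convention described in this section (0 at the reference surface, 1 at the deepest point); sample_depth stands in for a height-map fetch, and all names and defaults are illustrative rather than taken from any particular engine:

```python
def pom_march(sample_depth, uv, view_ts, height_scale=0.05, steps=16):
    """CPU reference of the POM linear march with a final secant interpolation.

    sample_depth(u, v) returns a value in [0, 1]: 0 at the undisplaced
    surface, 1 at the deepest point. view_ts is the tangent-space view
    direction with z > 0 toward the eye.
    """
    vx, vy, vz = view_ts
    step_u = vx / vz * height_scale / steps   # per-step UV increment
    step_v = vy / vz * height_scale / steps
    delta = 1.0 / steps                       # per-step drop in ray height

    u, v = uv
    ray_h = 1.0                               # ray starts at surface level
    surf_h = 1.0 - sample_depth(u, v)         # height of displaced surface
    prev = (u, v, ray_h, surf_h)

    for _ in range(steps + 1):
        if surf_h >= ray_h:                   # ray has dipped below surface
            pu, pv, pray, psurf = prev
            above = pray - psurf              # ray clearance at previous step
            below = surf_h - ray_h            # penetration depth at this step
            t = above / (above + below) if (above + below) > 0 else 0.0
            return (pu + t * (u - pu), pv + t * (v - pv))
        prev = (u, v, ray_h, surf_h)
        u -= step_u                           # march away from the viewer
        v -= step_v
        ray_h -= delta
        surf_h = 1.0 - sample_depth(u, v)
    return (u, v)                             # no hit: keep last coordinates
```

As a sanity check, a flat height field of constant depth d yields a UV shift of view.xy / view.z · scale · d, matching the single-sample parallax formula for that trivial case.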

Differences from Bump and Normal Mapping

Bump mapping, introduced by James Blinn in 1978, simulates fine surface details by perturbing the interpolated surface normals based on a height-derived perturbation map, thereby altering lighting calculations to mimic bumps and wrinkles without modifying the underlying geometry. This technique provides a static impression of relief under varying illumination but lacks any view-dependent effects, such as parallax shifts or self-occlusion, resulting in silhouettes that reveal the flat base surface from oblique angles. Normal mapping builds upon bump mapping by encoding full normal vectors in the RGB channels of a tangent-space texture, enabling more precise per-pixel shading that accounts for surface orientation without the approximations inherent in scalar height perturbations. Despite these improvements, normal mapping maintains a geometrically flat surface, offering no true parallax; details appear consistent regardless of viewpoint, failing to simulate occlusion or horizontal displacement as the observer moves.

Parallax occlusion mapping advances these methods by incorporating a ray-marching process through a height field to compute view-dependent displacements, introducing authentic parallax shifts where surface features appear to move relative to the background based on the camera's position. This enables self-occlusion, as elevated elements like brick edges can mask lower areas, creating dynamic hiding and revealing effects that enhance geometric realism. Visually, while bump and normal mapping yield fixed shading patterns that do not evolve with rotation, exposing their planar nature, parallax occlusion mapping delivers evolving depth cues, such as protruding cobblestones that shift and partially obscure adjacent grooves, providing a more immersive sense of three-dimensionality on flat meshes.

Evolution from Parallax Mapping

Parallax mapping, introduced by Kaneko et al. in 2001, provides a simple enhancement to normal mapping by applying a linear offset to texture coordinates (UVs) based on the viewer's angle relative to the surface and the height value from a height map. This offset simulates depth by shifting the sampled texture position, creating an illusion of surface protrusion or recession without altering the underlying geometry. However, the technique lacks occlusion handling, resulting in prominent "swimming" artifacts where displaced textures appear to float above the surface or fail to clip properly behind raised features.

Parallax occlusion mapping (POM) evolved directly from this foundation in 2004, as developed by Brawley and Tatarchuk, by incorporating iterative ray stepping to address the core limitations of basic parallax mapping. In POM, a virtual ray is marched through the height field in discrete steps, sampling multiple height values along the parallax offset direction to locate the precise intersection point where the ray first encounters an occluding surface. This process prevents interpenetration of the virtual geometry, ensuring that rendered pixels respect depth discontinuities and do not project through solid features.

The key distinction lies in the sampling approach: while parallax mapping relies on a single offset computation per fragment, POM performs multiple iterative samples (typically 8 to 50, adjusted dynamically by the angle between the surface normal and view direction) to achieve accurate intersection resolution. This multi-step refinement significantly reduces errors at steep viewing angles, where basic parallax mapping distorts textures excessively due to its single-sample approximation. Both methods use height maps to encode surface relief, but POM's iterative search elevates the technique to provide more geometrically faithful rendering. In terms of artifacts, parallax mapping often exhibits unrealistic floating of textures over depressions or protrusions, as it cannot resolve self-occlusions, leading to visual inconsistencies during camera movement. POM mitigates these by clipping textures realistically at occlusion boundaries through its intersection-based sampling, yielding smoother and more convincing depth cues without the swimming effect.
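For contrast, the single-sample offset of basic parallax mapping fits in a few lines. The sketch below is illustrative (assuming a depth map sampled in [0, 1] with 0 at the surface, and a tangent-space view direction with positive z), not code from the original paper:

```python
def parallax_offset(sample_depth, uv, view_ts, height_scale=0.05):
    """Single-sample parallax mapping in the style of Kaneko et al.:
    one depth fetch, one linear UV shift, no occlusion test.

    sample_depth(u, v) returns depth in [0, 1] (0 = surface, 1 = deepest);
    view_ts is a tangent-space view direction with z > 0.
    """
    depth = sample_depth(uv[0], uv[1])
    vx, vy, vz = view_ts
    # Shift the lookup opposite the view direction, scaled by local depth.
    return (uv[0] - vx / vz * height_scale * depth,
            uv[1] - vy / vz * height_scale * depth)
```

Because the shift is derived from the depth at the original coordinates rather than at the displaced point, the result is only correct for slowly varying height fields, which is exactly the limitation POM's iterative march removes.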

Contrast with Displacement Mapping

Displacement mapping alters the actual geometry of a model by modifying vertex positions or employing tessellation to subdivide surfaces based on height data from a displacement map, thereby generating real geometric detail that supports accurate self-shadowing, silhouette edges, and physical interactions such as collisions. In contrast, parallax occlusion mapping (POM) operates solely in texture space by perturbing UV coordinates through ray marching against a height map in the fragment shader, approximating depth and occlusion without modifying the underlying mesh, which results in faster performance but precludes genuine global effects like cast shadows or true silhouettes. Performance-wise, displacement mapping demands significant computational resources due to the high tessellation levels needed for smooth geometric displacement, often implemented via hull and domain shaders in Direct3D 11, whereas POM's per-pixel computations in the fragment shader enable efficient rendering on mid-range hardware without increasing polygon counts. Both techniques leverage height fields to simulate surface relief, but displacement produces tangible geometry while POM yields a visual illusion confined to the surface plane. POM is typically suited for adding mid-scale detail to surfaces in real-time applications like games, whereas displacement mapping excels in high-fidelity scenarios requiring close-up geometric accuracy, such as detailed props or cinematic renders.

Applications and Usage

Implementation in Real-Time Rendering

Parallax occlusion mapping (POM) is typically implemented in the fragment shader of modern graphics pipelines using shading languages such as GLSL for OpenGL or Vulkan and HLSL for Direct3D. The core logic involves uniforms for controlling the height scale, which adjusts the apparent depth (often set to values like 0.05 to 0.1), and the maximum number of steps (commonly 16 to 64 layers, balancing quality and performance). In GLSL, the fragment shader samples a height map to perform iterative ray marching in tangent space, offsetting texture coordinates based on the view direction until the ray intersects the virtual surface height. HLSL implementations follow an analogous structure, with shaders handling the marching loop and texture lookups, as demonstrated in real-time engines employing POM techniques.

Integration into the rendering pipeline requires proper tangent-space setup, achieved by passing attributes for position, normal, tangent, and bitangent to the vertex shader, where a tangent-bitangent-normal (TBN) matrix is computed to transform the view and fragment positions into tangent space. This ensures accurate offsets relative to the surface orientation. POM is compatible with deferred rendering pipelines, where the calculations occur during the geometry pass to populate the G-buffer with offset-UV-derived normals, depths, and material properties, allowing subsequent passes to utilize the enhanced surface details without additional modifications.

To optimize performance in real-time applications, level-of-detail (LOD) strategies dynamically adjust the number of marching steps based on factors like viewing distance or angle, reducing layers (e.g., from 32 near the camera to 8 at a distance) to minimize fragment-shader computations while preserving visual fidelity. Precomputed height maps, generated offline from displacement data, further reduce runtime overhead by avoiding on-the-fly height calculations and enabling efficient texture sampling with linear filtering. Artifacts such as edge clipping can be mitigated by discarding fragments whose offset coordinates fall outside the [0, 1] UV range.
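The TBN transform mentioned above reduces to three dot products. A minimal CPU sketch, assuming unit-length, mutually orthogonal basis vectors (names illustrative):

```python
def to_tangent_space(vec, tangent, bitangent, normal):
    """Express a world-space vector in the TBN basis, as a vertex shader
    would before handing the view direction to the fragment stage.
    Assumes the three basis vectors are unit length and orthogonal."""
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    # Multiplying by the transposed TBN matrix = one dot product per axis.
    return (dot(vec, tangent), dot(vec, bitangent), dot(vec, normal))
```

In a shader this is usually written as a mat3 multiply; the per-axis dot products here make explicit why the resulting z component measures how head-on the view is, which the marching loop then divides by.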
Game engines provide built-in support for POM to streamline integration. In Godot, the BaseMaterial3D class includes properties like heightmap_deep_parallax to enable POM mode, heightmap_scale for depth control (default 5.0, representing centimeters), and heightmap_max_layers/heightmap_min_layers for LOD-based step counts, allowing direct assignment of a heightmap texture for real-time rendering. Blender's Eevee renderer supports custom POM materials through its node-based shader system, where users can construct the ray marching logic using texture nodes and math operations in the Principled BSDF setup for viewport and final renders.

Examples in Video Games and Simulations

Parallax occlusion mapping (POM) has been employed in several video games to enhance surface details, particularly on terrain and props, providing a sense of depth without requiring additional geometry. In Farming Simulator 22 (2021), POM is utilized on ground surfaces such as fields to create more realistic and dynamic textures by leveraging height maps for boosted surface detail. Similarly, the Metro series, including Metro Exodus (2019), incorporates POM across surfaces to simulate intricate details like rusty metal and debris, contributing to the post-apocalyptic environments' immersive quality through the 4A Engine's advanced texturing capabilities.

In virtual reality, POM plays a key role in enhancing immersion by adding depth to interactive elements. Half-Life: Alyx (2020), built on the Source 2 engine, employs parallax techniques to provide realistic depth on props and environmental surfaces, allowing players to perceive subtle height variations in close-up interactions without compromising performance. POM has likewise been applied to building interiors, including brick walls and structural elements, to simulate relief on flat meshes. These implementations demonstrate POM's impact on player engagement, as the added visual depth fosters greater immersion in both games and simulations while maintaining smooth frame rates. In architectural visualization demos, such as those showcasing brick walls and tiled floors, POM integrates seamlessly to elevate static scenes into more lifelike representations without significant performance overhead.

Artists often combine POM with physically based rendering (PBR) materials in their workflows to achieve authentic wear-and-tear effects on surfaces. This approach uses height maps from PBR texture sets alongside ambient occlusion and normal maps, enabling detailed simulations of aged materials like corroded metal or weathered stone in game assets. By layering POM over PBR shaders, creators can iteratively refine textures in tools like Substance Designer, ensuring consistent realism across diverse environments in both video games and simulations.

Limitations and Extensions

Performance Challenges

Parallax occlusion mapping imposes significant computational demands, primarily due to the iterative ray marching process, which requires multiple dependent texture samples per fragment to locate the intersection with the height field. Implementations commonly employ 16 to 32 samples per fragment for balanced quality and performance, though this can extend up to 50 samples for high detail or grazing viewing angles to minimize artifacts. These repeated texture lookups introduce latency on GPUs, as each sample depends on the previous one's result, exacerbating costs in fragment shaders.

Benchmarks on earlier hardware illustrate the scale: on an NVIDIA GeForce GTX 560 Ti, POM incurs rendering times ranging from 0.7 ms at near-perpendicular viewing angles to 1.3 ms at grazing (90-degree) viewing angles for linear-search variants, rising with anisotropic filtering or steeper angles due to increased step counts. On earlier GPUs like the ATI Radeon X1600 XL, detailed POM scenes achieve only about 32 FPS for high-resolution models, highlighting the technique's sensitivity to hardware capabilities and scene complexity. On modern GPUs, however, these costs have decreased substantially thanks to improved shader throughput and texture caching.

Key bottlenecks arise from dynamic branching in shaders that adapt sample counts based on view direction, leading to warp divergence, where threads in a GPU warp execute different paths, reducing overall parallelism and efficiency. The per-pixel workload is further amplified on steep surfaces, where grazing angles necessitate more marching steps per fragment. While basic optimizations like early z-culling can discard non-visible fragments to limit unnecessary computations, these do not eliminate the inherent overhead in demanding rendering scenarios.
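The angle-dependent sample count that drives this branching is typically a simple linear blend between a minimum and maximum budget. A hedged sketch of one plausible formulation (the linear blend and the 8/50 bounds are illustrative choices drawn from the ranges quoted above):

```python
def adaptive_sample_count(n_dot_v, min_samples=8, max_samples=50):
    """Angle-adaptive step count in the spirit of Tatarchuk's dynamic
    sampling: head-on views (|N.V| near 1) get the minimum, grazing views
    (|N.V| near 0) the maximum, which is why steep-angle fragments
    dominate POM's cost."""
    t = max(0.0, min(1.0, abs(n_dot_v)))      # clamp the cosine term
    return round(max_samples + (min_samples - max_samples) * t)
```

Because neighboring fragments on a curved surface can land on different counts, threads within one warp may loop different numbers of times, which is the divergence cost described above.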

Modern Improvements and Variants

One notable enhancement to parallax occlusion mapping involves contact-hardening shadow techniques and steep parallax mapping variants, which preserve sharp edges while minimizing artifacts at grazing angles. In Tatarchuk's implementation, dynamic sampling rates adjust the number of height-field evaluations (ranging from 8 to 50 samples) based on the viewing angle, achieving higher precision in ray-height field intersections without stair-stepping effects. This approach filters visibility samples to approximate soft shadows, enhancing realism for detailed surfaces like urban environments.

A related variant, relief mapping, addresses performance limitations by employing a binary search for ray-height field intersections after an initial linear search, significantly reducing computational overhead while preserving accuracy. The underlying relief texture mapping approach, introduced by Oliveira et al. in 2000, factorizes the warping process into 1-D pre-warps along rows and columns, enabling efficient simulation with fewer operations than general image warping. Later refinements, such as those by Policarpo et al. (2006), extend this to multilayer relief for non-height-field details, using binary subdivision to refine intersection points iteratively.

Self-shadowing extensions further improve POM by incorporating per-pixel occlusion for dynamic lighting, allowing extruded surface features to cast realistic shadows onto adjacent areas. Tatarchuk's 2006 method integrates approximate soft self-shadows via visibility-term filtering, supporting real-time updates for moving lights and deformable height fields such as water or impact deformation. These extensions leverage GPU shader instructions (e.g., PS 3.0 dynamic flow control) to optimize branching, yielding up to 50% performance gains over fixed-step approaches. More recently, engines such as Unity's High Definition Render Pipeline (HDRP) have added built-in support for parallax occlusion mapping in their material systems.
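The binary-search refinement used by relief-mapping variants can be sketched as follows, assuming a linear march has already bracketed the intersection between two fractional step indices. Conventions follow the depth-map description earlier in the article (a depth sample in [0, 1], 0 at the surface); all names are illustrative:

```python
def binary_search_refine(sample_depth, uv0, step_uv, ray0, delta, lo, hi,
                         iters=6):
    """Relief-mapping-style refinement: given step indices lo (ray above
    the surface) and hi (ray below it), bisect the interval instead of
    taking a single secant step. Each iteration halves the bracket, so a
    handful of extra samples yields sub-step precision."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        u = uv0[0] - step_uv[0] * mid         # position midway along the ray
        v = uv0[1] - step_uv[1] * mid
        ray_h = ray0 - delta * mid            # ray height at that position
        surf_h = 1.0 - sample_depth(u, v)
        if surf_h >= ray_h:
            hi = mid                          # ray below surface: move back
        else:
            lo = mid                          # ray above surface: move on
    mid = 0.5 * (lo + hi)
    return (uv0[0] - step_uv[0] * mid, uv0[1] - step_uv[1] * mid)
```

Six bisections shrink the bracket by a factor of 64, usually well below one texel, which is why the linear-plus-binary combination needs far fewer total samples than a fine linear march of equal precision.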

References

  1. [1]
    Practical parallax occlusion mapping with approximate soft shadows ...
    This paper presents a per-pixel ray tracing algorithm with dynamic lighting of surfaces in real-time on the GPU. First, we propose a method for increased ...Missing: origin | Show results with:origin<|separator|>
  2. [2]
    Parallax Mapping - LearnOpenGL
    Because Parallax Occlusion Mapping gives almost the same results as Relief Parallax Mapping and is also more efficient it is often the preferred approach.
  3. [3]
    [PDF] Parallax Mapping - Computer Graphics II
    Parallax mapping is a displacement mapping techniques (displace vertices based on geom. information stored inside a texture). • One way to do this is, ...
  4. [4]
    [PDF] Chapter 5: Practical Parallax Occlusion Mapping with Approximate ...
    This chapter presents a per-pixel ray tracing algorithm with dynamic lighting of surfaces in real-time on the GPU. First, we will describe a method for ...
  5. [5]
    What is parallax mapping? - Adobe Substance 3D
    What is parallax mapping? Parallax mapping is a technique that adds depth and detail to textured surfaces for computer-generated graphics.
  6. [6]
    [PDF] Practical Parallax Occlusion Mapping for Highly Detailed Surface ...
    ☢ What are we trying to solve? ☢ Quick review of existing approaches for surface detail rendering. ☢ Parallax occlusion mapping details. ☢ Discuss integration ...Missing: origin | Show results with:origin
  7. [7]
    Practical dynamic parallax occlusion mapping - ACM Digital Library
    Practical dynamic parallax occlusion mapping. Author: Natalya Tatarchuk ... Practical parallax occlusion mapping with approximate soft shadows for detailed ...Missing: original | Show results with:original
  8. [8]
    [PDF] Relief Texture Mapping
    ABSTRACT. We present an extension to texture mapping that supports the representation of 3-D surface details and view motion parallax.
  9. [9]
    [PDF] Detailed Shape Representation with Parallax Mapping - Tachi Lab
    Dec 5, 2001 · In this paper, we propose Parallax Mapping, a simple method to motion parallax effects on a polygon. This method has very fast per-pixel shape ...
  10. [10]
    Dynamic parallax occlusion mapping with approximate soft shadows
    This paper presents a technique for mapping relief textures onto arbitrary polygonal models in real time. In this approach, the mapping of the relief data ...Missing: origin | Show results with:origin
  11. [11]
    [PDF] Practical Parallax Occlusion Mapping for Highly Detailed Surface ...
    – Tatarchuk, N. Dynamic Parallax Occlusion Mapping with Approximate Soft Shadows. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games. Redwood City ...
  12. [12]
    Crysis 2 - DX9 vs DX11 - Parallax Occlusion Mapping - YouTube
    Jul 22, 2011 · Parallax Occlusion Mapping was also available in CryEngine 2 and has been re-introduced in CryEngine 3 for some of the details which will be ...
  13. [13]
    [PDF] Parallax Occlusion in Direct3D 11 - D3Dcoder
    Feb 11, 2012 · Parallax Occlusion Mapping is a technique that gives results similar to tessellation displacement mapping, and can be implemented in DirectX 10; ...
  14. [14]
    Chapter 8. Per-Pixel Displacement Mapping with Distance Functions
    In this chapter, we present distance mapping, a technique for adding small-scale displacement mapping to objects in a pixel shader.
  15. [15]
    New DirectX10 Unreal Renderer available for Deus Ex. - TTLG
    Apr 30, 2010 · * Parallax occusion mapping: either the standard detail textures or an external height map can be used for parallax occlusion mapping. * For ...
  16. [16]
    Add Parallax Occlusion/Relief mapping support to Standard Shader
    Aug 2, 2015 · It's just better than normal maps, for example it allows for something that looks like real bricks, but is only a cube model-wise: ...
  17. [17]
    About OpenGL ES 3.0 - Arm Developer
    This book introduces a number of features from OpenGL ES 3.x and the Android Extension Pack, that you can use with a Mali GPU.
  18. [18]
    [PDF] James F. Blinn Caltech/JPL Abstract Computer generated ... - Microsoft
    SIMULATION OF WRINKLED SURFACES. James F. Blinn. Caltech/JPL. Abstract. Computer generated shaded images have reached an impressive degree of realism with the ...
  19. [19]
    [PDF] Detailed Shape Representation with Parallax Mapping
    Parallax Mapping is a method to represent motion parallax on a polygon, using per-pixel texture coordinate addressing to enhance rendering quality.
  20. [20]
    Tessellation Stages - Win32 apps - Microsoft Learn
    Sep 15, 2020 · The tessellation technique implemented in the Direct3D 11 pipeline also supports displacement mapping, which can produce stunning amounts of ...
  21. [21]
    [PDF] Review of Displacement Mapping Techniques and Optimization
    May 1, 2012 · Parallax Occlusion Mapping [Brawley and Tatarchuk 2004; Tatarchuk 2006] is an improvement of parallax mapping. The main difference between these ...
  22. [22]
    Chapter 9. Deferred Shading in S.T.A.L.K.E.R. | NVIDIA Developer
    This chapter is a post-mortem of almost two years of research and development on a renderer that is based solely on deferred shading and 100 percent dynamic ...
  23. [23]
    parallax occlusion mapping in a deferred renderer - GameDev.net
    Hi, I'm trying to implement parallax occlusion mapping as in: http://developer.amd...ketch-print.pdf. However the texture coordinates that are the results ...
  24. [24]
    BaseMaterial3D — Godot Engine (stable) documentation in English
    The number of layers to use for parallax occlusion mapping when the camera is far away from the material. Higher values result in a more convincing depth effect ...
  25. [25]
    Materials - Blender 4.5 LTS Manual
    EEVEE does not support per-fragment (pixel) sorting or per-triangle sorting. Only per-object sorting is available and is automatically done on all transparent ...
  26. [26]
    First Look at Parallax Occlusion Mapping in Farming Simulator 22
    Jul 1, 2021 · Parallax Occlusion will make the terrain and other surfaces in the game look more detailed and dynamic if the feature is turned on in the settings.
  27. [27]
    4A Engine | Metro Wiki - Fandom
    Metro 2033 featured superior volumetric fog, double PhysX precision, object blur, sub-surface scattering for skin shaders, parallax mapping on all surfaces, and ...
  28. [28]
    Parallax mapping - Valve Developer Community
    Sep 6, 2025 · Parallax mapping (also known as offset mapping or virtual displacement mapping) is a shading technique that displaces the individual pixel height of a surface.
  29. [29]
    Parallax Occlusion Tutorial for Unreal Engine - ArtStation
    Sep 4, 2019 · This is the process I used to generate POM's (parallax occlusion maps) for the interiors of the buildings in Archangel and Archangel: Hellfire.
  31. [31]
    Creation of Parallax Occlusion Mapping (POM) in details
    Sep 15, 2024 · Parallax Occlusion Mapping (POM) is a complex texture-based approach to add more details for meshes. With POM we get more depth visually than we ...
  32. [32]
    Learn Substance 3D Designer The PBR Guide - Part 2 - Adobe Learn
    The PBR shader will also use ambient occlusion, normal and possibly height for parallax or displacement mapping (Figure 18). In the metal/roughness workflow ...
  33. [33]
    [PDF] PBR Workflow Implementation for Game Environments - IS MUNI
    May 21, 2017 · In the practical part, I have created a game environment from scratch and demonstrated modern techniques such as normal baking,. PBR shaders, ...
  34. [34]
    Relief texture mapping | Proceedings of the 27th annual conference ...
    Our approach is very simple: a relief texture (texture extended with an orthogonal displacement per texel) is mapped onto a polygon using a two-step process: ...
  35. [35]
    [PDF] Relief Mapping of Non-Height-Field Surface Details
    The coordinates of the intersection point are refined using a binary search, and then used to sample the normal map and the color texture. Figure 2: Ray ...
  36. [36]
    Chapter 18. Relaxed Cone Stepping for Relief Mapping
    During rendering, the depth map can be dynamically rescaled to achieve different effects, and correct occlusion is achieved by properly updating the depth ...