
Shadow mapping

Shadow mapping is a technique for rendering shadows in three-dimensional scenes, introduced by Lance Williams in 1978 as a method to cast curved shadows onto curved surfaces using depth buffering. The approach involves two primary rendering passes: first, generating a depth map (or shadow map) from the perspective of the light source by rendering the scene and storing the depth values of visible surfaces; second, during the main scene render from the viewer's perspective, transforming fragment coordinates into light space and comparing their depths against the shadow map to determine whether they are occluded and thus shadowed.

This image-based method offers significant advantages for interactive applications, such as video games and simulations, due to its linear computational cost relative to scene complexity—approximately twice that of standard rendering—and its compatibility with hardware acceleration via graphics APIs such as OpenGL and Direct3D. It supports dynamic shadows for both static and moving objects without requiring additional geometric primitives, making it suitable for large-scale environments. However, shadow mapping is prone to artifacts, including aliasing from resolution limitations, self-shadowing "acne" on surfaces due to depth precision errors, and perspective aliasing, where shadow resolution varies unevenly across the scene; these issues are commonly mitigated through techniques like depth bias, percentage-closer filtering, and normal offsetting.

Over time, variants have addressed these limitations to enhance quality and performance. Cascaded shadow maps divide the view frustum into multiple depth ranges, allocating higher resolution to nearer cascades for improved detail in foreground shadows. Variance shadow maps store depth variance in the map to enable soft shadows via statistical sampling, reducing aliasing without multiple samples per fragment. Other improvements include adaptive bias adjustments and integration with modern GPU features for handling complex light types, such as point lights via cube-mapped shadow maps. These evolutions have made shadow mapping a foundational technique in real-time rendering pipelines, including those in engines like Unreal Engine and Unity.

Fundamentals

Definition and History

Shadow mapping is a rasterization-based technique used to approximate hard shadows in rendered scenes by generating a depth image, known as the shadow map, from the viewpoint of a light source and then comparing depths during the primary rendering pass to determine shadowed regions. This image-space method leverages depth buffering to efficiently handle occlusions without explicit ray tracing, making it suitable for both static and dynamic scenes. The technique was invented by Lance Williams in 1978, detailed in his seminal paper "Casting Curved Shadows on Curved Surfaces," which introduced the core idea of projecting depth information from a light's perspective to cast shadows onto arbitrary surfaces, including curved ones.

Initially, shadow mapping found application in offline rendering for pre-computed animations and visual effects, particularly in the 1980s as growing computational power allowed for more complex scene illumination in computer graphics. For instance, researchers extended the method in 1987 to handle antialiased shadows using depth maps, enabling higher-quality results in ray-traced environments like those in early computer-animated films.

Shadow mapping transitioned to real-time rendering in the late 1990s and early 2000s, driven by advancements in graphics hardware that supported programmable shaders and depth textures. The GeForce 3 GPU, released in 2001, provided hardware acceleration for shadow maps via DirectX 8 and OpenGL extensions, allowing efficient implementation in interactive applications. This milestone facilitated adoption in video games, which marked some of the earliest uses of shadow mapping for dynamic shadows. By the mid-2000s, integration into the standard rendering pipelines of OpenGL and Direct3D enabled widespread use for handling multiple dynamic lights in real-time scenarios, evolving the technique from its offline origins into a cornerstone of modern graphics engines.

Principles of Shadows and Shadow Maps

Shadows in optical physics arise from the occlusion of light by intervening objects, preventing direct illumination from reaching certain surfaces. When an opaque object blocks rays traveling from a light source to a receiving surface, it casts a shadow consisting of two distinct regions: the umbra, where the light source is completely obstructed and no direct light reaches the surface, and the penumbra, where partial occlusion occurs, allowing some light rays to graze the edges of the occluder and create a transitional zone of reduced intensity. This formation depends on the relative positions of the light source, occluder, and receiver, with the umbra being the darkest core and the penumbra providing a softer boundary.

The nature of shadows—hard or soft—fundamentally stems from the size and distance of the light source relative to the occluder. A point light source, idealized as having zero extent, produces sharp, hard shadows with no penumbra because all rays are either fully blocked or fully transmitted, resulting in binary occlusion. In contrast, extended light sources, such as area lights whose size is non-negligible compared to the occluder distance, generate soft shadows featuring prominent penumbrae, as varying portions of the source remain visible around the occluder's edges, blending the transition from full shadow to full illumination. Larger source sizes or closer occluder distances widen the penumbra, enhancing realism but increasing computational complexity in simulation.

In computer graphics, shadow maps digitally represent these principles as a depth texture capturing the minimum depth from the light source to visible surfaces within its view frustum, serving as a reference for determining shadowed regions during rendering. This texture encodes, for each texel (a pixel in the depth texture), the closest distance along rays emanating from the light, effectively approximating the umbra boundary by comparing scene depths against the stored values. The technique relies on rasterization pipelines that employ projective transformations to map world coordinates into the light's view via view-projection matrices, which define the light frustum as a volume bounding the illuminated region. Depth buffering, a core prerequisite in this rasterization process, maintains a per-pixel buffer storing the minimum depth value encountered during scene traversal, resolving visibility by discarding fragments farther from the viewpoint (or light, in shadow map generation). Projective texture mapping ensures accurate alignment by applying homogeneous transformations—combining view matrices (positioning the light as camera) and projection matrices (perspective or orthographic)—to clip and normalize coordinates within the frustum, enabling the shadow map to align seamlessly with the light's optical projection. This foundation allows shadow maps to efficiently proxy real-world occlusion without explicit tracing of every light ray.

Core Algorithm

Generating the Shadow Map

The generation of the shadow map constitutes the first pass of the shadow mapping algorithm, where the scene is rendered solely from the perspective of the light source to capture depth information about occluding geometry. This process uses the light's viewpoint to determine visible surfaces, storing their distances in a depth texture that serves as the shadow map. Introduced by Williams in 1978, this depth-only rendering leverages Z-buffer techniques to efficiently compute the nearest surface depth for each pixel in the light's view frustum.

To initiate the generation, the view matrix for the light is established by positioning a virtual camera at the light source and orienting it along the light's direction, transforming world-space coordinates into light-view space. The projection matrix is then configured based on the light type: an orthographic projection for directional lights to model parallel rays emanating from an infinite distance, and a perspective projection for spot lights to simulate the conical volume illuminated by the source with a defined position and cone angle. For point lights, which emit in all directions, a perspective projection is applied across the six faces of a cube map to encompass the full 360-degree surroundings, though basic implementations often limit this to simpler cases.

The scene is subsequently rendered using these matrices, employing a fragment shader or render state that discards color output and writes only the depth values to the attached depth buffer. These depths are stored in a texture, typically at a resolution like 1024×1024 pixels, which balances shadow detail against rendering overhead. During rendering, the depth value z_{\text{light}} for each fragment is derived from the light-space position of the world vertex, computed as
z_{\text{light}} = \text{projection}_{\text{light}} \cdot \text{view}_{\text{light}} \cdot \mathbf{pos}_{\text{world}},
and then normalized and clamped to the [0,1] range suitable for texture storage, representing the relative distance from the light to the surface. This value records the minimum depth (closest occluder) per texel via depth testing, ensuring the shadow map encodes only the frontmost geometry visible to the light.
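The following sketch illustrates this first pass in OpenGL. It is a minimal example assuming an initialized GL context and glm-style matrices; `renderSceneDepthOnly` is a placeholder for the application's depth-only draw calls.

```cpp
// Minimal sketch of the depth-only pass (OpenGL). Assumes a valid GL
// context; renderSceneDepthOnly() is a hypothetical helper that issues
// draw calls with a depth-only shader bound.
GLuint shadowFBO, shadowTex;
const int SHADOW_RES = 1024;

glGenTextures(1, &shadowTex);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F,
             SHADOW_RES, SHADOW_RES, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &shadowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, shadowTex, 0);
glDrawBuffer(GL_NONE);   // no color output: depth values only
glReadBuffer(GL_NONE);

// First pass: render the scene from the light's point of view.
glViewport(0, 0, SHADOW_RES, SHADOW_RES);
glClear(GL_DEPTH_BUFFER_BIT);
renderSceneDepthOnly(lightProjection * lightView);  // placeholder
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```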
For scenes with multiple light sources, shadow maps are generated sequentially for each active light, producing distinct depth textures that can later be sampled independently during scene rendering. This per-light approach accommodates varying light types and positions but scales the computational cost with the number of shadow-casting lights, often necessitating optimizations like limiting shadows to key sources in real-time applications.

Rendering the Scene with Shadows

In the rendering pass from the camera's viewpoint, the scene is drawn normally, but with additional computations to incorporate shadows using the previously generated shadow map. For each fragment, its world-space position is transformed into light space by applying the light's view-projection matrix, yielding texture coordinates and a depth value in light space. These coordinates are used to sample the corresponding depth from the shadow map, and the fragment's light-space depth is compared to this sampled value: if the fragment's depth exceeds the sampled depth, the fragment is deemed to be in shadow and receives reduced illumination from that light source. This process effectively projects the shadow map onto the scene geometry to identify shadowed regions.

To mitigate self-shadowing artifacts, known as shadow acne, where surfaces incorrectly shadow themselves due to precision limitations in depth comparisons, a depth bias is applied to the fragment's depth value before the comparison. In the original formulation, this is a small constant subtracted from the transformed depth to push the surface slightly closer to the light, preventing erroneous shadowing while potentially introducing minor edge discrepancies. Modern implementations often employ a slope-scale depth bias, which dynamically adjusts the bias based on the surface's slope relative to the light direction—steeper slopes receive larger biases to better handle grazing angles and reduce acne without excessive detachment of shadows from their casters.

The result of the depth comparison yields a binary shadow factor (0 for shadowed, 1 for lit), which is multiplied by the light's contribution in the shading equation to attenuate illumination in shadowed areas. For instance, the shadow factor can be computed as \text{compare}(\text{depth}_\text{map}, \text{depth}_\text{fragment} - \text{bias}), where the compare function returns 1 if the fragment is visible to the light and 0 otherwise; this factor scales the diffuse, specular, or other terms from that light in the final fragment color. This integration occurs in the fragment shader, allowing shadows to be blended seamlessly with the rest of the shading model without significantly altering the core rendering pipeline.
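As an illustration of this second pass, the following CPU-side sketch mirrors the per-fragment logic that would normally run in a fragment shader; the math types (Vec3, Vec4, Mat4) and the `sampleDepth` helper are hypothetical stand-ins.

```cpp
// Hypothetical math types (Vec3, Vec4, Mat4 with operator*) and a
// sampleDepth(u, v) helper that reads the stored shadow-map depth.
float shadowFactor(const Vec3& worldPos, const Mat4& lightViewProj,
                   float bias = 0.005f) {
    Vec4 clip = lightViewProj * Vec4{worldPos.x, worldPos.y, worldPos.z, 1.0f};
    // Perspective divide to NDC, then remap [-1,1] -> [0,1] (OpenGL convention).
    float u = (clip.x / clip.w) * 0.5f + 0.5f;
    float v = (clip.y / clip.w) * 0.5f + 0.5f;
    float d = (clip.z / clip.w) * 0.5f + 0.5f;  // fragment depth in light space
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f || d > 1.0f)
        return 1.0f;  // outside the light frustum: treat as lit
    // Lit (1.0) if the fragment is not farther than the closest occluder.
    return (d - bias <= sampleDepth(u, v)) ? 1.0f : 0.0f;
}
```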

Implementation Challenges

Coordinate Transformations

In shadow mapping, coordinate transformations are essential to project scene geometry from world space into the light's view for depth comparison during rendering. The process begins by transforming a world-space position \mathbf{p}_w = (x_w, y_w, z_w, 1)^T into light clip space using the light's view matrix \mathbf{V}_L and projection matrix \mathbf{P}_L, resulting in \mathbf{p}_c = \mathbf{P}_L \mathbf{V}_L \mathbf{p}_w. This operation positions the geometry relative to the light source, analogous to the camera's view-projection in standard rendering. The homogeneous clip-space coordinates \mathbf{p}_c = (x_c, y_c, z_c, w_c)^T then undergo a perspective divide to obtain normalized device coordinates (NDC): \mathbf{p}_n = (x_c / w_c, y_c / w_c, z_c / w_c, 1)^T, where the NDC range for x and y is [-1, 1] across the major graphics APIs (OpenGL and Direct3D), while the z range depends on the API: [-1, 1] in OpenGL and [0, 1] in Direct3D.

To map these to texture coordinates in the [0, 1] range suitable for shadow map sampling, a scale-and-bias operation is applied, which varies by API. In OpenGL, \mathbf{t} = 0.5 \cdot \mathbf{p}_n + 0.5, yielding t_x = 0.5 \cdot (x_c / w_c) + 0.5, t_y = 0.5 \cdot (y_c / w_c) + 0.5, and t_z = 0.5 \cdot (z_c / w_c) + 0.5. In Direct3D, the x and y components use t_x = 0.5 \cdot (x_c / w_c) + 0.5 and t_y = -0.5 \cdot (y_c / w_c) + 0.5 (accounting for the inverted y-axis), while t_z = z_c / w_c (no scaling, as the NDC z is already in [0, 1]). This can be expressed compactly in the OpenGL convention as:
\mathbf{t} = \frac{\mathbf{P}_L \mathbf{V}_L \mathbf{p}_w}{w_c} \cdot 0.5 + 0.5
The operations are typically implemented using an API-specific bias matrix multiplied into the clip-space transform before the divide.

For directional lights, which model parallel rays like sunlight, an orthographic projection \mathbf{P}_L is used instead of a perspective one, simplifying the transformation since w_c = 1 for all points and eliminating the perspective divide's nonlinear effects on depth. This results in a linear z-depth distribution in NDC, aiding uniform sampling across the shadow map. A crop matrix may further align the light's frustum to the camera's view frustum, defined as:
\mathbf{C} = \begin{pmatrix} S_x & 0 & 0 & O_x \\ 0 & S_y & 0 & O_y \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},
where S_x, S_y scale to fit the view extents and O_x, O_y offset for centering, ensuring efficient texture usage without extraneous areas.

Challenges in these transformations include the perspective divide, which can amplify numerical instability near w_c \approx 0 (e.g., at the light's near plane), potentially causing incorrect sampling. Frustum misalignment may also lead to artifacts: if the light frustum inadequately covers the view frustum, shadows appear missing; if oversized, resolution is wasted on empty space, exacerbating aliasing. Proper alignment via the crop matrix mitigates this by tightly bounding the light frustum to the relevant scene volume.
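As a sketch of how the scale-and-bias is folded into the transform chain (OpenGL convention, using the same hypothetical column-major `Mat4` type as above):

```cpp
// "Bias matrix" that folds the [-1,1] -> [0,1] remap into the transform
// chain; values listed column by column (glm-style, column-major).
Mat4 biasMatrix{0.5f, 0.0f, 0.0f, 0.0f,   // column 0
                0.0f, 0.5f, 0.0f, 0.0f,   // column 1
                0.0f, 0.0f, 0.5f, 0.0f,   // column 2
                0.5f, 0.5f, 0.5f, 1.0f};  // column 3: translation

// Per vertex or fragment: one matrix multiply; the perspective divide
// is deferred until sampling time.
Vec4 shadowCoord = biasMatrix * lightProjection * lightView
                   * Vec4{worldPos.x, worldPos.y, worldPos.z, 1.0f};
// After dividing shadowCoord.xyz by shadowCoord.w, the result lies in [0,1]^3.
```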

Depth Testing and Precision Issues

In the rendering phase of shadow mapping, depth testing determines whether a fragment is in shadow by comparing its depth in light space to the corresponding value stored in the shadow map. For a given fragment, its position is transformed into the light's view space, yielding a depth value d_{\text{fragment}} and projected coordinates for sampling the shadow map. The shadow map provides a sampled depth d_{\text{map}} at those coordinates. The fragment is considered shadowed if d_{\text{fragment}} > d_{\text{map}} + b, where the bias b is a small value added to prevent surface self-shadowing due to numerical inaccuracies. This comparison is typically implemented as a conditional test in a fragment shader:
\text{shadow} = \begin{cases} 1.0 & \text{if } d_{\text{fragment}} \leq d_{\text{map}} + b \\ 0.0 & \text{otherwise} \end{cases}
The bias b is crucial, as it offsets the comparison to account for floating-point precision limits and minor geometric discrepancies between the shadow map generation and scene rendering passes.

Precision issues arise primarily from the finite resolution and bit depth of the buffer used to store the shadow map, leading to artifacts such as erroneous self-shadowing, commonly known as shadow acne. Shadow acne manifests as speckled or noisy self-shadowing on surfaces, where limited depth precision (e.g., a 16-bit floating-point format) causes small depth differences to become indistinguishable, resulting in incorrect comparisons for nearby surfaces. Using a 32-bit depth format improves accuracy by providing more granular depth values, reducing acne in scenes with fine surface details, though it increases memory usage.

Increasing the bias to mitigate acne can introduce peter panning, where shadows detach from their casters and "float" away, creating unnatural gaps due to overcompensation in the depth comparison. This artifact is exacerbated in low-precision buffers, as the nonlinear distribution of depth values in perspective projections allocates fewer bits to distant geometry, amplifying errors over large depth ranges.

One solution to improve depth precision distribution is the use of logarithmic depth buffers, which remap depth values onto a logarithmic scale during rendering, providing higher relative precision for both near and far depths in the shadow map. This approach helps alleviate acne and related artifacts in scenes with significant depth variations, though it requires careful adjustment of the bias and may introduce minor distortions in uniform depth sampling.
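A common slope-scaled bias heuristic, shown below as an illustrative formulation (the constants are assumptions, not prescribed values), grows the bias at grazing angles where the surface normal is nearly perpendicular to the light direction:

```cpp
#include <algorithm>
#include <cmath>

// Slope-scaled bias sketch: larger bias where N.L is small (grazing angles).
float slopeScaledBias(float nDotL,
                      float constantBias = 0.0005f,
                      float slopeFactor  = 0.005f,
                      float maxBias      = 0.01f) {
    nDotL = std::max(nDotL, 0.05f);  // avoid division blow-up at grazing angles
    float tanTheta = std::sqrt(1.0f - nDotL * nDotL) / nDotL;  // tan(acos(N.L))
    return std::min(constantBias + slopeFactor * tanTheta, maxBias);
}
// Usage: shadowed = (d_fragment - slopeScaledBias(dot(N, L)) > d_map);
```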

Basic Enhancement Techniques

Filtering and Smoothing

Filtering and smoothing techniques address the aliasing artifacts inherent in basic shadow maps, where raw depth comparisons produce harsh, blocky shadow edges due to limited resolution. These methods apply simple filtering or approximation during the shadow test to soften transitions, improving visual quality without simulating physically accurate penumbrae.

Percentage-closer filtering (PCF) is a foundational approach that reduces hard edges by sampling multiple points around the projected fragment position in the shadow map and averaging the results. PCF was first introduced in 1987 by William Reeves, David Salesin, and Robert Cook in their work on rendering antialiased shadows with depth maps for offline rendering. For each sample, the fragment's depth is compared to the stored shadow map depth plus a bias to prevent self-shadowing; the proportion of samples where the fragment is farther (occluded) determines the shadow factor between 0 (fully lit) and 1 (fully shadowed), effectively convolving a binary visibility function. This uniform box filter is implemented directly in the fragment shader during rendering, requiring no pre-processing of the shadow map.

With the advent of programmable GPUs around 2001, simple 1-4 tap PCF using fixed grid offsets (such as a 2x2 grid) became feasible for real-time applications, though sampling cost increases linearly with tap count—e.g., 4 taps quadruple the comparisons per fragment. On low-resolution shadow maps (e.g., 1024x1024), this yields smoother edges at the expense of performance, often trading 20-50% of shadow-pass throughput for reduced aliasing in dynamic scenes.

Percentage Closer Filtering

Percentage Closer Filtering (PCF) is a widely adopted technique for mitigating aliasing artifacts in shadow mapping by softening shadow edges through multi-sample depth comparisons. Developed in 1987 by Reeves et al. as part of early efforts to render antialiased shadows using depth maps, PCF computes the shadowed fraction of a surface fragment by determining the proportion of nearby shadow map depths that occlude it from the light source. This approach reverses the typical filtering order, first performing binary comparisons and then averaging the results to produce smooth transitions rather than binary hard shadows.

In PCF, the process begins by projecting the fragment's position into the light's view space and sampling multiple depth values from the shadow map around the corresponding texel, often using a square kernel such as 3x3 (9 samples) or 5x5 (25 samples) to cover a local region. For each sample, a standard depth test is applied, comparing the fragment's depth to the sampled shadow map depth plus a small bias to prevent self-shadowing. The binary outcomes (occluded or visible) are averaged to yield a shadow factor between 0 (fully lit) and 1 (fully shadowed), enabling partial shadowing that approximates penumbral effects. For example, a 3x3 kernel might result in a shadow factor of 0.44 if 4 out of 9 samples indicate occlusion, providing a subtle gradient at shadow boundaries. The mathematical foundation of PCF is given by:
\text{shadow} = \frac{1}{n} \sum_{i=1}^{n} \left( d_{\text{fragment}} > d_{\text{map},i} + \text{bias} \right)
where n is the number of samples, d_{\text{fragment}} is the fragment's depth in light space, d_{\text{map},i} is the i-th sampled depth from the shadow map, and the indicator expression yields 1 if occluded (in shadow) or 0 otherwise. This formulation is implemented efficiently in shaders using hardware features such as comparison samplers, which automate the per-sample depth test and bilinear filtering on modern graphics hardware.

Optimizations in PCF often involve non-uniform sampling patterns to avoid the blocky blurring of uniform grids and better distribute samples for isotropic softening. Poisson disk sampling, which places samples at minimum distances to ensure even coverage without clustering, is a common choice, originally inspired by jittered sampling in early implementations to enhance efficiency and reduce visible patterns. While a 5x5 uniform kernel can effectively smooth larger edges, it imposes substantial overhead on GPU fill rate through increased texture lookups and arithmetic operations, typically requiring reductions to 4-9 samples via dithered or rotated patterns for real-time rendering in applications like video games.
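A minimal uniform 3x3 PCF loop is sketched below as CPU-side C++ over a linear depth array for clarity; real implementations run in shaders, often with hardware comparison samplers. Note that it returns the lit fraction (1 = fully lit), i.e., one minus the shadow factor in the convention above.

```cpp
#include <algorithm>  // std::clamp (C++17)

// 3x3 PCF over a res*res depth array; (u, v) in [0,1], fragDepth in light space.
float pcf3x3(const float* shadowMap, int res,
             float u, float v, float fragDepth, float bias) {
    int cx = static_cast<int>(u * res);
    int cy = static_cast<int>(v * res);
    float lit = 0.0f;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            int x = std::clamp(cx + dx, 0, res - 1);
            int y = std::clamp(cy + dy, 0, res - 1);
            float mapDepth = shadowMap[y * res + x];
            // Binary test per sample, averaged afterwards.
            lit += (fragDepth - bias <= mapDepth) ? 1.0f : 0.0f;
        }
    }
    return lit / 9.0f;  // fraction of samples that see the light
}
```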

Advanced Shadow Mapping Methods

Cascaded Shadow Maps

Cascaded shadow maps address the limitations of standard shadow mapping in scenes with large view distances by partitioning the camera's view frustum into multiple depth ranges, or cascades, each rendered with its own dedicated shadow map. This approach allocates higher resolution to nearer cascades, where shadows require finer detail to minimize artifacts, while coarser resolution suffices for distant ones, optimizing overall shadow quality across varying depths. Introduced as parallel-split shadow maps, the technique divides the frustum using planes parallel to the view plane, enabling efficient handling of expansive environments without excessive memory or performance costs.

To set up cascaded shadow maps, split distances are first computed from the camera frustum's near and far planes. For a frustum divided into m cascades, the split positions C_i (where i = 0 to m, C_0 = n the near plane, and C_m = f the far plane) determine the boundaries of each cascade. Common schemes include uniform spacing, which gives even depth coverage but a poor aliasing distribution, and logarithmic spacing, which achieves more uniform perspective aliasing. The logarithmic split is given by:
C_i = n \left( \frac{f}{n} \right)^{i/m}
A practical variant balances these by averaging the logarithmic and uniform splits, adjusted by a small bias \delta to fine-tune the distribution:
C_i = \frac{ n \left( \frac{f}{n} \right)^{i/m} + n + (f - n) \frac{i}{m} }{2} + \delta

Once splits are defined, the scene is rendered from the light's perspective for each cascade, clipped to the corresponding frustum slice to generate separate depth maps—typically 1 to 4 in number, stored in a texture array for efficient access. During the final scene rendering from the camera's view, each fragment's depth z is compared against the split positions to select the appropriate cascade; for instance, if C_{i-1} \leq z < C_i, the i-th shadow map is sampled after transforming the fragment coordinates into that cascade's light space. This selection follows the basic shadow map generation principles for each individual map. The NVIDIA implementation further refines cascade alignment by computing a crop matrix to tightly fit the light frustum to each camera slice, enhancing depth buffer precision and reducing wasted resolution. Typically, 4 cascades suffice for most real-time applications, with nearer ones using higher resolutions (e.g., the full texture size) and farther ones downsampled, balancing quality and GPU overhead. This method significantly improves shadow fidelity in large-scale scenes compared to single-map approaches, though it increases the number of rendering passes in proportion to the cascade count.
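The practical split scheme above can be computed as in the following sketch (the \delta bias is omitted for simplicity):

```cpp
#include <cmath>
#include <vector>

// Split positions C_0..C_m blending logarithmic and uniform schemes,
// per the averaged formula above.
std::vector<float> cascadeSplits(float nearZ, float farZ, int numCascades) {
    std::vector<float> splits(numCascades + 1);
    for (int i = 0; i <= numCascades; ++i) {
        float t        = static_cast<float>(i) / numCascades;
        float logSplit = nearZ * std::pow(farZ / nearZ, t);  // logarithmic term
        float uniSplit = nearZ + (farZ - nearZ) * t;         // uniform term
        splits[i] = 0.5f * (logSplit + uniSplit);            // average of the two
    }
    return splits;
}
// e.g. cascadeSplits(0.1f, 100.0f, 4) yields the boundaries C_0..C_4.
```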

Variance and Exponential Shadow Mapping

Variance Shadow Mapping (VSM) is a storage-efficient technique that approximates shadow tests using statistical properties of depth distributions rather than a single raw depth value per texel. Instead of one depth, VSM stores the first two moments—the mean depth \mu and the variance \sigma^2—of the depths within each texel, enabling filtered shadow computation without requiring multiple per-sample comparisons during rendering. This approach leverages Chebyshev's inequality to bound the probability that a fragment is lit, allowing hardware-accelerated filtering methods like mipmapping and anisotropic filtering to produce soft shadows efficiently.

In VSM, the shadow map is generated by storing the mean \mu = E[z] and second moment E[z^2] for each texel, where z represents the depth distribution (higher values indicate greater distance from the light source), and the variance is derived as \sigma^2 = E[z^2] - \mu^2. During rendering, for a fragment depth d, the lit visibility factor V (0 fully shadowed, 1 fully lit) is computed as follows: if d \leq \mu, then V = 1; otherwise, V = \min\left(1, \frac{\sigma^2}{\sigma^2 + (d - \mu)^2}\right). This provides an upper bound on the lit probability P(z \geq d) via Chebyshev's inequality, avoiding explicit sampling of multiple depths and addressing the high GPU cost of techniques like Percentage Closer Filtering (PCF) that rely on multi-sample comparisons. However, VSM can suffer from light leakage artifacts, where shadowed areas appear partially lit due to the loose nature of the bound, particularly in regions of high variance or depth complexity; this is commonly mitigated by clamping the variance to a minimum value and remapping small visibility values toward zero to reduce overestimation.

Exponential Shadow Mapping (ESM) provides another approximation-based alternative, transforming depth values into an exponential domain to facilitate pre-filtering and hardware mipmapping for soft shadows. During shadow map generation, each texel stores the exponential of the depth, e^{c z} (where z is the occluder depth and c > 0 is a constant, often around 80 for 32-bit floats), approximating the visibility integral under an exponential shadow test e^{-c(d - z)} for fragment depth d. This storage enables efficient filtering in the exponential space, as the filtered shadow value becomes e^{-c d} \cdot (w * e^{c z})(p), where w is the filter kernel and p the projected position, allowing direct use of GPU mipmaps or additional Gaussian blurs (e.g., 5x5 kernels) for high-quality, translation-invariant soft shadows without per-fragment multi-sampling overhead.
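The Chebyshev evaluation at the heart of VSM can be sketched as follows; the minimum-variance clamp is the usual guard against numeric noise and is an illustrative constant here.

```cpp
#include <algorithm>

// Chebyshev upper bound used by VSM: fragments nearer than the mean
// occluder depth are fully lit; otherwise the bound attenuates them.
float vsmVisibility(float mean, float secondMoment,
                    float fragDepth, float minVariance = 1e-4f) {
    if (fragDepth <= mean) return 1.0f;  // closer than the occluders: lit
    float variance = std::max(secondMoment - mean * mean, minVariance);
    float d = fragDepth - mean;
    return variance / (variance + d * d);  // upper bound on P(z >= d)
}
```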

Soft and Realistic Shadows

Soft Shadow Algorithms

Soft shadow algorithms extend traditional shadow mapping by simulating the penumbra regions that arise when light sources have finite extent, such as disks or rectangular areas, leading to gradual transitions from umbra to full illumination rather than abrupt hard edges. Physically, this is based on the visibility integral over the light source area, where the attenuation at a point is the average visibility fraction across sampled light positions, approximating the irradiance from an extended source. The penumbra width at a receiver surface is proportional to the light source size multiplied by the ratio of the blocker-to-receiver distance to the light-to-blocker distance, enabling realistic gradient computation without exhaustive ray tracing.

One approach involves shadow map warping to redistribute samples preferentially in penumbral regions for efficient softness approximation. Penumbra maps achieve this by rendering a standard shadow map from the light center, then generating a secondary map from object silhouette edges projected as cones and sheets, with intensity modulated by depth differences to concentrate samples where penumbrae form, avoiding uniform blurring artifacts. Similarly, view-warped multi-view soft shadowing warps a central enlarged view of occluders into multiple views for area light samples, using GPU compute to reproject fragments and distribute visibility queries across the penumbra via depth operations, yielding accurate gradients 2-5 times faster than naive multi-view rasterization.

Layered depth maps address area lights by storing multiple depth and visibility layers per pixel to capture occlusion hierarchies. Layered attenuation maps precompute a layered depth image from numerous light-sampled shadow maps, warping and sorting depths to compute per-layer attenuation fractions, which are then projected during rendering to modulate pixel colors with soft visibility averages. Multilayer transparent shadow maps extend this for complex geometry like hair volumes or fur, accumulating multiple opaque and transparent layers in a single pass, then ray-tracing through the layers at render time to evaluate visibility integrals for area lights, achieving production-quality softness 4-5 times faster than equivalent multi-view methods.

Fitted distribution sampling provides analytical control over sample placement by adapting shadow map partitions to the projected sample distribution. Sample distribution shadow maps reconstruct world positions from camera depth buffers to fit tight Z-partitions in light space, concentrating resolution in occupied regions and enabling exponential variance filtering for view-dependent penumbra widths with minimal overhead.

Post-2010 advances emphasize adaptive sampling guided by scene geometry to reduce computational cost while preserving penumbra accuracy. Axis-aligned filtering with ray tracing adaptively adjusts sample counts and filter sizes per pixel based on local variance and geometric features, reducing required samples by 4-10 times compared to uniform sampling and enabling interactive rates (2-39 fps) for complex scenes with up to 309K vertices. These techniques integrate seamlessly with deferred rendering pipelines, where shadow maps are computed upfront and sampled during the lighting pass to apply view-dependent softness without additional geometry traversals. More recent developments as of 2025 include neural extensions, such as Neural Shadow Mapping, which uses neural networks to refine hard shadow-map results into soft shadows in real time with high quality and low cost, and Importance Deep Shadow Maps, which adaptively distribute samples using hardware ray tracing for improved soft shadows in dynamic scenes.

Contact Hardening and Temporal Methods

Contact hardening techniques in shadow mapping aim to simulate the realistic tightening of shadow edges near occluders, where penumbrae are smaller due to proximity, transitioning to broader softness farther away. This effect enhances perceptual realism by mimicking how shadows appear sharper at contact points in the physical world. One prominent approach uses signed distance fields (SDFs) to approximate occluder geometry, enabling ray marching from receivers to determine shadow hardness based on the minimum distance to nearby surfaces. Introduced in Unreal Engine around 2014 and refined in subsequent versions, this method generates mesh distance fields during preprocessing, storing the signed distance to the nearest surface in a 3D texture. Shadows are then computed by tracing rays along the light direction; the hardness factor can be modeled as \exp\left(-\frac{d}{r}\right), where d is the distance to the contact point and r is the light radius, ensuring sharp umbrae near occluders and gradual softening with distance.

Screen-space approximations provide an alternative that avoids full geometric precomputation. For instance, erosion-based methods apply morphological operators to hard shadow maps in screen space, detecting edges via Laplacian filters and eroding them proportionally to estimated penumbra widths. The penumbra width is approximated as
\omega_{\text{penumbra}} = \frac{(d_{\text{receiver}} - d_{\text{blocker}}) \cdot \omega_{\text{light}}}{d_{\text{blocker}} \cdot d_{\text{observer}}},
where the depths are sampled from the shadow map and \omega_{\text{light}} is the light source size; this scales filtering kernels to tighten shadows near detected blockers. Multi-pass Gaussian filtering further refines this by unprojecting shadow map samples into world space and accumulating weighted contributions based on occluder distances, adaptively adjusting pass counts for efficiency in dynamic scenes.

Temporal methods address flickering and aliasing in dynamic shadow mapping by leveraging frame-to-frame coherence through reprojection and accumulation. These techniques reproject the previous frame's shadow result into the current view using inverse view-projection matrices, blending it with new samples to stabilize edges over time. A history buffer stores the accumulated shadow tests, updated via an exponential moving average:
s(n) = w \cdot f(n) + (1 - w) \cdot s(n-1),
where f(n) is the current frame's shadow result and w is a confidence-weighted factor (e.g., raised to a power of 3-15 for rapid adaptation). To mitigate ghosting and lag, variance clipping rejects outlier history samples by comparing them against the local mean and standard deviation of current-frame neighborhoods, ensuring smooth transitions in dynamic scenes. This accumulation converges to pixel-accurate shadows within 10-60 frames, reducing temporal aliasing at frame rates above 30 Hz while integrating naturally with temporal anti-aliasing for coherent dynamic shadows. Exponential Variance Shadow Mapping (EVSM) enhances bounded softness in these temporal pipelines by warping depths with exponentials before variance computation, minimizing light bleeding while supporting filtered accumulation.
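The history update can be sketched as follows, with a simple neighborhood clamp standing in for full variance clipping; the buffers and the weight value are illustrative assumptions, not any specific engine's implementation.

```cpp
#include <algorithm>

// Exponential-moving-average history update for a per-pixel shadow value.
float temporalShadow(float history,   // s(n-1), reprojected from the last frame
                     float current,   // f(n), this frame's shadow test result
                     float nbMin,     // min of the current-frame 3x3 neighborhood
                     float nbMax,     // max of the current-frame 3x3 neighborhood
                     float w = 0.1f)  // confidence weight
{
    // Reject stale history that disagrees with the local neighborhood
    // (a simplified stand-in for variance clipping).
    float clamped = std::clamp(history, nbMin, nbMax);
    // s(n) = w * f(n) + (1 - w) * s(n-1)
    return w * current + (1.0f - w) * clamped;
}
```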

Applications and Comparisons

Real-Time Use in Games and Simulations

Shadow mapping has become a cornerstone of real-time rendering in major game engines, enabling dynamic shadows that enhance visual realism while maintaining interactive frame rates. In Unity, cascaded shadow maps are employed to divide the view frustum into multiple zones, each with tailored shadow resolution, allowing high-fidelity shadows near the camera without uniformly high costs across the scene. This approach integrates with percentage-closer filtering (PCF) techniques to soften shadow edges, supporting dynamic shadowing for moving objects in games built with Unity's Universal Render Pipeline (URP). For performance optimization, developers often blend dynamic shadow mapping with baked shadows, where static elements precompute shadows into lightmaps to reduce runtime GPU load, reserving real-time shadow mapping for dynamic elements such as characters or vehicles.

Unreal Engine similarly leverages cascaded shadow maps for whole-scene dynamic shadowing, splitting the camera frustum into cascades to optimize resolution allocation and mitigate perspective aliasing. Here, dynamic shadows via shadow mapping handle movable lights and objects, while baked shadows are used for stationary geometry to achieve higher quality at lower cost, with transitions managed through distance-based blending. This hybrid strategy is prevalent in titles built on Unreal, balancing visual depth with frame rates above 60 FPS on consumer hardware.

In simulations such as virtual reality (VR) and augmented reality (AR) applications, shadow mapping supports the low-latency rendering critical for immersion and motion-sickness prevention. For instance, AR systems use real-time shadow mapping integrated with estimated scene lighting to cast photorealistic shadows from virtual objects onto dynamic real-world scenes, with light positioning computed once per session to minimize per-frame overhead. These setups prioritize single-pass depth rendering to minimize latency, enabling seamless integration in training simulations or interactive environments.

Mobile optimizations for shadow mapping focus on reduced resolution and level-of-detail (LOD) techniques to accommodate limited GPU power. In Unity URP for mobile, developers lower shadow map resolutions (e.g., from 2048 to 1024 pixels) and enable soft shadows to maintain quality with fewer samples, while cascades act as a level-of-detail mechanism by applying coarser shadows to distant objects. Unreal Engine mobile pipelines similarly cap cascade counts at two and use aggressive culling to limit shadow-casting objects, ensuring shadows consume only a small share of the rendering budget on devices like smartphones. On modern mobile GPUs, these optimizations allow 30-60 FPS in demanding games.

Overall, shadow mapping's GPU cost in real-time games typically ranges from 1-5 ms per frame on modern hardware, depending on resolution and cascade complexity, making it viable for 60+ FPS rendering when paired with fallback strategies for distant shadows.

Comparisons to Shadow Volumes and Ray Tracing

Shadow mapping, an image-space technique introduced by Williams in 1978, differs fundamentally from shadow volumes, a geometry-based object-space method proposed by Frank Crow in 1977. Shadow volumes extrude occluder silhouettes to form polygonal volumes that precisely determine shadowed regions through stencil-buffer operations, yielding exact hard shadows for polygonal geometry without resolution-dependent aliasing at edges. However, this approach incurs high fill rates due to extensive overdraw, particularly in complex scenes with many lights or detailed models, limiting its scalability on hardware with constrained rasterization bandwidth.

In contrast, shadow mapping leverages efficient GPU rasterization to render depth maps from the light's viewpoint, enabling rapid shadow determination for arbitrary geometry, including non-polygonal surfaces like alpha-tested foliage or displacement-mapped terrain. While shadow mapping introduces aliasing artifacts from finite resolution and requires bias adjustments to prevent self-shadowing, its performance advantages have made it the preferred choice for real-time rendering of complex scenes since the mid-2000s, as programmable shaders and increased GPU throughput favored image-space parallelism over geometric extrusion. Shadow volumes, though offering superior edge accuracy for simple polygonal casters, became less viable in such environments due to their fill-rate sensitivity and difficulties with dynamic or high-detail content.

Compared to ray tracing, which traces rays from surfaces to lights for physically accurate visibility queries, shadow mapping provides a faster approximation suited to real-time constraints. Traditional ray tracing delivers exact shadows, including soft variations from area lights, but its computational cost historically confined it to offline rendering, whereas shadow mapping achieves interactive rates through two-pass rasterization. In the 2020s, hardware-accelerated ray tracing on platforms like NVIDIA RTX enables hybrid approaches, where ray-traced shadows supplement or denoise shadow maps to mitigate artifacts like peter panning or bias-induced acne, combining rasterization speed with ray tracing's precision at a tolerable performance penalty.

Key trade-offs highlight shadow mapping's rasterization efficiency against shadow volumes' geometric fidelity and ray tracing's exactness. Shadow mapping excels in speed for large, complex scenes but requires mitigation of resolution-limited aliasing and bias errors, issues absent in ray tracing's direct sampling; shadow volumes provide crisp boundaries without such biases but at the expense of fill rate, often necessitating hybrid techniques for balanced quality and performance.
