3D computer graphics
3D computer graphics is a subfield of computer graphics that involves the creation, representation, and manipulation of three-dimensional objects and scenes using computational methods. It relies on mathematical models to define geometry, appearance, and spatial relationships in a virtual environment, which are then projected and rendered onto two-dimensional displays to produce realistic or stylized images.[1] This process typically encompasses stages such as modeling, where 3D shapes are constructed from primitives like polygons or curves; transformation, involving scaling, rotation, and translation; and rendering, which simulates lighting, shadows, and textures to generate the final image.[2]

The development of 3D computer graphics began in the early 1960s with pioneering work in interactive systems. Ivan Sutherland's 1963 Sketchpad program at MIT introduced the first graphical user interface capable of manipulating vector-based drawings, laying foundational concepts for 3D interaction.[3] In the 1970s, advancements at the University of Utah included algorithms for hidden surface removal, such as the z-buffer by Ed Catmull, as well as shading techniques like Gouraud shading by Henri Gouraud and the Phong reflection model by Bui Tuong Phong, which enabled more realistic surface appearances.[4] The 1980s and 1990s saw rapid growth driven by hardware improvements, such as the development of specialized graphics processors and accelerators, and software innovations like ray tracing for global illumination effects.[3]

Key techniques in 3D computer graphics include rasterization for real-time rendering in applications like video games, and ray tracing or ray casting for photorealistic offline rendering in film production.[2] These methods handle complex computations for perspective projection, where parallel lines converge to simulate depth, and parallel projection for technical drawings that preserve proportions without distortion.[5]

3D computer graphics finds extensive applications across industries, including entertainment for creating visual effects in movies and animations, computer-aided design (CAD) for engineering prototypes, medical imaging for visualizing anatomical structures, and scientific simulation for data analysis.[6] In gaming and virtual reality, it enables immersive environments, while in architecture and manufacturing, it supports precise modeling and prototyping.[7] Ongoing advancements, such as real-time ray tracing supported by modern GPUs, continue to enhance realism and efficiency in these domains.[2]

Introduction
Definition and Principles
3D computer graphics refers to the computational representation and manipulation of three-dimensional objects and scenes in a virtual space, enabling the generation of visual representations on two-dimensional displays through algorithms and mathematical models.[8] This process simulates the appearance of real or imaginary 3D environments by processing geometric data to produce images that convey spatial relationships and depth cues.[9] In contrast to 2D graphics, which confine representations to a planar surface defined by x and y coordinates, 3D graphics introduce a z-axis to model depth, allowing for essential effects such as occlusion (where closer objects obscure farther ones), parallax shifts under viewpoint changes, and the depiction of volumetric properties like shadows and intersections.[10] This depth dimension is fundamental to achieving realistic spatial perception, as it enables computations for visibility determination and perspective distortion absent in flat 2D renderings.[11]

Core principles rely on mathematical foundations, including vector algebra for representing positions as 3D vectors \mathbf{p} = (x, y, z) and surface normals as unit vectors \mathbf{n} = (n_x, n_y, n_z) to describe orientations.[12] Coordinate systems form the basis: Cartesian coordinates provide a straightforward Euclidean framework for object placement, while homogeneous coordinates extend points to four dimensions as (x, y, z, w) (with w = 1 for affine points), unifying transformations into matrix multiplications.[13] Transformations, such as translation by adding offsets, rotation via angle-axis or quaternion methods, and scaling by factors, are handled efficiently using 4×4 matrices in homogeneous space; for example, a translation matrix is
\begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix}
applied as \mathbf{p}' = M \mathbf{p}.[14]

Projection techniques map the 3D scene onto a 2D plane. Perspective projection mimics human vision by converging parallel lines, using the equations x' = \frac{d\,x}{z}, \quad y' = \frac{d\,y}{z}, where d is the distance from the viewpoint to the projection plane; dividing coordinates by the depth z creates foreshortening.[15] In contrast, orthographic projection maintains parallel lines without depth division, preserving dimensions for technical illustrations but lacking realism.[11] These principles ensure that 3D graphics can accurately transform and render complex scenes while accounting for viewer position and spatial hierarchy.
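To make the homogeneous-coordinate and projection machinery concrete, here is a minimal Python sketch (using NumPy) that builds the 4×4 translation matrix shown above, applies it to a homogeneous point, and then perspective-projects the result onto a plane at distance d; the function names, the sample point, and the choice of d = 1 are illustrative assumptions rather than any standard API.

```python
import numpy as np

def translation_matrix(tx, ty, tz):
    """4x4 homogeneous translation matrix, matching the form given in the text."""
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

def perspective_project(p, d=1.0):
    """Project a 3D point toward the origin onto the plane z = d,
    implementing x' = d*x/z and y' = d*y/z (foreshortening by depth)."""
    x, y, z = p
    return (d * x / z, d * y / z)

# Translate a homogeneous point (w = 1), then project it to 2D.
p = np.array([1.0, 2.0, 5.0, 1.0])
p_moved = translation_matrix(0.0, 0.0, 3.0) @ p    # point is now at z = 8
print(perspective_project(p_moved[:3], d=1.0))     # -> (0.125, 0.25)
```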
Applications and Impact
3D computer graphics have transformed numerous industries by enabling immersive and realistic visual representations. In video games, real-time rendering technologies power interactive experiences, with engines like Unreal Engine facilitating the creation of high-fidelity 3D environments for titles across platforms.[16][17] In film and visual effects (VFX), computer-generated imagery (CGI) creates entire worlds and characters, as exemplified by the photorealistic alien ecosystems and motion-captured performances in Avatar, which utilized full CGI environments and virtual camera systems to blend live-action with digital elements.[18][19]

Beyond entertainment, 3D graphics support practical applications in design and visualization. In architecture, virtual walkthroughs allow stakeholders to navigate detailed 3D models of buildings before construction, enhancing design review and client engagement through real-time rendering in browsers or dedicated software.[20][21] Medical visualization leverages 3D models derived from CT or MRI scans to reconstruct organs, aiding surgeons in planning procedures and educating patients with anatomically accurate representations.[22][23] In product design, 3D prototyping enables rapid iteration of digital models, reducing time and costs in manufacturing by simulating physical properties and testing ergonomics virtually.[24][25]

The impact of 3D computer graphics extends to economic, cultural, and technological spheres. Economically, the global computer graphics market is projected to reach USD 244.5 billion in 2025, driven by demand in gaming, media, and simulation sectors.[26] Culturally, advancements have fueled concepts like the metaverse, where persistent 3D virtual spaces foster social interactions and heritage preservation, potentially reshaping global connectivity and tourism.[27][28] Technologically, GPU evolution from the 1990s' basic 3D acceleration to the 2020s' ray-tracing hardware, such as NVIDIA's RTX series, has enabled real-time photorealism, democratizing advanced rendering for broader applications.[29][30]

Societally, 3D graphics have enhanced accessibility through consumer hardware like affordable GPUs, allowing individuals to create and interact with complex visuals on personal devices without specialized equipment.[31] However, ethical concerns arise, particularly with 3D deepfakes, where generative AI produces hyper-realistic synthetic media that blurs reality, raising issues of misinformation, privacy invasion, and consent in visual content creation.[32][33]

History
Early Foundations
The foundations of 3D computer graphics emerged in the 1960s through academic research into interactive systems and basic geometric representations, primarily at institutions like MIT, where early efforts shifted from 2D sketching to three-dimensional visualization. Ivan Sutherland's Sketchpad, developed in 1963 as part of his PhD thesis at MIT, introduced the first interactive graphical user interface using a light pen on a vector display, enabling users to create and manipulate line drawings with constraints and replication, concepts that directly influenced subsequent 3D modeling techniques.[34] Although primarily 2D, Sketchpad's architecture served as a critical precursor to 3D graphics by demonstrating real-time interaction and hierarchical object manipulation on cathode-ray tube (CRT) displays.[35]

Building on this work, early wireframe rendering of 3D objects was developed at MIT during the 1960s, extending Sutherland's ideas to three-dimensional space. Sketchpad III, implemented in 1963 on the TX-2 computer at MIT's Lincoln Laboratory, allowed users to construct and view wireframe models in multiple projections, including perspective, using a light pen for input and real-time manipulation of 3D polyhedral shapes.[36] These wireframe techniques represented objects as line segments connecting vertices, facilitating the visualization of complex geometries without surface filling, and were displayed on vector-based CRTs that drew lines directly via electron beam deflection.[37]

Key milestones in the late 1960s advanced beyond pure wireframes toward polygonal surfaces. In 1969, researchers at General Electric's Computer Equipment Division reported one of the earliest applications of computer-generated imagery in a visual simulation system for flight training, employing edge-based representations of 3D objects with up to 500 edges per view and resolving visibility through priority lists and basic depth ordering.[38] This work highlighted the potential of polygons as building blocks for more realistic scenes, though computational constraints limited it to simple shapes. In 1975, the Utah teapot model, created by Martin Newell during his PhD research at the University of Utah, became a seminal test object for 3D rendering algorithms; Newell hand-digitized the teapot's bicubic patches into a dataset of 2,000 vertices and 1,800 polygons, providing a standardized benchmark for evaluating surface modeling, hidden-surface removal, and shading thanks to its intricate handle, spout, and lid details.[39]

Hardware innovations were essential to these developments, with early vector displays dominating the 1960s for their ability to render precise lines without pixelation.
Systems like the modified oscilloscope used in Sketchpad and subsequent MIT projects employed analog deflection to trace wireframes at high speeds, supporting interactive rates for simple 3D rotations and views, though they struggled with dense scenes due to flicker from constant refreshing.[40] The transition to raster displays in the 1970s, driven by falling semiconductor memory costs, enabled pixel-based rendering of filled polygons and colors; early examples included framebuffers on minicomputers like the PDP-11, allowing storage of entire images for anti-aliased lines and basic shading, which proved crucial for handling complex 3D scenes without the limitations of vector persistence.[40]

Academic contributions in shading algorithms further refined surface rendering during this period. In 1971, Henri Gouraud introduced an interpolation method for smooth shading of polygonal approximations to curved surfaces, computing intensity at vertices based on surface normals and linearly interpolating across edges and faces to simulate continuity without per-pixel lighting calculations.[41] This Gouraud shading technique significantly improved the visual quality of wireframe-derived models, reducing the faceted appearance common in early polygon renders. Complementing this, Bui Tuong Phong's 1975 work proposed a reflection model incorporating diffuse, specular, and ambient components, along with an interpolation-based shading algorithm that used interpolated normals for more accurate highlight rendering on curved surfaces approximated by polygons.[42] These methods established foundational principles for realistic illumination in 3D graphics, influencing pipeline designs for decades.
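As a rough illustration of the interpolation idea behind Gouraud's method (not a reconstruction of the original implementation), the Python sketch below computes Lambertian intensities once per vertex of a triangle and then blends them with barycentric weights, rather than evaluating lighting at every pixel; the normals, light direction, and coefficient k_d are made-up example values.

```python
import numpy as np

def vertex_intensity(normal, light_dir, k_d=1.0):
    """Lambertian intensity at a vertex: k_d * max(N . L, 0)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return k_d * max(float(np.dot(n, l)), 0.0)

def gouraud_interpolate(bary_weights, vertex_intensities):
    """Blend per-vertex intensities with barycentric weights, the core of Gouraud shading."""
    return float(np.dot(bary_weights, vertex_intensities))

# One triangle's vertex normals and a single directional light.
normals = [np.array([0.0, 0.0, 1.0]),
           np.array([0.0, 0.7, 0.7]),
           np.array([0.7, 0.0, 0.7])]
light = np.array([0.0, 0.0, 1.0])
per_vertex = np.array([vertex_intensity(n, light) for n in normals])

# Intensity at the triangle's centroid (equal barycentric weights).
print(gouraud_interpolate(np.array([1/3, 1/3, 1/3]), per_vertex))
```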
Major Advancements
The 1980s saw significant progress in rendering techniques and hardware, bridging academic research and practical applications. In 1980, Turner Whitted introduced recursive ray tracing, a method that simulates light paths to produce realistic reflections, refractions, and shadows, which became essential for offline photorealistic rendering despite its high computational cost.[43] Hardware advancements included specialized graphics systems from Evans & Sutherland and Silicon Graphics Incorporated (SGI), enabling real-time 3D visualization in professional workstations used for CAD and early CGI in films like Tron (1982), the first major motion picture to feature extensive computer-generated imagery.[44]

The 1990s marked a pivotal era for 3D computer graphics with the advent of consumer-grade hardware acceleration, transforming graphics from a niche academic and professional tool into accessible technology for gaming and personal computing. In November 1996, 3dfx Interactive released the Voodoo Graphics chipset, the first widely adopted 3D accelerator card that offloaded rendering tasks from the CPU to dedicated hardware, enabling smoother frame rates and more complex scenes in real-time applications like Quake.[45] This innovation spurred the development of early consumer GPUs, such as subsequent 3dfx products and competitors like NVIDIA's RIVA series, which integrated 2D and 3D capabilities on a single board and democratized high-fidelity visuals for millions of users.[46] A landmark milestone came in 1995 with Pixar's Toy Story, the first full-length feature film produced entirely with computer-generated imagery (CGI), rendered using Pixar's proprietary RenderMan software, whose scanline-based rendering and programmable shading achieved the film's detailed, lifelike animation.[47]

Entering the 2000s, the field advanced toward greater flexibility and realism through programmable graphics pipelines. Microsoft's DirectX 8, released in November 2000, introduced vertex and pixel shaders, allowing developers to write custom code for transforming vertices and coloring pixels, moving beyond fixed-function hardware to enable effects like dynamic lighting and procedural textures in real time.[48] This programmability, supported by GPUs like NVIDIA's GeForce 3, revolutionized game development and visual effects, giving artists far more control over rendering outcomes.

The 2010s and 2020s witnessed integration with emerging technologies and computational breakthroughs, particularly in real-time global illumination and AI-enhanced workflows.
In 2018, NVIDIA introduced its RTX platform alongside the Turing GPU architecture, enabling hardware-accelerated real-time ray tracing on consumer GPUs, which simulates light paths for accurate reflections, refractions, and shadows at interactive speeds, fundamentally elevating graphical fidelity in games and simulations.[49] Complementing this, NVIDIA's OptiX ray tracing engine incorporated AI-accelerated denoising, using deep learning to remove noise from incomplete ray-traced renders, drastically reducing computation time while preserving detail and often producing visually clean images in seconds on RTX hardware.[50]

Open-source efforts also flourished, exemplified by Blender's Cycles render engine, introduced in 2011 and continually refined through community contributions, which supports unbiased path tracing on CPUs and GPUs, making production-quality rendering freely available and fostering innovations in film, architecture, and scientific visualization.[51] Key milestones included the 2012 Kickstarter launch of the Oculus Rift, which revitalized virtual reality by leveraging stereoscopic 3D graphics and head tracking for immersive environments, influencing graphics hardware optimizations for low-latency rendering.[52] By 2025, these advancements extended to scientific applications, with AI-accelerated simulations and high-resolution 3D visualizations enhancing climate modeling in platforms like NVIDIA's Earth-2, built on Omniverse, enabling researchers to analyze complex atmospheric interactions with unprecedented accuracy.[53]

Core Techniques
3D Modeling
3D modeling involves the creation of digital representations of three-dimensional objects through geometric and topological structures, serving as the foundational step for subsequent processes like animation and rendering. These models define the shape, position, and connectivity of objects in a virtual space using mathematical descriptions that approximate real-world geometry. Common approaches emphasize efficiency in storage, manipulation, and computation, often balancing detail against performance in applications such as computer-aided design and visual effects.

The fundamental building blocks of 3D modeling are geometric primitives, which include points (zero-dimensional locations defined by coordinates), lines (one-dimensional connections between points), polygons (two-dimensional faces, typically triangular or quadrilateral), and voxels (three-dimensional volumetric elements analogous to pixels in 3D space).[54] These primitives enable the construction of complex shapes; for instance, polygons form the basis of surface models, while voxels support volumetric representations suited to applications like medical imaging.[55]

One prevalent technique is polygonal modeling, where objects are represented as meshes composed of vertices (position points), edges (connections between vertices), and faces (bounded polygonal regions). This mesh structure allows flexible topology and is widely used due to its compatibility with hardware-accelerated rendering pipelines. A survey on polygonal meshes highlights their role in approximating smooth surfaces through triangulation or quadrangulation, with applications in geometry-processing tasks like simplification and remeshing.[56] For smoother representations, subdivision surfaces refine coarse polygonal meshes iteratively; the Catmull-Clark algorithm, for example, generates limit surfaces that approximate bicubic B-splines on arbitrary topologies by averaging vertex positions across refinement levels.[57]

Another important method is digital sculpting, which simulates traditional clay sculpting in a digital environment using brush tools to push, pull, and deform high-resolution meshes. This technique excels at creating intricate organic forms like characters and creatures, often starting from a base mesh and adding detail through dynamic topology adjustments.[58]

Curve- and surface-based methods, such as non-uniform rational B-splines (NURBS), provide precise control for freeform shapes. NURBS extend B-splines by incorporating rational weights, enabling exact representations of conic sections and complex geometries like car bodies in CAD systems. Introduced in Versprille's dissertation, NURBS curves are defined parametrically, with the surface form generalizing tensor-product constructions.[59] Spline interpolation underpins these methods, as seen in Bézier curves, where a curve of degree n is given by
\mathbf{B}(t) = \sum_{i=0}^{n} \mathbf{P}_i B_{i,n}(t), \quad 0 \leq t \leq 1,
with \mathbf{P}_i as control points and B_{i,n}(t) = \binom{n}{i} t^i (1-t)^{n-i} as Bernstein polynomials. This formulation guarantees the convex hull property and smooth interpolation between the endpoints.[60]
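As a concrete instance of the Bézier formulation above, the short Python function below evaluates a curve of arbitrary degree directly from the Bernstein-polynomial definition; the cubic control polygon and the sample parameter t = 0.5 are arbitrary example values.

```python
from math import comb

def bezier_point(control_points, t):
    """Evaluate B(t) = sum_i P_i * C(n, i) * t**i * (1 - t)**(n - i)
    for control points P_i given as (x, y, z) tuples and 0 <= t <= 1."""
    n = len(control_points) - 1
    point = [0.0, 0.0, 0.0]
    for i, p in enumerate(control_points):
        b = comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))  # Bernstein basis value
        for axis in range(3):
            point[axis] += b * p[axis]
    return tuple(point)

# A cubic (degree-3) Bézier segment sampled at its parametric midpoint.
ctrl = [(0, 0, 0), (1, 2, 0), (3, 2, 0), (4, 0, 0)]
print(bezier_point(ctrl, 0.5))   # -> (2.0, 1.5, 0.0)
```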
To compose complex models from simpler ones, constructive solid geometry (CSG) employs Boolean operations on primitives: union combines volumes, intersection retains overlapping regions, and difference subtracts one from another. Originating in Requicha's foundational work on solid representations, CSG ensures watertight models by operating on closed sets, though it requires efficient intersection computations for practical use.

Supporting these techniques are data structures like scene graphs, which organize models hierarchically as directed acyclic graphs with nodes representing objects, transformations, and groups. This allows efficient traversal for rendering and simulation by propagating changes through parent-child relationships.[61] For optimization, bounding volumes enclose models to accelerate queries; axis-aligned bounding boxes (AABBs), defined by min-max coordinates along axes, provide fast intersection tests in collision detection and ray tracing.[62]
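To make the bounding-volume idea concrete, the following Python sketch implements the standard axis-aligned bounding box overlap test used to prune collision and intersection queries; the AABB container and the example boxes are illustrative, not taken from any particular engine.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box stored as per-axis minimum and maximum coordinates."""
    min_pt: tuple  # (x_min, y_min, z_min)
    max_pt: tuple  # (x_max, y_max, z_max)

def aabb_overlap(a: AABB, b: AABB) -> bool:
    """Two AABBs overlap exactly when their intervals overlap on every axis."""
    return all(a.min_pt[i] <= b.max_pt[i] and b.min_pt[i] <= a.max_pt[i]
               for i in range(3))

box_a = AABB((0, 0, 0), (1, 1, 1))
box_b = AABB((0.5, 0.5, 0.5), (2, 2, 2))
box_c = AABB((3, 3, 3), (4, 4, 4))
print(aabb_overlap(box_a, box_b))  # True: the boxes intersect
print(aabb_overlap(box_a, box_c))  # False: separated on every axis
```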
Animation and Scene Layout
Scene layout in 3D computer graphics involves the strategic arrangement of modeled objects, cameras, lights, and props to construct a coherent virtual environment that supports narrative or functional goals. Once 3D models are created, they are imported and positioned relative to one another, often using object hierarchies to manage complexity; these hierarchies establish parent-child relationships that propagate transformations such as translations and rotations efficiently across assemblies like characters or vehicles.[63] Camera placement is a critical aspect, defining the viewer's perspective and framing, with techniques ranging from manual adjustment to automated methods that optimize viewpoints for hierarchical storytelling or scene comprehension.[64] Environmental setup completes the layout by integrating lights to establish mood and directionality, alongside props that fill space and interact with primary elements, ensuring spatial relationships align with the intended dynamics.[65]

Animation techniques enable the temporal evolution of these laid-out scenes, transforming static compositions into dynamic sequences. Keyframing remains a foundational method, in which animators define discrete poses at specific timestamps and intermediate positions are generated through interpolation to create fluid motion; this approach adapts traditional animation principles to 3D, emphasizing timing and easing for realism.[66] Linear interpolation provides the basic mechanism for blending between two keyframes \mathbf{P}_0 and \mathbf{P}_1 at parameter t \in [0,1]:
\mathbf{P}(t) = (1-t) \mathbf{P}_0 + t \mathbf{P}_1.
This is extended to cubic splines, which ensure C^2 continuity for smoother trajectories by fitting piecewise polynomials constrained at endpoints and tangents.[67] For character rigging, inverse kinematics (IK) solves the inverse problem of positioning end effectors, such as hands or feet, while computing joint angles to achieve natural poses, contrasting with forward kinematics by prioritizing goal-directed control over sequential joint specification.

Motion capture (mocap) is another essential technique, involving the recording of real-world movements using sensors or cameras to capture data from actors or objects, which is then applied to digital models for highly realistic animation. This method reduces manual effort and captures nuanced performances, and is commonly used in film and video games.[68] Procedural animation complements these approaches by algorithmically generating motion without manual keyframing, as in particle systems that simulate dynamic, fuzzy phenomena such as fire or smoke through clouds of independent particles governed by stochastic rules for birth, life, and death.[69]

Physics-based simulations integrate realistic motion into animated scenes by modeling interactions under physical laws. Rigid body dynamics applies Newton's second law (\mathbf{F} = m\mathbf{a}) to compute accelerations from forces and torques on undeformable objects, enabling collisions and constraints that propagate through hierarchies for believable responses like falling or tumbling. For deformable elements, cloth and soft body simulations employ mass-spring models, discretizing surfaces into point masses connected by springs that resist stretching, shearing, and bending; internal pressure or damping stabilizes the system, allowing emergent behaviors like folding or fluttering.[70]
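As a minimal illustration of the physics-based approach just described, the Python sketch below advances two point masses joined by a damped spring through repeated semi-implicit Euler steps, applying Hooke's law and F = m a in one dimension for brevity; the stiffness, damping, mass, and time-step values are arbitrary assumptions, and a practical cloth solver would couple many such springs with a more robust integrator.

```python
def spring_step(p0, v0, p1, v1, rest_len, k=50.0, c=2.0, m=1.0, dt=0.01):
    """Advance two point masses joined by a damped spring (1D) by one
    semi-implicit Euler step, using F = m * a for each mass."""
    stretch = (p1 - p0) - rest_len            # > 0 when the spring is extended
    tension = k * stretch + c * (v1 - v0)     # Hooke's law plus relative damping
    a0, a1 = tension / m, -tension / m        # equal and opposite accelerations
    v0, v1 = v0 + a0 * dt, v1 + a1 * dt       # integrate velocities first...
    p0, p1 = p0 + v0 * dt, p1 + v1 * dt       # ...then positions
    return p0, v0, p1, v1

# Two unit masses start 1.5 apart with a rest length of 1.0 and relax over 2 seconds.
state = (0.0, 0.0, 1.5, 0.0)
for _ in range(200):
    state = spring_step(*state, rest_len=1.0)
print(state)  # the masses settle symmetrically toward roughly 0.25 and 1.25
```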
Visual Representation
Materials and Texturing
In 3D computer graphics, materials define the intrinsic properties of surfaces to enable realistic or stylized rendering, independent of lighting conditions. Physically-based rendering (PBR) models form the foundation of modern material systems, approximating real-world optical behavior through key parameters such as albedo, roughness, and metallic. Albedo, often represented as a base color texture or factor, specifies the proportion of light reflected diffusely by the surface, excluding specular contributions. Roughness quantifies the irregularity of microscopic surface facets, with values ranging from 0 (perfectly smooth, mirror-like) to 1 (highly diffuse, matte), influencing the spread of specular highlights. The metallic parameter distinguishes between dielectric (non-metallic) and conductor (metallic) materials; for metals, it sets the albedo to the material's reflectivity while disabling diffuse reflection, ensuring energy conservation in the model. These parameters adhere to microfacet theory, in which surface appearance emerges from vast numbers of tiny, randomly oriented facets.

Texturing enhances materials by mapping 2D images onto 3D geometry using UV coordinates, which parametrize the surface as a flattened 2D domain, typically in the [0,1] range for both the U and V axes. UV mapping projects textures onto models by unfolding the 3D surface into this 2D space, allowing precise control over how image details align with geometry features like seams or contours. To handle varying screen distances and reduce aliasing artifacts, mipmapping precomputes a pyramid of progressively lower-resolution texture versions and selects the appropriate level of detail (LOD) based on the texel's projected size; this minimizes moiré patterns and improves rendering efficiency by sampling fewer texels for distant surfaces.[71]

To add fine geometric detail without increasing polygon count, normal mapping and bump mapping perturb surface normals during shading. Normal maps encode tangent-space normal vectors in RGB channels (with blue typically dominant for forward-facing perturbations), enabling detailed lighting responses like shadows and highlights on flat geometry. Bump mapping, an earlier precursor, uses grayscale height maps to compute approximate normals via finite differences, simulating elevation variations such as wrinkles or grains. Both techniques integrate seamlessly with PBR materials, applying perturbations to the base normal before lighting computations.

Procedural textures generate patterns algorithmically at runtime, avoiding the need for stored images and allowing infinite variation. A prominent example is Perlin noise, which creates coherent, organic randomness through gradient interpolation across a grid, making it ideal for simulating natural phenomena like marble veins or wood grain; higher octaves of noise can be layered for fractal-like complexity. Multilayer texturing extends this by assigning separate maps to PBR channels, such as diffuse (albedo) for color, specular for reflection intensity and color (in specular/glossiness workflows), and emission for self-glow, often packed into single textures for efficiency, for example by combining metallic and roughness values into the R and G channels.[72][73]

Texture sampling retrieves color values from these maps using coordinates, typically via the GLSL function texture2D(tex, uv), which applies bilinear filtering by linearly interpolating between the four nearest texels to produce smooth, anti-aliased results for non-integer coordinates.
This process forms a core step in fragment shaders, where sampled values populate material parameters before final color computation. Materials and texturing thus prepare surfaces for integration into rendering pipelines, such as rasterization, where they inform per-pixel evaluations.
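The Python function below mirrors the bilinear filtering step described above: it samples a small grayscale texture at fractional UV coordinates by blending the four nearest texels; the row-major texel layout and clamp-to-edge addressing are simplifying assumptions rather than the exact semantics of GLSL's texture2D.

```python
def sample_bilinear(texture, u, v):
    """Sample a row-major 2D grayscale texture at (u, v) in [0, 1]
    using bilinear filtering with clamp-to-edge addressing."""
    h, w = len(texture), len(texture[0])
    # Map UV into continuous texel space and clamp to the valid range.
    x = min(max(u * (w - 1), 0.0), w - 1)
    y = min(max(v * (h - 1), 0.0), h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Interpolate horizontally between the two texel rows, then vertically.
    top = (1 - fx) * texture[y0][x0] + fx * texture[y0][x1]
    bottom = (1 - fx) * texture[y1][x0] + fx * texture[y1][x1]
    return (1 - fy) * top + fy * bottom

tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(sample_bilinear(tex, 0.5, 0.5))  # -> 0.5, a blend of all four texels
```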
Lighting and Shading
Lighting and shading in 3D computer graphics simulate the interaction between light sources and material surfaces to determine pixel colors, providing visual depth and realism without global light transport computations. Light sources are defined by their position, direction, intensity, and color, while shading models calculate local illumination contributions at surface points based on surface normals and material properties. These techniques form the foundation for approximating realistic appearances in real-time and offline rendering.[42]

Common types of light sources include point lights, which emit illumination uniformly in all directions from a fixed position, mimicking small bulbs or candles; directional lights, which send parallel rays from an effectively infinite distance to model sources like sunlight; and area lights, which are extended geometric shapes such as rectangles or spheres that produce softer, more realistic shadows due to their finite size.[74] Point and area lights incorporate distance-based attenuation, following the physical inverse square law, where light intensity I falls off as I \propto \frac{1}{d^2}, with d as the distance from the source, to prevent unrealistically bright distant illumination.[75] This attenuation is often implemented as a factor in the shading equation, such as \text{att} = \frac{1}{k_c + k_l \, d + k_q \, d^2}, where the constant, linear, and quadratic coefficients k_c, k_l, and k_q adjust the falloff curve for artistic control.[75]

Shading models break down surface response into components like ambient, diffuse, and specular reflection. The Lambertian diffuse model, ideal for matte surfaces, computes the diffuse intensity as I_d = k_d \cdot I_p \cdot (\mathbf{N} \cdot \mathbf{L}), where k_d is the diffuse reflectivity, I_p the light's intensity, \mathbf{N} the normalized surface normal, and \mathbf{L} the normalized vector from the surface to the light; the cosine term (\mathbf{N} \cdot \mathbf{L}) accounts for reduced illumination at grazing angles.[42] This model assumes perfectly diffuse reflection, where light scatters equally in all directions, independent of viewer position.[42]

Specular reflection adds shiny highlights to simulate glossy materials. The original Phong model calculates specular intensity as I_s = k_s \cdot I_p \cdot (\mathbf{R} \cdot \mathbf{V})^n,
where k_s is the specular reflectivity, \mathbf{R} the perfect reflection vector \mathbf{R} = 2(\mathbf{N} \cdot \mathbf{L})\mathbf{N} - \mathbf{L}, \mathbf{V} the normalized view direction, and n the shininess exponent controlling highlight sharpness; higher n produces tighter, more mirror-like reflections.[42] Full Phong shading combines the ambient term I_a = k_a \cdot I_p with the diffuse and specular terms, often clamped to [0,1] and multiplied by the material color.[42]

The Blinn-Phong model refines the specular computation for efficiency by using a half-vector approximation: compute \mathbf{H} = \frac{\mathbf{L} + \mathbf{V}}{|\mathbf{L} + \mathbf{V}|}, then I_s = k_s \cdot I_p \cdot (\mathbf{N} \cdot \mathbf{H})^n. This avoids explicit calculation of the reflection vector, reducing operations per vertex or fragment while closely approximating Phong highlights.[76] Blinn-Phong remains widely used in real-time graphics due to its balance of quality and performance.[76]

For materials beyond opaque surfaces, subsurface scattering models light penetration and re-emission in translucent substances like wax or human skin. The dipole model approximates this by placing virtual point sources inside the material to solve a diffusion equation, enabling efficient computation of the blurred, soft appearance produced by internal scattering.[77] This approach captures effects like color bleeding and forward scattering, and has been validated against measured data for materials such as marble and milk.[77]

Volumetric lighting extends shading to participating media like atmosphere or fog, where light scatters within a volume to form visible beams known as god rays or crepuscular rays. These are simulated by sampling light density along rays from the camera through occluders, using techniques like ray marching to integrate scattering contributions and accumulate transmittance for realistic atmospheric effects.[78] In practice, post-processing methods project light sources onto a buffer, apply radial blurring, and composite with depth to achieve real-time performance.[78]
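Bringing the terms of this section together, the Python sketch below evaluates ambient, Lambertian diffuse, and Blinn-Phong specular contributions for a single point light with distance attenuation; the material coefficients, attenuation constants, and scene vectors are arbitrary example values, and no color, tone mapping, or gamma handling is attempted.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(point, normal, eye, light_pos, i_p=1.0,
                k_a=0.1, k_d=0.7, k_s=0.3, shininess=32.0,
                k_c=1.0, k_l=0.09, k_q=0.032):
    """Scalar ambient + diffuse + Blinn-Phong specular lighting for one point light."""
    n = normalize(normal)
    to_light = light_pos - point
    d = np.linalg.norm(to_light)               # distance to the light
    l = to_light / d
    v = normalize(eye - point)
    h = normalize(l + v)                       # Blinn-Phong half-vector
    att = 1.0 / (k_c + k_l * d + k_q * d * d)  # distance attenuation factor
    ambient = k_a * i_p
    diffuse = k_d * i_p * max(float(np.dot(n, l)), 0.0)
    specular = k_s * i_p * max(float(np.dot(n, h)), 0.0) ** shininess
    return ambient + att * (diffuse + specular)

surface = np.array([0.0, 0.0, 0.0])
normal = np.array([0.0, 1.0, 0.0])
eye = np.array([0.0, 2.0, 2.0])
light = np.array([1.0, 3.0, 1.0])
print(blinn_phong(surface, normal, eye, light))
```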