
3D computer graphics

3D computer graphics is a subfield of computer graphics that involves the creation, representation, and manipulation of three-dimensional objects and scenes using computational methods. It relies on mathematical models to define geometry, appearance, and spatial relationships in a virtual environment, which are then projected and rendered onto two-dimensional displays to produce realistic or stylized images. This process typically encompasses stages such as modeling, where 3D shapes are constructed from primitives like polygons or curves; scene layout and animation, involving the placement of objects, cameras, and lights and their motion over time; and rendering, which simulates lighting, shadows, and textures to generate the final image.

The development of 3D computer graphics began in the early 1960s with pioneering work in interactive systems. Ivan Sutherland's 1963 Sketchpad program at MIT introduced the first interactive graphical interface capable of manipulating vector-based drawings, laying foundational concepts for 3D interaction. In the 1970s, advancements at the University of Utah included algorithms for hidden surface removal, such as the z-buffer by Ed Catmull, as well as shading techniques like Gouraud shading by Henri Gouraud and the Phong reflection model by Bui Tuong Phong, which enabled more realistic surface appearances. The 1980s and 1990s saw rapid growth driven by hardware improvements, such as the development of specialized graphics processors and accelerators, and software innovations like ray tracing for realistic lighting effects.

Key techniques in 3D computer graphics include rasterization for real-time rendering in applications like video games, and ray tracing or path tracing for photorealistic offline rendering in film. These methods handle complex computations for perspective projection, where parallel lines converge to simulate depth, and orthographic projection for technical drawings that preserve proportions without distortion.

3D computer graphics finds extensive applications across industries, including entertainment for creating visual effects in movies and animations, computer-aided design (CAD) for prototypes, medical imaging for visualizing anatomical structures, and scientific visualization for exploring complex data. In gaming and virtual reality, it enables immersive environments, while in architecture and manufacturing, it supports precise modeling and prototyping. Ongoing advancements, such as real-time ray tracing supported by modern GPUs, continue to enhance realism and efficiency in these domains.

Introduction

Definition and Principles

3D computer graphics refers to the computational representation and manipulation of three-dimensional objects and scenes in a virtual space, enabling the generation of visual representations on two-dimensional displays through algorithms and mathematical models. This process simulates the appearance of real or imaginary 3D environments by processing geometric data to produce images that convey spatial relationships and depth cues. In contrast to 2D graphics, which confine representations to a planar surface defined by x and y coordinates, 3D graphics introduce a z-axis to model depth, allowing for essential effects such as occlusion—where closer objects obscure farther ones—parallax shifts in viewpoint changes, and the depiction of volumetric properties like shadows and intersections. This depth dimension is fundamental to achieving realistic spatial perception, as it enables computations for visibility determination and shading that are absent in flat renderings.

Core principles rely on mathematical foundations, including linear algebra for representing positions as vectors \mathbf{p} = (x, y, z) and surface normals as unit vectors \mathbf{n} = (n_x, n_y, n_z) to describe orientations. Coordinate systems form the basis: Cartesian coordinates provide a straightforward framework for object placement, while homogeneous coordinates extend points to four dimensions as (x, y, z, w) (with w = 1 for affine points), unifying transformations into matrix multiplications. Transformations—such as translation by adding offsets, rotation via angle-axis or quaternion methods, and scaling by factors—are efficiently handled using 4×4 matrices in homogeneous space; for example, a translation matrix is: \begin{pmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{pmatrix} applied as \mathbf{p}' = M \mathbf{p}.

Projection techniques map the 3D scene onto a 2D plane, with perspective projection mimicking human vision by converging parallel lines, using the equations x' = \frac{x}{z/d}, y' = \frac{y}{z/d}, where d is the distance from the viewpoint to the projection plane, effectively dividing coordinates by depth z to create foreshortening. In contrast, orthographic projection maintains parallel lines without depth division, preserving dimensions for technical illustrations but lacking realism. These principles ensure that 3D graphics can accurately transform and render complex scenes while accounting for viewer position and spatial hierarchy.
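The following sketch is an illustrative example (not taken from any particular library): it builds a homogeneous 4×4 translation matrix, applies it to a point with w = 1, and then performs the perspective divide with viewing distance d, mirroring the equations above.

```python
import numpy as np

def translation_matrix(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

def perspective_project(p, d):
    """Project a 3D point onto the plane at distance d by dividing by depth."""
    x, y, z = p
    return np.array([x / (z / d), y / (z / d)])

# Homogeneous point (w = 1), translated then projected.
p = np.array([1.0, 2.0, 5.0, 1.0])
p_world = translation_matrix(0.0, 0.0, 5.0) @ p      # p' = M p
p_screen = perspective_project(p_world[:3], d=1.0)   # foreshortening by depth
print(p_world[:3], p_screen)                          # [1. 2. 10.] [0.1 0.2]
```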

Applications and Impact

3D computer graphics have transformed numerous industries by enabling immersive and realistic visual representations. In video games, real-time rendering technologies power interactive experiences, with engines like Unreal Engine and Unity facilitating the creation of high-fidelity 3D environments for titles across platforms. In film and visual effects (VFX), computer-generated imagery (CGI) creates entire worlds and characters, as exemplified by the photorealistic alien ecosystems and motion-captured performances in Avatar, which utilized full CGI environments and virtual camera systems to blend live-action with digital elements.

Beyond entertainment, 3D graphics support practical applications in design and medicine. In architecture, virtual walkthroughs allow stakeholders to navigate detailed 3D models of buildings before construction, enhancing design review and client engagement through real-time rendering in browsers or dedicated software. Medical imaging leverages 3D models derived from CT or MRI scans to reconstruct organs, aiding surgeons in planning procedures and educating patients with anatomically accurate representations. In manufacturing, 3D prototyping enables rapid iteration of digital models, reducing time and costs in product development by simulating physical properties and testing designs virtually.

The impact of 3D computer graphics extends to economic, cultural, and technological spheres. Economically, the global computer graphics market is projected to reach USD 244.5 billion in 2025, driven by demand in gaming, film, and simulation sectors. Culturally, advancements have fueled concepts like the metaverse, where persistent 3D virtual spaces foster social interactions and heritage preservation, potentially reshaping global connectivity and collaboration. Technologically, GPU evolution from the 1990s' basic 3D acceleration to the 2020s' ray-tracing hardware, such as NVIDIA's RTX series, has enabled real-time photorealism, democratizing advanced rendering for broader applications. Societally, these tools have enhanced accessibility through consumer hardware like affordable GPUs, allowing individuals to create and interact with complex visuals on personal devices without specialized equipment. However, ethical concerns arise, particularly with 3D deepfakes, where generative AI produces hyper-realistic imagery that blurs reality, raising issues of misinformation, privacy invasion, and eroded trust in visual media.

History

Early Foundations

The foundations of 3D computer graphics emerged in the 1960s through innovative academic research focused on interactive systems and basic geometric representations, primarily at institutions like MIT, where early efforts shifted from 2D sketching to three-dimensional visualization. Ivan Sutherland's Sketchpad, developed in 1963 as part of his PhD thesis at MIT, introduced the first interactive graphical interface using a light pen on a vector display, enabling users to create and manipulate line drawings with constraints and replication—concepts that directly influenced subsequent modeling techniques. Although primarily 2D, Sketchpad's architecture served as a critical precursor to 3D graphics by demonstrating real-time interaction and hierarchical object manipulation on cathode-ray tube (CRT) displays. Building on this, early wireframe rendering for 3D objects was developed at MIT in the mid-1960s, extending Sutherland's ideas to three dimensions. For instance, Sketchpad III, implemented in 1963 on the TX-2 computer at MIT's Lincoln Laboratory, allowed users to construct and view wireframe models in multiple projections, including perspective, using a light pen for input and real-time manipulation of 3D polyhedral shapes. These wireframe techniques represented objects as line segments connecting vertices, facilitating the visualization of complex geometries without surface filling, and were displayed on vector-based CRTs that drew lines directly via electron beam deflection.

Key milestones in the late 1960s advanced beyond pure wireframes toward polygonal surfaces. Researchers at General Electric's Computer Equipment Division built one of the earliest visual systems to use edge-based representations of 3D objects for real-time simulation, handling up to 500 edges per view and resolving visibility through priority lists and basic depth ordering. This work highlighted the potential for polygons as building blocks for more realistic scenes, though computational constraints limited it to simple shapes. By 1975, the Utah teapot model, created by Martin Newell during his PhD research at the University of Utah, became a seminal test object for rendering algorithms; Newell hand-digitized the teapot's bicubic patches into a mesh of 2,000 vertices and 1,800 polygons, providing a standardized benchmark for evaluating surface modeling, hidden-surface removal, and shading due to its intricate handle, spout, and lid details.

Hardware innovations were essential to these developments, with early vector displays dominating the 1960s for their ability to render precise lines without pixelation. Systems like the modified displays used in Sketchpad and subsequent MIT projects employed analog deflection to trace wireframes at high speeds, supporting interactive rates for simple 3D rotations and views, though they struggled with dense scenes due to flicker from constant refreshing. The transition to raster displays in the 1970s, driven by falling memory costs, enabled pixel-based rendering of filled polygons and colors; early examples included framebuffers on minicomputers like the PDP-11, allowing storage of entire images for anti-aliased lines and basic shading, which proved crucial for handling complex scenes without the limitations of vector persistence.

Academic contributions in shading algorithms further refined surface rendering during this period. In 1971, Henri Gouraud introduced an interpolation method for smooth shading of polygonal approximations to curved surfaces, computing intensities at vertices based on surface normals and linearly interpolating across edges and faces to simulate smooth illumination without per-pixel normal calculations.
This technique significantly improved the visual quality of wireframe-derived models, reducing the faceted appearance common in early polygon renders. Complementing this, Bui Tuong Phong's 1975 work proposed a reflection model incorporating diffuse, specular, and ambient components, along with an interpolation-based shading method that used interpolated normals for more accurate highlight rendering on curved surfaces approximated by polygons. These methods established foundational principles for realistic illumination in computer graphics, influencing hardware and software designs for decades.

Major Advancements

The 1980s saw significant progress in rendering techniques and hardware, bridging academic research to practical applications. In 1980, Turner Whitted introduced recursive ray tracing, a method simulating light paths for realistic reflections, refractions, and shadows, which became essential for offline photorealistic rendering despite high computational cost. Hardware advancements included specialized graphics systems from Evans & Sutherland and Silicon Graphics, Inc. (SGI), enabling real-time 3D visualization in professional workstations used for CAD and early CGI in films like Tron (1982), one of the first major motion pictures to feature extensive computer-generated imagery.

The 1990s marked a pivotal era for 3D computer graphics with the advent of consumer-grade hardware acceleration, transforming graphics from niche academic and professional tools into accessible technology for gaming and personal computing. In November 1996, 3dfx Interactive released the Voodoo Graphics chipset, the first widely adopted 3D accelerator card that offloaded rendering tasks from the CPU to dedicated hardware, enabling smoother frame rates and more complex scenes in real-time applications like Quake. This innovation spurred the development of the first consumer GPUs, such as subsequent iterations from 3dfx and competitors like NVIDIA's Riva series, which integrated 2D and 3D capabilities on a single board and democratized high-fidelity visuals for millions of users. A landmark milestone came in 1995 with Pixar's Toy Story, the first full-length feature film produced entirely with computer-generated imagery (CGI), rendered via Pixar's proprietary RenderMan software, whose advanced scanline rendering and shading techniques produced the film's polished, detailed animation.

Entering the 2000s, the field advanced toward greater flexibility and realism through programmable graphics pipelines. Microsoft's DirectX 8, released in November 2000, introduced vertex and pixel shaders, allowing developers to write custom code for transforming and coloring geometry and pixels, moving beyond fixed-function pipelines to enable effects like dynamic lighting and procedural textures in real time. This programmability, supported by GPUs like NVIDIA's GeForce 3, revolutionized game development and visual effects, facilitating more artist-driven control over rendering outcomes.

The 2010s and 2020s witnessed integration with emerging technologies and computational breakthroughs, particularly in real-time ray tracing and AI-enhanced workflows. In March 2018, NVIDIA announced RTX technology with the Turing architecture, enabling hardware-accelerated ray tracing on consumer GPUs, which simulates light paths for accurate reflections, refractions, and shadows at interactive speeds, fundamentally elevating graphical fidelity in games and simulations. Complementing this, NVIDIA's OptiX ray tracing engine incorporated AI-accelerated denoising, using deep learning to remove noise from incomplete ray-traced renders, drastically reducing computation time while preserving detail—often achieving visually clean images in seconds on RTX hardware. Open-source efforts also flourished, exemplified by Blender's Cycles render engine, introduced in 2011 and continually refined through community contributions, which supports unbiased path tracing on CPUs and GPUs, making production-quality rendering freely available and fostering innovations in film, design, and scientific visualization. Key milestones included the 2012 Kickstarter launch of the Oculus Rift, which revitalized virtual reality by leveraging stereoscopic 3D graphics and head-tracking for immersive environments, influencing graphics hardware optimizations for low-latency rendering.
By 2025, these advancements extended to scientific applications, with AI-accelerated simulations and high-resolution 3D visualizations enhancing climate modeling in platforms like NVIDIA's Earth-2, built on the Omniverse platform, enabling researchers to analyze complex atmospheric interactions with unprecedented accuracy.

Core Techniques

3D Modeling

3D modeling involves the creation of digital representations of three-dimensional objects through geometric and topological structures, serving as the foundational step for subsequent processes like animation and rendering. These models define the shape, position, and connectivity of objects in a virtual space using mathematical descriptions that approximate real-world geometry. Common approaches emphasize efficiency in storage, manipulation, and computation, often balancing detail with performance in applications such as games and film.

Fundamental building blocks in 3D modeling are geometric primitives, which include points (zero-dimensional locations defined by coordinates), lines (one-dimensional connections between points), polygons (two-dimensional faces, typically triangles or quadrilaterals), and voxels (three-dimensional volumetric elements analogous to pixels in 2D space). These primitives enable the construction of complex shapes; for instance, polygons form the basis of surface models, while voxels support volumetric representations suitable for simulations such as fluid dynamics or medical imaging.

One prevalent technique is polygonal modeling, where objects are represented as meshes composed of vertices (position points), edges (connections between vertices), and faces (bounded polygonal regions). This structure allows for flexible editing and is widely used due to its compatibility with hardware-accelerated rendering pipelines. A survey on polygonal meshes highlights their role in approximating smooth surfaces through triangulation or quadrangulation, with applications in tasks like simplification and remeshing. For smoother representations, subdivision surfaces refine coarse polygonal meshes iteratively; the Catmull-Clark algorithm, for example, generates limit surfaces that approximate bicubic B-splines on arbitrary topologies by averaging vertex positions across refinement levels. Another important method is digital sculpting, which simulates traditional clay sculpting in a digital environment using brush tools to push, pull, and deform high-resolution meshes. This technique excels at creating intricate organic forms like characters and creatures, often starting from a base mesh and adding detail through dynamic topology adjustments.

Curve- and surface-based methods, such as non-uniform rational B-splines (NURBS), provide precise control for freeform shapes. NURBS extend B-splines by incorporating rational weights, enabling exact representations of conic sections and complex geometries like car bodies in CAD systems. Introduced in Versprille's dissertation, NURBS curves are defined parametrically, with the surface form generalizing tensor-product constructions. Parametric polynomial curves underpin these methods, as seen in Bézier curves, where a curve of degree n is given by \mathbf{B}(t) = \sum_{i=0}^{n} \mathbf{P}_i B_{i,n}(t), \quad 0 \leq t \leq 1, with \mathbf{P}_i as control points and B_{i,n}(t) = \binom{n}{i} t^i (1-t)^{n-i} as Bernstein polynomials. This formulation confines the curve to the convex hull of its control points and smoothly interpolates the first and last control points.

To compose complex models from simpler ones, constructive solid geometry (CSG) employs Boolean operations on primitives: union combines volumes, intersection retains overlapping regions, and difference subtracts one from another. Originating in Requicha's foundational work on solid representations, CSG ensures watertight models by operating on closed sets, though it requires efficient intersection computations for practical use.
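As a brief illustration of the Bézier formulation above, this sketch evaluates a curve by summing Bernstein-weighted control points; the cubic control points are arbitrary sample values, not taken from any particular model.

```python
import numpy as np
from math import comb

def bezier_point(control_points, t):
    """Evaluate a Bézier curve of degree n at parameter t in [0, 1]
    using the Bernstein basis B_{i,n}(t) = C(n, i) * t^i * (1 - t)^(n - i)."""
    pts = np.asarray(control_points, dtype=float)
    n = len(pts) - 1
    weights = [comb(n, i) * t**i * (1 - t) ** (n - i) for i in range(n + 1)]
    return np.sum(pts * np.array(weights)[:, None], axis=0)

# Cubic example: four control points in 3D (arbitrary values).
ctrl = [(0, 0, 0), (1, 2, 0), (3, 3, 1), (4, 0, 0)]
print(bezier_point(ctrl, 0.0))   # first control point
print(bezier_point(ctrl, 0.5))   # a point midway along the curve
print(bezier_point(ctrl, 1.0))   # last control point
```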
Supporting these techniques are data structures like scene graphs, which organize models hierarchically as directed acyclic graphs with nodes representing objects, transformations, and groups. This allows efficient traversal for rendering and simulation by propagating changes through parent-child relationships. For optimization, bounding volumes enclose models to accelerate queries; axis-aligned bounding boxes (AABBs), defined by min-max coordinates along each axis, provide fast intersection tests in collision detection and ray tracing.
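A minimal sketch of the bounding-volume idea: the function below tests whether two axis-aligned bounding boxes, each given by min and max corners, overlap. This kind of cheap rejection test is what accelerates collision and ray queries before any exact geometry is examined.

```python
def aabb_overlap(min_a, max_a, min_b, max_b):
    """Two AABBs overlap only if their intervals overlap on every axis."""
    return all(max_a[i] >= min_b[i] and max_b[i] >= min_a[i] for i in range(3))

# Unit cube at the origin vs. a box shifted along x.
print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0, 0), (1.5, 1, 1)))  # True
print(aabb_overlap((0, 0, 0), (1, 1, 1), (2.0, 0, 0), (3.0, 1, 1)))  # False
```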

Animation and Scene Layout

Scene layout in 3D computer graphics involves the strategic arrangement of modeled objects, cameras, lights, and props to construct a coherent composition that supports narrative or functional goals. Once 3D models are created, they are imported and positioned relative to one another, often using object hierarchies to manage complexity; these hierarchies establish parent-child relationships that propagate transformations such as translations and rotations efficiently across assemblies like characters or vehicles. Camera placement is a critical aspect, defining the viewer's perspective and framing, with techniques ranging from manual adjustments to automated methods that optimize viewpoints for storytelling or scene comprehension. Environmental setup completes the layout by integrating lights to establish mood and directionality, alongside props that fill space and interact with primary elements, ensuring spatial relationships align with intended dynamics.

Animation techniques enable the temporal evolution of these laid-out scenes, transforming static compositions into dynamic sequences. Keyframing remains a foundational method, where animators define discrete poses at specific timestamps, and intermediate positions are generated through interpolation to create fluid motion; this approach draws from traditional animation principles adapted to 3D, emphasizing timing and easing for realism. Linear interpolation provides the basic mechanism for blending between two keyframes \mathbf{P}_0 and \mathbf{P}_1 at parameter t \in [0,1]: \mathbf{P}(t) = (1-t) \mathbf{P}_0 + t \mathbf{P}_1. This is extended to cubic splines, which ensure C^2 continuity for smoother trajectories by fitting piecewise polynomials constrained at endpoints and tangents. For character rigging, inverse kinematics (IK) solves the inverse problem of positioning end effectors—like hands or feet—while computing joint angles to achieve natural poses, contrasting with forward kinematics by prioritizing goal-directed control over sequential joint specification. Motion capture (mocap) is another essential technique, involving the recording of real-world movements using sensors or cameras to capture data from actors or objects, which is then applied to digital models for highly realistic animations. This method reduces manual effort and captures nuanced performances, and is commonly used in film and video games. Procedural animation complements these by algorithmically generating motion without manual keyframing, as in particle systems that simulate dynamic, fuzzy phenomena such as fire or smoke through clouds of independent particles governed by stochastic rules for birth, life, and death.

Physics-based simulations integrate realistic motion into animated scenes by modeling interactions under physical laws. Rigid body dynamics applies Newton's second law (\mathbf{F} = m\mathbf{a}) to compute accelerations from forces and torques on undeformable objects, enabling collisions and constraints that propagate through hierarchies for believable responses like falling or tumbling. For deformable elements, cloth and soft body simulations employ mass-spring models, discretizing surfaces into point masses connected by springs that resist stretching, shearing, and bending; internal pressures or damping forces stabilize the system, allowing emergent behaviors like folding or fluttering.
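A minimal keyframing sketch, assuming keyframes are simply (time, position) pairs: it locates the pair bracketing a query time and applies the linear blend P(t) = (1 - t) P0 + t P1 from above. Production systems would substitute spline or quaternion interpolation for smoother results.

```python
import numpy as np

def sample_keyframes(keyframes, time):
    """Linearly interpolate position between the two keyframes bracketing `time`.
    `keyframes` is a time-sorted list of (timestamp, position) pairs."""
    if time <= keyframes[0][0]:
        return np.asarray(keyframes[0][1], dtype=float)
    if time >= keyframes[-1][0]:
        return np.asarray(keyframes[-1][1], dtype=float)
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= time <= t1:
            t = (time - t0) / (t1 - t0)   # normalized parameter in [0, 1]
            return (1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)

# Two keyframes one second apart; sample a quarter of the way through.
keys = [(0.0, (0, 0, 0)), (1.0, (4, 0, 2))]
print(sample_keyframes(keys, 0.25))   # -> [1.  0.  0.5]
```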

Visual Representation

Materials and Texturing

In 3D computer graphics, materials define the intrinsic properties of surfaces to enable realistic or stylized rendering, independent of lighting conditions. Physically based rendering (PBR) models form the foundation of modern material systems, approximating real-world optical behavior through key parameters such as albedo, roughness, and metallic. Albedo, often represented as a base color or factor, specifies the proportion of light reflected diffusely by the surface, excluding specular contributions. Roughness quantifies the irregularity of microscopic surface facets, with values ranging from 0 (perfectly smooth, mirror-like) to 1 (highly diffuse, matte), influencing the spread of specular highlights. The metallic parameter distinguishes between dielectric (non-metallic) and conductive (metallic) materials; for metals, it drives the specular color from the base color while disabling diffuse reflection, preserving energy conservation in the model. These parameters adhere to microfacet theory, where surface appearance emerges from countless tiny facets oriented randomly.

Texturing enhances materials by mapping 2D images onto 3D geometry using UV coordinates, which parametrize the surface as a flattened 2D domain typically in the [0,1] range for both U and V axes. UV mapping projects textures onto models by unfolding the 3D surface into this 2D space, allowing precise control over how image details align with geometry features like seams or contours. To handle varying screen distances and reduce artifacts, mipmapping precomputes a pyramid of progressively lower-resolution texture versions, selecting the appropriate level of detail (LOD) based on the texel's projected size; this minimizes moiré patterns and improves rendering efficiency by sampling fewer texels for distant surfaces. For adding fine geometric detail without increasing polygon count, normal mapping and bump mapping perturb surface normals during shading. Normal maps encode tangent-space normal vectors in RGB channels (with blue typically dominant for forward-facing perturbations), enabling detailed lighting responses like shadows and highlights on flat geometry. Bump mapping, an earlier precursor, uses grayscale height maps to compute approximate normals via finite differences, simulating elevation variations such as wrinkles or grains. Both techniques integrate seamlessly with materials, applying perturbations to the base normal before lighting computations.

Procedural textures generate patterns algorithmically at runtime, avoiding the need for stored images and allowing infinite variation. A prominent example is Perlin noise, which creates coherent, organic randomness by interpolating gradients across a lattice, ideal for simulating natural phenomena like marble veins or wood grain; higher octaves of noise can be layered for fractal-like complexity. Multilayer texturing extends this by assigning separate maps to PBR channels—diffuse (albedo) for color, specular for reflection intensity and color (in specular/glossiness workflows), and emission for self-glow—often packed into single textures for efficiency, such as combining metallic and roughness into separate channels of one image. Texture sampling retrieves color values from these maps using UV coordinates, typically via a shader function such as GLSL's texture2D(tex, uv), which applies bilinear filtering by linearly interpolating between the four nearest texels to produce smooth, anti-aliased results for non-integer coordinates. This process forms a core step in fragment shaders, where sampled values populate material parameters before final color computation. Materials and texturing thus prepare surfaces for integration into rendering pipelines, such as rasterization, where they inform per-pixel shading evaluations.
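To illustrate the bilinear filtering used by texture sampling, the sketch below looks up a UV coordinate in a small numpy image and blends the four nearest texels; real samplers add wrapping modes and mipmap selection on top of this, and the tiny checker texture here is just a placeholder.

```python
import numpy as np

def sample_bilinear(texture, u, v):
    """Bilinearly filtered lookup; `texture` is an (H, W, C) array, u/v in [0, 1]."""
    h, w = texture.shape[:2]
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend horizontally, then vertically, between the four nearest texels.
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bottom = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bottom

# 2x2 checker texture: sampling at the center averages the four texels.
tex = np.array([[[0.0], [1.0]],
                [[1.0], [0.0]]])
print(sample_bilinear(tex, 0.5, 0.5))   # -> [0.5]
```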

Lighting and Shading

Lighting and shading in 3D computer graphics simulate the interaction between light sources and material surfaces to determine pixel colors, providing visual depth and realism without global light transport computations. Light sources are defined by their position, direction, intensity, and color, while shading models calculate local illumination contributions at surface points based on surface normals and material properties. These techniques form the foundation for approximating realistic appearances in real-time and offline rendering.

Common types of light sources include point lights, which emit illumination uniformly in all directions from a fixed position, mimicking small bulbs or candles; directional lights, which send parallel rays from an effectively infinite distance to model sources like the sun; and area lights, which are extended geometric shapes such as rectangles or spheres that produce softer, more realistic shadows due to their finite size. Point and area lights incorporate distance-based attenuation, following the inverse-square law from physics, where light intensity I falls off as I \propto \frac{1}{d^2}, with d as the distance from the source, to prevent unrealistically bright distant illumination. This is often implemented as a factor in the shading equation, such as \text{att} = \frac{1}{k_c + k_l d + k_q d^2}, where the constant, linear, and quadratic coefficients k_c, k_l, and k_q adjust the falloff curve for artistic control.

Shading models break down surface response into components like ambient, diffuse, and specular. The Lambertian diffuse model, ideal for matte surfaces, computes the diffuse intensity as I_d = k_d \cdot I_p \cdot (\mathbf{N} \cdot \mathbf{L}), where k_d is the diffuse reflectivity, I_p the light's intensity, \mathbf{N} the normalized surface normal, and \mathbf{L} the normalized vector from the surface to the light; the cosine term (\mathbf{N} \cdot \mathbf{L}) accounts for reduced illumination at grazing angles. This model assumes a perfectly diffuse reflector, where light scatters equally in all directions, independent of viewer position. Specular reflection adds shiny highlights to simulate glossy materials. The original Phong model calculates specular intensity as
I_s = k_s \cdot I_p \cdot (\mathbf{R} \cdot \mathbf{V})^n,
where k_s is the specular reflectivity, \mathbf{R} the perfect reflection vector \mathbf{R} = 2(\mathbf{N} \cdot \mathbf{L})\mathbf{N} - \mathbf{L}, \mathbf{V} the normalized view direction, and n the shininess exponent controlling highlight sharpness; higher n produces tighter, more mirror-like reflections. The full Phong shading combines ambient I_a = k_a \cdot I_p, diffuse, and specular terms, often clamped to [0,1] and multiplied by material color.
The Blinn-Phong model refines specular computation for efficiency by using a half-vector approximation: compute \mathbf{H} = \frac{\mathbf{L} + \mathbf{V}}{|\mathbf{L} + \mathbf{V}|}, then I_s = k_s \cdot I_p \cdot (\mathbf{N} \cdot \mathbf{H})^n. This avoids explicit reflection vector calculation, reducing operations per vertex or fragment while closely approximating Phong highlights, especially for low n. Blinn-Phong remains widely used in real-time graphics due to its balance of quality and performance.

For materials beyond opaque surfaces, subsurface scattering models light penetration and re-emission in translucent substances like wax or skin. The dipole model approximates this by placing virtual point sources inside the material to solve diffusion equations, enabling efficient computation of the blurred, soft appearance produced by internal scattering. This approach captures effects like color bleeding and forward scattering, and has been validated against measured data for materials such as marble and skin. Volumetric lighting extends illumination to participating media like atmosphere or fog, where light scatters within the volume to form visible beams known as god rays or crepuscular rays. These are simulated by sampling along rays from the camera through occluders, using techniques like ray marching to integrate scattering contributions and accumulate light for realistic atmospheric effects. In practice, post-processing methods project light sources into screen space, apply radial blurring, and composite the result with depth information to achieve real-time performance.
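The following sketch combines the Lambertian diffuse term and the Blinn-Phong specular term described above for a single light; the material coefficients and vectors are arbitrary placeholders chosen for illustration.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(N, L, V, light_color, kd, ks, shininess, ka=0.05):
    """Local illumination: ambient + Lambert diffuse + Blinn-Phong specular."""
    N, L, V = normalize(N), normalize(L), normalize(V)
    H = normalize(L + V)                      # half-vector between light and view
    diffuse = kd * max(np.dot(N, L), 0.0)     # cosine falloff, clamped at zero
    specular = ks * max(np.dot(N, H), 0.0) ** shininess
    return np.clip((ka + diffuse + specular) * light_color, 0.0, 1.0)

# Surface facing up, light and viewer both above at opposite angles.
color = blinn_phong(N=np.array([0.0, 1.0, 0.0]),
                    L=np.array([1.0, 1.0, 0.0]),
                    V=np.array([-1.0, 1.0, 0.0]),
                    light_color=np.array([1.0, 1.0, 1.0]),
                    kd=0.7, ks=0.3, shininess=32)
print(color)
```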

Rendering Methods

Rasterization Pipeline

The rasterization pipeline is a hardware-accelerated sequence of operations in 3D computer graphics that transforms 3D geometric primitives into a raster image suitable for display on screen pixels. This process projects vertices from world space to screen space and fills the resulting shapes with interpolated attributes, enabling efficient rendering for interactive applications like games and simulations. Unlike physically based methods, it prioritizes speed through approximations, leveraging fixed-function and programmable hardware stages on the graphics processing unit (GPU).

The pipeline begins with vertex processing, where input vertices—typically points in 3D space—are transformed by a vertex shader program. This stage applies model-view-projection matrices to convert coordinates from object space to clip space, handling transformations such as translation, rotation, and scaling. Following vertex processing, primitive assembly groups the transformed vertices into geometric primitives, such as triangles or lines, based on predefined connectivity indices from vertex buffers. The rasterization stage then scans these primitives across the screen, generating fragments—potential pixel coverage samples—by determining which screen pixels overlap each primitive and computing initial attributes like depth (z-value) at those locations. Once fragments are produced, fragment shading (also known as pixel shading) processes each fragment using a fragment shader program, which interpolates and computes per-fragment properties such as color based on vertex attributes. This stage draws on material data for surface appearance, as detailed in materials and shading techniques. Finally, depth testing in the output merger stage resolves visibility by comparing each fragment's depth against a stored z-buffer value per pixel, discarding those behind closer surfaces to perform hidden surface removal. Surviving fragments are then blended into the framebuffer to produce the final image.

Key optimizations enhance efficiency in the rasterization pipeline. Z-buffering, implemented during depth testing, maintains a depth buffer initialized to maximum values and updates it only if a fragment's depth is closer than the current value, effectively handling occlusions without sorting primitives. Backface culling occurs in the rasterizer stage, discarding primitives whose vertices are wound in a direction facing away from the viewer (e.g., clockwise when counter-clockwise defines front faces), reducing unnecessary processing for roughly half of a closed mesh's triangles. The GPU plays a central role by enabling parallelism across pipeline stages through specialized cores: vertex shaders execute concurrently on many vertices, geometry shaders (an optional stage after primitive assembly) can generate or modify primitives on the fly, and fragment shaders process thousands of fragments simultaneously across shader cores, achieving high throughput for real-time rendering. This architecture allows GPUs to handle billions of operations per second, far exceeding CPU capabilities for graphics workloads.

During rasterization and fragment shading, attributes like color or texture coordinates are interpolated across each triangle using barycentric coordinates, which express any point inside a triangle as a weighted combination of its vertices. For a point p within triangle ABC, the coordinates \alpha, \beta, \gamma (summing to 1) are computed from sub-triangle areas: \begin{align*} \alpha &= \frac{A_{pBC}}{A_{ABC}}, \\ \beta &= \frac{A_{ApC}}{A_{ABC}}, \\ \gamma &= \frac{A_{ABp}}{A_{ABC}}, \end{align*} where A_{ABC} is the total triangle area and A_{pBC}, etc., are the areas of the sub-triangles formed by p.
In practice, for a point p with respect to vertices v_1, v_2, v_3, simplified forms use \alpha = A_2 / A and \beta = A_1 / A, with \gamma = 1 - \alpha - \beta, enabling perspective-correct interpolation essential for accurate shading.
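A small sketch of barycentric interpolation for a 2D triangle, using the signed-area formulation above to weight per-vertex attributes such as color; the triangle, sample point, and colors are arbitrary illustrative values.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates (alpha, beta, gamma) of 2D point p in triangle abc,
    computed as ratios of signed areas."""
    def signed_area(p0, p1, p2):
        return 0.5 * ((p1[0] - p0[0]) * (p2[1] - p0[1]) -
                      (p2[0] - p0[0]) * (p1[1] - p0[1]))
    area = signed_area(a, b, c)
    alpha = signed_area(p, b, c) / area
    beta = signed_area(a, p, c) / area
    gamma = 1.0 - alpha - beta
    return alpha, beta, gamma

# Interpolate vertex colors at the triangle's centroid.
a, b, c = (0, 0), (4, 0), (0, 4)
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
w = barycentric((4 / 3, 4 / 3), a, b, c)
print(w, np.array(w) @ colors)   # weights ~ (1/3, 1/3, 1/3), blended color
```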

Ray Tracing and Global Illumination

Ray tracing is a rendering technique that simulates the physical behavior of light by tracing rays from the camera through each pixel of the image plane, computing their interactions with scene geometry to determine color and illumination. Introduced by Turner Whitted in 1980, this method traces primary rays from the viewpoint to find the nearest intersection with objects, then recursively generates secondary rays for reflections, refractions, and shadows to model specular effects and visibility. For shadows, a ray is cast from the intersection point toward each light source; if it intersects another object before reaching the light, the point is in shadow. This recursive process accurately captures phenomena like mirror reflections and transparent refractions but is computationally intensive, often taking hours or days for complex scenes due to the need to trace millions of rays per frame.

To accelerate ray-object intersection tests, spatial data structures partition the scene into hierarchies that prune unnecessary computations. Bounding volume hierarchies (BVHs) organize objects into a tree where each node encloses its children with bounding volumes like axis-aligned bounding boxes (AABBs), allowing rays to skip empty regions efficiently; this structure excels in dynamic scenes and has become prevalent in production renderers. K-d trees, an alternative, recursively subdivide space along axis-aligned planes to create a balanced spatial partition, optimizing traversal for coherent rays but requiring more preprocessing for static scenes. The ray intersection is computed parametrically: a ray originates at point \mathbf{O} in direction \mathbf{D}, with points along it given by \mathbf{P}(t) = \mathbf{O} + t \mathbf{D} for t \geq 0; intersection solves for the smallest positive t against each surface equation, such as planes or spheres. Unlike rasterization, which approximates visibility via geometric projections for real-time performance, ray tracing provides physically accurate light simulation at the cost of slower rendering, making it ideal for offline photorealistic applications.

Basic ray tracing, however, neglects diffuse interreflections, leading to incomplete global illumination. Radiosity addresses this by solving the integral equation for diffuse energy transfer between surfaces, computing form factors to propagate light iteratively across patches until convergence, effectively simulating soft indirect lighting in enclosed environments. Path tracing extends ray tracing to full global illumination by using Monte Carlo integration to sample random light paths from the camera, averaging their contributions to obtain unbiased estimates of the rendering equation, which balances emitted, reflected, and incoming radiance. Pioneered by James Kajiya in 1986, this unbiased method naturally handles all light interactions, including multiple bounces and caustics, though it requires many samples to reduce noise. For caustics—bright patterns from focused specular reflections or refractions—photon mapping traces photons from the lights into the scene to build density maps, then reconstructs illumination during ray tracing, enabling efficient simulation of effects like underwater light shafts.

Modern advancements enable hybrid real-time ray tracing by combining ray tracing for primary effects with rasterization, using AI-based denoising to filter noise from low-sample paths, achieving interactive frame rates on GPUs while approximating global illumination. Recent developments as of 2025 include Microsoft's DirectX Raytracing (DXR) 1.2, announced in March 2025, which offers up to 2.3x performance improvements in complex ray tracing scenes, and NVIDIA's OptiX 9.0.0, released in February 2025.
These techniques reduce render times from seconds to milliseconds per frame, bridging offline accuracy with real-time demands in games and simulations.
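As an illustration of the parametric ray intersection described above, this sketch solves for the nearest hit of a ray P(t) = O + tD against a sphere by substituting into the implicit sphere equation and taking the smallest positive root of the resulting quadratic; the scene values are arbitrary.

```python
import numpy as np

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the smallest positive t where the ray hits the sphere, or None."""
    oc = origin - center
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                            # ray misses the sphere entirely
    sqrt_disc = np.sqrt(disc)
    for t in ((-b - sqrt_disc) / (2 * a), (-b + sqrt_disc) / (2 * a)):
        if t > 1e-6:                           # ignore hits behind the origin
            return t
    return None

# Camera-style ray along +z toward a unit sphere five units away.
t = ray_sphere_intersect(np.array([0.0, 0.0, 0.0]),
                         np.array([0.0, 0.0, 1.0]),
                         np.array([0.0, 0.0, 5.0]), 1.0)
print(t)   # -> 4.0, the near surface of the sphere
```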

Software and Tools

Modeling and Animation Software

Modeling and animation software encompasses a range of applications designed to facilitate the creation, manipulation, and temporal sequencing of 3D models and scenes, enabling artists to build complex digital assets and choreograph their movements. These tools typically include intuitive interfaces for polygonal and NURBS modeling, skeletal rigging for character deformation, keyframe-based or procedural animation systems, and integrated timelines for sequencing actions. Widely adopted in the film, game, architecture, and design industries, such software has evolved to support collaborative workflows and hybrid techniques that blend automation with traditional artistry.

Autodesk Maya stands as an industry standard for 3D animation, particularly in professional production pipelines for film and games, offering robust tools for modeling, rigging, and animation. Originally developed by Alias|Wavefront in the 1990s and released as version 1.0 in 1998, Maya was acquired by Autodesk in 2005, integrating with other Autodesk products for enhanced workflow efficiency. Key features include advanced rigging systems that allow precise control over skeletons and skinning, timeline editors for keyframe and graph-based curve animation, and plugins such as nCloth, which simulates dynamic cloth and soft body interactions using a particle-linked system for realistic motion.

Blender, an open-source alternative providing a comprehensive pipeline from modeling to rendering, has gained prominence for its versatility and community-driven development since its open-source release in 2002, originating from a 1994 in-house project by Ton Roosendaal. It features a non-linear timeline editor that supports keyframing, dope sheets, and action clips for efficient animation management, alongside tools like armature systems for bone-based deformation and inverse kinematics solvers. A distinctive capability is the Grease Pencil tool, first introduced in 2009 and significantly enhanced in version 2.80 in 2019 for hybrid 2D/3D workflows, allowing stroke-based drawing directly in the 3D viewport for storyboarding, cut-out animation, and integration with 3D scenes.

Houdini excels in procedural modeling and animation, emphasizing node-based workflows for non-destructive, parametric asset generation, particularly suited for visual effects in film and television. Developed by Side Effects Software, its procedural foundations trace back to the PRISMS system, with Houdini proper emerging in the mid-1990s and evolving into a versatile tool for dynamic simulations and procedural motion through features like CHOPs (Channel Operators) and reusable digital assets. The software's timeline and expression language support complex, rule-driven animations that adapt to changes in model geometry or scene parameters.

The evolution of these tools reflects a shift from proprietary systems of the 1980s and 1990s, such as Alias|Wavefront's early modeling packages that pioneered NURBS and keyframe animation on graphics workstations, to diverse ecosystems in the 2020s incorporating open-source and cloud-based collaboration. Modern platforms like Unity's animation system, with its state machines, clip blending, and cloud-based collaboration features for real-time team syncing, exemplify this progression by enabling distributed workflows for game development and interactive media.

Rendering and Simulation Tools

Rendering and simulation tools in 3D computer graphics encompass specialized software that computes photorealistic images from scene data and simulates physical phenomena like motion and fluids, enabling production-quality visuals in film, animation, and games. These tools operate after the modeling and animation stages, focusing on efficient computation of light interactions and dynamic effects to produce final outputs such as rendered frames or simulation caches. Key rendering engines employ ray tracing or path tracing algorithms for accuracy, while simulation tools leverage physics engines for realistic behaviors, often accelerated by modern hardware.

A foundational example is Pixar's RenderMan, which originated from the REYES (Renders Everything You Ever Saw) architecture developed in the 1980s at Lucasfilm's computer graphics division. REYES processes complex scenes by micropolygon rendering, where geometry is subdivided into small primitives for efficient shading and sampling, allowing high-quality outputs without excessive memory use; this design shaped RenderMan's pipeline for film production for decades. Among modern rendering engines, Arnold stands out for film and television work, utilizing CPU- and GPU-based ray tracing to deliver physically accurate results with features like adaptive sampling and denoising. Integrated into software like Maya, it supports complex geometry and volumetrics, making it suitable for intricate scenes in feature productions. Cycles, the built-in path tracer for Blender, provides physically based rendering with unbiased and biased modes, supporting GPU acceleration via CUDA, OptiX, and HIP for faster previews and final renders; its node-based shader system facilitates procedural materials and light-path optimization. V-Ray, developed by Chaos Group, offers versatile hybrid rendering for architecture visualization and design, combining CPU/GPU ray tracing with progressive rendering and AI denoising to produce photorealistic images quickly, often used in tools like SketchUp and 3ds Max for stills and animations.

Simulation tools complement rendering by modeling physical interactions. NVIDIA's PhysX is an open-source SDK for real-time physics, handling rigid bodies, collisions, particles, and cloth with GPU acceleration on NVIDIA hardware, widely adopted in game engines like Unreal for dynamic environments. EmberGen from JangaFX specializes in real-time volumetric fluid simulations for smoke, fire, and explosions, using GPU-based solvers and vorticity confinement to generate VDB sequences in seconds, ideal for VFX artists needing rapid iterations without baking long simulations.

Advancements in the 2010s introduced GPU-accelerated rendering, exemplified by Redshift, a biased renderer from Maxon that leverages CUDA-capable GPUs for out-of-core handling of massive scenes, reducing render times dramatically compared to CPU-only systems—often achieving 10-100x speedups in production workflows. In the 2020s, AI-upscaled rendering emerged, with NVIDIA's DLSS using deep learning super sampling to upscale lower-resolution renders in real time, improving image quality and frame rates in 3D applications and viewport previews while minimizing computational cost. These tools integrate seamlessly with animation software to streamline pipelines from simulation to final output.

Data Handling

3D File Formats

3D file formats store geometric data, materials, animations, and scene hierarchies essential for representing 3D models and assets in applications. These formats vary in structure, from simple text-based representations of meshes to complex binary containers supporting dynamic elements like skeletal animations. Common formats balance accessibility, compactness, and compatibility, enabling exchange across modeling, rendering, and simulation workflows.

The OBJ format, developed by Wavefront Technologies, is a text-based standard for defining geometry, primarily supporting vertices, normals, texture coordinates, and polygonal faces for simple meshes. It uses ASCII encoding, making it human-readable and editable with text editors, but it lacks native support for advanced features like animations or complex materials, often requiring a companion MTL file for basic material definitions. OBJ files are widely used for static model interchange due to their simplicity and broad software compatibility. In contrast, the FBX format, a proprietary standard owned by Autodesk since 2006, supports both binary and ASCII encodings and encompasses a broader scope, including meshes, skeletal animations, blend shapes, and basic materials. Binary FBX files are more compact and efficient for large datasets, while ASCII variants aid inspection and debugging. Developed originally by Kaydara for MotionBuilder, FBX facilitates seamless data transfer in animation pipelines, handling hierarchical scenes and deformation data.

For scene-level data, the glTF (GL Transmission Format) specification, maintained by the Khronos Group and adopted as the ISO/IEC 12113:2022 standard, provides a royalty-free, JSON-based structure optimized for real-time rendering and web applications. It describes entire scenes with nodes, meshes, materials, animations, and skins, often packaged in a compact .glb file that embeds resources like textures. glTF's design emphasizes low overhead and fast loading, making it suitable for runtime delivery in browsers and mobile devices.

Specialized formats address niche needs, such as the STL (stereolithography) format, introduced by 3D Systems in 1987, which represents surfaces as triangular facets in either ASCII or binary form, focusing exclusively on watertight meshes for additive manufacturing. Binary STL files include an 80-byte header followed by triangle data, prioritizing geometric approximation over attributes like colors or textures. Similarly, the Alembic (.abc) format, an open-source standard co-developed by Sony Pictures Imageworks and Industrial Light & Magic in 2010, stores baked animation caches and procedural geometry as hierarchical, time-sampled transforms and meshes, enabling efficient exchange of complex animated data without re-simulating it downstream.

Despite their utility, early formats like OBJ exhibit limitations, such as incomplete material support that requires external files and does not natively handle physically based rendering (PBR) workflows. By the 2020s, formats like glTF evolved to incorporate PBR materials, using metallic-roughness or specular-glossiness models for consistent, realistic shading across tools, addressing interoperability challenges in modern pipelines.
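Because OBJ is plain text, a minimal reader is easy to sketch. The illustrative parser below handles only `v` (vertex) and `f` (face) records and ignores the format's other features (normals, texture coordinates, materials); the file name in the usage comment is hypothetical.

```python
def load_obj(path):
    """Read vertex positions and faces from a Wavefront OBJ file."""
    vertices, faces = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts or parts[0].startswith("#"):
                continue                                  # skip blanks and comments
            if parts[0] == "v":                           # vertex position: v x y z
                vertices.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == "f":                         # face: f v1 v2 v3 ... (1-based)
                faces.append(tuple(int(tok.split("/")[0]) - 1 for tok in parts[1:]))
    return vertices, faces

# Example usage with a hypothetical file name:
# verts, polys = load_obj("teapot.obj")
# print(len(verts), "vertices,", len(polys), "faces")
```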

Data Interchange Standards

Data interchange standards in 3D computer graphics enable seamless transfer of scene data, models, and metadata across diverse tools and pipelines, promoting interoperability in collaborative workflows. One prominent standard is Universal Scene Description (USD), developed by Pixar Animation Studios and open-sourced in 2016 to support efficient exchange in large-scale production environments, such as film and visual effects pipelines where multiple artists contribute to complex scenes. USD facilitates non-destructive composition of 3D assets, allowing teams to layer modifications without altering original files, which is particularly valuable for iterative creative processes. Its open-source evolution, branded as OpenUSD, gained momentum through the formation of the Alliance for OpenUSD in 2023, involving industry leaders such as Pixar, Adobe, Apple, Autodesk, and NVIDIA to standardize and extend its capabilities for broader 3D ecosystems.

Another key standard is glTF 2.0, released by the Khronos Group in 2017 as a runtime asset delivery format optimized for web and real-time applications, reducing file sizes and processing overhead for efficient 3D model transmission. glTF supports extensions that enhance its utility, including ongoing efforts to incorporate physics properties, enabling better integration of interactive elements in simulations and games. Low-level graphics APIs like Vulkan and Direct3D 12 play a complementary role by providing interfaces for rendering interchanged 3D data directly on GPUs, with Vulkan offering cross-platform, explicit control over resource management to minimize overhead in high-performance pipelines.

A distinctive feature of USD is its layering system, which organizes scene data into modular, independent layers that can be composed hierarchically—such as base models in one layer and overrides like animations or shading in others—ensuring changes remain non-destructive and reversible. This approach supports variant sets for exploring alternatives (e.g., different character poses) without duplicating data, streamlining collaboration in tools from modeling to rendering. Despite these advances, challenges persist in 3D data interchange, including managing versioning to track evolving assets across tools and preserving metadata like material or rigging parameters during transfers, which can lead to loss of intent or compatibility issues in long-term projects. As of 2025, USD continues to evolve with enhancements for AR/VR applications, including improved streaming capabilities for real-time delivery of layered scenes to headsets, as evidenced by integrations in production workflows and alliance-driven extensions.

Specialized Rendering

Real-Time Graphics

Real-time graphics encompasses the techniques and advancements that enable interactive rendering at high frame rates, typically targeting applications such as games and simulations where responsiveness is paramount. These methods build on the rasterization pipeline by incorporating optimizations and hardware acceleration to manage complex scenes without compromising interactivity. Key goals include minimizing latency and maximizing throughput on consumer hardware, allowing for immersive experiences with dynamic lighting, shadows, and animations.

Optimizations like level-of-detail (LOD) systems play a crucial role in balancing visual fidelity and performance by dynamically reducing the geometric complexity of objects based on their distance from the viewer or screen-space importance. For instance, distant buildings might use a low-polygon version, while nearby characters retain high-detail models, preventing unnecessary computation (a minimal selection sketch appears at the end of this section). Occlusion culling further enhances efficiency by identifying and excluding geometry hidden behind closer objects, such as walls or characters, from the rendering workload, which can significantly reduce the number of draw calls in complex scenes such as dense urban environments. Tessellation shaders, introduced in graphics APIs like DirectX 11 and OpenGL 4.0, allow adaptive subdivision of base meshes on the GPU, generating finer geometry where needed—such as on curved surfaces—for smoother silhouettes without inflating vertex buffers.

Hardware support is foundational to real-time graphics, with modern GPUs like NVIDIA's Ada Lovelace architecture (launched in 2022) providing massive parallelism through thousands of shader cores and specialized tensor cores for AI-accelerated features. This architecture enables real-time ray tracing and upscaling via Deep Learning Super Sampling (DLSS), which uses AI to reconstruct higher-resolution images from lower internal renders, achieving up to 4x performance gains in demanding titles while maintaining image quality. For example, DLSS 3 adds frame generation to boost frame rates in games like Cyberpunk 2077, using optical flow analysis to synthesize intermediate frames.

Advanced techniques such as deferred rendering decouple geometry passes from lighting computations, storing attributes like normals and albedo in G-buffers for efficient application of multiple lights in screen space, which is essential for scenes with dozens of dynamic light sources. This approach scales better than forward rendering for complex lighting, as shading is computed only for visible pixels rather than for every vertex. Compute shaders extend GPU programmability beyond the graphics pipeline, enabling simulations for effects like particle systems, where millions of elements—such as sparks or debris—can be updated in parallel, as in NVIDIA's Compute Particles sample.

Performance in real-time graphics is measured by frame rates, with industry standards aiming for 60 frames per second (FPS) or higher to ensure fluid motion, corresponding to a maximum frame budget of approximately 16.67 milliseconds. The relationship between frame rate and frame time is given by: \text{frame time} = \frac{1}{\text{FPS}} = \text{render time} + \text{sync time}, where render time encompasses CPU and GPU processing, and sync time accounts for vertical synchronization to prevent screen tearing. Achieving consistent metrics often involves profiling tools to balance draw calls and shader complexity, as exceeding the frame budget leads to stuttering in interactive applications.
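A minimal sketch of distance-based LOD selection, assuming each object stores meshes ordered from most to least detailed along with switch distances; real engines typically use projected screen-space size instead of raw distance, and the mesh names here are placeholders.

```python
def select_lod(distance, lod_levels):
    """Pick the first LOD whose switch distance covers the object's distance.
    `lod_levels` is a list of (max_distance, mesh_name) sorted by max_distance."""
    for max_distance, mesh in lod_levels:
        if distance <= max_distance:
            return mesh
    return lod_levels[-1][1]          # beyond the last threshold: coarsest mesh

lods = [(10.0, "character_high"), (40.0, "character_medium"), (200.0, "character_low")]
print(select_lod(5.0, lods))     # -> character_high
print(select_lod(75.0, lods))    # -> character_low
```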

Non-Photorealistic Rendering

Non-photorealistic rendering (NPR) encompasses a range of techniques designed to produce stylized images that emulate traditional artistic media, such as hand-drawn illustrations, paintings, or cartoons, in contrast to the simulation of physical lighting and materials in photorealistic rendering. These methods prioritize expressive communication and aesthetic appeal over realism, often abstracting details to emphasize form, motion, or conceptual information. Seminal work in NPR traces back to Paul Haeberli's 1990 paper "Paint by Numbers: Abstract Image Representations," which introduced painterly effects by splatting colored disks onto images to mimic brush strokes. Subsequent developments expanded NPR to 3D models, focusing on line-based and tonal stylization to convey shape and depth artistically.

Key techniques in NPR include cel-shading, also known as toon shading, which applies flat color bands and bold outlines to 3D models to achieve a comic-book appearance. Cel-shading typically involves a two-step process: first, quantizing shading into discrete levels using threshold functions on diffuse lighting, and second, detecting and rendering outlines via edge detection algorithms that identify silhouette edges where surface normals face away from the viewer. A foundational approach to real-time cel-shading was presented in the 1997 SIGGRAPH paper "Real-Time Nonphotorealistic Rendering," which used image-space processing to extract feature edges, including silhouettes, by analyzing depth discontinuities and normal variations in rendered buffers. Another prominent technique is stroke-based rendering for sketchy lines, where 3D scenes are approximated by placing oriented strokes or lines that follow surface geometry, creating hand-drawn-like effects. Aaron Hertzmann's 2002 survey "Stroke-Based Rendering" outlines algorithms that optimize stroke placement by minimizing image reconstruction error, often starting with suggestive contours and refining via greedy selection to balance coverage and stylization.

NPR pipelines commonly integrate silhouette detection as a core step to highlight object boundaries, enabling stylized outlines in various media. These pipelines typically operate in object space or image space: object-space methods compute back-facing polygons adjacent to front-facing ones to trace exact silhouette curves, as detailed in the 2003 IEEE Computer Graphics and Applications guide "A Developer's Guide to Silhouette Algorithms for Polygonal Models," which categorizes techniques by efficiency for polygonal meshes. Image-space alternatives, faster for real-time use, post-process depth or normal buffers to detect edges via Sobel filters or thresholds. For watercolor effects, NPR simulates pigment diffusion and bleeding by blurring edges and modulating pigment flow; the 1997 paper "Computer-Generated Watercolor" by Curtis, Anderson, Seims, Fleischer, and Salesin models these through pigment transport on simulated paper, where edge blurring is achieved by convolving boundaries with diffusion kernels to create soft, irregular borders mimicking wet-on-wet techniques.

Applications of NPR span entertainment and scientific domains, enhancing visual storytelling and comprehension. In video games, cel-shading defines the distinctive look of titles like the Borderlands series (2009–present), where Gearbox Software employs custom shaders for flat shading and rim lighting to maintain a vibrant, comic-inspired aesthetic across dynamic scenes.
In scientific visualization, NPR facilitates illustrative anatomy by abstracting complex volumes into clear, didactic diagrams; for instance, the 2005 Eurographics paper "Illustrative Visualization for Medical Training" demonstrates tone-based shading and exploded views on CT scans to reveal subsurface structures, improving educational efficacy over photorealistic volumes. Recent advancements in NPR leverage deep learning for style transfer, enabling automatic adaptation of artistic styles to rendered content. Neural networks, particularly convolutional models, perform style transfer by optimizing content loss (preserving geometry) against style loss (matching artistic textures); a 2023 Applied Sciences study comparing neural and gradient-based algorithms in non-photorealistic rendering shows that neural methods outperform traditional optimization in capturing sketchy or painterly details with fewer iterations. The 2025 paper "Hybridizing Expressive Rendering: Stroke-Based Rendering with Classic and Neural Methods" highlights hybrid approaches where neural networks generate stroke parameters, blending classical NPR with learned models for coherent animations in real-time applications.
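A minimal cel-shading sketch: the Lambertian cosine term is quantized into a few discrete bands, which is the core of the flat-color look described above; the band count and lighting directions are arbitrary illustrative choices, and a full shader would also add outline detection.

```python
import numpy as np

def toon_shade(N, L, bands=3):
    """Quantize the diffuse cosine term into `bands` flat intensity levels."""
    N = N / np.linalg.norm(N)
    L = L / np.linalg.norm(L)
    diffuse = max(np.dot(N, L), 0.0)                   # standard Lambert term
    level = np.floor(diffuse * bands) / (bands - 1)    # snap to a discrete band
    return min(level, 1.0)

# The same upward-facing surface lit from three angles collapses to stepped levels.
for L in ([0.0, 1.0, 0.0], [1.0, 0.6, 0.0], [1.0, 0.2, 0.0]):
    print(toon_shade(np.array([0.0, 1.0, 0.0]), np.array(L)))   # -> 1.0, 0.5, 0.0
```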

Emerging Technologies

Integration with AI and Machine Learning

Artificial intelligence and machine learning have profoundly transformed 3D computer graphics by automating complex tasks, enhancing rendering quality, and enabling novel content creation. These technologies integrate into various stages of the graphics pipeline, from scene generation to rendering optimization, leveraging neural networks to achieve results that were previously labor-intensive or computationally prohibitive. Seminal advancements, such as neural radiance fields (NeRFs) introduced in 2020, allow for high-fidelity scene reconstruction from sparse input data, representing scenes as continuous functions optimized via gradient descent to synthesize photorealistic novel views.

Generative models have emerged as a cornerstone of AI-driven graphics, facilitating the creation of assets from text prompts or images. For instance, text-to-image diffusion models have been adapted in the 2020s for 3D generation, as in DreamFusion, which combines score distillation sampling with neural radiance fields to produce textured shapes from textual descriptions. Building on this, Gaussian splatting, proposed in 2023, represents scenes using explicit 3D Gaussians optimized with differentiable rasterization, enabling real-time radiance field rendering at over 100 frames per second while supporting high-quality novel view synthesis. These models address limitations in traditional mesh-based representations by providing differentiable, compact scene encodings suitable for graphics applications.

In rendering workflows, AI excels at post-processing tasks like denoising, particularly for ray tracing where Monte Carlo sampling introduces noise. Intel's Open Image Denoise, an open-source library released in 2019 and continually updated, employs convolutional neural networks trained on ray-traced data to remove noise from images, achieving up to 10x faster convergence than traditional filters while preserving detail. Similarly, NVIDIA's Deep Learning Super Sampling (DLSS), introduced in 2019 and evolving to DLSS 4 by 2025, uses AI-based temporal upscaling and frame generation powered by Tensor Cores to upscale low-resolution renders in real time, boosting frame rates by 2-4x in games without perceptible quality loss. These tools integrate seamlessly into rendering pipelines, enhancing efficiency for both offline and real-time graphics.

Advancements in machine learning also automate labor-intensive aspects of 3D production, such as character rigging. RigNet, a 2020 neural architecture, automates skeletal rigging for articulated characters by predicting bone placements and skinning weights from mesh geometry, reducing manual rigging time from hours to seconds with accuracy comparable to expert results on diverse datasets. This approach leverages graph neural networks to infer rig topologies, enabling scalable character preparation for films and games.

Despite these innovations, challenges persist in AI integration for 3D graphics. Training data biases can propagate unfair representations, leading to skewed outputs that reinforce stereotypes, such as racial and gender biases in AI models, as highlighted in the 2025 AI Index Report. Computational costs remain a barrier, with training large models like NeRF variants requiring thousands of GPU hours; however, by 2025, techniques like Gaussian splatting have reduced inference times by orders of magnitude, from minutes per frame to milliseconds, democratizing access through efficient representations. Ongoing efforts focus on bias mitigation via diverse datasets and cost optimizations through hardware acceleration.

Virtual and Augmented Reality Applications

Virtual reality (VR) rendering in 3D computer graphics relies on stereoscopic views generated through dual eye projections to simulate depth perception, mimicking the human binocular vision system by rendering separate images for each eye with a slight horizontal offset based on inter-pupillary distance. This approach enables immersive experiences in head-mounted displays, where near-eye displays present the projections to create a sense of presence. To optimize computational efficiency, foveated rendering techniques leverage eye-tracking to render high-resolution detail only in the user's central gaze region (the fovea) while reducing quality in the periphery, potentially cutting rendering costs by up to 70% without perceptible loss in visual fidelity. Seminal work in this area demonstrated that gaze-contingent foveation, when paired with low-latency tracking, supports wider field-of-view displays in VR systems.

In augmented reality (AR), spatial mapping integrates virtual 3D graphics with the physical environment using techniques such as simultaneous localization and mapping (SLAM), as implemented in frameworks such as ARKit, which combines visual-inertial odometry from device cameras and motion sensors to build real-time 3D maps of the surroundings. This enables precise anchoring of virtual objects to real-world coordinates, allowing stable overlays that persist across device movements. Occlusion handling further enhances realism by determining depth relationships between virtual and real elements, ensuring that closer real objects obscure farther virtual ones through depth buffering or model-based segmentation in the rendering pipeline. For instance, depth-aware compositing uses z-buffer algorithms adapted for AR, where virtual geometry is clipped or blended against real-world depth data to avoid unnatural transparency.

Hardware advancements have been pivotal, with the Meta Quest series exemplifying standalone VR headsets optimized for 3D graphics rendering: starting with the Oculus Quest in 2019, followed by the Quest 2 in 2020, Quest Pro in 2022, Quest 3 in 2023, and Quest 3S in 2024, these devices incorporate inside-out tracking and high-resolution displays supporting stereoscopic 3D at refresh rates of up to 120 Hz. In AR, LiDAR scanners on devices such as iPhones provide high-precision depth sensing, capturing point clouds for environmental reconstruction and enabling accurate spatial alignment of 3D graphics with the real environment.

Key challenges include minimizing end-to-end latency to under 20 ms (the motion-to-photon delay) to prevent motion sickness and maintain immersion, as delays beyond this threshold degrade perceptual synchrony in dynamic scenes. By 2025, advancements in holographic displays have introduced content-adaptive optimization for AR, using computational holography to generate light fields that support true volumetric rendering with reduced speckle noise and improved occlusion for mixed-reality integration. These real-time optimizations build on general graphics pipelines to address immersion-specific demands.
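The dual eye projections underlying stereoscopic VR rendering can be sketched as two view matrices derived from a single head pose, offset by half the inter-pupillary distance (IPD) along the head's lateral axis and sharing one perspective projection. The example below is a simplified illustration, not tied to any particular headset SDK; the 63 mm IPD default, the symmetric per-eye frustum, and the function names are assumptions for the demo.

```python
# Sketch of dual eye projections for stereoscopic VR: each eye's view matrix is
# the head pose shifted by half the IPD along the head's x axis; both eyes share
# one perspective projection here (real headsets use per-eye asymmetric frusta).
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def eye_view(head_to_world, ipd=0.063, eye="left"):
    """World-to-eye view matrix for one eye, given the head pose."""
    offset = np.eye(4)
    offset[0, 3] = -ipd / 2 if eye == "left" else ipd / 2   # shift along head's x axis
    eye_to_world = head_to_world @ offset
    return np.linalg.inv(eye_to_world)                      # view matrix = inverse pose

head_pose = np.eye(4)                       # head at origin, looking down -z
proj = perspective(90.0, 1.0, 0.1, 100.0)
for eye in ("left", "right"):
    clip = proj @ eye_view(head_pose, eye=eye) @ np.array([0.0, 0.0, -2.0, 1.0])
    print(eye, clip[:3] / clip[3])          # NDC position of a point 2 m ahead
```

Projecting the same world-space point through both view-projection matrices yields slightly different horizontal screen positions, which is the binocular disparity that the headset optics turn into perceived depth.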

References

  1. Overview of three-dimensional computer graphics
  2. Introduction to Computer Graphics - UCSD CSE
  3. History of Computer Graphics - CS 248 course notes
  4. A Short History of Computer Graphics
  5. Introduction to 3D Graphics
  6. Computer Graphics For All - Communications of the ACM
  7. The 3D Model Acquisition Pipeline - Computer Graphics Group
  8. Three Dimensional Computer Graphics - ScienceDirect
  9. Chapter 3 - Computer Graphics Primer - VTK Book
  10. 3D Coordinates and Transforms
  11. The Geometry of Perspective Projection
  12. Representing Images and Geometry
  13. 5.6: Three-Dimensional Homogeneous Coordinates
  14. Explaining Homogeneous Coordinates & Projective Geometry
  15. The Perspective and Orthographic Projection Matrix - Scratchapixel
  16. Unreal Engine: The most powerful real-time 3D creation tool
  17. Build Multi-Platform Video Games - Unreal Engine
  19. Avatar - Framestore
  20. Shapespark: Real-time architectural visualizations in a browser
  21. 3D Architectural Walkthrough Services | OMEGARENDER
  22. Medical 3D Printing & Anatomical Modeling - Mayo Clinic
  23. From medical imaging data to 3D printed anatomical models
  24. 3D Prototyping in Product Design: Bridging Innovation and Efficiency
  25. How Does Product Prototyping with 3D Modeling Help Grow Sales ...
  26. Computer Graphics Market | Global Market Analysis Report - 2035
  27. Cultural odyssey in the metaverse: investigating the impact of virtual ...
  28. The societal impact of the metaverse | Discover Artificial Intelligence
  29. NVIDIA, RTXs, H100, and more: The Evolution of GPU - Deepgram
  30. The Evolution of Graphics Cards: A Historical Perspective - Medium
  31. What is interactive 3D? - Unreal Engine
  32. Ethical Implications of Generative AI in 3D Modeling
  33. What Are The Ethics Of Deepfake Technology? - Consensus
  34. Sketchpad: a man-machine graphical communication system
  35. Spotlight on Turing Laureates - ACM
  37. History of Computer Graphics: 1960
  38. A timeline of 3D softwares - RTF | Rethinking The Future
  39. The Utah Teapot - Utah Graphics Lab
  41. Continuous Shading of Curved Surfaces - ACM Digital Library
  42. Illumination for computer generated pictures - ACM Digital Library
  43. Famous Graphics Chips: 3Dfx's Voodoo - IEEE Computer Society
  44. 3dfx Voodoo - the graphics card that revolutionized PC gaming
  45. RenderMan at 30: A Visual History - VFX Voice
  46. Shader Programming Part I: Fundamentals of Vertex ... - GameDev.net
  47. NVIDIA RTX Technology Realizes Dream of Real-Time Cinematic ...
  48. NVIDIA OptiX™ AI-Accelerated Denoiser
  49. Blender's History
  50. Virtual Reality (VR) In Gaming: 2025 Guide - Euphoria XR
  51. Real-Time Ray Tracing Realized: RTX Brings the Future of Graphics ...
  52. Chapter 7 - Advanced Computer Graphics - VTK Book
  53. Voxelisation Algorithms and Data Structures: A Review - PMC
  54. Geometric Modeling Based on Polygonal Meshes - cs.Princeton
  55. Recursively generated B-spline surfaces on arbitrary topological ...
  56. Computer-aided design applications of the rational b-spline ...
  57. Bézier Curves - CUNY Academic Works
  58. Using graph-based data structures to organize and manage scene ...
  59. A Survey on Bounding Volume Hierarchies for Ray Tracing
  60. Hierarchical Graph Networks for 3D Indoor Scene Generation With ...
  61. Automatic View Placement in 3D toward Hierarchical Non-Linear ...
  62. A Large Language Model-Based System for Semantic ...
  63. Principles of traditional animation applied to 3D computer animation
  64. Interpolating Splines for Keyframe Animation
  65. Particle Systems—a Technique for Modeling a Class of Fuzzy Objects
  66. Large steps in cloth simulation | Proceedings of the 25th annual ...
  67. Texture and reflection in computer generated images
  68. Improving Noise - NYU Media Research Lab
  70. Introduction to Shading - Scratchapixel
  71. Lighting - LearnOpenGL
  72. James F. Blinn, University of Utah
  73. A practical model for subsurface light transport - ACM Digital Library
  74. Chapter 13. Volumetric Light Scattering as a Post-Process
  75. Graphics pipeline - Win32 apps - Microsoft Learn
  76. Chapter 28. Graphics Pipeline Performance - NVIDIA Developer
  77. Chapter 35. GPU Program Optimization - NVIDIA Developer
  79. Direct3D Architecture (Direct3D 9) - Win32 apps - Microsoft Learn
  80. Rasterization - GAMMA
  81. Rasterizing triangles - CS@Cornell
  82. An improved illumination model for shaded display
  83. The ultimate guide to 3D animation - Autodesk
  84. Blender's History - Blender 4.5 LTS Manual
  86. How Autodesk Maya Became the Industry Standard
  87. Maya Help | nCloth | Autodesk
  88. Introduction - Blender 4.5 LTS Manual
  89. CORE Features | SideFX
  90. Side Effects Software - 25 years on - fxguide
  91. Animation system overview - Unity Manual
  92. Unity Cloud: Products for Real-Time 3D Creators
  93. The Reyes Image Rendering Architecture - Computer Graphics, Volume 21, Number 4, July 1987
  94. Cycles Renderer
  96. PhysX SDK - Latest Features & Libraries - NVIDIA Developer
  97. EmberGen: Realtime fire and explosions - JangaFX
  98. Redshift | GPU-Accelerated 3D Renderer
  99. NVIDIA DLSS 4 Technology
  100. Wavefront OBJ File Format - The Library of Congress
  101. FBX | Adaptable File Formats for 3D Animation Software - Autodesk
  102. B1. Object Files (.obj) - Paul Bourke
  103. Autodesk Filmbox Interchange File (FBX) - The Library of Congress
  104. glTF™ 2.0 Specification - Khronos Registry
  105. glTF - Runtime 3D Asset Delivery - The Khronos Group
  106. What Is An STL File? - 3D Systems
  107. STL (STereoLithography) File Format, Binary - Library of Congress
  108. Autodesk Alembic - The Library of Congress
  109. Chapter 29. Efficient Occlusion Culling - NVIDIA Developer
  110. Level of Detail for 3D Graphics - ACM Digital Library
  111. The NVIDIA Ada Lovelace Architecture
  112. Real-Time, Continuous Level of Detail Rendering of Height Fields
  113. NVIDIA Ada GPU Architecture
  114. NVIDIA Ada Lovelace architecture
  115. Deferred Shading - LearnOpenGL
  116. OpenGL Graphics and Compute Samples - NVIDIA Docs Hub
  117. Best practices for profiling game performance - Unity
  118. Calculating frames per second in a game - Stack Overflow
  119. Stroke-Based Rendering
  120. Illustrative Visualization for Medical Training - Eurographics
  121. Comparing Neural Style Transfer and Gradient-Based Algorithms in Non-photorealistic Rendering
  122. Representing Scenes as Neural Radiance Fields for View Synthesis
  123. 3D Gaussian Splatting for Real-Time Radiance Field Rendering
  124. Intel® Open Image Denoise
  125. NVIDIA DLSS 2.0: A Big Leap In AI Rendering | GeForce News
  126. Artificial Intelligence Index Report 2025 | Stanford HAI
  127. Optimizing depth perception in virtual and augmented reality ...
  128. A Real-Time View Synthesis with Improved Visual Quality for VR
  129. Towards foveated rendering for gaze-tracked virtual reality
  130. Towards Foveated Rendering for Gaze-Tracked Virtual Reality
  131. Understanding World Tracking | Apple Developer Documentation
  132. Occlusion Handling in Augmented Reality: Past, Present and Future
  133. Real-Time Occlusion Handling in Augmented Reality Based ... - NIH
  135. Capturing depth using the LiDAR camera - Apple Developer
  136. Minimizing Latency for Augmented Reality Displays: Frames ...
  137. Diminishing Visual Distractions via Holographic AR Displays
    Apr 25, 2025 · We anticipate that advancements in the occlusion capabilities of head-mounted AR devices will provide valuable insights for achieving a more ...Missing: advancements | Show results with:advancements