
Ray casting

Ray casting is a rendering technique in computer graphics that generates images by projecting rays from a virtual camera (or eye point) through each pixel of the image plane into a scene, identifying the nearest intersections with scene objects to determine visibility, surface properties, and basic lighting effects such as shadows and highlights. This image-order algorithm simplifies the simulation of light propagation compared to more complex methods, focusing primarily on primary rays for direct illumination without recursive secondary rays for phenomena like reflections or refractions. The technique was pioneered by Arthur Appel in 1968 as a method to address the hidden surface problem in computer-generated imagery and to incorporate shading and shadows for more realistic machine-generated images of three-dimensional objects. In Appel's approach, rays are traced from the viewing position through the image plane to detect occlusions and compute tonal variations based on surface normals, light sources, and material properties, enabling automated production of shaded renderings that mimic photographic depth and texture. This foundational work laid the groundwork for subsequent advancements in ray-based rendering, distinguishing ray casting from scanline methods by its pixel-centric processing that naturally handles complex geometries without preprocessing.

Ray casting gained prominence in real-time applications during the early 1990s, notably in id Software's Wolfenstein 3D (1992), where it efficiently rendered pseudo-3D labyrinths from 2D grid maps by casting rays to calculate wall distances and apply texture mapping, achieving interactive frame rates on limited hardware. Beyond gaming, it is widely applied in medical imaging for volume rendering, where rays traverse voxel-based datasets (e.g., CT or MRI scans) to composite semitransparent volumes and reveal internal structures like organs or tumors with high fidelity. Modern implementations leverage parallel hardware, such as GPUs, to extend ray casting to interactive visualization in fields including scientific simulation and medical imaging, while serving as a core component in hybrid rendering pipelines.

Fundamentals

Definition and Principles

Ray casting is a rendering technique in computer graphics that simulates the process of viewing three-dimensional scenes by projecting rays from an observer's viewpoint through each pixel of the image plane to identify visible surfaces. This method determines the color of each pixel by computing the nearest intersection point between the ray and the scene's geometry, effectively modeling how light would reach the eye in a synthetic camera setup. Originally proposed for solid modeling, ray casting enables the visualization of complex solids composed of primitive shapes combined through Boolean operations like union, intersection, and difference.

The core principles of ray casting involve generating primary rays that originate from the camera or eye and extend in directions defined by the pixel coordinates on the image plane. Unlike more advanced ray tracing, ray casting employs primary rays and non-recursive secondary rays (such as shadow rays) without recursion for effects like reflections or refractions, focusing on resolving visibility and basic shading including direct illumination. This approach decouples the computation for each pixel, allowing independent processing that contrasts with scanline rendering, which processes the image row by row and requires maintaining active edge lists for efficiency. Ray casting's ray-based probing simplifies handling non-planar surfaces and arbitrary solid complexities compared to scanline methods' reliance on planar projections. Basic shading may involve casting additional non-recursive shadow rays from the intersection point to each light source to determine if the point is occluded, enabling direct shadows without recursion.

Key advantages of ray casting include its conceptual simplicity, which facilitates implementation and extensibility for interactive applications, and its computational efficiency for rendering in scenarios with moderate complexity. These attributes stem from the direct per-pixel computation and avoidance of preprocessing, making it suitable for early CAD/CAM systems and basic visualizations. However, limitations arise from the absence of recursive secondary rays, preventing the simulation of effects like indirect shadows or interreflections from multiple light bounces.

An illustrative example of the basic ray casting algorithm can be expressed in pseudocode as follows (note: a full implementation may include shadow ray tests):
for each pixel in the image plane:
    ray = generate_ray_from_camera(pixel_coordinates)
    closest_intersection = None
    min_distance = infinity
    for each object in the scene:
        intersection = intersect(ray, object)
        if intersection and intersection.distance < min_distance:
            min_distance = intersection.distance
            closest_intersection = intersection
    if closest_intersection:
        pixel_color = shade(closest_intersection)  # May involve shadow rays to lights
    else:
        pixel_color = background_color
This loop initializes a ray for each pixel, tests it against scene objects to find the nearest hit, and assigns a color based on local shading, demonstrating the technique's straightforward iteration over pixels and geometry.

Mathematical Model

In ray casting, a ray is mathematically parameterized as a point along a half-line originating from a point \mathbf{P} (typically the camera position) in the direction of a normalized vector \mathbf{D}, given by the equation \mathbf{R}(t) = \mathbf{P} + t \mathbf{D}, where t \geq 0 is a scalar parameter representing distance along the ray. This formulation allows efficient computation of points along the ray for intersection tests.

To generate rays for rendering an image, the viewport is set up using the camera's field of view (FOV) angle \theta. For an image plane at distance d from the camera (often d = 1), the ray direction \mathbf{D} for a pixel at normalized coordinates (u, v) in the range [-1, 1] is derived as \mathbf{D} = \operatorname{normalize}(u \cdot \tan(\theta/2),\; v \cdot \tan(\theta/2),\; -1) in camera space, where the -1 in the z-component points toward the scene. This setup ensures rays fan out from the camera through the pixel grid, covering the desired viewing frustum.

Intersection testing determines if and where a ray hits scene primitives. For a plane defined by a point \mathbf{Q} on the plane and unit normal \mathbf{N}, the intersection parameter t solves the equation \mathbf{N} \cdot (\mathbf{R}(t) - \mathbf{Q}) = 0, yielding t = \frac{\mathbf{N} \cdot (\mathbf{Q} - \mathbf{P})}{\mathbf{N} \cdot \mathbf{D}}; a valid hit occurs if t > 0 and \mathbf{N} \cdot \mathbf{D} \neq 0. For a sphere centered at \mathbf{C} with radius r, substituting the ray equation into the sphere equation \|\mathbf{R}(t) - \mathbf{C}\|^2 = r^2 results in the quadratic a t^2 + b t + c = 0, where a = \mathbf{D} \cdot \mathbf{D}, b = 2 (\mathbf{P} - \mathbf{C}) \cdot \mathbf{D}, and c = (\mathbf{P} - \mathbf{C}) \cdot (\mathbf{P} - \mathbf{C}) - r^2. The discriminant \Delta = b^2 - 4ac determines the number of intersections: if \Delta > 0, the smallest positive root t = \frac{-b - \sqrt{\Delta}}{2a} gives the nearest hit point.

Scene traversal in ray casting involves, for each generated ray, testing it against all primitives in the scene to find the closest valid intersection. The algorithm iterates over objects, computing intersection parameters t_i > 0 for each, and selects the minimum t among them to identify the nearest hit; if no such t exists, the ray misses the scene. For efficiency with many objects, basic acceleration structures like bounding volume hierarchies can prune tests by first checking ray-box intersections along primary rays, but the core logic remains selecting the global minimum t > 0. For shadow rays, a similar traversal is performed from the hit point toward each light, checking for any intersection (occlusion) before the light is reached.
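The formulas above can be translated into a minimal Python sketch, assuming a camera at the origin looking down the negative z-axis and an image plane at distance d = 1; the function names and tuple-based vectors are illustrative conventions rather than a standard API:

import math

def normalize(v):
    # Scale a 3-vector to unit length.
    length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0] / length, v[1] / length, v[2] / length)

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def primary_ray_direction(u, v, fov_radians):
    # u, v in [-1, 1]; the -1 z-component points the ray into the scene.
    s = math.tan(fov_radians / 2)
    return normalize((u * s, v * s, -1.0))

def intersect_plane(P, D, Q, N):
    # Returns t where R(t) = P + t*D meets the plane through Q with unit normal N, or None.
    denom = dot(N, D)
    if abs(denom) < 1e-9:
        return None  # Ray parallel to the plane.
    t = dot(N, (Q[0]-P[0], Q[1]-P[1], Q[2]-P[2])) / denom
    return t if t > 0 else None

def intersect_sphere(P, D, C, r):
    # Solves a*t^2 + b*t + c = 0 from the sphere equation; returns the nearest t > 0 or None.
    PC = (P[0]-C[0], P[1]-C[1], P[2]-C[2])
    a = dot(D, D)
    b = 2.0 * dot(PC, D)
    c = dot(PC, PC) - r*r
    disc = b*b - 4*a*c            # Discriminant: negative means no real intersection.
    if disc < 0:
        return None
    sqrt_disc = math.sqrt(disc)
    t = (-b - sqrt_disc) / (2*a)  # Smaller root first (nearer hit).
    if t > 0:
        return t
    t = (-b + sqrt_disc) / (2*a)
    return t if t > 0 else None

For example, primary_ray_direction(0.0, 0.0, math.radians(60)) yields (0, 0, -1), the ray through the center of the image.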

Applications

Rendering and Visualization

Ray casting serves as a foundational technique for generating images from 3D scenes by projecting rays from a viewpoint through each pixel of the image plane to determine visible surfaces, enabling efficient visibility determination without full ray tracing. This method excels in producing line drawings and basic shaded representations, particularly in resource-constrained environments like early computer-aided design (CAD) systems, where it facilitates quick previews of complex geometries. By computing intersections and visibility, ray casting supports the generation of wireframe outlines and simple depth-based visuals, prioritizing conceptual clarity over photorealistic detail.

In line drawings, ray casting is employed for wireframe or silhouette rendering by detecting edges through grazing ray intersections, where rays are tangent to surfaces, or by identifying depth discontinuities between adjacent pixels. These techniques allow the extraction of visible edges while removing hidden ones, essential for outlining object boundaries in technical illustrations. For instance, Arthur Appel's 1968 algorithm pioneered this approach by casting rays to quantify invisibility for line segments, enabling accurate hidden line removal in solid models. This method detects silhouettes as rays that just graze object surfaces, producing clean contours that highlight structural features without surface filling.

For shaded pictures, ray casting applies basic local illumination models after finding the nearest surface intersection, such as Lambertian shading, which models diffuse reflection based on the cosine of the angle between the surface normal \mathbf{N} and light direction \mathbf{L}. The intensity is computed as I = k_d (\mathbf{N} \cdot \mathbf{L}), where k_d is the diffuse coefficient, providing a simple yet effective way to assign grayscale or color values per pixel for basic tonal variation. This local model, independent of viewer position, yields flat-shaded visuals suitable for quick assessments, as detailed in early shading techniques for machine renderings.

Applications in visualization include hidden line removal for engineering drawings, where ray casting determines edge visibility to produce interpretable 2D projections from 3D CAD models, and early 3D previews in CAD software, allowing designers to inspect assemblies without full rasterization. Ray casting is also widely used in volume rendering for medical imaging, where rays are cast through voxel-based datasets from scans such as CT or MRI to accumulate color and opacity values, compositing semitransparent volumes to visualize internal structures like organs or tumors. This approach enables high-fidelity representation of complex volumetric data for diagnostic and surgical planning purposes.

A specific example is generating depth maps or simple orthographic projections via ray casting, where parallel rays are cast perpendicular to the image plane to record distances, creating a representation of scene depth for analysis or further processing. This approach, using uniform ray directions, simplifies computation for non-perspective views and supports tasks such as depth testing in rendering pipelines.
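The depth-map example from the last paragraph can be illustrated with a short Python sketch that casts parallel rays along the negative z-axis against a single sphere and records hit distances; the sphere position, grid resolution, and function names here are illustrative assumptions:

import math

def sphere_depth_along_z(px, py, center, radius):
    # Orthographic ray with origin (px, py, 0) and direction (0, 0, -1).
    dx, dy = px - center[0], py - center[1]
    d2 = dx*dx + dy*dy
    if d2 > radius * radius:
        return None                       # Ray passes outside the sphere's silhouette.
    dz = math.sqrt(radius * radius - d2)  # Half-chord length along z.
    return -(center[2] + dz)              # Distance traveled along -z to the nearer hit.

def render_depth_map(width, height, center, radius):
    # Map pixel indices to a [-1, 1] x [-1, 1] viewport and cast one parallel ray per pixel.
    depth = [[None] * width for _ in range(height)]
    for j in range(height):
        for i in range(width):
            px = 2 * (i + 0.5) / width - 1
            py = 1 - 2 * (j + 0.5) / height
            depth[j][i] = sphere_depth_along_z(px, py, center, radius)
    return depth

# Example: a sphere of radius 0.5 centered at (0, 0, -3) yields depths near 2.5 in the middle of an 8x8 map.
depth_map = render_depth_map(8, 8, (0.0, 0.0, -3.0), 0.5)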

Solid Modeling and Computation

Ray casting serves as a foundational technique in solid modeling for computing geometric properties of complex objects represented through constructive solid geometry (CSG), where primitives are combined via Boolean operations such as union, intersection, and difference. In this context, rays are projected through the model to determine intersection points with primitive boundaries, enabling the classification of space into interior and exterior regions without requiring explicit boundary representations. This approach facilitates efficient evaluation of CSG trees by traversing primitives along each ray and applying set operations to resolve the final occupancy status, a method particularly prominent in CAD/CAM systems during the 1970s and 1980s for verifying and analyzing solid models.

One key application is volume computation, where ray casting uses a grid of parallel rays to estimate the enclosed volume of a solid by summing the lengths of interior segments along each ray, scaled by the cross-sectional area per ray: V \approx \sum_i l_i \Delta A, where l_i is the interior length along the i-th ray and \Delta A is the cross-sectional area represented by each ray. This method provides an accurate approximation for CSG solids and avoids exhaustive boundary tessellation, making it suitable for irregular or composite shapes. Similarly, mass properties like moments of inertia can be derived from ray representations (ray-reps), which capture interior segments along rays as intervals for integration, supporting analyses such as computation of inertia tensors directly from CSG descriptions.

In CAD/CAM workflows, ray casting enables intersection detection between rays and solid boundaries for Boolean operations, ensuring robust handling of overlaps and differences in composite models. This is critical for tasks such as model validation, where rays probe the solid to identify entry and exit points, facilitating operations on primitives such as blocks and cylinders. The technique extends to simulations in additive manufacturing, where ray casting simulates material deposition paths or verifies printability by detecting self-intersections and support requirements in layered models derived from CSG. By leveraging parallel ray grids, these computations achieve high efficiency, making ray casting integral to early CAD systems for prototyping and manufacturing preparation.
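The volume estimate V \approx \sum_i l_i \Delta A can be demonstrated with a minimal Python sketch for a single sphere, where the interior length along each ray is simply the chord length; the grid resolution and function name are illustrative assumptions, and a full CSG solid would instead combine per-primitive ray intervals with Boolean operations before summing:

import math

def estimate_sphere_volume(radius, rays_per_axis=200):
    # Cast rays parallel to the z-axis through a square grid covering the sphere's bounding box.
    side = 2 * radius
    dA = (side / rays_per_axis) ** 2                 # Cross-sectional area represented by each ray.
    volume = 0.0
    for i in range(rays_per_axis):
        for j in range(rays_per_axis):
            x = -radius + (i + 0.5) * side / rays_per_axis
            y = -radius + (j + 0.5) * side / rays_per_axis
            d2 = x*x + y*y
            if d2 < radius * radius:
                chord = 2 * math.sqrt(radius * radius - d2)  # Interior segment length l_i.
                volume += chord * dA
    return volume

print(estimate_sphere_volume(1.0))  # Approaches 4/3 * pi (about 4.19) as rays_per_axis grows.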

Techniques

Shading and Image Generation

To achieve realistic shading in ray casting, the Phong illumination model is applied directly at the ray-surface intersection points to compute local lighting effects. This empirical model calculates the outgoing intensity I as I = I_a k_a + I_d k_d (\mathbf{N} \cdot \mathbf{L}) + I_s k_s (\mathbf{R} \cdot \mathbf{V})^n, where I_a, I_d, and I_s represent the ambient, diffuse, and specular light intensities, respectively; k_a, k_d, and k_s are the material's ambient, diffuse, and specular coefficients; \mathbf{N} is the normalized surface normal; \mathbf{L} is the normalized direction from the intersection point to the light source; \mathbf{R} is the normalized reflection of \mathbf{L} about \mathbf{N}; \mathbf{V} is the normalized view direction (opposite the incoming ray); and n is the specular exponent controlling highlight sharpness. The ambient term provides uniform base illumination, the diffuse term models Lambertian reflection based on the angle between the normal and the light direction, and the specular term approximates glossy highlights from viewer-light geometry. This local computation per intersection enhances surface detail without recursion, producing shaded images from basic ray casting.

For scenes with multiple light sources, ray casting sums the local illumination contributions from each light at the intersection point, enabling complex direct lighting while maintaining efficiency. A shadow ray (or "feeler" ray) is cast from the intersection toward each light source to check for occlusions; if the shadow ray intersects another surface before reaching the light, that light's contribution is attenuated or excluded at the point. These shadow rays involve only a single, non-recursive intersection test per light, distinct from recursive paths in advanced ray tracing, allowing basic hard shadows without global light propagation. This per-ray summation supports up to several lights in early implementations, balancing realism and computation.

Despite these advances, ray casting's shading remains limited to direct, local effects, omitting indirect illumination where light reflects multiple times between surfaces and caustics from refractive focusing, leading to perceptually "flat" images lacking subtle environmental interactions. Full ray tracing addresses this by recursively tracing secondary rays for bounced illumination, simulating global effects like color bleeding that ray casting cannot capture without extensions. Consequently, ray-cast images exhibit sharp shadows and surface highlights but miss the depth and inter-object light transport that enhance realism.

Texture mapping integrates into ray casting by deriving UV coordinates from the parametric representation of the intersection point on the surface, then sampling a texture image at those coordinates to replace or modulate the base material color in the shading computation. For primitive shapes like spheres or planes, UVs are computed analytically (e.g., via spherical or planar projections); for complex polygonal models, precomputed vertex UVs are interpolated at the hit location. This technique adds surface detail efficiently, as the texture lookup occurs post-intersection without additional rays.
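A minimal Python sketch of this Phong evaluation at a single intersection point, including a non-recursive shadow-ray test per light, is shown below; the scene representation as a list of (center, radius) spheres and the function names are illustrative assumptions:

import math

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def normalize(v):
    length = math.sqrt(dot(v, v))
    return (v[0]/length, v[1]/length, v[2]/length)

def occluded(origin, direction, spheres, max_t):
    # Shadow ray: does any sphere block the path to the light (t between a small epsilon and max_t)?
    for center, radius in spheres:
        oc = sub(origin, center)
        b = 2 * dot(oc, direction)                 # Direction is unit length, so a = 1.
        disc = b*b - 4 * (dot(oc, oc) - radius*radius)
        if disc >= 0:
            t = (-b - math.sqrt(disc)) / 2
            if 1e-4 < t < max_t:
                return True
    return False

def phong(point, N, V, lights, spheres, ka=0.1, kd=0.7, ks=0.2, n=32):
    # I = Ia*ka + sum over unoccluded lights of [Id*kd*(N.L) + Is*ks*(R.V)^n], with Ia = 1 here.
    intensity = ka
    for light_pos, light_intensity in lights:
        to_light = sub(light_pos, point)
        dist = math.sqrt(dot(to_light, to_light))
        L = normalize(to_light)
        if occluded(point, L, spheres, dist):
            continue                               # Point is in shadow with respect to this light.
        ndotl = max(dot(N, L), 0.0)
        R = normalize(sub((2*ndotl*N[0], 2*ndotl*N[1], 2*ndotl*N[2]), L))  # R = 2(N.L)N - L.
        intensity += light_intensity * (kd * ndotl + ks * max(dot(R, V), 0.0) ** n)
    return intensity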

Classification and Optimization

In ray casting, classification of ray-scene interactions is essential for determining whether points along a ray lie inside or outside modeled objects, particularly in polygon filling and constructive solid geometry (CSG) applications. The even-odd rule assesses enclosure by counting the number of times a ray from the test point crosses object boundaries; an odd number of crossings indicates the point is inside, while an even number signifies outside. This parity-based approach simplifies evaluation for non-self-intersecting polygons and CSG unions or intersections, as described in early systems where rays traverse primitive solids to compute boundary parities. For handling complex topologies, such as self-intersecting polygons or multiply-connected shapes in CSG, the winding number rule provides a more robust classification. It calculates the net winding of boundaries around the test point by summing the signed angles or directions of crossings (positive for counterclockwise, negative for clockwise); a non-zero winding number denotes an interior point, accommodating oriented surfaces and nested components. This method extends the even-odd rule to preserve topological consistency in advanced modeling scenarios.

Optimization of ray casting focuses on reducing the number of intersection tests through bounding enclosures and hierarchical structures. Bounding volumes, such as spheres or axis-aligned boxes, enclose groups of primitives, allowing rays to skip detailed tests if they miss the bound, thereby pruning unnecessary computations. Hierarchical bounding structures, pioneered by Rubin and Whitted in 1980, organize these enclosures in trees to exploit spatial coherence. Spatial partitioning via octrees or k-d trees further enhances efficiency by subdividing the scene into manageable regions. Octrees recursively partition space into eight equal child volumes, enabling rapid ray traversal through voxel adjacency and intersection culling, as introduced by Glassner for accelerating ray-scene interactions. K-d trees, developed by Kaplan, alternate splits along coordinate axes to create balanced partitions, supporting efficient nearest-neighbor queries and ray stepping. These hierarchies typically reduce intersection complexity from O(n) for n primitives to O(log n) by limiting tests to relevant subregions. Additional techniques include the slab method for axis-aligned bounding boxes, which computes ray entry and exit parameters by intersecting with pairs of parallel planes along each axis, minimizing branch-heavy computations. Early ray termination on opaque hits further optimizes by halting traversal once a fully blocking surface is found, avoiding redundant downstream tests for primary rays.
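The slab method mentioned above can be sketched in a few lines of Python; the function name and the representation of the box as minimum and maximum corner tuples are illustrative assumptions:

def ray_aabb(origin, direction, box_min, box_max):
    # Returns (t_enter, t_exit) for an axis-aligned box hit, or None on a miss.
    t_enter, t_exit = float("-inf"), float("inf")
    for axis in range(3):
        if abs(direction[axis]) < 1e-12:
            # Ray parallel to this slab pair: it must start between the two planes.
            if not (box_min[axis] <= origin[axis] <= box_max[axis]):
                return None
            continue
        t0 = (box_min[axis] - origin[axis]) / direction[axis]
        t1 = (box_max[axis] - origin[axis]) / direction[axis]
        if t0 > t1:
            t0, t1 = t1, t0          # Order so t0 is the nearer plane.
        t_enter = max(t_enter, t0)   # Latest entry across all slabs.
        t_exit = min(t_exit, t1)     # Earliest exit across all slabs.
        if t_enter > t_exit:
            return None              # Slab intervals do not overlap: the ray misses the box.
    if t_exit < 0:
        return None                  # Box lies entirely behind the ray origin.
    return (t_enter, t_exit)

print(ray_aabb((0, 0, 0), (1, 0, 0), (1, -1, -1), (2, 1, 1)))  # (1.0, 2.0)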

Anti-Aliasing

Aliasing in ray casting occurs primarily due to the discrete sampling of continuous scenes at individual pixel centers, resulting in jagged edges known as jaggies on object boundaries. This also produces moiré patterns when rendering repetitive textures or fine details, where high-frequency scene elements alias into lower-frequency artifacts visible in the final image. To mitigate these issues, supersampling casts multiple rays per pixel, typically in a subpixel grid, and averages their resulting colors to approximate the color over the pixel area more accurately. Adaptive sampling extends this by varying the number of rays based on local depth variance, allocating more samples to regions with significant depth discontinuities, such as edges, while using fewer in uniform areas to balance quality and efficiency. Jittered sampling enhances supersampling by introducing random offsets to ray positions within the subpixel grid, distributing samples more evenly and reducing patterned artifacts compared to uniform grids. For targeted refinement, edge-detection algorithms analyze depth or color gradients in initial low-sample renders to identify boundaries, enabling selective supersampling only at those locations. Post-processing filters integrate with ray casting by applying operations such as filtering directly to the accumulated color or depth buffers after ray intersection and shading, smoothing residual aliasing without additional ray computations. In the standard pixel ray generation process, a single ray per pixel exacerbates these effects, underscoring the need for such enhancements.
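A minimal Python sketch of jittered supersampling for one pixel follows: the pixel footprint is divided into an n-by-n subpixel grid, one randomly offset ray is traced in each cell, and the returned colors are averaged; the trace_ray callback and the viewport mapping are assumed to be supplied by the surrounding renderer:

import random

def jittered_pixel_color(i, j, width, height, n, trace_ray):
    # trace_ray(u, v) -> (r, g, b) for normalized image-plane coordinates u, v in [-1, 1].
    total = [0.0, 0.0, 0.0]
    for a in range(n):
        for b in range(n):
            # Random offset inside subcell (a, b) of pixel (i, j).
            sx = (a + random.random()) / n
            sy = (b + random.random()) / n
            u = 2 * (i + sx) / width - 1
            v = 1 - 2 * (j + sy) / height
            color = trace_ray(u, v)
            for k in range(3):
                total[k] += color[k]
    samples = n * n
    return (total[0] / samples, total[1] / samples, total[2] / samples)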

History and Evolution

Origins and Early Developments

The concept of ray casting in computer graphics originated in the late 1960s as a method for solving the hidden surface removal problem and simulating shadows in rendered images. In 1968, Arthur Appel at IBM developed an early ray casting algorithm that traced rays from the viewpoint through the image plane to determine visible surfaces and shadowing by other objects, marking one of the first computational approaches to realistic shading in machine-generated pictures. This technique laid foundational principles for later rendering methods by prioritizing ray-object intersections over scanline-based alternatives prevalent in contemporary research. During the 1970s, academic efforts at institutions like the University of Utah's Computer Graphics Laboratory advanced related visibility algorithms, with researchers such as Martin Newell and colleagues developing techniques that contrasted with emerging ray-based methods, influencing the evolution of efficient image synthesis. These developments highlighted ray casting's potential for handling complex scenes, though computational limitations favored rasterization for real-time applications. The term "ray casting" itself was formally introduced in 1982 by Scott D. Roth at General Motors Research Laboratories, who applied it to solid modeling systems using constructive solid geometry (CSG), enabling the rendering of combined primitive solids through ray intersection tests. A pivotal advancement bridging ray casting to more sophisticated ray tracing occurred in 1980 with Turner Whitted's work at Bell Laboratories, which extended primary ray casting with recursive secondary rays for reflections, refractions, and shadows, demonstrating practical image generation with enhanced realism. By the 1980s, ray casting began integrating into specialized graphics hardware pipelines, particularly for volume visualization and scientific rendering, as processing power improved to support intersection computations in industrial and research contexts.

Role in Computer Games

Ray casting emerged as a pivotal technique in early first-person computer games during the early 1990s, enabling pseudo-3D rendering on limited hardware such as 286 and 386 processors. By casting rays from the player's viewpoint for each vertical column of screen pixels, the method determined wall distances and applied wall textures to vertical strips, creating the illusion of depth without full 3D geometry processing. This approach was computationally efficient, transforming simple 2D maps into navigable environments while respecting the era's memory and processing constraints.

A landmark example is Wolfenstein 3D (1992), developed by id Software, which employed ray casting to render maze-like levels for first-person navigation. Rays traced through a grid-based map to detect walls, with textures scaled based on distance and flat-shaded floors and ceilings filled in for visual depth. The technique supported enemy sprites as overlays sorted by distance, contributing to the game's immersive feel despite its essentially two-dimensional nature.

Subsequent titles built on this foundation. ShadowCaster (1993), developed by Raven Software using a licensed id engine variant, incorporated ray casting with rotated sectors to allow non-orthogonal walls and variable heights, enhancing level complexity while maintaining performance. Similarly, NovaLogic's Comanche series (starting 1992) adapted ray casting through its Voxel Space engine to generate dynamic helicopter simulation landscapes, tracing rays vertically through heightmap voxels to render terrain elevations in real time. In these games, optimizations like binary space partitioning (BSP) trees were occasionally integrated for static scenes to streamline ray traversal and visibility culling, though core ray casting handled primary rendering. However, inherent limitations persisted, including the inability to depict sloped floors or ceilings and restricted movement to horizontal planes without true vertical freedom, confining designs to flat, labyrinthine layouts. Ray casting thus pioneered the genre but became outdated by 1996, when id Software's Quake shifted to a fully polygonal 3D engine with unrestricted geometry and viewpoints.
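The per-column computation used by these engines can be illustrated with a minimal Python sketch that marches a ray through a 2D tile map in the style of a digital differential analyzer (DDA); the map layout, player state, and function name are illustrative assumptions rather than code from any particular engine:

import math

def cast_to_wall(grid, px, py, angle, max_steps=64):
    # grid[y][x] == 1 marks a wall cell; (px, py) is the player position in map units.
    dx, dy = math.cos(angle), math.sin(angle)
    map_x, map_y = int(px), int(py)
    delta_x = abs(1 / dx) if dx else float("inf")  # Ray distance to cross one full cell in x.
    delta_y = abs(1 / dy) if dy else float("inf")  # Ray distance to cross one full cell in y.
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    # Ray distance from the player to the first x- and y-gridlines.
    side_x = ((map_x + 1 - px) if dx > 0 else (px - map_x)) * delta_x
    side_y = ((map_y + 1 - py) if dy > 0 else (py - map_y)) * delta_y
    for _ in range(max_steps):
        if side_x < side_y:
            dist, side_x, map_x = side_x, side_x + delta_x, map_x + step_x
        else:
            dist, side_y, map_y = side_y, side_y + delta_y, map_y + step_y
        if grid[map_y][map_x] == 1:
            return dist   # Wall strip height is proportional to 1 / dist (engines correct for fisheye).
    return None           # No wall within max_steps cells.

# One ray per screen column, sweeping angles across the field of view; here a single ray due east.
grid = [[1]*8] + [[1] + [0]*6 + [1] for _ in range(6)] + [[1]*8]
print(cast_to_wall(grid, 4.5, 4.5, 0.0))  # 2.5: the east wall is 2.5 cells away.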

Modern Contexts

Computational Geometry

In computational geometry, ray casting serves as a fundamental technique for performing ray shooting queries within arrangements of geometric primitives, such as line segments or hyperplanes, where the goal is to determine the first intersection point along a given ray. This approach preprocesses the arrangement into data structures that enable efficient query resolution, often achieving near-linear preprocessing time and logarithmic query time for simple polygons with n vertices. For instance, geodesic triangulations can be constructed to support ray shooting in O(log n) time after O(n log n) preprocessing, facilitating queries in polygonal domains without explicit visibility computation.

Ray casting also underpins the computation of visibility maps in both 2D and 3D settings, representing the regions of space from which certain segments or triangles are visible. In 3D, visibility maps for segments and triangles on polyhedral terrains can exhibit complexities up to O(n^5), where n is the number of features, and ray shooting algorithms help delineate these maps by tracing rays to identify occlusions and visible portions. The 3D visibility complex extends this by organizing all free-space visibility relations among geometric objects, allowing ray casting to query visible elements efficiently in environments with obstacles. These maps are crucial for abstract visibility analysis, distinct from rendering applications.

Advanced algorithms leverage Davenport-Schinzel sequences to bound the complexity of ray traversals in dynamic scenes, where geometric elements may move or change over time. These sequences limit alternations in intersection orders, enabling data structures that support updates and queries for ray shooting with near-optimal performance, such as O(n α(n)) size for the lower envelope of motion paths, where α is the inverse Ackermann function. In robotics, ray shooting with such sequences aids path planning by simulating ray shots to detect feasible trajectories amid obstacles, as in penetration growth models that expand free space for collision-free paths.

Specific applications include collision detection in physics simulations, where ray casting traces paths between dynamic objects to identify intersections, ensuring geometrically exact contacts in cloth or deformable-body interactions using GPU-accelerated methods. A classic use is the point-in-polygon test, employing ray casting to count edge crossings from a query point toward infinity; an odd count indicates the point lies inside, based on the even-odd rule and implemented efficiently for convex or simple polygons. Data structures like priority queues manage sorted lists of potential intersections along the ray by distance, allowing sequential processing to retrieve the nearest hit in O(k log m) time, where k is the number of intersections and m the number of objects.
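The even-odd point-in-polygon test described above reduces to a few lines of Python, casting a horizontal ray from the query point toward positive x and counting boundary crossings; the function name and vertex-list representation are illustrative assumptions:

def point_in_polygon(x, y, vertices):
    # vertices: list of (x, y) tuples describing a simple polygon.
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # Edge straddles the horizontal line through y.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                           # Crossing lies to the right of the query point.
                inside = not inside                   # Each crossing flips the parity.
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon(2, 2, square))  # True: one crossing (odd).
print(point_in_polygon(5, 2, square))  # False: zero crossings (even).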

Extensions and Comparisons

Ray casting serves as a foundational subset of ray tracing, limited to emitting primary rays from the viewpoint to detect the first intersection with scene geometry, without recursion for secondary effects like reflections or refractions. In contrast, ray tracing extends this by recursively tracing additional rays from intersection points to simulate light bounces, enabling global illumination and more physically accurate rendering. This distinction positions ray casting as computationally lighter, suitable for applications where full ray tracing's overhead for multiple ray generations would be prohibitive.

One key extension of ray casting is ray marching, which adapts the technique for rendering implicit surfaces defined by signed distance fields (SDFs), particularly in shader-based implementations. Ray marching iteratively advances rays along their direction by the estimated distance to the nearest surface, as provided by the SDF, until an intersection is approximated, allowing efficient handling of procedural or fractal geometries without explicit meshes. This method has gained traction in GPU shaders for real-time procedural rendering, offering compact representations and support for advanced features like soft shadows through temporal accumulation.

Hybrid approaches further evolve ray casting by integrating it with rasterization in graphics engines, leveraging GPU acceleration structures for selective ray queries. For instance, rasterization handles primary visibility for broad scene coverage, while ray casting, often via hardware-accelerated bounding volume hierarchies, performs targeted operations like object picking or shadow determination, reducing overall computational cost. NVIDIA's OptiX framework exemplifies this, enabling deformable mesh updates and instanced geometry handling for dynamic scenes in games and simulations.

In modern virtual and augmented reality (VR/AR) contexts, ray casting facilitates intuitive object selection through 2020s APIs like WebXR, where rays are projected from user devices to detect and interact with virtual elements in mixed reality environments. This enables precise hit-testing against scene geometry, supporting actions such as highlighting or manipulating AR objects based on the closest intersection. Additionally, machine learning-accelerated denoising enhances ray casting outputs, particularly for noisy low-sample renders, by training neural networks on paired clean and noisy images to reconstruct high-quality results in real time. Techniques like those in NVIDIA's denoising pipelines apply convolutional networks to filter variance while preserving edges, making ray casting viable for interactive previews.

Compared to rasterization, ray casting excels in handling complex, non-polygonal geometry, such as volumes or implicit surfaces, by directly querying the geometry, though it remains prone to aliasing without additional sampling and is slower for dense primary visibility due to per-pixel ray computations. Rasterization, optimized for triangle projections, achieves higher throughput for static scenes but struggles with secondary effects, prompting hybrids where ray casting augments rasterized bases. Looking ahead, hardware ray tracing (RT) cores in GPUs, like those in NVIDIA's GeForce RTX series including the RTX 50 series released in 2025, increasingly blend ray casting with rasterization pipelines, using dedicated intersection units and tensor cores for denoising to enable global effects and neural rendering at scale. This convergence supports future trends in high-fidelity interactive graphics, including VR/AR and procedural worlds, with ongoing advancements in acceleration structures for broader adoption.
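The sphere-tracing loop at the heart of ray marching can be sketched in Python against a single-sphere signed distance field; the SDF, step limit, and thresholds are illustrative assumptions typical of shader implementations:

import math

def sphere_sdf(p, center=(0.0, 0.0, -3.0), radius=1.0):
    # Signed distance from point p to the sphere surface (negative inside).
    dx, dy, dz = p[0]-center[0], p[1]-center[1], p[2]-center[2]
    return math.sqrt(dx*dx + dy*dy + dz*dz) - radius

def ray_march(origin, direction, sdf, max_steps=128, epsilon=1e-4, t_max=100.0):
    # Advance along the ray by the SDF value until the surface is within epsilon.
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + t*direction[0],
             origin[1] + t*direction[1],
             origin[2] + t*direction[2])
        d = sdf(p)
        if d < epsilon:
            return t   # Close enough to the implicit surface: report the hit distance.
        t += d         # Safe step: no surface can be closer than d.
        if t > t_max:
            break
    return None        # No hit within the travel budget.

print(ray_march((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), sphere_sdf))  # About 2.0: sphere at z = -3, radius 1.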
