
Image plane

In optics, the image plane is a plane conjugate to an object plane, where a sharp image of object points is formed, at least within the approximations of Gaussian optics. This plane lies in image space, where rays from an object point converge to form a real or virtual image, with its location determined by the object distance, image distance, and the optical power of the system via the relation \frac{n}{z} + \frac{n'}{z'} = \phi, where n and n' are refractive indices, z and z' are object and image distances from the principal planes, and \phi is the system's power. Distinct from the focal plane—which applies specifically to objects at infinity—the image plane's position shifts with finite object distances, enabling precise imaging in systems like lenses and mirrors. In practical applications, such as photography and microscopy, the image plane corresponds to the location of the film, sensor, or intermediate image where the focused image appears, coinciding with the rear focal plane when the object is distant. For complex optical systems, multiple image planes may exist, including intermediate ones that can introduce aberrations if not carefully managed, such as by placing field lenses to relay the image without degradation. In computer graphics and camera modeling, the image plane represents the two-dimensional projection surface in a pinhole camera model, positioned at the focal length from the camera center, where 3D world points are mapped via perspective projection to yield 2D coordinates. This concept underpins fields from astronomical imaging to computer vision, where accurate alignment of the image plane ensures optimal focus and minimal distortion.

Fundamentals in Optics

Definition

In optics, the image plane is defined as a two-dimensional plane perpendicular to the optical axis in which the image of an object is formed through the convergence of light rays following refraction or reflection by an optical element such as a lens or mirror. This plane represents the locus of points where rays originating from a corresponding object point intersect after passing through the system, ensuring that the entire object plane maps to a conjugate image plane for sharp focus. The concept of the image plane emerged in the 19th century amid the formalization of geometric optics, notably through Carl Friedrich Gauss's seminal work Dioptrische Untersuchungen (1841), which established foundational principles for locating images using principal planes and ray tracing. Hermann von Helmholtz further advanced this framework in his Treatise on Physiological Optics (1867), applying it to the eye's retinal image plane and distinguishing between real and virtual images in vision. These developments emphasized the image plane's role in both instrumental and physiological optics, shifting from qualitative descriptions to quantitative geometric models. A key distinction exists between the real image plane, where light rays physically converge to form a tangible image that can be projected onto a screen, and the virtual image plane, from which rays appear to diverge as if originating behind the optical element, preventing projection. This differentiation arises from the ray paths: converging for real images on the opposite side of the optics, and diverging for virtual images on the same side. Understanding the image plane requires familiarity with prerequisite concepts, including the optical axis, which serves as the reference line passing through the centers of curvature of spherical surfaces or the symmetry axis of the system. Principal planes are hypothetical planes perpendicular to the optical axis where the effective refraction occurs, simplifying the analysis of image location without detailing internal ray paths. As a special case, the focal plane coincides with the image plane when the object is at infinity, capturing parallel incoming rays.

Geometric Formation

In geometric optics, the image plane is formed through the convergence of rays originating from an object point and redirected by optical elements such as lenses or mirrors. For an off-axis point object, ray tracing employs specific rays to define this process: the chief ray, which passes from the object point through the center of the aperture stop and determines the location of the image point, and the marginal rays, which extend from the object point to the edges of the aperture stop and define the bundle's extent. In the paraxial approximation, assuming small angles relative to the optical axis, these rays, along with all others in the bundle, converge precisely at a single image point on the image plane. In converging optical systems, light rays diverge from the object point, interact with the lens or mirror—where refraction or reflection bends them according to Snell's law—and subsequently reconverge at the corresponding image point. The image plane is thus established as the surface perpendicular to the optical axis where these image points for various object points lie in focus, forming a sharp inverted replica of the object in the ideal case. This geometric configuration ensures that the entire ray bundle from each object point intersects at the designated plane, enabling coherent image reconstruction. Deviations from ideality arise due to aberrations, which blur the convergence and thus the sharpness on the image plane. Spherical aberration occurs when peripheral marginal rays focus at a different point than paraxial rays, shortening the effective focal length for the outer portions of the bundle, while chromatic aberration causes different wavelengths to refract variably, dispersing the focus along the optical axis. However, in the geometric model, these effects are neglected to emphasize perfect ray intersection at the image plane. A conceptual diagram illustrates this formation: an off-axis object point emits diverging rays toward a converging lens, with the chief ray passing undeviated through the lens center and marginal rays bending symmetrically at the lens surfaces; all rays then intersect at the image point on a plane parallel to the lens, behind it, highlighting the inverted and magnified image geometry.
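The convergence described above can be sketched numerically with paraxial ray-transfer (ABCD) operations. The following Python snippet is an illustrative sketch, not drawn from any cited source; the focal length, object distance, object height, and ray slopes are assumed values. It traces several rays from one off-axis object point through an ideal thin lens and shows that the whole bundle arrives at a single height on the image plane.

```python
# Illustrative sketch: paraxial rays from one object point through an ideal
# thin lens, showing that the bundle reconverges on the image plane.
f = 0.10   # focal length in metres (assumed value)
u = 0.15   # object distance in metres (assumed value)
v = 1.0 / (1.0 / f - 1.0 / u)  # image distance from the thin lens equation

def propagate(y, theta, d):
    """Free-space transfer over distance d: height changes, angle does not."""
    return y + d * theta, theta

def refract_thin_lens(y, theta, f):
    """Thin-lens refraction: angle changes by -y/f, height is unchanged."""
    return y, theta - y / f

object_height = 0.01  # off-axis object point, 10 mm above the axis (assumed)
for slope in (-0.2, -0.1, 0.0, 0.1, 0.2):   # several rays in the bundle
    y, th = propagate(object_height, slope, u)   # object plane -> lens
    y, th = refract_thin_lens(y, th, f)          # bend at the lens
    y, th = propagate(y, th, v)                  # lens -> image plane
    print(f"slope {slope:+.1f}: height at image plane = {y*1000:.3f} mm")
# Every ray lands at -20 mm: the rays intersect at one image point, and the
# magnification -v/u = -2 inverts and doubles the 10 mm object height.
```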

Mathematical Formulation

Paraxial Ray Approximation

The paraxial ray approximation, also known as Gaussian optics, is a fundamental simplification in geometrical optics that models light rays propagating close to the optical axis of an optical system. Paraxial rays are defined as those making small angles with the optical axis, typically less than 10°, where the small-angle approximation sin θ ≈ θ (with θ in radians) holds with errors below 0.5%. This allows for linearizing the behavior of rays, facilitating the mathematical description of image formation on the image plane without accounting for complex nonlinear effects. The core assumptions of the paraxial approximation involve neglecting higher-order terms in the expansions of trigonometric functions within Snell's law of refraction (n₁ sin θ₁ = n₂ sin θ₂) and the law of reflection. Under this regime, Snell's law simplifies to n₁ θ₁ ≈ n₂ θ₂, and reflection to θᵢ ≈ θᵣ, resulting in linear equations for ray paths that treat ray heights and angles as first-order quantities. These assumptions enable straightforward ray tracing and the prediction of first-order optical properties, such as focal lengths and image positions, by considering only rays with small heights and slopes relative to the chief ray. However, the paraxial approximation has inherent limitations, particularly in wide-angle optical systems where ray angles exceed the small-angle threshold, leading to significant distortions such as barrel or pincushion effects and other aberrations. In such cases, the neglect of higher-order terms causes deviations from ideal imaging, necessitating exact ray tracing methods that employ full trigonometric evaluations of Snell's law to capture nonlinear ray behaviors accurately. This distinction highlights paraxial optics as a first-order model, contrasted with exact ray tracing that accounts for all ray deviations. The paraxial ray approximation was systematically developed by Carl Friedrich Gauss in 1841 through his treatise Dioptrische Untersuchungen, which laid the groundwork for modern lens design by introducing these linear principles to analyze optical systems rigorously. This historical framework remains the basis for deriving key relations in first-order optics, such as the thin lens equation.
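As a quick numerical check of the error bound quoted above, the short Python snippet below (an added illustration; the sample angles are arbitrary choices) compares sin θ with θ:

```python
# Quick check: how good the small-angle approximation sin(theta) ~ theta is,
# confirming the ~0.5% error quoted for 10 degrees.
import math

for deg in (1, 5, 10, 20, 45):
    theta = math.radians(deg)
    error = (theta - math.sin(theta)) / math.sin(theta)
    print(f"{deg:2d} deg: relative error of sin(theta) ~ theta is {100*error:.3f} %")
# 10 degrees gives about 0.51 %, while 45 degrees is already off by about
# 11 % -- which is why wide-angle systems need exact ray tracing instead
# of the paraxial model.
```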

Lens and Mirror Equations

The thin lens equation relates the object distance u, image distance v, and focal length f of a thin lens under the paraxial approximation, given by \frac{1}{f} = \frac{1}{u} + \frac{1}{v}. This determines the location of the image plane at distance v from the lens, where the image forms for an object at distance u. To derive it, consider a converging thin lens with focal length f. An object of height h is placed at distance u to the left of the lens. A ray from the object tip parallel to the axis refracts to pass through the focal point on the right side at distance f. Another ray from the object tip through the lens center travels undeviated. These rays intersect at the image tip of height h' at distance v to the right. Using similar triangles formed by the ray through the front focal point (height h over base u - f on the object side, and height h' over base f between the focal point and the lens), the relation \frac{h}{u - f} = \frac{h'}{f} holds. Similarly, the ray entering parallel to the axis gives \frac{h}{f} = \frac{h'}{v - f}. Eliminating h and h' yields (u - f)(v - f) = f^2, which rearranges to the thin lens equation.

A common sign convention used in many physics textbooks assigns positive values to object distances measured against the direction of incident light and to image distances measured in the direction of propagation for real images. For lenses, the object distance u is positive when the object is to the left of the lens, the image distance v is positive for real images to the right and negative for virtual images to the left, and the focal length f is positive for converging lenses and negative for diverging lenses. This convention ensures consistency for calculating the image plane position, with real images (positive v) forming on the plane beyond the lens.

For spherical mirrors, the mirror equation takes the identical form \frac{1}{f} = \frac{1}{u} + \frac{1}{v}, where f = R/2 and R is the radius of curvature, determining the image plane via reflection under paraxial conditions. To derive it, consider a concave spherical mirror with radius R. An object at distance u sends paraxial rays (small angles to the axis) to the mirror. By the law of reflection, the incident angle equals the reflected angle relative to the surface normal at the incidence point. A ray parallel to the axis reflects through the focal point at f = R/2, while a ray aimed at the mirror's pole reflects symmetrically about the axis, giving \frac{h}{u} = \frac{h'}{v} for the object and image heights. Under the sagitta approximation for small heights h, the parallel ray deviates by an angle \theta \approx h/R; combining this with the height relation yields \frac{1}{u} + \frac{1}{v} = \frac{2}{R} = \frac{1}{f}. For convex mirrors, f is negative. The same sign convention applies: u > 0 for objects in front, v > 0 for real images (in front of the mirror for concave), and v < 0 for virtual images (behind the mirror).

The transverse magnification m, which relates the object size to its extent on the image plane, is given by m = -\frac{v}{u} = \frac{h'}{h}, where the negative sign indicates inversion for real images. This formula applies to both thin lenses and spherical mirrors, linking the image plane's scale directly to the distances from the lens equation. For example, consider a converging lens with f = 10 cm and an object at u = 15 cm. Substituting into the lens equation gives \frac{1}{v} = \frac{1}{10} - \frac{1}{15} = \frac{1}{30} cm^{-1}, so v = 30 cm (a real image on the image plane 30 cm to the right). The magnification is m = -30/15 = -2, meaning the image is inverted and twice the object height.
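The worked example translates directly into code. The Python sketch below (an added illustration, using the real-is-positive sign convention described above) solves the thin lens equation for the image-plane distance and magnification:

```python
# Minimal sketch of the worked example: solving 1/f = 1/u + 1/v for the
# image-plane distance v and the transverse magnification m.
def image_plane(u_cm: float, f_cm: float) -> tuple[float, float]:
    """Return (image distance v, transverse magnification m).

    Real-is-positive convention from the text: u > 0 for a real object,
    v > 0 for a real image, f > 0 for a converging lens.
    """
    v = 1.0 / (1.0 / f_cm - 1.0 / u_cm)  # rearranged thin lens equation
    m = -v / u_cm                        # negative: real image is inverted
    return v, m

v, m = image_plane(u_cm=15.0, f_cm=10.0)
print(f"v = {v:.1f} cm, m = {m:.1f}")   # v = 30.0 cm, m = -2.0
```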

Applications in Optical Systems

Photography and Projection

In photographic cameras, the image plane is the location where the light-sensitive film or digital sensor is placed to record the image, with light rays converging to form a sharp reproduction of the subject. The focusing mechanism operates by adjusting the distance between the lens and the image plane, ensuring that rays from objects at various distances meet precisely at this plane for optimal clarity. This positioning allows the lens equation to determine the required lens-to-plane distance for focus at different subject depths. Depth of field arises from the allowable deviation in image plane position, governed by the circle of confusion—a measure of the maximum blur diameter considered acceptable in the final image. Factors such as the aperture's f-number play a key role; lower f-numbers (larger apertures) yield a shallower depth of field by enlarging the blur circles of out-of-focus points, while higher f-numbers extend it for broader sharpness. In projection systems, the image plane corresponds to the screen, where the projection lens forms an inverted real image of the internal imaging device illuminated by the light source, enabling visibility when rays converge on the surface. Keystone correction addresses geometric distortion caused by angular misalignment between the projector and screen, digitally applying an opposing trapezoidal adjustment to the image to restore rectangular proportions and maintain fidelity. The historical development of image plane management in photography began with the daguerreotype process in 1839, which employed fixed focal planes and manual plate positioning without adjustable focusing. Over time, advancements led to bellows and ground-glass focusing in view cameras, culminating in modern autofocus systems; phase-detection autofocus, introduced commercially in the Minolta Maxxum 7000 in 1985, uses split-image sensors to detect phase differences for rapid, precise lens-to-plane adjustments.
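To make the aperture dependence concrete, the following Python sketch applies the standard photographic depth-of-field formulas; the formulas are textbook conventions rather than something derived in this article, and the lens parameters and circle-of-confusion value are assumed examples.

```python
# Hedged sketch: how f-number N and circle of confusion c set the depth of
# field around a focused image plane. All sample values are assumptions.
def depth_of_field(f_mm: float, N: float, c_mm: float, s_mm: float):
    """Near/far limits of acceptable sharpness for focus distance s."""
    H = f_mm**2 / (N * c_mm) + f_mm          # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float("inf")
    return near, far

# 50 mm lens focused at 3 m, full-frame circle of confusion c = 0.03 mm:
for N in (1.8, 8.0):
    near, far = depth_of_field(50.0, N, 0.03, 3000.0)
    print(f"f/{N}: sharp from {near/1000:.2f} m to {far/1000:.2f} m")
# Opening up to f/1.8 narrows the zone of acceptable sharpness; stopping
# down to f/8 widens it, matching the qualitative behaviour described above.
```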

Microscopy and Astronomy

In optical microscopy, the image plane plays a critical role in forming high-magnification views of specimens through compound systems. The objective captures light diffracted by the specimen and focuses it to create a real, inverted intermediate image at a specific plane. In fixed-tube-length designs, this plane is typically located at the microscope's tube length (such as 160 mm) from the rear of the objective. In infinity-corrected designs, common in modern microscopes, the objective produces parallel rays, with the intermediate image formed by a tube lens at a standardized position, typically 200 mm from the tube lens. This intermediate image, often positioned at the eyepiece's field diaphragm, serves as the foundation for further magnification, where direct and diffracted light interfere to produce detailed grayscale patterns representing the specimen's structure. The eyepiece then relays this image to the final image plane on the retina or a detector, projecting a virtual or real image for observation, with total magnification being the product of the objective and eyepiece magnifications (such as 200x from a 20x objective and 10x eyepiece).

The resolution at the image plane in microscopy is fundamentally limited by the objective's numerical aperture (NA), which determines the angular range of light rays that can contribute to image formation. Higher NA values enable the capture of more diffraction orders from the specimen, reducing the size of the Airy disk—a central bright spot surrounded by diffraction rings—at the intermediate image plane and thus improving the ability to distinguish fine details. The lateral resolution d is given by d = \frac{0.61 \lambda}{\text{NA}}, where \lambda is the wavelength of light; for example, increasing NA from 0.20 to 1.30 shrinks the Airy disk radius, yielding sharper images, with NA up to 1.30 achievable in oil-immersion objectives. In compound microscopes, this multi-stage process—primary magnification by the objective followed by secondary magnification by the eyepiece—allows for progressive refinement of the image plane, essential for resolving sub-micron features in biological samples.

In astronomical telescopes, the image plane is typically the focal plane at the prime focus of the primary mirror, where incoming light converges to form an initial image for viewing through an eyepiece or detection by instruments. In Cassegrain designs, a secondary mirror intercepts the light before it reaches the prime focus, reflecting it back through a central hole in the primary mirror to a shifted focal plane, often at the rear of the telescope tube, which increases the effective focal length and compactness while maintaining diffraction-limited performance. This configuration, using a paraboloidal primary and hyperboloidal secondary, positions the final focal plane at the intersection of reflected rays, enabling high-resolution imaging of distant celestial objects despite the secondary's central obscuration.

Adaptive optics enhances the sharpness of the image plane in ground-based astronomical telescopes by correcting wavefront distortions caused by atmospheric turbulence. Real-time measurements from wavefront sensors, using natural guide stars or artificial laser guide stars, drive deformable mirrors to adjust the incoming light's phase, reducing blurring and achieving near-diffraction-limited resolution at the focal plane. For instance, the Very Large Telescope (VLT) at ESO's Paranal Observatory employs a multi-conjugate system with laser guide stars and a 1170-actuator deformable mirror, attaining a resolution of about 0.1 arcseconds at 650 nm for improved detection and study of faint sources. Similarly, the Keck Telescope's adaptive optics system with a 349-actuator deformable mirror delivers high-contrast imaging at 3.4 μm, enabling detailed observations of galaxies and faint objects that rival space-based capabilities.
While the Hubble Space Telescope benefits from a distortion-free path above the atmosphere, ground-based systems with adaptive optics, such as those on the Gemini South Telescope, provide uniform correction over wide fields for multi-conjugate applications.
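As a numerical illustration of the Rayleigh criterion d = 0.61λ/NA quoted in the microscopy discussion above, the following Python snippet evaluates the lateral resolution for a few numerical apertures; the 550 nm wavelength and intermediate NA value are assumed examples.

```python
# Numerical illustration of the lateral resolution limit d = 0.61 * lambda / NA,
# using green light (assumed 550 nm wavelength).
wavelength_nm = 550.0

for na in (0.20, 0.65, 1.30):
    d_nm = 0.61 * wavelength_nm / na
    print(f"NA = {na:.2f}: lateral resolution ~ {d_nm:.0f} nm")
# Raising NA from 0.20 (dry objective) to 1.30 (oil immersion) shrinks the
# Airy disk and the minimum resolvable separation from ~1.7 um to ~0.26 um.
```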

Image Plane in Computer Graphics

Projection and Screen Space

In computer graphics, the image plane is defined as a two-dimensional surface serving as the target for projecting three-dimensional world coordinates to produce a flat, rendered image of a scene. Positioned typically at z = -d from the camera's viewpoint along the viewing axis, it functions analogously to film or a sensor in a physical camera, where projected points represent the intersection of viewing rays with this surface. This setup enables the simulation of depth and spatial relationships in a two-dimensional output. The concept of the image plane in computer graphics, particularly for perspective projection, was formalized during the 1960s amid early advancements in interactive graphics systems, with key contributions from pioneers like Ivan Sutherland, whose work on 3D viewing laid foundational techniques for rendering. Conceptually, it draws from the optical image plane where rays physically converge, but in graphics it remains a computational construct without physical rays.

Projections onto the image plane fall into two primary categories: perspective and orthographic. Perspective projection emulates realistic vision by having all projectors converge at a single center of projection (the eye point), causing distant objects to appear smaller through foreshortening; this is achieved via similar triangles, where the scaling factor for a point (x, y, z) relative to the image plane at z = -d is d / -z, yielding projected coordinates x' = x * (d / -z) and y' = y * (d / -z). In matrix form using homogeneous coordinates, the perspective projection matrix is commonly represented as P = \begin{pmatrix} \frac{2n}{r - l} & 0 & \frac{r + l}{r - l} & 0 \\ 0 & \frac{2n}{t - b} & \frac{t + b}{t - b} & 0 \\ 0 & 0 & -\frac{f + n}{f - n} & -\frac{2fn}{f - n} \\ 0 & 0 & -1 & 0 \end{pmatrix}, where n and f are the near and far clipping planes, and l, r, b, t define the left, right, bottom, and top bounds of the view frustum; multiplying homogeneous points by this matrix followed by perspective division (dividing by the w-component) maps points to the image plane. Orthographic projection, by contrast, employs parallel projectors perpendicular to the image plane, maintaining constant object sizes irrespective of depth and preserving parallelism among lines, which suits applications like engineering blueprints. The corresponding matrix directly scales and translates the view volume without division, as in the symmetric case O = \begin{pmatrix} \frac{2}{r - l} & 0 & 0 & -\frac{r + l}{r - l} \\ 0 & \frac{2}{t - b} & 0 & -\frac{t + b}{t - b} \\ 0 & 0 & -\frac{2}{f - n} & -\frac{f + n}{f - n} \\ 0 & 0 & 0 & 1 \end{pmatrix}, mapping x and y to [-1, 1] while normalizing z accordingly.

Once projected onto the image plane, coordinates enter screen space through normalized device coordinates (NDC), a device-independent range of -1 to 1 for x and y (and typically 0 to 1 or -1 to 1 for z), which standardizes the output before the final viewport transform scales and offsets them to pixel positions on the display (e.g., 0 to width-1 for x). This NDC mapping ensures projections are resolution-agnostic, facilitating consistent rendering across varied hardware.
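A minimal sketch of this projection chain is shown below in Python; the image-plane distance d = 1 and the 640×480 viewport are assumed values, and the projected coordinates are treated as NDC under the assumption that the visible portion of the plane spans [-1, 1].

```python
# Illustrative sketch: perspective projection onto the plane z = -d by
# similar triangles, then a viewport transform from NDC to pixels.
import numpy as np

def project_point(p, d=1.0):
    """Perspective-project camera-space point p onto the plane z = -d."""
    x, y, z = p
    scale = d / -z                 # similar-triangles factor d / -z
    return np.array([x * scale, y * scale])

def ndc_to_pixels(ndc_xy, width=640, height=480):
    """Viewport transform: NDC in [-1, 1] to pixel coordinates."""
    px = (ndc_xy[0] + 1.0) * 0.5 * (width - 1)
    py = (1.0 - ndc_xy[1]) * 0.5 * (height - 1)  # flip y: screen y grows down
    return px, py

p = np.array([0.5, 0.25, -2.0])   # a point two units in front of the camera
ndc = project_point(p, d=1.0)     # -> (0.25, 0.125): farther means smaller
print(ndc, ndc_to_pixels(ndc))
```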

Rendering Pipeline Integration

In the rasterization-based rendering pipeline, the image plane serves as the target for projected vertices following the vertex transformation stage, where 3D model coordinates are converted to clip space via the model-view-projection matrix. Once in clip space, primitives are clipped against the view frustum, and the surviving geometry undergoes perspective division to normalize coordinates to the canonical view volume, mapping them onto the image plane in normalized device coordinates (NDC). Rasterization then generates fragments by sampling the image plane at discrete pixel locations, interpolating attributes such as depth and texture coordinates across the surface; these fragments proceed to the fragment shading stage, where per-fragment computations determine final color values for the framebuffer. This integration ensures efficient hardware-accelerated rendering in systems like OpenGL and Direct3D, where the image plane acts as the intermediary between geometric projection and pixel-level processing.

In ray tracing pipelines, the image plane defines the origins and directions for primary rays cast from the virtual camera into the scene, with each pixel corresponding to a ray's starting point on the plane and its direction determined by the plane's position relative to the eye point. These rays intersect scene geometry to compute radiance, and techniques like supersampling distribute additional rays around primary ones on the image plane to reduce aliasing artifacts, enhancing image quality without exhaustive sampling. This approach contrasts with rasterization by tracing rays outward from the image plane rather than projecting inward, enabling global-illumination effects while maintaining the plane as the sampling grid for the final image.

The image plane's coordinates in NDC are mapped to the viewport—a rectangular region on the display surface—via a scale and offset that aligns the projected plane with screen pixels, typically specified by width, height, and offset in APIs like OpenGL. Clipping occurs prior to this mapping, discarding geometry outside the near and far planes of the view frustum to prevent invalid projections onto the image plane, ensuring only visible content contributes to the rendered output. In frameworks such as OpenGL and Direct3D, the perspective divide explicitly normalizes the image plane's homogeneous coordinates, dividing x, y, and z by w to produce perspective-correct interpolation during rasterization.

Modern extensions in virtual and augmented reality (VR/AR) rendering address distortions introduced by wide-field-of-view optics, where the image plane is pre-warped to counteract lens aberrations like barrel distortion, ensuring undistorted imagery when viewed through headset displays. This involves rendering to a distorted image plane before final output, with techniques such as mesh displacement or shader-based radial corrections integrated into the pipeline's fragment stage to maintain performance.
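To illustrate how the image plane serves as the sampling grid for primary rays, here is a hedged Python sketch of per-pixel ray generation; the resolution, field of view, and camera placement are assumed values rather than details from the source.

```python
# Hedged sketch: generating a primary ray through a pixel centre, with the
# image plane placed one unit in front of the eye along -z.
import numpy as np

def primary_ray(px, py, width=640, height=480, fov_deg=60.0):
    """Return (origin, direction) of the ray through pixel (px, py)."""
    aspect = width / height
    half_h = np.tan(np.radians(fov_deg) / 2.0)  # half-height of plane at z=-1
    # pixel centre -> NDC in [-1, 1], with y pointing up
    ndc_x = (px + 0.5) / width * 2.0 - 1.0
    ndc_y = 1.0 - (py + 0.5) / height * 2.0
    # corresponding point on the image plane one unit in front of the eye
    target = np.array([ndc_x * half_h * aspect, ndc_y * half_h, -1.0])
    origin = np.zeros(3)                          # eye at the camera origin
    direction = target / np.linalg.norm(target)   # normalized ray direction
    return origin, direction

o, d = primary_ray(320, 240)   # centre pixel: ray points straight down -z
print(d)                       # approximately [0, 0, -1]
```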
