
Image formation

Image formation is the process by which light rays originating from an object are redirected through or by optical elements such as mirrors and lenses, resulting in a visual reproduction of the object that can be real or virtual. This phenomenon is analyzed under geometric optics, an approximation valid when the wavelength of light is much smaller than the dimensions of the optical elements and objects involved, typically on scales larger than about 500 nm.

In reflection-based image formation, light rays bounce off surfaces according to the law of reflection, where the angle of incidence equals the angle of reflection. Plane mirrors produce virtual, upright images that are the same size as the object and located at an equal distance behind the mirror. Spherical mirrors, either concave (converging) or convex (diverging), form images whose position, size, and orientation depend on the object's distance relative to the mirror's focal length, defined as half the radius of curvature.

Refraction-based image formation occurs when light passes through interfaces between media of different refractive indices, bending according to Snell's law: n_1 \sin \theta_1 = n_2 \sin \theta_2, where n is the refractive index and \theta the angle from the normal. Thin lenses, approximated as having negligible thickness, are central to this process: converging (convex) lenses focus parallel rays to a real focal point with positive focal length f, while diverging (concave) lenses cause parallel rays to appear to diverge from a focal point with negative f. Image location and magnification for lenses are calculated using the thin lens equation: \frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}, where d_o is the object distance and d_i is the image distance (positive for real images on the opposite side, negative for virtual), with magnification m = -\frac{d_i}{d_o} indicating orientation (negative for inverted).

Real images, formed where light rays actually converge, can be projected onto a screen and are typically inverted, whereas virtual images arise from apparent divergence of rays and cannot be projected, often appearing upright. These principles underpin diverse applications, from simple magnifiers and eyeglasses—where lens power P = 1/f (in diopters) corrects refractive errors—to complex systems like cameras and microscopes that combine multiple elements for enhanced resolution and magnification.
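To make these introductory relations concrete, the short Python sketch below evaluates Snell's law and the spherical-mirror equation (with f = R/2) for a couple of hypothetical cases; the specific indices, radii, and distances are illustrative assumptions, not values from the article.

```python
import math

def snell_refraction_angle(n1, n2, theta1_deg):
    """Refraction angle (degrees) from Snell's law n1 sin(t1) = n2 sin(t2);
    returns None when total internal reflection occurs."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # no refracted ray
    return math.degrees(math.asin(s))

def mirror_image_distance(R, d_o):
    """Image distance for a spherical mirror of radius R, using f = R/2 and 1/f = 1/d_o + 1/d_i."""
    f = R / 2.0
    return 1.0 / (1.0 / f - 1.0 / d_o)

# Air-to-glass refraction at 30 degrees incidence (n = 1.0 -> 1.5):
print(snell_refraction_angle(1.0, 1.5, 30.0))  # ~19.5 degrees, bent toward the normal
# Concave mirror with R = 40 cm and the object 60 cm away:
print(mirror_image_distance(40.0, 60.0))       # 30 cm: a real, inverted image in front of the mirror
```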

Core Principles

Geometric Image Formation

Geometric image formation refers to the process by which light rays emanating from points on a three-dimensional object are mapped to corresponding points on a two-dimensional image plane via the principles of ray optics. This involves tracing the straight-line propagation of light rays as they undergo refraction or reflection at optical surfaces, assuming ideal conditions where wave phenomena like diffraction are negligible. The foundational assumption is that light travels in straight lines, enabling the prediction of image location, size, and orientation through geometric constructions. The origins of these principles trace back to the 11th century, when Ibn al-Haytham, in his seminal work Kitāb al-Manāzir (Book of Optics), advanced the understanding of ray paths and image formation by establishing the intromission theory of vision, where light enters the eye from external objects to form images, and by systematically analyzing the geometry of refraction and reflection.

Central to geometric image formation is the paraxial approximation, which simplifies calculations by considering light rays that make small angles with the optical axis, allowing the use of linear approximations in ray tracing. Under this approximation, the thin lens equation describes the relationship between object and image distances for a thin lens: \frac{1}{f} = \frac{1}{u} + \frac{1}{v}, where f is the focal length (positive for converging lenses and negative for diverging lenses), u is the object distance from the lens (typically taken as positive when the object is on the incident light side), and v is the image distance (positive for real images on the opposite side and negative for virtual images on the same side). This equation enables the determination of where an object will be imaged for a given focal length; a worked numerical example follows the ray-diagram rules below.

Images formed by lenses are classified as real or virtual based on ray convergence: real images occur where rays actually intersect after passing through the lens, allowing projection onto a screen, whereas virtual images form where rays only appear to diverge from a common point, as if originating from a location behind the lens. Real images are inverted relative to the object, while virtual images are upright; the lateral magnification m, which quantifies image size relative to the object, is given by m = -\frac{v}{u}, where a negative value confirms inversion and the absolute value indicates enlargement or reduction. For converging lenses, real images form when the object is beyond the focal point (u > f), yielding inverted and possibly magnified images, whereas virtual images arise when the object is within the focal length (u < f), producing upright and enlarged images; diverging lenses always produce virtual, upright, and reduced images regardless of object position.

Ray diagrams provide a visual method to locate and characterize images by tracing principal rays through the lens, assuming thin lens behavior where ray deviation at the center is negligible. For a converging lens, the three principal rays from an object point are:
  • The ray parallel to the optical axis, which refracts through the focal point on the opposite side;
  • The ray passing through the lens center, which continues undeviated;
  • The ray directed toward the focal point on the incident side, which refracts parallel to the optical axis after the lens.
    The intersection of these rays determines the image position and orientation. For a diverging lens, the principal rays are:
  • The ray parallel to the optical axis, which refracts as if coming from the focal point on the incident side;
  • The ray passing through the lens center, undeviated;
  • The ray directed toward the focal point on the opposite side, which refracts parallel to the optical axis.
    These rays diverge after the lens, and their backward extensions intersect to locate the virtual image. Such diagrams confirm the predictions of the thin lens equation and illustrate the geometric mapping without requiring numerical computation.
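As a worked numerical companion to the thin lens equation and magnification formula above, the following Python sketch solves 1/f = 1/u + 1/v under the sign convention stated in this section; the focal lengths and object distances are arbitrary illustrative choices.

```python
def thin_lens_image(f, u):
    """Solve 1/f = 1/u + 1/v for the image distance v and magnification m = -v/u.

    Convention as in the text: f > 0 converging, f < 0 diverging; u > 0 for a real object;
    v > 0 for a real image (opposite side), v < 0 for a virtual image (same side).
    """
    if u == f:
        raise ValueError("Object at the focal point: rays emerge parallel and no image forms.")
    v = 1.0 / (1.0 / f - 1.0 / u)
    return v, -v / u

# Converging lens, f = 10 cm:
print(thin_lens_image(10.0, 30.0))   # (15.0, -0.5): real, inverted, reduced
print(thin_lens_image(10.0, 5.0))    # (-10.0, 2.0): virtual, upright, enlarged (magnifier case)
# Diverging lens, f = -10 cm:
print(thin_lens_image(-10.0, 30.0))  # (-7.5, 0.25): virtual, upright, reduced
```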

Radiometric Image Formation

Radiometry is the science of measuring radiant energy, particularly in the context of optical imaging, where it quantifies how light energy propagates from sources through scenes to form images. Central to this are two key quantities: radiance and irradiance. Radiance L, measured in watts per square meter per steradian (W m⁻² sr⁻¹), describes the power emitted or reflected from a surface per unit projected area per unit solid angle in a given direction, capturing the directional brightness of light. Irradiance E, in watts per square meter (W m⁻²), represents the power incident on a surface per unit area, integrating radiance over the hemisphere of incoming directions via E = \int_{2\pi} L \cos \theta \, d\omega, where \theta is the angle between the surface normal and the direction of incoming light, and d\omega is the differential solid angle. In image formation, scene radiance determines image irradiance at the sensor, as the optical system collects light rays from scene points to produce intensity values proportional to the incident radiance, enabling the reconstruction of scene properties through inverse rendering.

The interaction of light with surfaces is modeled using the bidirectional reflectance distribution function (BRDF), which describes how incident light from one direction is reflected into another, essential for non-Lambertian surfaces exhibiting specular or directional scattering. Defined as f_r(\theta_i, \phi_i; \theta_r, \phi_r) = \frac{dL_r(\theta_i, \phi_i; \theta_r, \phi_r)}{dE_i(\theta_i, \phi_i)} in inverse steradians (sr⁻¹), the BRDF quantifies reflected radiance dL_r per unit incident irradiance dE_i, where the angles \theta_i, \phi_i and \theta_r, \phi_r specify incident and reflected directions relative to the surface normal. For diffuse reflection on ideal Lambertian surfaces, the BRDF simplifies to a constant f_r = \rho / \pi, where \rho is the surface albedo (reflectivity), independent of viewing angle. This follows Lambert's cosine law, which states that the radiant intensity emitted by such a surface is proportional to \cos \theta_r; because the projected area seen by an observer falls off by the same cosine factor, the radiance—and hence the apparent brightness—remains uniform across viewing directions despite foreshortening.

Image irradiance arises from the propagation of scene radiance through the optical system, approximated for a small source patch by E = \frac{L \cdot A \cdot \cos \theta}{r^2}, where L is the source radiance, A is the source area, \theta is the angle between the source normal and the line to the receiver, and r is the distance. This equation highlights the role of geometry in energy distribution, with the \cos \theta term accounting for projected area and the 1/r^2 factor embodying the inverse-square-law dilution over distance. Illumination models distinguish point sources, which strictly follow the inverse square law for irradiance falloff (E \propto 1/r^2) due to spherical spreading, from extended sources like overcast skies, where irradiance remains approximately uniform beyond distances comparable to the source size, as multiple points contribute without significant geometric dilution. In lossless optical systems, energy conservation ensures that radiance remains invariant along ray paths, as the product of area and solid angle (étendue, A \Omega) is preserved, maintaining constant power throughput \Phi = L A \Omega.
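The radiometric relations above can be checked numerically. The sketch below implements the small-patch irradiance formula E = L A cos(theta) / r^2 and the Lambertian reflectance L_r = (rho/pi) E; all numerical inputs are made-up values chosen only to illustrate the orders of magnitude involved.

```python
import math

def patch_irradiance(L, A, theta_deg, r):
    """Irradiance (W m^-2) at distance r from a small source patch of radiance L (W m^-2 sr^-1)
    and area A (m^2), whose normal is tilted theta degrees from the line to the receiver."""
    return L * A * math.cos(math.radians(theta_deg)) / r**2

def lambertian_reflected_radiance(albedo, E):
    """Reflected radiance of an ideal Lambertian surface: L_r = (rho / pi) * E."""
    return (albedo / math.pi) * E

# A 1 cm^2 patch of radiance 100 W m^-2 sr^-1, tilted 30 degrees, seen from 2 m away,
# illuminating a surface of albedo 0.5:
E = patch_irradiance(100.0, 1e-4, 30.0, 2.0)
print(E)                                      # ~2.2e-3 W m^-2 (inverse-square dilution)
print(lambertian_reflected_radiance(0.5, E))  # ~3.4e-4 W m^-2 sr^-1
```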

Optical Elements

Lenses and Mirrors

Lenses serve as fundamental refractive elements in optical systems, bending light rays to converge or diverge them for image formation. Convex lenses, characterized by surfaces curving outward, act as converging elements that focus parallel incident rays to a real focal point on the opposite side, enabling the formation of real, inverted images when the object lies beyond the focal point. In contrast, concave lenses, with inward-curving surfaces, diverge parallel rays as if emanating from a virtual focal point on the same side, producing virtual, upright, and diminished images. These properties arise under the paraxial approximation, where rays remain close to the optical axis. The focal length f of a thin lens, which determines its converging or diverging power, is calculated using the lensmaker's formula: \frac{1}{f} = (n - 1) \left( \frac{1}{R_1} - \frac{1}{R_2} \right), where n is the refractive index of the lens material relative to the surrounding medium (typically air, with n \approx 1), and R_1 and R_2 are the radii of curvature of the first and second surfaces, respectively, following the sign convention where radii are positive if the center of curvature is to the right of the surface. For a biconvex lens, R_1 > 0 and R_2 < 0, yielding a positive f for convergence; a biconcave lens has negative f for divergence. This formula assumes a thin lens approximation, neglecting thickness effects.

To mitigate chromatic aberration—where focal length varies with wavelength due to dispersion—achromatic doublets combine a convex lens of low-dispersion crown glass (e.g., borosilicate) with a concave lens of high-dispersion flint glass, cemented together. The design ensures that the focal lengths for at least two wavelengths (typically red and blue spectral lines) coincide, producing a net focal length that remains nearly constant across the visible spectrum. For instance, the crown lens has a longer focal length for blue light than red, while the flint lens reverses this, allowing mutual compensation.

Mirrors, which form images through reflection rather than refraction, are vital in systems requiring light redirection without chromatic dispersion. Plane mirrors, with flat reflective surfaces, produce virtual, erect images equal in size to the object and located at an equal distance behind the mirror, adhering strictly to the law of reflection where the angle of incidence equals the angle of reflection. Concave mirrors, curved inward like a sphere's inner surface, converge reflected rays to form real, inverted images (magnified, equal, or reduced based on object distance relative to the focal length f = R/2, where R is the radius of curvature) when the object is outside the focal point, or virtual, upright, magnified images when inside. Convex mirrors, bulging outward, diverge rays to form only virtual, upright, and diminished images, providing a wider field of view. Parabolic mirrors, with a non-spherical parabolic profile, focus all parallel rays (e.g., from distant sources) precisely to a single point at the focal length without spherical aberration, making them ideal for reflecting telescopes and searchlights.

Compound lens systems integrate multiple elements to achieve desired optical properties unattainable with single lenses, such as extended focal lengths in compact designs. Telephoto lenses exemplify this, typically comprising a positive (convex) front lens of focal length f_1 followed by a negative (concave) rear lens of focal length f_2 (where f_2 < 0), separated by a distance t < f_1. The effective focal length f_T of the system is longer than f_1 alone, calculated as f_T = \frac{f_1 f_2}{f_1 + f_2 - t}, resulting in a compressed perspective and magnified distant subjects while maintaining a shorter physical length than a single long-focal-length lens. The back focal distance, from the rear lens to the image plane, is \text{BFD} = f_T (1 - t/f_1), ensuring compatibility with fixed image sensors.

The performance of lenses depends on material properties, particularly the refractive index n and dispersion, which quantifies the wavelength-dependent variation in n. Common optical glasses include crown types like borosilicate BK7 (n_d \approx 1.517, low dispersion) for minimal color fringing, and flint types like dense flint SF6 (n_d \approx 1.805, high dispersion) for aberration correction in doublets. Dispersion is measured by the Abbe number \nu_d = \frac{n_d - 1}{n_F - n_C}, where n_d, n_F, and n_C are the refractive indices at the d-line (587.56 nm), F-line (486.13 nm), and C-line (656.27 nm); crown glasses have \nu_d > 50, while flints have \nu_d < 50. Barium and dense crown glasses offer intermediate properties, with n \approx 1.6 and reduced dispersion compared to flints. These materials enable precise control over light bending, with higher n allowing shorter focal lengths for the same surface curvature.
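The formulas in this subsection lend themselves to direct computation. The sketch below evaluates the lensmaker's formula, the two-lens telephoto effective focal length, and the Abbe number; the radii, separation, and glass indices are illustrative (the BK7 indices are approximate catalogue values).

```python
def lensmaker_focal_length(n, R1, R2):
    """Thin-lens lensmaker's formula: 1/f = (n - 1) * (1/R1 - 1/R2).
    Radii are positive when the center of curvature lies to the right of the surface."""
    return 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))

def telephoto_efl(f1, f2, t):
    """Effective focal length of two thin lenses separated by t: f_T = f1*f2 / (f1 + f2 - t)."""
    return f1 * f2 / (f1 + f2 - t)

def abbe_number(n_d, n_F, n_C):
    """Abbe number nu_d = (n_d - 1) / (n_F - n_C); larger values mean lower dispersion."""
    return (n_d - 1.0) / (n_F - n_C)

# Biconvex BK7-like lens (n ~ 1.517) with |R| = 100 mm on both surfaces:
print(lensmaker_focal_length(1.517, 100.0, -100.0))  # ~96.7 mm, converging
# Telephoto pair: f1 = +100 mm, f2 = -50 mm, separation t = 60 mm:
print(telephoto_efl(100.0, -50.0, 60.0))             # +500 mm effective focal length in a short barrel
# BK7 at the d, F, and C lines (approximate):
print(abbe_number(1.5168, 1.5224, 1.5143))           # ~64, firmly in crown-glass territory
```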

Pupils and Stops

In optical systems, pupils and stops play a crucial role in defining the bundle of rays that contribute to image formation by limiting the amount and angular extent of light entering or exiting the system. The aperture stop is the physical aperture that most severely limits the axial bundle of rays passing through the system, determining the maximum cone of light from an on-axis object point. The entrance pupil is the image of the aperture stop as viewed from object space, formed by the optical elements preceding the stop, and it defines the effective opening through which light enters the system. Similarly, the exit pupil is the image of the aperture stop viewed from image space, formed by the optical elements succeeding the stop, and it specifies the cone of light emerging toward the image plane or observer. The field stop, in contrast, limits the bundle of off-axis rays, thereby defining the extent of the field of view rather than the light-gathering capacity.

The f-number, or f-stop, quantifies the light-collecting ability of the system and is defined as the ratio of the effective focal length f to the diameter D of the entrance pupil: f/\# = \frac{f}{D}. A smaller f-number corresponds to a larger entrance pupil relative to the focal length, allowing more light to reach the image plane and thus enabling shorter exposure times in photographic applications. Additionally, the f-number influences depth of field, with lower values producing a shallower range of acceptable focus due to the wider cone of rays.

Vignetting occurs when off-axis ray bundles are progressively clipped by stops or lens rims, leading to reduced illumination at the periphery of the image. In wide-angle systems, this effect is exacerbated by the steep angles of chief rays and the need for compact elements with limited diameters, causing peripheral falloff that can degrade image uniformity. Pupil magnification, defined as the ratio of the exit pupil diameter to the entrance pupil diameter, affects the distribution of light in the image plane. When pupil magnification is less than unity, the exit pupil is smaller, concentrating the light bundle but potentially reducing overall image brightness if not matched to the system's transverse magnification, as the image irradiance scales with the square of this ratio in conserving étendue.

A practical implementation of these concepts is the iris diaphragm commonly found in camera lenses, which serves as an adjustable aperture stop composed of overlapping blades that vary the opening diameter to control light intake and depth of field. This mechanism allows photographers to balance exposure, depth of field, and sharpness by dynamically altering the aperture size without changing the lens focal length.
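A brief numerical illustration of the f-number definition above (the focal length and pupil diameter are assumed example values):

```python
def f_number(focal_length, pupil_diameter):
    """f-number N = f / D, with both quantities in the same units."""
    return focal_length / pupil_diameter

def relative_light(N_old, N_new):
    """Ratio of light gathered at f/N_new versus f/N_old for the same exposure time: (N_old / N_new)^2."""
    return (N_old / N_new) ** 2

# A 50 mm lens with a 25 mm entrance pupil is an f/2 lens:
print(f_number(50.0, 25.0))      # 2.0
# Stopping down from f/2 to f/2.8 admits roughly half the light (one stop):
print(relative_light(2.0, 2.8))  # ~0.51
```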

Spatial and Spectral Properties

Field of View and Magnification

The field of view (FOV) in an optical system refers to the angular extent of the observable scene that can be captured or projected, typically measured as the maximum angle subtended by the object space at the optical center. For a simple camera model with a single lens of focal length f, the horizontal FOV \theta is given by \theta = 2 \arctan\left(\frac{w}{2f}\right), where w is the width of the sensor or film. This formula assumes the paraxial approximation and a flat image plane, illustrating how shorter focal lengths yield wider FOVs for a fixed sensor size.

Magnification quantifies the scale of the image relative to the object in optical systems. Transverse or linear magnification m is defined as m = \frac{h_i}{h_o}, the ratio of the image height h_i to the object height h_o, which for thin lenses follows from the lens equation and is negative for inverted real images. In viewing instruments like microscopes or telescopes, angular magnification measures the apparent increase in the object's angular size as seen by the observer, typically M = \frac{\theta_i}{\theta_o}, where \theta_i and \theta_o are the angles subtended by the image and object, respectively.

Image formation distinguishes between projection, the geometric mapping of a three-dimensional scene onto a two-dimensional image plane via rays of light, and recording, the process of capturing or representing that projected image in a medium such as a sensor or film for storage or analysis. This separation highlights how optical systems first create a continuous light distribution (the optical image) before discretization in digital or photographic media.

The focal length profoundly influences FOV in both artificial and biological systems. In cameras, a decrease in focal length expands the FOV, allowing capture of broader scenes at the cost of reduced detail per unit angle, as seen in wide-angle lenses where a 24 mm lens provides approximately 74° horizontal FOV on full-frame sensors. Similarly, the human eye, with an effective focal length of about 17-22 mm depending on accommodation, achieves a monocular horizontal FOV of roughly 140-160° through its wide-angle optics, though central high-acuity vision is limited to about 2-5° due to foveal structure.

Geometric distortions arise when magnification varies across the FOV, leading to nonlinear mapping of object points to the image plane. Barrel distortion occurs in wide-angle systems where off-axis magnification decreases, causing straight lines to appear curved outward like a barrel; this stems from the radial increase in ray angles relative to the optical axis. Conversely, pincushion distortion appears in telephoto lenses where off-axis magnification increases, bowing lines inward; it results from designs emphasizing central rays over peripheral ones. These effects are inherent to non-ideal lens geometries and can be minimized through aspheric elements or post-processing corrections.
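The FOV formula above is easy to evaluate; the sketch below assumes a 36 mm wide full-frame sensor purely as an example.

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view: theta = 2 * arctan(w / (2 f))."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# Full-frame sensor, 36 mm wide:
print(horizontal_fov_deg(36.0, 24.0))   # ~74 degrees (wide angle, matching the figure quoted above)
print(horizontal_fov_deg(36.0, 50.0))   # ~40 degrees ("normal" lens)
print(horizontal_fov_deg(36.0, 200.0))  # ~10 degrees (telephoto)
```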

Color and Monochrome Imaging

Monochrome imaging captures light intensity across the visible spectrum using a single channel, resulting in a grayscale representation based on luminance, which quantifies perceived brightness weighted by human visual sensitivity. This approach employs a single sensor without color filters, allowing maximum photon collection per pixel and higher sensitivity, particularly in low-light conditions, as no spectral division occurs. Luminance values are typically derived from weighted-sum formulas, such as the CIE Y component, which approximates the eye's response with weights emphasizing green wavelengths for natural tone reproduction.

In contrast, color imaging records spectral properties through multi-channel representations, enabling reproduction of hue, saturation, and brightness. The foundational RGB model, developed from trichromatic theory, uses red, green, and blue channels to approximate the full visible gamut via additive mixing, as any color can be synthesized from these primaries under ideal conditions. Spectral sensitivity curves of sensors define how each RGB channel responds to wavelengths, typically peaking at approximately 450 nm for blue, 550 nm for green, and 650 nm for red, though variations exist across models due to filter and sensor properties. These curves influence color fidelity, as mismatches with human cone sensitivities can lead to metamerism, where objects with distinct spectral reflectance appear identical under one illuminant but differ under another, arising from the limited three-channel encoding of continuous spectra.

The historical development of color imaging began with James Clerk Maxwell's 1861 experiment, where separate black-and-white photographs of a tartan ribbon were taken through red, green, and blue filters and projected additively to produce the first color image, demonstrating the principle of three-color synthesis. In modern digital sensors, the Bayer filter array facilitates color capture on single-chip devices by overlaying a mosaic of RGB filters on photosites, with 50% green elements to align with human luminance sensitivity, 25% red, and 25% blue in a repeating 2x2 pattern. Color demosaicing interpolates missing channel values at each pixel from its neighbors, often prioritizing the green channel for sharpness, enabling full-color images from spatially subsampled data.

To ensure faithful reproduction across devices, standardized color spaces like CIE 1931 XYZ provide a device-independent framework, where the X, Y, and Z tristimulus values are derived from spectral data using color-matching functions that encompass all perceivable colors, with Y serving as luminance. Derived from psychophysical experiments, this space linearizes human vision for metrically accurate color specification and serves as a reference for transformations. The sRGB space, standardized by the IEC in 1999, maps RGB values to XYZ for consumer displays and web use, incorporating a gamma curve for perceptual uniformity and covering about 35% of the CIE 1931 chromaticity diagram to balance gamut with compatibility. This enables consistent color rendering, mitigating metamerism in practical imaging pipelines.
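As a small illustration of luminance-based grayscale conversion, the sketch below uses the Rec. 709/sRGB luminance weights applied to linearized components; it is a simplified example, not the full CIE colorimetric pipeline described above.

```python
def srgb_to_linear(c):
    """Invert the sRGB transfer curve for one component in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r, g, b):
    """Relative luminance (CIE Y for the sRGB/Rec. 709 primaries) from *linear* RGB in [0, 1]:
    Y = 0.2126 R + 0.7152 G + 0.0722 B. Note the dominant weight on green."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# A saturated red pixel contributes far less luminance than a saturated green one:
print(relative_luminance(srgb_to_linear(1.0), 0.0, 0.0))  # ~0.21
print(relative_luminance(0.0, srgb_to_linear(1.0), 0.0))  # ~0.72
```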

Quality Factors

Illumination Effects

Illumination plays a critical role in image formation by determining the distribution of light across a scene, which directly affects shadows, highlights, and contrast in the resulting image. Different types of illumination—directional, diffuse, and specular—produce distinct effects based on the source's characteristics and its interaction with objects. Directional illumination uses point sources, often focused by lenses, to create bright, targeted lighting that emphasizes edges but can introduce harsh shadows and glare on specular or flat surfaces. In contrast, diffuse illumination employs extended sources to provide even, scattered light that minimizes shadows and ensures uniformity, making it suitable for large, shiny objects where consistent lighting is needed. Specular illumination, which highlights reflective properties, generates sharp highlights on glossy surfaces, enhancing surface details but potentially causing overexposure in those areas.

Shadows form when objects block light rays, with their appearance varying significantly based on the light source's size. Point sources produce sharp shadows consisting solely of an umbra, the darkest region where the source is completely obstructed. Extended sources, however, create both umbra and penumbra; the penumbra is a partially illuminated fringe around the umbra where the source is only partly blocked, resulting in softer, blurred edges that reduce contrast but add depth to the image. This penumbral effect becomes more pronounced as the source size increases, influencing overall scene visibility and requiring adjustments in imaging setups to maintain detail.

To achieve proper exposure under varying illumination, photographers rely on the exposure triangle, which balances ISO sensitivity, shutter speed, and aperture to control the amount of light reaching the sensor. ISO measures the sensor's light responsiveness, where higher values amplify signals but introduce noise, compensating for low illumination at the cost of image quality. Shutter speed determines exposure duration, with longer times capturing more light from dim sources but risking motion blur. Aperture regulates light intake through the lens opening, where wider settings allow more light for underexposed scenes but reduce depth of field. These elements interplay reciprocally: for instance, dim illumination might necessitate a wider aperture and higher ISO to maintain a usable shutter speed, ensuring balanced contrast without over- or underexposure.

Uneven lighting poses significant challenges in high dynamic range (HDR) imaging, where scenes exhibit wide variations in brightness that exceed standard sensor capabilities. Reflections and shadows from non-uniform sources create localized overexposure or underexposure, compressing details in highlights and lowlights while complicating the fusion of multiple exposures. This leads to artifacts and reduced accuracy in applications like defect detection, as the dynamic range mismatch hinders capturing the full tonal spectrum.

Practical techniques such as fill lights and backlighting mitigate these illumination effects to enhance contrast and highlights. Fill lights, positioned opposite the primary key light, soften harsh shadows by adding subtle illumination to darker areas, reducing overall contrast ratios for more even exposure without flattening the image. Backlighting, placed behind the subject, creates rim highlights that separate it from the background, producing dramatic silhouettes or glowing edges that emphasize form and depth in low-contrast scenes. These methods, often combined in three-point setups, allow precise control over light distribution to optimize image quality.
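The reciprocity described by the exposure triangle can be expressed compactly with exposure values; the sketch below uses the standard EV = log2(N^2 / t) for camera settings (the specific settings are illustrative, and nominal f-stops are rounded, so the two EVs agree only approximately).

```python
import math

def settings_ev(f_number, shutter_seconds):
    """Exposure value of a camera setting: EV = log2(N^2 / t).
    At a fixed ISO, settings with equal EV admit the same total light (reciprocity)."""
    return math.log2(f_number ** 2 / shutter_seconds)

# Opening the aperture one stop while halving the exposure time leaves the exposure unchanged:
print(settings_ev(8.0, 1 / 125))  # ~13.0
print(settings_ev(5.6, 1 / 250))  # ~12.9 (identical in the ideal case, since f/5.6 is really f/(8/sqrt(2)))
# Raising ISO from 100 to 400 adds two stops of sensitivity, permitting an EV two stops higher
# (smaller aperture or shorter shutter) under the same illumination, at the cost of added noise.
```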

Aberrations and Distortions

Aberrations and distortions represent fundamental imperfections in optical systems that degrade the quality of formed images by deviating from ideal geometric or radiometric behavior. These errors arise primarily from the limitations of spherical surfaces and material properties, leading to blurred, color-fringed, or geometrically warped images. In image formation, aberrations can be classified as monochromatic (affecting a single wavelength) or chromatic (wavelength-dependent), with distortions specifically altering spatial proportions without necessarily blurring focus. Understanding and mitigating these effects is crucial for applications ranging from consumer photography to scientific imaging, where precise image reproduction is essential.

The primary monochromatic aberrations, known as Seidel aberrations, were formalized by Philipp Ludwig von Seidel in the 19th century and include five key types that describe wavefront deviations in third-order optics. Spherical aberration occurs when rays parallel to the optical axis but at different distances from it fail to converge to a single focal point, causing a central blur in on-axis images; this stems from the spherical shape of traditional lenses, where marginal rays focus closer than paraxial ones. Coma affects off-axis points, producing a comet-like flare where rays from an oblique bundle form an asymmetric pattern instead of a point, due to varying magnification across the aperture. Astigmatism arises in off-axis imaging, creating two perpendicular focal lines (tangential and sagittal) rather than a point, as the lens focuses rays in different planes unequally, leading to stretched or blurred images. Field curvature, or Petzval curvature, results in a curved image surface rather than a flat focal plane, requiring off-axis points to be refocused at different distances, which complicates uniform sharpness across the field. Distortion, the fifth Seidel aberration, warps the image geometry without affecting focus, manifesting as barrel (outward bowing) or pincushion (inward bowing) shapes for straight lines, caused by field-dependent radial scaling variations. These aberrations scale with aperture size and field angle, and their combined effects can severely limit resolution in uncorrected systems.

Chromatic aberration introduces color-dependent errors due to the dispersion of light in optical materials, where the refractive index varies with wavelength, causing different colors to focus at distinct points. This is divided into axial (longitudinal) chromatic aberration, where shorter wavelengths (e.g., blue) focus closer to the lens than longer ones (e.g., red), resulting in color fringing along the optical axis, and lateral (transverse) chromatic aberration, where off-axis image sizes differ by wavelength, producing colored edges on objects. Correction methods focus on achromatization, primarily through achromatic doublets that combine a convex element of low-dispersion crown glass with a concave element of high-dispersion flint glass; this configuration balances the focal shifts for two wavelengths (typically red and blue), minimizing primary chromatic aberration across the visible spectrum. More advanced apochromatic or superachromatic designs extend correction to three or four wavelengths using additional elements or specialized glasses.

Depth of field (DOF) quantifies the axial range over which objects appear acceptably sharp, limited by aberrations and tied to the circle of confusion—the maximum blur diameter on the image plane deemed acceptably sharp (typically related to pixel size or visual acuity limits). The circle of confusion arises from defocus, where out-of-focus points project as disks rather than points, with disk size proportional to the aperture diameter and the distance from the plane of focus. An approximate formula for DOF in thin-lens systems is \text{DOF} \approx \frac{2 N c u^2}{f^2}, where N is the f-number (focal length divided by aperture diameter), c is the circle-of-confusion diameter, u is the object distance, and f is the focal length; this holds for small angles and object distances much larger than f, emphasizing how smaller apertures (higher N) or a smaller c extend DOF at the cost of light gathering. Aberrations like spherical aberration and coma enlarge the effective circle of confusion, reducing the usable DOF in wide-aperture systems.

Correction techniques for aberrations often involve aspheric lenses and strategic stop placement. Aspheric lenses deviate from spherical surfaces by incorporating higher-order curvature terms, allowing precise control over ray paths to minimize spherical aberration and coma; for instance, they can reduce the on-axis spot size dramatically compared to spherical equivalents by aligning marginal and paraxial foci. Stop placement, or aperture stop positioning, influences off-axis aberrations by altering chief ray heights and beam obliquity; shifting the stop toward the lens can balance coma to near zero in systems with residual spherical aberration, while optimizing for astigmatism involves positioning the stop to minimize field-dependent focus shifts. Pupil size, related to the stop diameter, modulates aberration severity, with smaller pupils reducing higher-order effects but introducing diffraction limits. These methods enable compact, high-performance optics without an excessive element count.

Distortion is quantified using radial models that describe geometric warping as a function of distance from the image center. The standard third-order radial distortion model is r_d = r (1 + k r^2), where r is the ideal (undistorted) radial distance from the center, r_d is the distorted distance, and k is the distortion coefficient; positive k yields pincushion distortion, while negative k produces barrel distortion, both arising from design trade-offs in wide-field systems. This polynomial approximation captures primary distortion, with higher-order terms added for complex lenses, and is calibrated empirically using test patterns to derive k for correction via software or optical compensators.
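A short numerical companion to the DOF approximation and radial distortion model above; the focal length, f-number, circle-of-confusion diameter, and distortion coefficient are assumed example values.

```python
def depth_of_field(N, c, u, f):
    """Approximate DOF ~ 2 N c u^2 / f^2 (valid for u >> f and moderate apertures).
    N: f-number, c: circle-of-confusion diameter, u: object distance, f: focal length (same length units)."""
    return 2.0 * N * c * u ** 2 / f ** 2

def radial_distort(r, k):
    """Third-order radial model r_d = r (1 + k r^2): k > 0 pincushion, k < 0 barrel."""
    return r * (1.0 + k * r ** 2)

# 50 mm lens at f/8, c = 0.03 mm (a common full-frame convention), subject at 3 m (3000 mm):
print(depth_of_field(8.0, 0.03, 3000.0, 50.0))  # ~1730 mm of acceptably sharp range
# Barrel distortion (k = -0.1) of a point at normalized radius 1.0 from the image center:
print(radial_distort(1.0, -0.1))                # 0.9: the point is pulled toward the center
```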

Biological and Perceptual Aspects

Image Formation in the Eye

The human eye forms images through a series of optical elements that refract and focus incoming light onto the retina, the light-sensitive layer at the back of the eye. The cornea, the transparent front surface, provides the majority of the eye's refractive power by bending rays as they enter. Behind the cornea lies the anterior chamber filled with aqueous humor, a clear fluid that maintains intraocular pressure and contributes minimally to refraction. The crystalline lens, positioned behind the iris and pupil, further focuses light, while the posterior chamber and vitreous humor—a gel-like substance filling the space between the lens and the retina—transmit light without significant distortion. The retina, composed of photoreceptor cells, captures the focused light to initiate visual signaling.

To adjust focus for objects at varying distances, the eye employs accommodation, a process driven by the ciliary muscle. When viewing distant objects, the ciliary muscle relaxes, allowing the suspensory ligaments to pull the lens into a flatter shape with a longer focal length, setting the far point typically at infinity for emmetropic eyes. For near objects, the ciliary muscle contracts, reducing tension on the ligaments and enabling the lens to become more convex, shortening its focal length and shifting focus toward the near point—around 25 cm in young adults—to bring the image into sharp focus on the retina. This dynamic adjustment allows a range of clear vision without external aids.

The total refractive power of the unaccommodated eye is approximately 60 diopters, with the cornea contributing about 43 diopters and the relaxed lens around 17-20 diopters. During accommodation, the lens increases its power by 10-12 diopters in young adults, enabling focus on nearby objects; this amplitude decreases with age due to lens stiffening. These values reflect the eye's design for efficient focusing onto an image plane about 17 mm behind the lens.

Light rays entering the eye are refracted to form an inverted and reversed image on the retina, a consequence of converging optics similar to general lens principles. High visual acuity is achieved primarily in the fovea, a small central depression in the retina packed with densely arranged cone photoreceptors and free of overlying blood vessels to minimize light scattering. This specialized region subtends about 1-2 degrees of visual angle and enables resolution of fine details, with acuity dropping sharply in peripheral areas.

From an evolutionary perspective, the human eye exemplifies a camera-type structure, having developed from simpler light-sensitive patches in early organisms to pinhole-like cups that improved image sharpness by reducing blur, eventually incorporating lenses for enhanced focus. This progression, spanning hundreds of millions of years, mirrors the principles of a camera obscura, where light passes through a small aperture to project an inverted image onto a surface, optimizing detection of environmental cues for survival.
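Treating the eye as a simple converging system, the diopter arithmetic above can be sketched as follows; this is a reduced-eye approximation that ignores the eye's internal refractive index, so the numbers are only indicative.

```python
def added_power_for_near_point(near_point_m):
    """Extra converging power (diopters) needed to refocus from infinity to an object at the
    near point, for a fixed image distance: delta_P = 1 / d_near."""
    return 1.0 / near_point_m

def focal_length_mm(power_diopters):
    """Focal length in millimeters for a refractive power in diopters (P = 1/f, f in meters)."""
    return 1000.0 / power_diopters

print(added_power_for_near_point(0.25))  # 4.0 D of accommodation for the 25 cm near point
print(focal_length_mm(60.0))             # ~16.7 mm, consistent with the ~17 mm image distance quoted above
```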

Human Image Perception

Human image perception involves the neural processing and interpretation of visual stimuli after the optical image is formed on the retina. The visual pathway begins with the optic nerve, which transmits electrical signals from retinal ganglion cells to the brain, conveying information about luminance, color, and spatial patterns. These signals travel through the optic chiasm, where nasal fibers cross to the opposite side, ensuring binocular integration, before reaching the lateral geniculate nucleus (LGN) in the thalamus. The LGN acts as a relay station, organizing inputs into layers that segregate by eye, color, and motion sensitivity, before projecting via the optic radiations to the primary visual cortex (V1) in the occipital lobe. In V1, neurons respond to specific features like edges and orientations, enabling higher cortical areas to construct coherent perceptions of objects and scenes.

The brain applies organizational principles to interpret fragmented or ambiguous images, as described by Gestalt psychology. The principle of proximity groups visual elements that are spatially close together, leading perceivers to interpret them as belonging to the same object rather than separate entities. Similarity causes elements sharing attributes like color, shape, or size to be perceived as a unified group, facilitating grouping and segmentation in complex scenes. Closure prompts the mind to complete incomplete figures, filling in gaps to perceive whole shapes, which enhances efficiency in processing real-world visuals where edges may be obscured. These principles reflect innate perceptual tendencies that prioritize holistic interpretations over piecemeal analysis.

Optical illusions highlight how contextual cues distort perceived image properties despite identical physical stimuli. The Müller-Lyer illusion occurs when two lines of equal length appear unequal due to the arrowhead orientations at their ends, which the brain misinterprets as depth cues from angular perspectives encountered in three-dimensional scenes. Similarly, the Ponzo illusion makes two identical objects seem different in size when placed between converging lines mimicking linear perspective, as the visual system assumes the farther object must be larger to subtend the same retinal angle. These effects arise from the brain's probabilistic inference, drawing on environmental regularities like perspective and size constancy to interpret two-dimensional retinal images as three-dimensional scenes.

Contrast sensitivity, the ability to detect luminance differences between patterns, varies with spatial frequency and is quantified by the contrast sensitivity function (CSF), which peaks around 2-4 cycles per degree and declines at higher frequencies. This function enables detection of fine detail at high-contrast edges while attenuating detail in low-contrast areas. Adaptation to ambient light levels adjusts sensitivity dynamically; prolonged exposure to bright light reduces overall sensitivity to prevent saturation, whereas dark adaptation enhances it over minutes, optimizing vision across lighting conditions. Such adaptations maintain perceptual stability, as the visual system compresses its response to handle scenes spanning several orders of magnitude in brightness.

Binocular vision contributes to depth perception through stereopsis, where slight disparities between the two retinal images—arising from the eyes' horizontal separation—are processed to infer relative distances. Neurons in V1 and higher visual areas detect these horizontal disparities, computing depth maps that integrate with monocular cues for robust three-dimensional perception. Stereopsis is most effective at near distances, up to about 6 meters, where disparities exceed detection thresholds, enabling precise judgments of object proximity and aiding tasks like grasping. Disruptions, such as those caused by strabismus, impair this mechanism, underscoring its role in transforming the two retinal images into a unified sense of depth.
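Stereopsis can be caricatured with the pinhole stereo triangulation formula Z = b f / d; this is a camera-style simplification rather than a model of the eye's actual optics, and the baseline and focal length in pixels are assumed values.

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Depth from horizontal disparity in a rectified pinhole stereo pair: Z = b * f / d."""
    return baseline_m * focal_px / disparity_px

# Two viewpoints ~6.5 cm apart (roughly interocular distance) with an assumed focal length of 1000 px:
print(depth_from_disparity(0.065, 1000.0, 20.0))  # 3.25 m
print(depth_from_disparity(0.065, 1000.0, 2.0))   # 32.5 m: small disparities make far depths imprecise
```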

Advanced Techniques

Digital Sampling and Pixelation

Digital sampling in image formation involves the conversion of continuous optical signals into discrete digital representations, fundamentally shaping the fidelity of the resulting image. This process discretizes both spatial and intensity dimensions, introducing constraints on resolution and potential artifacts that must be managed through theoretical and practical considerations. The Nyquist-Shannon sampling theorem provides the foundational principle for this discretization, stating that to accurately reconstruct a continuous signal without loss of information, the sampling frequency must exceed twice the highest frequency component in the signal, known as the Nyquist rate. In imaging contexts, this implies that the spatial sampling rate—determined by the pixel pitch—must be at least twice the highest spatial frequency present in the projected scene to prevent distortion, ensuring that fine details are captured without overlap in the frequency domain.

Pixelation arises as the primary manifestation of spatial discretization, where the continuous image is divided into a grid of finite-sized picture elements, or pixels, each with a defined pitch p, the center-to-center distance between adjacent pixels. The angular extent subtended by a single pixel, which dictates the instantaneous field of view and hence the angular sampling, is approximately p/f radians (more precisely 2 \arctan\left(\frac{p}{2f}\right)), where f is the focal length of the system; smaller pixel pitches yield finer angular sampling but increase demands on optical quality and noise management.

The modulation transfer function (MTF) quantifies the impact of pixelation on image sharpness, describing how spatial frequencies are attenuated during sampling. For square pixels, the pixel-limited MTF is expressed as: \text{MTF}(\xi) = \frac{\sin(\pi \xi p)}{\pi \xi p}, where \xi is the spatial frequency in cycles per unit distance; this sinc function falls to about 0.64 (2/\pi) at the Nyquist frequency \xi = 1/(2p) and reaches its first zero at \xi = 1/p, illustrating the inherent low-pass filtering effect of finite pixel size. Aliasing occurs when spatial frequencies above the Nyquist frequency are inadequately sampled, causing higher-frequency details to masquerade as lower-frequency patterns, such as moiré fringes in textured scenes. To mitigate this, anti-aliasing filters—typically optical low-pass filters placed before the sensor—blur the image slightly to suppress frequencies beyond the Nyquist limit, trading some sharpness for reduced artifacts, though digital post-processing can also apply similar corrections.

Image sensors implement sampling through charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) architectures, each influencing efficiency and noise profiles. CCD sensors transfer charge across pixels via a serial readout, achieving high uniformity but at the cost of slower speeds and higher power use, with quantum efficiencies (QE) often peaking around 80-90% in the visible range due to efficient charge collection. In contrast, CMOS sensors integrate amplifiers at each pixel for parallel readout, enabling faster frame rates and lower power consumption, though early designs suffered from higher noise; modern variants match or exceed CCD quantum efficiency, reaching up to 95% in optimized back-illuminated structures, making them dominant in consumer and scientific applications. Color filter arrays, such as the Bayer pattern, are commonly overlaid on these sensors to enable single-sensor color capture, though their mosaicking introduces additional challenges during demosaicing.
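The Nyquist limit and pixel-aperture MTF above can be evaluated directly; the 4 micron pixel pitch below is an assumed example.

```python
import math

def nyquist_frequency(pixel_pitch_mm):
    """Nyquist frequency (cycles/mm) for a sensor of the given pixel pitch: 1 / (2 p)."""
    return 1.0 / (2.0 * pixel_pitch_mm)

def pixel_mtf(xi, pixel_pitch_mm):
    """Pixel-aperture MTF for square pixels: |sin(pi xi p) / (pi xi p)|."""
    x = math.pi * xi * pixel_pitch_mm
    return 1.0 if x == 0 else abs(math.sin(x) / x)

p = 0.004                     # 4 micron pitch, in mm
f_nyq = nyquist_frequency(p)
print(f_nyq)                  # 125 cycles/mm
print(pixel_mtf(f_nyq, p))    # ~0.64 (2/pi) at the Nyquist frequency
print(pixel_mtf(1.0 / p, p))  # ~0 at 1/p, the first zero of the sinc
```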

Computational Image Formation

Computational image formation encompasses software-based techniques that reconstruct, enhance, or synthesize images using algorithms, often leveraging multiple captures or learned models to overcome limitations of traditional optics. These methods process raw sensor data or intermediate representations to produce outputs with improved quality, such as extended dynamic range, higher resolution, or novel viewpoints, enabling applications in photography, scientific imaging, and computer vision. By integrating computational power with imaging hardware, this approach has evolved from early multi-frame processing to deep learning-driven synthesis, significantly expanding the capabilities of image capture beyond physical constraints.

Light field imaging captures the four-dimensional light field—comprising spatial and angular information—using a microlens array placed in front of the sensor in a plenoptic camera. This array, consisting of thousands of tiny lenses, redirects light rays to record directional data rather than just intensity, allowing post-capture refocusing and depth estimation. Pioneered in a hand-held plenoptic camera demonstrated in 2005, the design samples the light field in a single exposure, enabling digital refocusing by shifting sub-aperture images and disparity-based adjustments. For instance, refocusing involves selecting rays from different microlenses to simulate lens repositioning, achieving all-in-focus images or selective blurring without hardware changes.

Super-resolution techniques enhance spatial resolution by combining multiple low-resolution images, exploiting sub-pixel shifts from motion or deliberate dithering to recover finer details. Multi-frame averaging aligns and fuses frames with slight offsets, reducing aliasing and noise while amplifying effective resolution; sub-pixel shifts, often induced by camera movement or mechanical actuators, provide complementary sampling that algorithms interpolate into higher-resolution outputs. A robust method uses iterative back-projection to minimize errors across frames, demonstrating up to 4x gains in real-world sequences with controlled detector shifts. These approaches are particularly effective for handheld devices, where natural motion provides the necessary offsets.

Deep learning has revolutionized image enhancement through neural networks trained on vast datasets. For denoising, convolutional networks like DnCNN learn residual mappings to suppress noise while preserving edges, outperforming traditional filters like BM3D by adapting to unknown noise levels via batch normalization and ReLU activations. Inpainting fills missing regions by predicting plausible content from context; early GAN-based models, such as Context Encoders, use encoder-decoder architectures with adversarial training to generate coherent textures, achieving seamless repairs in irregular masks. Generative adversarial networks (GANs), introduced in 2014, pit a generator against a discriminator to produce realistic synthetic images, enabling applications from image synthesis to artistic creation since their inception.

Computational photography integrates these ideas into practical pipelines, such as high dynamic range (HDR) merging and panorama stitching. HDR imaging recovers wide luminance ranges by aligning and weighting bracketed exposures according to the camera's response function, producing radiance maps that reveal details in shadows and highlights; the seminal method solves for inverse response curves via least-squares optimization on pixel intensities across exposures. Panorama stitching automates wide-field mosaics by detecting invariant features like SIFT descriptors, estimating homographies between overlapping images, and blending seams with multi-band blending to minimize visible artifacts. These techniques, often combined in smartphone cameras, yield immersive views from casual captures.

Recent advancements include neural radiance fields (NeRF), which represent scenes as continuous functions parameterized by multilayer perceptrons, optimizing density and color for novel view synthesis from sparse images. Trained via differentiable volume rendering that integrates samples along camera rays, NeRF achieves photorealistic 2D renderings of complex 3D scenes, surpassing traditional methods in fidelity for static scenes. This approach has spurred extensions for dynamic content and real-time applications, underscoring the shift toward implicit neural representations in image formation.
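As one concrete example of these pipelines, the sketch below merges bracketed exposures into a relative radiance map under the simplifying assumption of an already-linearized sensor response (the seminal HDR method additionally recovers the response curve, which is omitted here); the toy images and exposure times are fabricated for illustration.

```python
import numpy as np

def merge_hdr_linear(images, exposure_times):
    """Weighted merge of bracketed exposures into a relative radiance map.
    images: list of float arrays scaled to [0, 1]; exposure_times: matching times in seconds.
    Assumes a linear sensor response; a hat-shaped weight downweights clipped or noisy pixels."""
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # trust mid-tones, distrust near-black and near-white pixels
        num += w * img / t                 # each exposure's radiance estimate is value / exposure time
        den += w
    return num / np.maximum(den, 1e-8)

# Toy scene captured at 1/100 s and 1/25 s; the brightest pixel clips in the longer exposure.
short = np.array([[0.05, 0.20], [0.40, 0.90]])
long_ = np.clip(4.0 * short, 0.0, 1.0)
print(merge_hdr_linear([short, long_], [1 / 100, 1 / 25]))  # consistent relative radiances, clipping ignored
```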
