
3D display

A three-dimensional (3D) display is a visual display that simulates depth by presenting images with spatial information, enabling viewers to perceive objects in three dimensions rather than the flat two dimensions of traditional screens. This is achieved primarily through stereoscopy, where slightly different images are directed to each eye, or by volumetric projection that fills physical space with light points to form true volumes. Unlike 2D displays, which lack depth cues such as binocular disparity and motion parallax, 3D displays mimic natural visual processes to create immersive experiences, though they often require specialized hardware or viewing conditions. 3D displays are classified into two main categories: stereoscopic and autostereoscopic. Stereoscopic displays, which necessitate viewing aids like polarized glasses, anaglyph filters, or active shutter glasses, deliver separate left- and right-eye images to exploit binocular disparity; this approach was first demonstrated by Charles Wheatstone using a mirror-based stereoscope in 1838. Autostereoscopic displays, in contrast, enable glasses-free viewing by directing multiple perspectives to the viewer's eyes through techniques such as parallax barriers, lenticular lens arrays, or integral imaging, the latter invented by Gabriel Lippmann in 1908 to capture and replay full-parallax scenes. Additional subtypes include volumetric displays, which generate images within a physical volume using rotating screens or static media (proposed as early as 1912), and holographic displays, which reconstruct wavefronts of light for realistic depth, originating with Dennis Gabor's invention of holography in 1948. Head-mounted displays (HMDs), often used in virtual and augmented reality, combine stereoscopic principles with head tracking for personal immersion; holographic optical elements in such systems have achieved diffraction efficiencies of up to 55% for red light. The development of 3D display technology spans over 180 years, beginning with early stereoscopic experiments in the 1830s and evolving through 20th-century milestones like the first holographic patents in the 1940s and volumetric prototypes such as swept-screen systems using cathode-ray tubes.
Advancements accelerated in the late 20th and early 21st centuries with digital computing, leading to commercial applications in cinema, television, and gaming; for instance, time-multiplexed stereoscopic systems operating at 120 Hz emerged in the 1980s. Today, 3D displays find applications in entertainment for immersive movies and games, in medical imaging for precise anatomical visualization, in military simulations, and in scientific visualization systems, with ongoing research focusing on higher resolution and wider viewing angles. Recent innovations emphasize practical, glasses-free solutions to overcome adoption barriers, incorporating eye-tracking cameras and AI-driven depth conversion for seamless 2D-to-3D transitions. Glasses-free 3D monitors released in 2025 use lenticular lenses and real-time eye tracking to support 27-inch gaming displays priced around $2,000, while laptops like the Lenovo Legion 9i offer optional 3D upgrades for enhanced immersion in applications such as gaming and video calls. These developments signal a resurgence in consumer 3D technology, driven by demand for more realistic visuals, reported by 62% of hardcore gamers in industry surveys.

History

Early Inventions and 19th-Century Devices

The origins of 3D display technology trace back to the mid-19th century, when scientists began exploring the principles of binocular vision to create illusions of depth. In 1838, British physicist Charles Wheatstone invented the stereoscope, a device that presented two slightly different two-dimensional images—one to each eye—to mimic the natural separation of views experienced by human eyes spaced about 2.5 inches apart. Wheatstone demonstrated his reflecting stereoscope using hand-drawn pictures, such as outlines of geometric shapes and architectural scenes, to illustrate how the brain fuses these disparate images into a single three-dimensional percept, revealing hitherto unobserved phenomena like the perception of solidity in illusory figures. Wheatstone's device exploited binocular disparity, the horizontal difference in the retinal projections of an object due to the offset between the eyes, by directing separate images via mirrors to avoid overlap and ensure each eye received its exclusive view. This setup allowed observers to perceive depth cues absent in monocular viewing, such as the relative positions of objects, without relying on other visual hints like shading or perspective. The device, though bulky and limited to drawings, laid the foundation for stereoscopic viewing by confirming that depth perception arises primarily from the brain's interpretation of these interocular differences. Building on Wheatstone's work, Scottish physicist David Brewster advanced the technology in 1849 with the development of stereographic cards—paired photographic prints mounted side by side for stereoscope viewing—and the lenticular stereoscope, which replaced mirrors with refracting lenses to create a more compact and portable design. Brewster's lenticular version used double-convex lenses spaced to match the inter-pupillary distance, directing light from each image to the appropriate eye while magnifying the views for enhanced clarity and reduced light loss compared to Wheatstone's reflecting model.
These innovations made stereoscopic images more practical and accessible, particularly after integration with emerging photography techniques like the daguerreotype and calotype. A key milestone came in 1851 when French-born photographer Antoine Claudet produced the first stereoscopic photographs using daguerreotype plates, capturing paired portraits that could be viewed in stereoscopes to convey realistic depth. Claudet's work at his London studio involved exposing two silver-plated sheets simultaneously through slightly separated lenses, resulting in images that exploited binocular disparity to depict subjects with lifelike three-dimensionality, such as busts and architectural details exhibited at the Great Exhibition of 1851. This photographic application transformed stereoscopy from a scientific curiosity into a popular medium for portraiture and documentation. Further refinement occurred in 1861 with American physician and poet Oliver Wendell Holmes' design of a lightweight, handheld stereoscope, which streamlined the viewer into an affordable, non-patented device using simple prisms or lenses to hold and separate standard stereographic cards. Holmes' model emphasized portability and ease of use, allowing widespread domestic viewing of photographic stereoviews without the cumbersome mirrors of earlier versions, thereby popularizing the technology among the general public.

20th-Century Developments in Cinema and Broadcasting

The early 20th century saw the transition of stereoscopic principles from 19th-century devices like the stereoscope to motion pictures, with the first feature-length 3D film, The Power of Love (1922), employing anaglyph glasses that filtered red and cyan images to create depth perception for audiences. This anaglyph method, which superimposed complementary color images, marked the initial commercial attempt to bring 3D to cinema, though it suffered from color distortion and limited appeal. A significant advancement occurred in the 1950s with polarization-based systems, which used orthogonal polarizing filters on dual projectors to separate left- and right-eye images, viewed through polarized glasses for clearer stereopsis without color fringing. The debut of Bwana Devil in 1952, the first full-color 3D feature in this format, ignited a brief boom, prompting over 50 Hollywood productions in under a year as studios sought to counter declining attendance caused by television. However, the era waned by late 1953 due to technical challenges like projector misalignment causing eye strain and headaches, alongside viewer discomfort from prolonged glasses use and dim projections, leading to widespread abandonment of 3D in mainstream cinema. Early experiments in broadcasting paralleled cinema's efforts, with an experimental anaglyph-based television system demonstrated in 1951, requiring viewers to wear red-blue glasses to perceive depth on compatible receivers. This approach aimed to extend stereoscopic viewing to homes but faced compatibility issues and limited adoption. In the 1980s, active shutter glasses emerged for home viewing, electronically synchronizing liquid crystal shutters to alternate full-frame images at high speeds, enabling brighter and higher-resolution 3D on standard TVs without resolution losses. Developed by firms like StereoGraphics, whose wireless CrystalEyes model was introduced in 1989, this technology supported home video releases and early digital formats, reviving interest in consumer 3D despite battery and flicker concerns.
A milestone in large-format 3D came in 1986 with IMAX's first stereoscopic production, Transitions, a 20-minute documentary screened at Expo 86 in Vancouver using dual 70mm projectors and polarized glasses to deliver immersive depth on massive screens. This event highlighted 3D's potential in controlled environments, influencing future theme park and educational applications.

Digital Era and Post-2000 Advancements

The release of James Cameron's Avatar in 2009 marked a pivotal revival of polarized 3D cinema, revitalizing the format after decades of sporadic interest and driving widespread adoption of 3D in theaters worldwide. The film's innovative use of digital stereoscopic cinematography contributed to its record-breaking performance, grossing over $2.8 billion globally and prompting studios to invest heavily in 3D projection systems. In the consumer electronics space, manufacturers showcased prototypes of autostereoscopic televisions at IFA 2010, introducing glasses-free viewing to home audiences through lenticular lens technology. This development aimed to make 3D more accessible by eliminating the need for active shutter glasses, though commercial rollout faced challenges in resolution and viewing angles. Meanwhile, Nintendo launched the 3DS handheld console in 2011, incorporating parallax barrier technology to deliver portable, glasses-free gaming experiences; it sold over 75 million units and popularized stereoscopic content in mobile entertainment. The proliferation of virtual and augmented reality headsets further accelerated 3D display advancements, with the Oculus Rift's debut in 2012 pioneering immersive VR for gaming and simulations through high-refresh-rate stereoscopic displays. This paved the way for broader adoption, culminating in Apple's Vision Pro release in February 2024, a mixed-reality headset featuring micro-OLED displays with 23 million pixels across both eyes for high-fidelity mixed reality. By 2025, innovations at Display Week highlighted AI-enhanced display technologies, including light field systems, with companies like BOE demonstrating eye-tracked glasses-free prototypes using 16K-resolution panels to improve multi-viewer experiences without eyewear, alongside AI processing for real-time content optimization in related products. These advancements, combining machine learning algorithms with advanced rendering, promise scalable 3D for applications in entertainment, medical imaging, and professional visualization.

Principles of 3D Perception

Binocular Disparity and Stereopsis

Binocular disparity refers to the horizontal difference in the images projected onto the retinas of the left and right eyes, arising from the separation between the eyes known as the inter-pupillary distance, which averages approximately 6.3 cm in adults. This separation creates slightly different perspectives of the same scene, with the magnitude of the disparity increasing for objects closer to the observer and decreasing for distant ones. Stereopsis is the perceptual process by which the visual system fuses these disparate images to extract depth information, enabling the sensation of three-dimensional structure from two-dimensional retinal projections. In the primary visual cortex (V1), binocular neurons tuned to specific disparities achieve this fusion, primarily through phase disparities in receptive fields that align corresponding features across eyes, though position disparities contribute at higher spatial frequencies. The angular disparity \theta can be approximated as \theta \approx \frac{d}{D} radians for small angles, where d is the inter-pupillary distance (baseline separation) and D is the distance to the object; this geometric relationship underlies the brain's computation of relative depth. Horizontal disparity primarily signals the sign and magnitude of depth relative to the fixation plane, determining whether an object appears nearer or farther. In contrast, vertical disparity provides cues for absolute distance and eye alignment, influencing how horizontal disparities are scaled to perceive the overall layout of space, though it plays a secondary role in basic stereoscopic depth. Human stereopsis has inherent limits, including a fusion range constrained by Panum's fusional area of about 15–30 arcminutes, beyond which disparate images fail to fuse into a single percept. Fine stereopsis is most effective for objects within approximately 10 meters, where disparities are sufficiently large for discrimination, but acuity degrades at greater distances.
Additionally, the vergence-accommodation conflict—where eye convergence for depth differs from lens focus—disrupts fusion, reducing stereoacuity by up to 10-fold at 2 diopters of mismatch and inducing visual fatigue. The foundational demonstration of stereopsis as a static binocular phenomenon came from Charles Wheatstone's 1838 experiments, in which he used a mirror stereoscope to present disparate line drawings to each eye, eliciting vivid depth perceptions without any motion or pictorial cues and thus isolating the role of retinal disparity.
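The small-angle relation above can be illustrated numerically. This is a minimal sketch, assuming an average adult inter-pupillary distance; the function name and values are illustrative, not from any library.

```python
def angular_disparity_rad(ipd_m: float, distance_m: float) -> float:
    """Approximate angular disparity (radians) for an object at distance D,
    using theta ~= d / D, valid when D is much larger than the baseline d."""
    return ipd_m / distance_m

IPD = 0.063  # average adult inter-pupillary distance, ~6.3 cm

# Disparity falls off with distance, which is why fine stereopsis is most
# effective within roughly 10 meters.
near = angular_disparity_rad(IPD, 0.5)   # object at 0.5 m
far = angular_disparity_rad(IPD, 10.0)   # object at 10 m
```

A nearer object thus subtends a larger disparity angle than a distant one, matching the fall-off in stereoacuity described above.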

Additional Depth Cues in Human Vision

Human vision relies on a variety of depth cues beyond binocular disparity to perceive three-dimensional structure, enabling robust depth estimation even with one eye or in low-light conditions. These additional cues, including monocular, pictorial, motion-based, and oculomotor mechanisms, provide complementary information that enhances overall spatial awareness and can substitute for stereopsis when it is unavailable or insufficient. Monocular cues derive from static visual information processed by a single eye and include relative size, where familiar objects appear smaller with increasing distance due to their angular subtense on the retina; occlusion, in which one object partially blocks another, signaling that the occluder is nearer; linear perspective, where parallel lines converge toward a vanishing point to suggest receding depth; texture gradient, with surface details becoming denser and finer as distance grows; and aerial perspective, where distant objects lose contrast and appear bluish due to atmospheric scattering. Pictorial cues, often leveraged in painting and visual media, encompass shading, which uses gradations of light and dark to imply surface curvature and orientation; shadows, indicating an object's position relative to light sources and surfaces; and interposition, a form of occlusion that establishes relative layering in scenes. These cues collectively allow the visual system to infer depth from two-dimensional projections without requiring binocular input. Motion parallax provides dynamic depth information through the relative motion of objects in the visual field during observer head movement, where nearer objects shift faster across the retina than distant ones. This cue arises from the velocity difference between images of objects at different depths: the relative depth d/D can be estimated from the ratio of the retinal velocity difference v_{ret} to the observer's head velocity v_{head}, as d/D \approx v_{ret} / v_{head}. The visual system integrates these retinal motion signals with extra-retinal information about self-motion to compute absolute depth scaling.
Oculomotor cues stem from the eye's proprioceptive feedback and include accommodation, the ciliary muscle's adjustment of the lens's curvature to focus on objects at varying distances (effective up to about 2 meters), and vergence, the inward or outward rotation of the eyes to maintain fixation on a target, providing depth signals for near objects up to roughly 10 meters. These cues are particularly salient for close-range depth perception but diminish in effectiveness at greater distances. In 3D display design, these cues are incorporated to improve realism and reduce visual fatigue, as over-reliance on binocular disparity alone can cause conflicts with natural accommodation and vergence. For instance, volumetric displays exploit motion parallax by rendering scenes with physical light emission at multiple depths, allowing head movements to naturally reveal shifting object velocities and enhancing realism in applications like medical visualization. Pictorial and shading cues are simulated through rendering algorithms in stereoscopic and light field systems, while oculomotor cues are addressed in varifocal designs that adjust focus dynamically.

Stereoscopic Displays

Glasses-Based Stereoscopic Systems

Glasses-based stereoscopic systems deliver separate images to each eye through wearable eyewear that employs various techniques, such as color, polarization, time, or wavelength separation, to create the illusion of depth via stereopsis. These systems require viewers to wear specialized glasses, which can introduce comfort issues like weight or restricted head movement, but offer compatibility with standard displays and projectors. Common variants include anaglyph, polarization, active shutter, and interference filter methods, each balancing image quality, cost, and technical complexity differently. Anaglyph systems encode left- and right-eye images using complementary color filters, typically red for one eye and cyan for the other, overlaid to form a single composite image. The red filter transmits the red channel intended for one eye while blocking the cyan channel, and vice versa, relying on subtractive color mixing in which the filters absorb opposing wavelengths to isolate the views. This approach, effective for low-cost applications like printed media or basic video, suffers from significant color limitations, including desaturation, fringing artifacts at edges due to imperfect spectral separation, and retinal rivalry where mismatched colors cause visual discomfort. Crosstalk in anaglyphs arises from incomplete channel isolation, often exceeding 5% in printed versions, degrading depth quality. Polarization systems separate images by encoding them with orthogonal polarization states—either linear (horizontal for one eye, vertical for the other) or circular (left-handed and right-handed)—using polarizing filters on projectors or displays. Viewers wear passive glasses with matching polarizers that block the opposite state's light, allowing each eye to receive only its intended image without temporal alternation. Linear polarizers are simpler and cheaper but sensitive to head tilt, which can cause crosstalk, while circular polarizers maintain separation over wider viewing angles.
To preserve polarization during projection, silver screens coated with metallic aluminum reflect light without depolarizing it, enabling brighter images in cinema settings. These systems achieve low crosstalk (<1%) but transmit only about 30% of the light, resulting in dimmer visuals than unfiltered projection. Active shutter systems use battery-powered glasses with liquid crystal display (LCD) or ferroelectric liquid crystal shutters that rapidly open and close in synchronization with the display's frame rate, typically 120 Hz for 60 frames per eye. The display alternates left- and right-eye images sequentially, and infrared or radio signals from the source trigger the shutters to occlude the non-viewing eye, preventing crosstalk. LCD shutters offer good contrast but slower response times, while ferroelectric variants provide faster switching (<20 μs) and higher contrast ratios (~1000:1), reducing motion blur in dynamic scenes. However, these systems suffer from ghosting due to residual crosstalk from shutter leakage or display persistence, with levels around 0.5% in optimized setups; battery life typically lasts 40-60 hours per charge, though frequent use in high-refresh environments can shorten it. Interference filter technology, such as Infitec, employs wavelength multiplexing to divide the visible spectrum into discrete bands (e.g., narrow triplets for red, green, and blue), assigning specific wavelengths to each eye's image via dichroic filters in the glasses. These thin-film interference filters transmit or reflect light based on wavelength, allowing full-color separation without the desaturation of anaglyphs or the dimness of polarization. The projector illuminates left-eye content in one set of bands and right-eye content in shifted bands, with the glasses' dichroic coatings isolating the views for high-fidelity 3D. This method excels in large-screen applications, offering crosstalk below 1% and vibrant colors, though it requires specialized projectors and filters.
Among these systems, effective field of view varies with screen size and viewing distance: polarization and active shutter approaches support wide horizontal angles with minimal distortion, while anaglyph and Infitec maintain separation over broader ranges but at the cost of color accuracy. Crosstalk metrics are critical for quality, with levels under 5% generally ensuring comfortable fusion; active shutter and Infitec typically achieve <1%, polarization ~1-2%, and anaglyph >5% in suboptimal conditions, influencing perceived depth and viewer fatigue.
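The crosstalk figures quoted above follow a simple definition: leakage luminance into the unintended eye relative to the intended signal. A minimal sketch, with hypothetical function names and an assumed 5% comfort threshold taken from the text:

```python
def crosstalk_percent(leakage_luminance: float,
                      signal_luminance: float) -> float:
    """Crosstalk (%) = 100 * leakage into the wrong eye / intended signal."""
    return 100.0 * leakage_luminance / signal_luminance

def fusion_comfortable(crosstalk_pct: float, threshold_pct: float = 5.0) -> bool:
    """Levels under ~5% generally permit comfortable binocular fusion."""
    return crosstalk_pct < threshold_pct

# e.g., 1 cd/m^2 leaking against a 100 cd/m^2 signal -> 1% crosstalk,
# within the range cited for polarization systems.
ct = crosstalk_percent(1.0, 100.0)
```

Measured this way, the same display can yield different crosstalk per eye and per color channel, which is why published figures are usually worst-case values.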

Autostereoscopic and Glasses-Free Displays

Autostereoscopic displays, also known as glasses-free 3D displays, deliver stereoscopic imagery by directing light rays from the screen to specific positions corresponding to the viewer's eyes, thereby creating the illusion of depth without requiring eyewear. These systems exploit binocular disparity, the primary cue for depth perception in human vision, by separating left- and right-eye views through optical manipulation of the display's output. Unlike traditional stereoscopic methods that rely on external filters, autostereoscopic approaches integrate view separation directly into the display hardware, enabling natural viewing for one or more users within designated zones. The parallax barrier technique, one of the earliest autostereoscopic methods, employs a series of vertical slits placed in front of the display panel to block portions of the emitted light, ensuring that each eye receives the appropriate image slice. Invented by American engineer Frederic Eugene Ives, who presented his "parallax stereogram" on December 5, 1901, this approach creates eye-specific views by aligning the barriers with interleaved subpixel images on the screen. However, the technique incurs a resolution penalty, as the barriers obscure approximately half of the available pixels in dual-view stereoscopic operation, effectively halving the horizontal resolution to accommodate the separated perspectives. Lenticular lens arrays represent an advanced evolution of barrier-based systems, using an array of slanted cylindrical lenses overlaid on the display to refract light from underlying subpixel images toward discrete viewing directions. This configuration enables multi-view autostereoscopy, where multiple perspective images (typically 8 to 32 views) are generated across a wider angular range, allowing viewers to experience horizontal parallax and head motion without losing the 3D effect. By slanting the lenses relative to the pixel grid, crosstalk between views is minimized, and the system supports smoother depth transitions as the observer moves.
Directional backlight systems further enhance flexibility by incorporating LED arrays behind the display panel, combined with switchable diffusers or spatial light modulators to control light emission angles temporally. In time-multiplexed designs, the backlight sequentially illuminates specific directions at high refresh rates, synchronizing with the display's content to deliver multiple views without permanent optical overlays like barriers or lenses. This approach maintains full resolution when switched to conventional 2D mode and supports multi-user viewing by dynamically adjusting beam directions. To address viewing constraints in single-user scenarios, recent prototypes integrate eye-tracking cameras that monitor pupil positions in real time, dynamically adjusting the light direction to expand the effective viewing zone. For instance, 2022 tablet prototypes, such as 10.1-inch light field displays, employ field-programmable gate arrays (FPGAs) to process light field rendering and tracking data, enabling wide-angle visualization over larger head movements. As of 2025, advancements include displays like MOPIC's autostereoscopic system for medical imaging, which incorporate similar eye tracking for enhanced precision in clinical applications. Despite these advancements, autostereoscopic displays are limited by narrow viewing zones, where the optimal "sweet spot" for clear 3D perception is typically around 30 cm wide at a standard viewing distance, beyond which image flipping or pseudoscopic (depth-inverted) viewing occurs. This restriction arises from the precise alignment required between the optical elements and eye positions, confining multi-view capabilities to specific angular sectors and posing challenges for shared or mobile use. Additional issues include reduced brightness due to light directionality and potential moiré patterns from interactions between the lenses or barriers and the pixel grid.
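The column interleaving behind a simple two-view parallax barrier, and the resolution penalty it incurs, can be sketched with NumPy. This is a simplified illustration (real panels interleave at subpixel granularity and use slanted layouts); the function name is hypothetical.

```python
import numpy as np

def interleave_two_views(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Column-interleave two equal-sized views (rows x cols luminance arrays).

    Even pixel columns carry the left-eye view and odd columns the right-eye
    view; the barrier's vertical slits then direct each set toward the
    matching eye. Each eye sees only half the columns, which is the
    horizontal resolution loss described in the text.
    """
    assert left.shape == right.shape
    composite = np.empty_like(left)
    composite[:, 0::2] = left[:, 0::2]   # even columns -> left eye
    composite[:, 1::2] = right[:, 1::2]  # odd columns  -> right eye
    return composite
```

Lenticular multi-view systems generalize this idea, distributing N perspective images across N column groups instead of two.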

Volumetric Displays

Swept-Volume Volumetric Techniques

Swept-volume volumetric techniques generate three-dimensional images by rapidly moving a two-dimensional display surface or light source through a physical volume, leveraging the persistence of vision to fill the space with visible voxels. This approach creates true volumetric emission, distinct from layered stereoscopic methods that simulate depth through multiple 2D planes. One common implementation involves rotating LED arrays or mirror panels that sweep out a cylindrical or spherical volume. For instance, the Voxon Photonics VX1 employs a helical rotating screen driven at 30 volumes per second, rendering up to 1,000 × 1,000 × 200 voxels per frame with a peak fill rate of 500 million voxels per second. Newer models, such as the Voxon VX2-XL introduced in 2024, feature a larger volume of 512 mm diameter and 256 mm height, supporting up to 16 million color voxels for enhanced applications. These systems use field-programmable gate arrays to control LED illumination, enabling real-time interactivity compatible with engines such as Unity. Helical screen projections represent another variant, where a fast-spinning diffuse surface captures projected light to form voxels along the swept path. Early developments, such as those using digital micromirror device (DMD) chips to project modulated patterns onto a rotating double-helix screen, achieve high voxel activation rates suitable for dynamic content. Representative systems can process over 100 million voxels per second, supporting full-color, multiplanar imagery in a cylindrical envelope. Safety is a key design consideration due to the high-speed motion, with enclosed housings preventing physical contact and reducing the risk of injury from rotating components. Resolution in these displays is typically limited to densities on the order of 1-5 voxels per linear millimeter, constrained by mechanical speed, data bandwidth, and the need for persistence of vision.
For example, LED-based systems achieve at least 8 million addressable voxels in a volume 165 mm high by 292 mm in diameter, equating to roughly 1-2 voxels per millimeter, which supports schematic visualizations but falls short of photorealistic detail. These techniques have found applications in art installations, particularly in galleries and exhibitions, where their immersive qualities enhance viewer engagement. Installations like those using Voxon displays for gestural interaction and volumetric animation have been featured in galleries and exhibits, allowing audiences to explore floating sculptures without headsets or glasses.
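The timing constraint on swept-volume systems follows directly from the figures above: a display refreshing V volumes per second, each built from S two-dimensional slices, must deliver V × S slice images per second. A back-of-envelope sketch with illustrative numbers:

```python
def slice_rate_hz(volumes_per_second: float, slices_per_volume: int) -> float:
    """2D cross-sections the projector or LED driver must deliver per second
    for a swept-volume display to exploit persistence of vision."""
    return volumes_per_second * slices_per_volume

# e.g., 30 volumes/s with 200 depth slices per sweep -> 6,000 2D frames
# per second, far beyond conventional display refresh rates, which is why
# DMD projectors or direct FPGA-driven LED addressing are used.
rate = slice_rate_hz(30, 200)
```

This is why slice counts, not just voxel counts, bound the achievable depth resolution of these displays.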

Static-Volume Volumetric Approaches

Static-volume volumetric approaches generate three-dimensional images within a fixed physical volume using stationary emissive elements or light-scattering media, relying on addressable voxels illuminated without any mechanical scanning or moving parts. These systems produce true volumetric imagery by exciting light emission at discrete points throughout the display space, enabling multi-viewer experiences with consistent imagery from various angles. Unlike dynamic methods, they employ fixed arrays or targeted energy deposition to create persistent or rapidly refreshed voxel patterns, supporting applications in visualization where mechanical stability is essential. One prominent implementation involves voxel-based LED arrays, consisting of layered matrices of individually addressable light-emitting diodes arranged in a three-dimensional lattice. For instance, an early prototype developed in 2005 featured an 8×8×8 cube of 512 ultra-bright LEDs with a 15 mm pitch, multiplexed to form static volumetric structures without moving parts. This configuration allows direct control of each voxel's illumination, demonstrating basic volumetric rendering in a compact, solid-state form. Laser-induced plasma displays represent another key technique, where tightly focused femtosecond laser pulses ionize air molecules to form luminous plasma voxels that emit visible light through recombination. These systems achieve glowing points in free space by delivering pulses with energies up to several millijoules, typically around 1 mJ at the threshold for producing visible sparks without significant acoustic noise or damage. A seminal demonstration in 2015 used a laser emitting 30–100 femtosecond pulses at up to 7 mJ per pulse to render aerial graphics at rates of 4,000 voxels per second within a 1 cm³ volume, highlighting the potential for untethered, transparent imagery. Up-conversion materials enable layered emission in solid media, in which rare-earth-doped glasses or nanoparticles absorb multi-wavelength near-infrared laser light and emit visible light at varying depths.
In a 2009 prototype, a 17 mm × 17 mm × 60 mm erbium-doped glass was excited by 1532 nm addressing and 850 nm imaging lasers, producing green emission at 532 nm across up to 30 slices addressed via digital micromirror devices. More recent advancements in 2025 utilized monolithic glasses doped with Ho³⁺, Tm³⁺, Nd³⁺, and Yb³⁺, excited by 808 nm and 980 nm lasers to achieve tunable RGB voxels with a color gamut covering 79.88% of a standard RGB color space within a 5 mm sample. These layered structures support high-resolution static volumes by selectively activating emission planes for full-color output. To maintain smooth visuals, these displays incorporate material persistence and refresh rates typically between 30 and 60 Hz, aligning with human flicker fusion thresholds to prevent perceptible flicker while enabling dynamic content. The short excited-state lifetimes, on the order of 10⁻⁶ seconds in up-conversion media, necessitate rapid cycling to sustain image stability. A defining advantage is isotropic viewing, as voxels emit light uniformly in all directions, allowing unrestricted observation from 360 degrees without angular distortions. Laboratory setups have achieved voxel densities exceeding 1,000 per cm³, as exemplified by a 2009 up-conversion system rendering 23 million voxels in a 17.34 cm³ volume for resolutions up to 1024 × 768 × 30. Such densities facilitate detailed representations, though practical limits arise from conversion efficiency and material transparency. A notable trade-off concerns occlusion: because voxels emit rather than absorb light, foreground voxels cannot naturally block those behind them, so interposition cues are weaker than in real scenes and must be approximated in rendering.
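The voxel-density figure quoted above can be checked directly from the resolution and volume given for the 2009 up-conversion system. A quick sketch (function name illustrative):

```python
def voxel_density_per_cm3(total_voxels: int, volume_cm3: float) -> float:
    """Average addressable voxels per cubic centimeter."""
    return total_voxels / volume_cm3

# 1024 x 768 x 30 voxels rendered in a 17.34 cm^3 volume comes out on the
# order of a million voxels per cm^3, comfortably above the 1,000/cm^3
# figure cited for laboratory setups.
density = voxel_density_per_cm3(1024 * 768 * 30, 17.34)
```

Note the anisotropy implied by the numbers: the 30-slice depth axis is sampled far more coarsely than the two in-plane axes.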

Holographic and Light Field Displays

Holographic Display Methods

Holographic displays rely on the principle of wavefront reconstruction through interference patterns, enabling the recreation of three-dimensional light fields that produce true parallax and depth cues. In the formation of a hologram, coherent light from a laser is split into an object beam, which illuminates the subject and scatters to carry its wavefront information, and a reference beam, which interferes with the object beam on a photosensitive recording medium such as photographic emulsion or photopolymer. This creates a microscopic fringe pattern that encodes both the amplitude and phase of the object wavefront, distinguishing holography from conventional imaging, which captures only intensity. The resulting hologram serves as a complex diffraction grating, capable of reconstructing the original light field when re-illuminated. Reconstruction occurs when the developed hologram is illuminated by a beam conjugate to the original reference, diffracting light to form the virtual or real image of the object. The diffracted field E_r can be approximated as the product of the original object field E_o and the reference pattern, E_r \approx E_o \cdot R, where R represents the reference wave, allowing viewers to perceive horizontal and vertical parallax as well as accommodation cues from the fully reconstructed wavefront. This process enables natural depth perception without eyewear, as the light rays emanate from apparent 3D positions in space. Holograms are classified into transmission and reflection types based on the geometry of beam incidence during recording. Transmission holograms, pioneered by Leith and Upatnieks in 1962, require the object and reference beams to enter the medium from the same side, with reconstruction using a beam passing through the hologram to produce an image viewable from one side.
In contrast, reflection holograms, developed by Denisyuk in the same year, involve beams entering from opposite sides, forming volume fringes that selectively reflect specific wavelengths, allowing viewing under white-light illumination for brighter, more accessible displays without coherent sources. Digital holography extends these principles to dynamic displays by computationally generating fringes for real-time modulation. Computer-generated holograms (CGHs) are calculated using algorithms like the fast Fourier transform (FFT) to simulate object wave propagation and interference with a reference wave, producing fringe patterns that encode scenes from 3D models or captured imagery. These patterns are then displayed on spatial light modulators (SLMs), such as liquid-crystal-on-silicon devices, which impart the phase or amplitude modulation to incoming light for reconstruction. However, SLM performance is constrained by pixel pitch, typically around 8 μm in commercial devices, limiting the spatial frequency of the fringes and thus the angular field of view and reconstruction quality. A key challenge in holographic displays is speckle noise, arising from the coherent interference of scattered laser light, which degrades image quality by introducing granular patterns. Techniques like angular multiplexing address this by recording multiple holograms at slightly different reference beam angles within the same medium, allowing sequential or summed reconstructions to average out speckle while preserving the underlying image. This method leverages the volume nature of holograms to store and retrieve diverse viewpoints, enhancing overall fidelity.
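The pixel-pitch limit on viewing angle follows from the grating equation, with the SLM acting as a sampled grating of half-period equal to the pitch. A sketch assuming 532 nm green illumination (the wavelength is not specified in the text) and the 8 μm pitch quoted above:

```python
import math

# Maximum diffraction half-angle of an SLM treated as a sampled grating:
#   theta_max = arcsin(wavelength / (2 * pitch))
wavelength = 532e-9   # m, assumed green laser illumination
pitch = 8e-6          # m, commercial SLM pixel pitch from the text

theta_max = math.degrees(math.asin(wavelength / (2 * pitch)))
print(f"half-angle: {theta_max:.2f} deg, full cone: {2 * theta_max:.2f} deg")
# ~1.9 deg half-angle, i.e. a viewing cone of only ~3.8 deg
```

The few-degree result illustrates why coarse pixel pitch, rather than pixel count, is the dominant constraint on holographic field of view.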

Light Field and Integral Imaging Systems

Light field displays approximate three-dimensional scenes by sampling and reconstructing the directional distribution of rays emanating from points in space, providing glasses-free viewing with correct perspective and parallax cues without the computational intensity of full holography. These systems parameterize the light field as a four-dimensional function L(u,v,s,t), where (u,v) represent spatial coordinates on one plane and (s,t) denote angular coordinates on another, capturing the radiance along rays passing through free space in a static scene. This parameterization enables the representation of light flow without occluders, reducing the full five-dimensional plenoptic function to four dimensions since radiance remains constant along unobstructed rays. Light fields are typically captured using plenoptic cameras, which integrate a microlens array in front of an image sensor to record both spatial and angular information in a single exposure, allowing post-capture refocusing and novel view synthesis. Integral imaging, a related technique, employs a microlens array to form an array of elemental images, each capturing a unique perspective of the scene from slightly offset viewpoints. During reconstruction, these elemental images are computationally back-projected through a virtual pinhole array using ray tracing or geometric optics models, synthesizing volumetric voxels at specified depths to recreate the scene for autostereoscopic viewing. This method supports multi-view output with natural accommodation, though it trades off spatial resolution for angular extent due to the finite number of microlenses. To address the high data demands of dense light field sampling, compressive light field displays employ optimization algorithms that exploit sparsity in the ray data, decomposing the light field into lower-dimensional representations for efficient rendering. These systems often use stacked layers of liquid-crystal displays (LCDs) with a directional backlight, where nonnegative tensor factorization iteratively optimizes layer attenuations to approximate the target light field, enabling wider fields of view (e.g., 50° horizontally) and greater depth of field in thin form factors.
The sparsity arises from representing non-physical or redundant rays with binary weights, reducing memory and computation while maintaining perceptual fidelity through GPU-accelerated multiplicative updates. View synthesis in light field systems generates intermediate perspectives from sparse input views or 2D images augmented with depth maps, facilitating multi-view displays with angular resolutions typically ranging from 20 to 50 views for practical glasses-free operation. By estimating depth from input views and warping pixels accordingly, algorithms like those using convolutional neural networks produce dense angular sampling, with performance improving as input views increase from 2 (for stereo) to 5 (for full surround), achieving higher PSNR values while reducing synthesis time by up to 40% with optimized view selections. Recent advances, such as ultra-thin light field panel displays, integrate freeform directional backlights with micro-prism layers to achieve over 120° viewing angles in a 28 mm thick prototype, enabling wide-angle applications like immersive medical visualization with resolutions six times finer than conventional systems.
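The multiplicative-update optimization behind compressive layered displays can be sketched at toy scale with plain nonnegative matrix factorization (the actual tensor-display formulation optimizes time-multiplexed layer stacks, which this simplified example does not model):

```python
import numpy as np

# Toy sketch: approximate a nonnegative target "light field" matrix L with
# a low-rank product of two nonnegative factors, using Lee-Seung
# multiplicative updates so values stay valid attenuations (>= 0).
rng = np.random.default_rng(0)
L = rng.random((16, 16))   # stand-in for sampled ray intensities
A = rng.random((16, 4))    # e.g. front-layer patterns (rank 4)
B = rng.random((4, 16))    # e.g. rear-layer patterns

for _ in range(200):
    A *= (L @ B.T) / (A @ B @ B.T + 1e-9)  # multiplicative updates keep
    B *= (A.T @ L) / (A.T @ A @ B + 1e-9)  # both factors nonnegative

err = np.linalg.norm(L - A @ B) / np.linalg.norm(L)
print(f"relative reconstruction error: {err:.3f}")
```

Because every update is a multiplication by a nonnegative ratio, no clamping is needed, which is why this family of updates maps well to the GPU-accelerated pipelines mentioned above.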

Applications

Entertainment and Consumer Media

In the realm of entertainment, 3D displays have significantly influenced cinema through the adoption of polarized 3D systems in major blockbusters, particularly following the success of Avatar in 2009, which spurred a surge in 3D production during the early 2010s. These films utilized passive polarized glasses to deliver stereoscopic viewing in theaters, contributing to a global 3D box office of $6.1 billion in 2010, more than double the $2.5 billion from 2009, and accounting for 21% of U.S. theatrical revenue that year. The technology's appeal was bolstered by premium ticket pricing, often $3 to $5 higher than standard admissions (up to $18 in some markets), which drove increased revenue per screening for 3D presentations. Home theater systems embraced 3D displays in the early 2010s with the introduction of the 3D Blu-ray standard in 2010, enabling high-definition stereoscopic playback on compatible televisions using active shutter glasses that alternately block light to each eye for stereo separation. Major manufacturers launched 3D-capable HDTVs that year, bundling them with active glasses to support frame-sequential 3D content from Blu-ray discs, though adoption waned by mid-decade due to content scarcity and viewer fatigue. In gaming, autostereoscopic displays gained traction with the Nintendo 3DS, released in 2011, which featured a parallax-barrier screen for glasses-free 3D viewing and achieved total sales of 75.94 million units worldwide as of September 2025. This handheld console supported immersive 3D gameplay in its flagship titles, enhancing depth without accessories and broadening the appeal of portable entertainment. Virtual reality headsets, such as the PlayStation VR launched in 2016, extended 3D immersion to console gaming with over 5 million units sold by the end of 2019, powering stereoscopic experiences in compatible games. Mobile devices explored glasses-free 3D early with the HTC Evo 3D in 2011, which employed a parallax barrier for autostereoscopic viewing of photos, videos, and games on its 4.3-inch qHD screen.
By 2025, advancements in eye-tracking technology enabled more sophisticated implementations, such as the 9i laptop's optional 18-inch 3D panel using lenticular lenses to deliver real-time stereoscopic effects tailored to the user's gaze, and the DIGIERA HoloMax hybrid device's 10.95-inch 2.5K autostereoscopic screen for portable gaming and media consumption. Content creation for 3D entertainment relies on stereoscopic camera rigs, which typically consist of two synchronized cameras offset by an interaxial distance mimicking interpupillary separation to capture left- and right-eye footage simultaneously. In post-production, depth grading refines this footage by adjusting disparity maps to control perceived depth, convergence, and rounding errors, ensuring comfortable viewing and enhanced immersion across cinema, TV, and VR formats. Tools like stereo compositing software allow creators to layer elements with independent depth adjustments, followed by stereoscopic consistency checks to maintain alignment between views.
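For a parallel stereo rig, the on-sensor disparity of a point at depth Z follows the standard geometry d = f·b/Z. A minimal sketch with illustrative values (the focal length and baseline below are assumptions, not figures from the text):

```python
# Parallel stereo rig geometry: image-plane disparity for a point at
# depth Z is d = f * b / Z, with baseline b and focal length f.
f = 0.035      # focal length, m (assumed 35 mm lens)
b = 0.065      # interaxial baseline, m (~ average interpupillary distance)

for Z in (1.0, 5.0, 20.0):     # object depths in metres
    d_mm = f * b / Z * 1000.0  # disparity on the sensor, in mm
    print(f"Z = {Z:5.1f} m -> disparity = {d_mm:.3f} mm")
```

The inverse dependence on Z is what depth grading manipulates: remapping disparities shifts where objects appear relative to the screen plane.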

Medical, Scientific, and Industrial Uses

In medicine, 3D displays enable surgeons to interact with holographic reconstructions of MRI and CT scans, facilitating precise preoperative planning and reducing procedural risks. For instance, EchoPixel's True 3D system, introduced in the mid-2010s, converts standard medical images into interactive 3D holograms that physicians can manipulate in real time using gestures, allowing for better visualization of complex anatomies such as cardiac structures. This technology has been particularly valuable in cardiovascular surgery, where it supports detailed assessment of patient-specific anomalies without the limitations of flat-screen views. Scientific visualization leverages volumetric displays to render intricate datasets, such as molecular structures or astrophysical simulations, providing researchers with immersive insights into multidimensional data. In molecular modeling, these displays allow for the examination of protein configurations and interactions in true 3D space, enhancing analytical accuracy over traditional 2D representations. NASA's Scientific Visualization Studio has incorporated 3D visualization techniques in the 2020s to depict cosmic objects and simulations, aiding in the interpretation of large-scale astrophysics data for mission planning and public outreach. Volumetric technologies also support true-scale models for collaborative scientific review, enabling multi-viewer interaction without headsets or glasses. In engineering, autostereoscopic displays integrate with CAD software to enable glasses-free 3D reviews of prototypes, streamlining workflows in sectors like automotive design. Engineers use these systems to evaluate designs collaboratively, identifying spatial issues early and reducing the need for costly physical models by up to 50% in some development cycles. This approach accelerates iteration in product lifecycle management, as seen in virtual prototyping for aerodynamic testing and interior layout optimization. Haptic integration with 3D displays enhances surgical simulations by providing tactile feedback, simulating resistance and tissue interactions to improve training outcomes.
Studies show that such systems significantly boost task accuracy and reduce applied forces during simulated procedures, with a meta-analysis reporting medium to large effect sizes (Hedges' g = 0.83 for average forces and g = 0.69 for peak forces) in force control compared to non-haptic setups. This combination fosters skill transfer to real operations, particularly in minimally invasive techniques. A notable case study in 2025 involves augmented reality glasses for telemedicine, where medical-grade smart glasses enable remote diagnostics through real-time 3D overlays of patient scans and clinical data. These tools allow specialists to guide on-site clinicians via augmented visualizations, enhancing diagnostic accuracy in underserved areas and reducing travel needs for consultations. Integration with artificial intelligence further refines interpretations, supporting applications from emergency response to routine follow-ups.

Challenges and Future Directions

Technical and Perceptual Limitations

One of the primary perceptual limitations in stereoscopic displays arises from the vergence-accommodation conflict (VAC), where the eyes' vergence (rotation of the eyes toward or away from each other to fixate on an object) and accommodation (lens adjustment for focus) cues are mismatched due to the fixed focal plane of the display surface. This uncoupling disrupts natural oculomotor coordination, leading to eye strain, headaches, and reduced visual performance, as the brain struggles to reconcile conflicting depth signals from stereo disparity and monocular focus cues. The conflict is particularly pronounced when the angular disparity exceeds 1°, causing noticeable discomfort and fusion difficulties beyond the natural depth of focus (±0.3 diopters) or Panum's fusional area (15–30 arcminutes).

Technical constraints in 3D display hardware, such as moiré and aliasing artifacts, further degrade image quality and perceived depth. In autostereoscopic displays, moiré effects emerge from the interference between the periodic lenticular or parallax-barrier array and the subpixel structure of the underlying LCD or OLED panel, resulting in unwanted color fringes and distorted 3D imagery that reduce spatial fidelity. These artifacts are exacerbated when subpixel sampling violates the Nyquist limit, requiring anti-aliasing filters to prevent them but often introducing blur that compromises the display's effective resolution for multiview rendering. Brightness and contrast are also significantly diminished in many systems, particularly those relying on polarization for stereo separation. Polarized setups, including passive-glasses and projection systems, attenuate light by approximately 50% as only one polarization state passes through the filters to each eye, leading to dimmer images and lower contrast ratios that hinder visibility in brightly lit environments. Prolonged viewing induces visual fatigue, with studies indicating higher dropout rates in extended 3D sessions compared to 2D viewing, attributed to cumulative effects of VAC, flicker from active shutter technologies, and sustained binocular fusion demand.
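The vergence-accommodation mismatch can be expressed simply in diopters (reciprocal metres): accommodation stays locked to the screen, while vergence follows the simulated depth. A sketch with an assumed 0.5 m screen distance:

```python
# Illustrative vergence-accommodation mismatch for a fixed-focus display.
# Accommodation demand = 1/screen_distance; vergence demand = 1/object
# distance; the conflict is their difference in diopters.
screen_m = 0.5                      # assumed display (focal) distance
for virtual_m in (0.5, 1.0, 2.0):   # simulated object depths
    conflict_d = abs(1 / screen_m - 1 / virtual_m)  # diopters
    ok = conflict_d <= 0.3          # within the ~±0.3 D depth of focus?
    print(f"object at {virtual_m} m: conflict {conflict_d:.2f} D "
          f"({'comfortable' if ok else 'likely uncomfortable'})")
```

Only content near the physical screen plane stays inside the ±0.3 diopter comfort zone; a virtual object just 1 m behind a 0.5 m screen already produces a 1 D conflict.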
Accessibility remains a key human-factor limitation, as 3D displays are unsuitable for approximately 5% of the population affected by stereoblindness, with broader estimates for reduced stereoacuity ranging from 1% to 10%; individuals with conditions like strabismus or amblyopia cannot perceive depth from binocular disparity, rendering stereo content ineffective or causing additional discomfort.

Recent advancements in artificial intelligence have significantly enhanced holographic displays by enabling machine learning-based computation of fringes in computer-generated holography (CGH). Deep learning models, such as generative adversarial networks and neural holography frameworks, accelerate the generation of high-fidelity holograms from 2D inputs, achieving latencies as low as 30 milliseconds on specialized processors like the Real-time Holography Processor (RHP) developed by ETRI. This approach reduces computational overhead compared to traditional iterative methods, making interactive 3D holography feasible for applications in virtual and augmented reality without perceptible delays. In augmented reality (AR), varifocal and light field technologies are advancing toward focus-adjustable depth rendering to mitigate vergence-accommodation conflicts. University researchers, in collaboration with industry partners, unveiled a holographic AR headset in 2025, featuring an ultrathin 3 mm waveguide that delivers full-color 3D images with adjustable focal depths across a wide field of view, calibrated via machine learning for optimal perceptual realism. This eyeglass-form factor supports lifelike depth cues, allowing users to naturally refocus on virtual objects at varying distances, paving the way for comfortable, all-day wearable AR experiences. Glasses-free 3D monitors are also gaining traction with demonstrations of wide-angle viewing capabilities. In 2025, TCL CSOT presented a 106-inch curved glasses-free display using directional light field technology with lenticular lenses, supporting multi-viewer experiences across viewing angles up to 180 degrees without headgear. These systems employ multi-layer optical structures to direct light rays precisely to each viewer's eyes, enhancing immersion for consumer and collaborative environments.
The 3D display market is poised for substantial expansion, projected to reach USD 510.91 billion by 2032, growing at a compound annual growth rate (CAGR) of 17.1% from USD 169.69 billion in 2025. This surge is primarily driven by the metaverse's demand for immersive virtual environments and the integration of advanced head-up displays (HUDs) in automotive applications, where depth-registered imagery improves driver awareness and safety. Sustainability trends in 3D displays emphasize low-power designs leveraging metasurfaces for efficient light manipulation. Electromagnetic metasurfaces enable energy-efficient 3D imaging by reducing power consumption in structured light projection by 5–10 times compared to conventional laser systems, as demonstrated in broadband achromatic metalens arrays for light field rendering. These passive, nanoscale structures support eco-friendly manufacturing with minimal material use, enabling compact, low-energy tensor-based displays for next-generation wearable and volumetric systems.
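The market projection is internally consistent; compounding the 2025 base at the stated rate over seven years lands within rounding of the quoted 2032 figure:

```python
# Sanity check of the quoted projection: USD 169.69 B in 2025 growing at
# a 17.1% CAGR over 2032 - 2025 = 7 years.
base, cagr, years = 169.69, 0.171, 2032 - 2025
projected = base * (1 + cagr) ** years
print(f"projected 2032 market: USD {projected:.2f} B")
# ~USD 512 B; the small gap from 510.91 comes from the rounded 17.1% rate
```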

References

  1. [1]
    Three-dimensional display technologies - PMC - PubMed Central
    This article provides a systematic overview of the state-of-the-art 3D display technologies. We classify the autostereoscopic 3D display technologies into ...
  2. [2]
    (PDF) 3D Display Technology - ResearchGate
    Aug 10, 2025 · This paper will review the major trends in 3D-display technology, and covers stereoscopic display, integral imaging display, head-mounted display, volumetric ...
  3. [3]
    [PDF] Stereo & 3D Display Technologies Introduction - Research
    Most 3D displays fit into one or more of three broad categories: Stereo pair, holographic, and multiplanar or volumetric. Stereo pair based technologies ...
  4. [4]
    3D Is Back. This Time, You Can Ditch the Glasses - WIRED
    May 25, 2025 · There's a new wave of 3D coming. Laptops, tablets, and even computer monitors have started embracing a new form of 3D technology that solves this problem ...<|control11|><|separator|>
  5. [5]
    XVIII. Contributions to the physiology of vision. —Part the first. On ...
    Contributions to the physiology of vision. —Part the first. On some remarkable, and hitherto unobserved, phenomena of binocular vision. Charles Wheatstone.
  6. [6]
    [PDF] The stereoscope [electronic resource] : its history, theory, and ...
    ... STEREOSCOPE. ITS HISTORY, THEORY, AND CONSTRUCTION. WITH ITS APPLICATION TO THE FINE AND USEFUL ARTS. AND TO EDUCATION. BY. SIR DAVID BREWSTER, K.H., D.C.L. ...
  7. [7]
    Stereoscopic daguerreotype depicting a portrait of a young woman
    Stereoscopic daguerreotype, hand-tinted with gilt details, depicting a portrait of a young woman by Antoine Claudet. French School, London, ca. mid 1850s.
  8. [8]
    Stereoscope, hand-held - Smithsonian Gardens
    Oliver Wendell Holmes made further contributions to the stereoscope by introducing a handheld stereoscopic viewer in 1861 that was both streamlined and more ...
  9. [9]
    Three-dimensional displays, past and present - Physics Today
    Apr 1, 2013 · ... The Power of Love, was released in 1922 in anaglyph form. The ... first anaglyph single-projector 3D film created for the IMAX dome.
  10. [10]
    Bwana Devil | Kentucky Scholarship Online | Oxford Academic - DOI
    Abstract. Chapter one describes the production and release of Bwana Devil, the 1952 3D film that launched the 3D boom in Hollywood.
  11. [11]
    3D glasses for 'Bwana Devil' | Science Museum Group Collection
    Pair of polarised 3D glasses to be worn whilst watching the film 'Bwana Devil', 1952. Bwana Devil is a 1952 American adventure B movie.Missing: polarization | Show results with:polarization
  12. [12]
    What Killed 3D Films? - 3D Film Archive
    When BWANA DEVIL premiered on November 26 1952, there had already been at least a dozen 3-D films shown in the US alone. The first feature was THE POWER OF ...
  13. [13]
    The History of 3D Movie Tech - IGN
    Apr 23, 2010 · The reasons for the decline were mostly technical. 3D projectors required two reels to be displayed in perfect synchronization. Small errors in ...
  14. [14]
    Three Dimensional Television
    Early 3D TV used two side-by-side images, viewed with crossed eyes or special viewers, using two picture tubes at 45-degree angles with polarized lenses.
  15. [15]
    The Age of IMAX, or the “Immersive Cinema,” 1986–2009 (Chapter 5)
    Oct 22, 2021 · The first permanent IMAX 3D theater was built in Vancouver, British Columbia, in 1986 for Expo 86, where it showed the first IMAX 3D film, Transitions.
  16. [16]
    James Cameron Says 3D Not Dead Yet Ahead of 'Avatar 2' Opening
    Sep 14, 2022 · The director said, “'Avatar' won the best cinematography with a 3D digital camera. No digital camera had ever won the best cinematography Oscar ...
  17. [17]
    Philips demonstrates 3DTV without 3D glasses - FlatpanelsHD
    At IFA 2010 Philips has showcased a prototype of a 3D display that does not require 3D glasses. Here are our impressions.
  18. [18]
    Nintendo 3DS draws gamers looking for 'glasses-free' 3D console
    Mar 24, 2011 · The console's screen uses technology known as a parallax barrier, which overlays an LCD display with a series of precise slits providing a ...
  19. [19]
    Oculus Rift virtual reality gaming goggles launched on Kickstarter ...
    Aug 1, 2012 · The Oculus Rift, a virtual reality head-mounted display for gaming, has been launched on Kickstarter with the purpose of building and shipping ...
  20. [20]
    Apple Vision Pro available in the U.S. on February 2
    Jan 8, 2024 · Apple Vision Pro will be available beginning Friday, February 2, at all U.S. Apple Store locations and the U.S. Apple Store online.Facetime Becomes Spatial · Breakthrough Design · Unrivaled Innovation
  21. [21]
    BOE Attends Display Week 2025 with Its Cutting-Edge New ...
    May 14, 2025 · At this exhibition, BOE introduced a series of industry-leading technologies and products based on AI + Display, injecting new momentum into the ...
  22. [22]
    Display Week > 2025 > Program > Special Topics
    This special topic covers the applications of artificial intelligence in all aspects of display technologies and manufacturing.
  23. [23]
    Normal values and standard deviations for pupil diameter ... - PubMed
    Normal values of pupil diameters and interpupillary distances (PDs) were measured in a population of 1311 subjects (in 4294 visits) ranging from 1 month of age ...
  24. [24]
    Perception Lecture Notes: Depth, Size, and Shape
    Binocular disparity is defined as the difference in the location of a feature between the right eye's and left eye's image. The amount of disparity depends ...Missing: formula | Show results with:formula
  25. [25]
    Stereopsis: How the brain sees depth - ScienceDirect.com
    If the two eyes fixate a point in space, then objects nearer or further than that point will cast images onto different locations on the two retinae (see Figure ...
  26. [26]
    Neural mechanisms underlying binocular fusion and stereopsis
    There are two possible mechanisms for encoding binocular disparity through simple cells in the striate cortex: a difference in receptive field (RF) position ...
  27. [27]
    The Role of Vertical Disparity in Distance and Depth Perception as ...
    Vertical binocular disparity is a source of distance information allowing the portrayal of the layout and 3D metrics of the visual space.
  28. [28]
    Vergence–accommodation conflicts hinder visual performance and ...
    The uncoupling of vergence and accommodation required by 3D displays frequently reduces one's ability to fuse the binocular stimulus and causes discomfort and ...
  29. [29]
    Stereoscopic perception of real depths at large distances | JOV
    For example, Howard (1919) showed that good stereoscopic observers are capable of detecting depth differences corresponding to binocular disparities of only a ...Missing: exploit | Show results with:exploit
  30. [30]
    Factors affecting depth perception and comparison of depth ...
    Sep 25, 2020 · Monocular cues consist of static information including relative size, perspective, interposition, lighting, and focus cues (image blur and ...
  31. [31]
    [PDF] A Classification of Visual Depth Cues, 3D Display Technologies and ...
    The realization that the visual system uses a number of depth cues to perceive and distinguish the distance of objects in their environment encouraged designers ...
  32. [32]
    Modeling depth from motion parallax with the motion/pursuit ratio
    The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance.
  33. [33]
    Integration time for the perception of depth from motion parallax - PMC
    Our recent quantitative model for the perception of depth from motion parallax proposes that relative object depth (d) can be determined from retinal image ...
  34. [34]
    Depth from accommodation and vergence
    At a viewing distance of 50 cm, when vergence was the only cue to depth, the standard deviation of pointing in the depth dimension was +25 arcmin of disparity.
  35. [35]
    Depth cues in human visual perception and their realization in 3D ...
    Depth cues include monocular (pictorial and motion) and binocular cues. Oculomotor cues like accommodation and convergence are also important for depth ...
  36. [36]
  37. [37]
  38. [38]
    [PDF] Anaglyphs Stereo Imagery - Lunar and Planetary Institute
    In an anaglyph, when a given color filter stops the other colors, it is called subtractive filtration. Because the red and blue images are slightly offset ...
  39. [39]
    [PDF] State of the Art in Stereoscopic and Autostereoscopic Displays - UniTS
    corresponding to each image in the stereo pair is made mutually orthogonal. Linear or circular polarization can be employed but the latter allows more head ...
  40. [40]
    Autostereoscopic Display - an overview | ScienceDirect Topics
    "Autostereoscopic displays refer to techniques that present stereoscopic imagery without the need for glasses, commonly using methods such as parallax ...
  41. [41]
    Barrier Grid Animation and Stereography | Encyclopedia MDPI
    Nov 18, 2022 · On December 5, 1901 American inventor Frederic Eugene Ives presented his "parallax stereogram" at the Franklin Institute of the State of ...
  42. [42]
    [PDF] MULTI–VIEW AUTOSTEREOSCOPIC 3D DISPLAY ... - Neil Dodgson
    Autostereoscopic displays provide 3D perception without the need for special glasses or other head gear. Three basic technologies exist to make such ...
  43. [43]
    Analysis of the viewing zone of multi-view autostereoscopic displays
    Aug 9, 2025 · Multiview autostereoscopic 3D displays can offer natural 3D images using directional pixels with optical layers such as parallax barriers or ...
  44. [44]
    Time-multiplexed three-dimensional displays based on directional ...
    May 1, 2006 · An autostereoscopic display using a directional backlight with a fast-switching liquid-crystal (LC) display was designed and fabricated to ...
  45. [45]
    Investigation of Autostereoscopic Displays Based on Various ... - NIH
    The autostereoscopic display is a promising way towards three-dimensional-display technology since it allows humans to perceive stereoscopic images with naked ...
  46. [46]
    Volumetric Displays - Survey of Alternative Displays - GitBook
    Jun 4, 2024 · Below we'll cover volumetric displays that use the principles of using mechanical or persistence of vision illusions to create images.
  47. [47]
    (PDF) The classification of volumetric display systems - ResearchGate
    Aug 9, 2025 · A bottom-up classification of swept-volume systems in terms of the image space creation and voxel generation subsystems. The voxel activiation ...<|separator|>
  48. [48]
    Voxon VX1: A Volumetric Display That Shows 3D Models ... - 80 Level
    Nov 28, 2022 · The VX1 volumetric display and how it works, and explained how the device can be used to turn your 3D renders into hologram.
  49. [49]
    Volumetric 3D Display Market Size, Growth & Trends Report 2032
    Swept volume displays, unlike static displays, allow for real-time interaction with 3D objects, making them well-suited for immersive applications like ...
  50. [50]
    Voxon's US$10,000 hologram table – no glasses required - New Atlas
    Nov 28, 2019 · Interactive 3D images that appear to float in the air, above a table that a group of people can stand around without needing any special headsets or glasses.
  51. [51]
    Volumetric 3D display provides true perception of objects - SPIE
    Jul 23, 2012 · The modulated image patterns are projected by optics onto a rotating double helix screen. We use a DLP chip set provided by Texas Instruments ( ...
  52. [52]
    Laser-projected 3D volumetric displays - NASA ADS
    Its rotating helical screen sweeps out a cylindrical envelope, providing a volumetric display medium through which scanned laser pulses are projected. The light ...
  53. [53]
    A photoswitchable handheld volumetric 3D display - ScienceDirect
    Dec 12, 2024 · Swept-volume displays typically use a screen that rapidly rotates or translates through a volume of space while a high-speed light source or ...
  54. [54]
    [PDF] Multi-Finger Gestural Interaction with 3D Volumetric Displays
    Volumetric displays provide interesting opportunities and challenges for 3D interaction and visualization, particularly when used in a highly interactive ...
  55. [55]
    [PDF] A Volumetric 3D LED display - MIT
    Dec 11, 2005 · For this project a dot-matrix volumetric display was built using a lattice of 512 LEDs, arranged in an 8x8x8 cube. The number of voxels was ...
  56. [56]
  57. [57]
    Full-color dynamic volumetric displays with tunable upconversion ...
    Jan 2, 2025 · In this study, we present a comprehensive investigation into easily scalable rare-earth (RE 3+ ) doped monolithic glasses (RE = Ho, Tm, Nd, Yb) capable of ...
  58. [58]
    Volumetric Displays: Turning 3-D Inside-Out
    A volumetric system turns traditional 3-D inside-out, to create screenless real images that place no limitations on the viewer's position.
  59. [59]
    Properties and Limitations of Hologram Recording Materials
    In the recording of a hologram, the object beam and coherent reference beam interfere within some recording medium, forming an optical standing wave pattern ...
  60. [60]
    Interference and interferometry in electron holography - PMC - NIH
    Holography requires two-step procedures: (i) recording holograms and (ii) reconstructing image data to retrieve phase information of the waves from the hologram ...
  61. [61]
  62. [62]
    Holography: Origin, Rediscovery, Development and Beyond
    There are basically two-types of holography that have been developed and they are; Leith's transmission- type holography and Denisyuk's reflection-type ...
  63. [63]
    Review of computer-generated hologram algorithms for color ... - NIH
    Jul 26, 2022 · In the same year, Denisyuk invented the reflection hologram, which can be reconstructed by white light because of the high degree of ...
  64. [64]
    Computer generated holograms from three dimensional meshes ...
    Using the proposed method only a single FFT is needed to be performed on the whole hologram plane once the angular spectra from all triangles are accumulated.
  65. [65]
    Collective matrix of spatial light modulators for increased resolution ...
    Jun 19, 2018 · Two Holoeye Pluto VIS-014 SLMs with a resolution of 1920 by 1080 pixels and the pixel pitch 8µm ( Fig. 6(a) ) were examined in the Mach ...
  66. [66]
    Angular and speckle multiplexing of photorefractive holograms by ...
    In this paper we show that the fiber speckle-referencing scheme also enhances the angular selectivity of hologram. We also show the feasibility of speckle ...
  67. [67]
    [PDF] Light Field Rendering - Stanford Computer Graphics Laboratory
    A light field is the radiance at a point in a given direction, a 4D function characterizing light flow in a static scene.
  68. [68]
    [PDF] Plenoptic Cameras - Publications
    With the 4D light field, it is very simple to simulate such a behavior: L′(u, v, s, t) = b(u, v)L(u, v, s, t) with b being a function mimicking the aperture ...
  69. [69]
    Three-dimensional volumetric object reconstruction using computational integral imaging
    ### Summary of Integral Imaging Method Using Microlens Array
  70. [70]
    Integral imaging: autostereoscopic images of 3D scenes - SPIE
    Nov 17, 2006 · In the reconstruction stage, the set of elemental images is displayed in front of a second microlens array, providing the observer with a ...
  71. [71]
    [PDF] Tensor Displays: Compressive Light Field Synthesis using Multilayer ...
    We advocate for multilinear optimization as a practical tool for compressive light field synthesis using tensor displays. ... field display using multi-layer LCDs ...
  72. [72]
    Fast and Accurate Light Field View Synthesis by Optimizing Input ...
    May 13, 2021 · In this paper, we explore the relationship between different input view selections with the angular super-resolution reconstruction results.
  73. [73]
  74. [74]
    3D Movies – The Next Generation Cinema - All About 3D TV
    After the massive success of Avatar in 2009, the early 2010s saw a surge in 3D movie production. Major studios released blockbusters in 3D, including Alice in ...
  75. [75]
    Global 3D Box Office More Than Doubled in 2010
    Aug 12, 2011 · Worldwide box office revenue from 3D screens more than doubled last year to $6.1 billion, up from $2.5 billion in 2009.Missing: adoption polarized
  76. [76]
    3D cinema hits 8-year low - FlatpanelsHD
    In 2010, 3D movies grossed a total of $2.2 billion, which corresponded to 21% of total theatrical revenue. In 2017, revenue dropped to 1.3 billion ...
  77. [77]
    How 3D Movies Are Saving Box Office Sales - Business Insider
    Mar 10, 2010 · 3D movies get higher revenues because of premium ticket prices--up to $18 a pop in some areas. John Fithian, the chief executive of the National Association of ...
  78. [78]
    Resistance Forms Against Hollywood's 3-D Push
    Aug 2, 2010 · Tickets for 3-D films carry a $3 to $5 premium, and industry ... box office. Home sales for 3-D hits like “Avatar” and “Monsters vs ...
  79. [79]
    3D Blu-ray player roundup--what you need to know - CNET
    Jan 8, 2010 · The new 3D Blu-ray format uses active shutter 3D glasses and requires one of the new 3D HDTVs that have been announced at CES 2010. Some ...
  80. [80]
    CEDIA: Sharp launches first 3D TVs, 3D Blu-ray players
    Sep 27, 2010 · The TVs come with two pairs of active shutter 3D glasses. The ability to convert 3D content to 2D images on the fly by pushing a button on ...
  81. [81]
    3D TV: Why you'll (someday) own one whether you like it or not
    Jan 10, 2010 · ... 3D TV: active shutter glasses (vs. the passive glasses used in cinema 3D). The basic idea is that the 3D display shows video as a series of ...
  82. [82]
    IR Information : Sales Data - Dedicated Video Game Sales Units
    Dedicated Video Game Sales Units (as of September 30, 2025). Nintendo 3DS: Hardware 75.94 million units; Software 392.29 million units ...
  83. [83]
    Business Data & Sales - Sony Interactive Entertainment
    PlayStation VR, 5.0 million (As of December 31, 2019). PlayStation 4, More than 113.5 million (As of September 30, 2020). PlayStation 5, 56 million+ (As of ...
  84. [84]
    PlayStation VR sales have topped 5 million units worldwide
    The VR headset had sold 4.2 million units as of March 2019, meaning another 800,000 devices have been sold in the 10 months since.
  85. [85]
    First U.S. Glasses-Free 3D Smartphone: HTC EVO 3D - NBC Bay Area
    Joining the Nintendo 3DS with its parallax barrier screen for glasses-free 3D goodness, the EVO 3D boasts a 4.3-inch qHD screen (that's 960 x ...
  86. [86]
    Lenovo Legion 9i (2025) Laptop Boasts 18” PureSight Display with ...
    May 8, 2025 · The optional 3D display uses eye-tracking technology and a lenticular lens array to produce a glasses-free 3D effect. This makes visuals ...
  87. [87]
    DIGIERA Unveils HoloMax -- World's First Glasses-free 3D Laptop ...
    Sep 5, 2025 · HoloMax pairs an immersive 10.95" 2.5K, 120Hz autostereoscopic display with real-time eye-tracking and a high-performance compute platform to ...
  88. [88]
    3D - A Beginners Guide to Stereoscopic Understanding - John Daro
    Jul 31, 2019 · Stereo color grading is an additional step, when compared to standard 2D finishing, which needs to be accomplished after the depth grade is ...
  89. [89]
    Doctors can now get a 3-D holographic look at your insides - CNBC
    May 26, 2016 · She uses EchoPixel's True 3-D software. It takes data from CT and MRI scans and transforms it into 3-D holographic images so she can view and ...
  90. [90]
    True 3D: Unlocking the full potential of medical imaging
    Nov 6, 2014 · True 3D (EchoPixel, Mountain View, CA), is an innovative medical visualization software platform that presents image data in an open, 3D space.
  91. [91]
  92. [92]
    [PDF] A Volumetric Framework for Registration, Analysis and Visualization ...
    Volumetric Framework for Nanostructure Analysis. Our work focuses on molecular models that are cumbersome to visualize as ball-and-stick and static isosurfaces.
  93. [93]
    NASA Scientific Visualization Studio
    NASA Scientific Visualization Studio produces visualizations, animations, and images in order to promote a greater understanding of Earth and Space Sciences.
  94. [94]
    “Holographic” Autostereoscopic Displays: A Perspective on Their ...
    Sep 14, 2023 · Volumetric displays create 3D images by illuminating a physical volume of space, allowing users to perceive depth and parallax without the need ...
  95. [95]
    [PDF] New Methodologies for Automotive PLM by Integrating 3D CAD and ...
    Basically, virtual prototyping (VP) is the use of software to validate a design before committing to make a physical prototype. For example, virtual prototyping ...
  96. [96]
    New Methodologies for Automotive PLM by Integrating 3D CAD and ...
    Oct 18, 2024 · PDF | The thesis explores how 3D virtual reality methods can improve a function-oriented automotive development | Find, read and cite all ...
  97. [97]
    The benefits of haptic feedback in robot assisted surgery and their ...
    Nov 6, 2023 · Haptic feedback has also been found to lead to higher accuracy (Hedges' g = 1.50) and success rates (Hedges' g = 0.80) during surgical tasks.
  98. [98]
    Impact of haptic feedback on surgical training outcomes - NIH
    This study demonstrates better performance for an orthopaedic surgical task when using a VR-based simulation model incorporating haptic feedback.
  99. [99]
  100. [100]
    AR in Healthcare: Revolutionizing Surgery & Remote Monitoring
    Aug 22, 2025 · Explore how AR in healthcare is transforming surgical precision, remote patient monitoring, and medical training with data from 2025.
  101. [101]
    Development of an AI-powered AR glasses system for real-time first ...
    Aug 26, 2025 · When combined, AR and AI create robust, context-aware systems that enhance diagnostics, medical education, and real-time emergency response.
  102. [102]
    [PDF] Resolving the Vergence-Accommodation Conflict in Head-Mounted ...
    Abstract—The vergence-accommodation conflict (VAC) remains a major problem in head-mounted displays for virtual and augmented reality (VR and AR).
  103. [103]
  104. [104]
    Color moiré reduction and resolution enhancement of flat-panel ...
    Color moiré occurs owing to the subpixel structure of the display panel in the integral three-dimensional (3D) display method, deteriorating the 3D-image ...
  105. [105]
    [PDF] UNIVERSITY OF CALIFORNIA, SAN DIEGO Signal processing for ...
    the 3D display, antialias filtering is required according to Nyquist sampling theory. However, this leads to noticeable spatial blur. Also, multiview 3D ...
  106. [106]
    Development and clinical applications of glasses-free three ... - NIH
    In a glasses-assisted 3D system, only horizontal or vertical light can pass through the lens; as a result, the light passing through glasses is reduced by 50%, ...
  107. [107]
    Design guidelines for limiting and eliminating virtual reality-induced ...
    (2020) found a mean dropout rate of 15.6% (min = 0%, max. = 100%) based on data reported in 44 empirical studies from the 55 selected for their systematic ...
  108. [108]
    Visual fatigue caused by watching 3DTV: an fMRI study - PubMed
    The objective of this study is to observe the visual fatigue caused by watching 3DTV using the method of functional magnetic resonance imaging (fMRI).
  109. [109]
    A population study of binocular function - ScienceDirect.com
    Our results may help to explain why estimates of stereo blindness from population studies have varied widely from 1% to 14% (Bohr and Read, 2013, Coutant ...
  110. [110]
    Incidence of stereo blindness in a recent VR distance perception ...
    Aug 6, 2025 · Between 5% to 20% of people are observed to be more or less "stereo-blind," meaning their visual perception may not fuse stereoscopic ...
  111. [111]
    ETRI develops processor for real-time hologram generation
    Mar 24, 2025 · Korean researchers have developed a digital holography processor that converts two-dimensional (2D) videos into real-time three-dimensional (3D) holograms.
  112. [112]
    Deep-Learning Computational Holography: A Review - Frontiers
    This review focuses on computational holography, including computer-generated holograms, holographic displays, and digital holography, using deep learning.
  113. [113]
    A leap toward lighter, sleeker mixed reality displays - Stanford Report
    Jul 28, 2025 · The holographic image is enhanced by a new AI-calibration method that optimizes image quality and three-dimensionality. The result is a display ...
  114. [114]
    Synthetic aperture waveguide holography for compact mixed-reality ...
    Jul 28, 2025 · Our display architecture achieves an ultra-thin form factor of only 3 mm thickness from the SLM to the eyepiece lens, including a 0.6-mm ...
  115. [115]
    3D displays showcased at Display Week 2025 and Computex 2025
    Coverage of glasses-free 3D monitors demoed at Computex 2025.
  116. [116]
    3D Display Market Size, Share | Forecast Report [2025-2032]
    Oct 30, 2025 · The global 3D display market size is projected to grow from $169.69 billion in 2025 to $510.91 billion by 2032, at a CAGR of 17.1% during ...
  117. [117]
    Applications of electromagnetic metasurfaces in Three-Dimensional ...
    This paper reviews recent progress in 3D imaging technology based on electromagnetic metasurfaces, explores their fundamental principles, and discusses ...