
Computational imaging

Computational imaging is an interdisciplinary field that reconstructs images from raw measurements using advanced algorithms, often by co-designing optics and computational processing to surpass the constraints of traditional systems, such as limited resolution, dynamic range, and depth of field. This approach encodes scene information through novel optics or illumination patterns, followed by decoding via computational methods like deconvolution or compressive sensing, enabling the capture of richer visual information in a single exposure or measurement set. The origins of computational imaging trace back to mid-20th-century experiments with coded apertures and synthetic aperture techniques, but the field gained momentum in the mid-1990s with the rise of digital sensors and powerful computing, allowing for joint optimization of hardware and software. By the 2000s, it had evolved into a vibrant area of research, incorporating concepts from optics, signal processing, and machine learning, with key milestones including the development of light field cameras and single-pixel imaging systems. As of 2025, over three decades after its modern inception, computational imaging is embedded in everyday devices like smartphones for features such as high-dynamic-range (HDR) photography and computational bokeh. At its core, computational imaging relies on principles of optical coding—altering light rays at various stages (e.g., object side, pupil plane, or sensor side)—and inverse problems solved through algorithms to recover high-fidelity images from underdetermined measurements. Notable techniques include coded aperture imaging for extended depth of field, structured illumination for super-resolution, and non-line-of-sight imaging using time-of-flight data to reconstruct hidden scenes. These methods leverage sparsity and redundancy to achieve efficiencies like 90% data compression while maintaining image quality. Applications span scientific, industrial, and consumer domains, including advanced microscopy for biological research, medical diagnostics via phase retrieval, autonomous vehicle perception for safe navigation, and non-invasive scanning for materials analysis.
Recent advancements as of 2025 include AI-enhanced computational microscopy achieving unprecedented precision in biological imaging. Emerging synergies with meta-optics and quantum-inspired detectors promise further breakthroughs, such as compact lensless systems and ultrafast imaging through scattering media such as fog or biological tissue.

Overview

Definition and Principles

Computational imaging refers to the process of forming images indirectly from raw measurements, such as intensity patterns or encoded data, through the application of computational algorithms, rather than relying on direct optical focusing onto a sensor. This approach integrates optical encoding, sensing, and post-processing to reconstruct scenes or extract information that exceeds the limitations of traditional systems. By leveraging mathematical models and optimization techniques, computational imaging enables the recovery of high-fidelity images from seemingly incomplete or indirect data, fundamentally shifting the burden of image formation from optics alone to a synergistic hardware-software design. At its core, computational imaging is framed as an inverse problem, where the goal is to estimate the underlying scene or image \mathbf{x} from observed measurements \mathbf{y}, typically modeled as \mathbf{y} = A\mathbf{x} + \mathbf{n}. Here, A represents the sensing matrix that encodes the imaging system's forward model—such as optical modulation or measurement geometry—while \mathbf{n} accounts for noise. This formulation underscores the ill-posed nature of the task, as multiple scenes may produce similar measurements, necessitating priors and regularization to achieve stable solutions. A key principle is the joint optimization of hardware and software: the sensing matrix A is deliberately designed (e.g., via coded apertures or illumination patterns) to facilitate efficient computation, capturing data in a way that maximizes information content for algorithmic recovery. Fundamental to this approach are concepts like sparsity and redundancy in natural images, which exploit the fact that real-world scenes can be compactly represented in certain bases (e.g., wavelets or gradients) with few non-zero coefficients. Sparsity allows for the relaxation of information-theoretic limits, such as the Nyquist sampling theorem, enabling sub-Nyquist acquisition rates while still permitting accurate reconstruction through techniques like compressive sensing.
Redundancy in image statistics further aids this by providing inherent correlations that algorithms can leverage to fill in missing information, thus optimizing data capture and reducing sensor requirements. In contrast to conventional imaging, which employs lenses and direct pixel-wise sampling to form images instantaneously on a focal plane, computational imaging uses coded measurements—such as modulated light fields or sparse samplings—that require post-processing to decode the scene. For instance, while a traditional camera captures a straightforward intensity map, computational methods might record multiplexed projections, relying on inversion algorithms to yield enhanced results like super-resolution or hyperspectral detail. This distinction highlights how computation compensates for simplified optics, enabling compact, versatile systems.
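The forward model \mathbf{y} = A\mathbf{x} + \mathbf{n} and its regularized inversion can be illustrated with a minimal numerical sketch. The sensing matrix, sparse "scene", and regularization weight below are arbitrary toy choices rather than parameters of any specific system; Tikhonov regularization stands in for the priors discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: sensing matrix A maps a scene x to fewer measurements y.
n, m = 64, 48                                   # scene pixels, measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -0.5, 0.8]          # a simple sparse "scene"
y = A @ x_true + 0.01 * rng.standard_normal(m)  # y = Ax + n

# Tikhonov-regularized inversion: min_x ||y - Ax||^2 + lam ||x||^2,
# with closed form x_hat = (A^T A + lam I)^{-1} A^T y.
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Here the quadratic penalty merely stabilizes the underdetermined inversion; sparsity-promoting priors such as compressive sensing recover scenes like this far more faithfully.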

Importance and Interdisciplinary Aspects

Computational imaging has emerged as a transformative approach in optics and signal processing, enabling high-quality image acquisition under challenging conditions that traditional optical systems struggle to address. By jointly optimizing hardware and computational algorithms, it overcomes limitations such as low-light environments, where conventional sensors suffer from noise dominance, achieving superior performance through techniques like focal sweep imaging that enhance light throughput and signal-to-noise ratio (SNR). Similarly, it surpasses the diffraction limit of light, allowing sub-wavelength resolution without relying solely on advanced nanoscale optics, thus improving spatial resolution by factors exceeding classical bounds. These advancements also reduce hardware costs by simplifying optical designs, such as using single-pixel detectors instead of complex sensor arrays, while boosting overall imaging speed and data efficiency through compressive sampling that captures essential information with fewer measurements. The benefits extend to enhanced image quality and versatility, delivering higher SNR in noisy scenarios—for instance, substantial improvements in ghost imaging setups—and facilitating the capture of multidimensional data like hyperspectral or depth information in a single snapshot, which is crucial for applications requiring rich contextual details. In consumer devices such as smartphones, computational imaging promotes energy efficiency by minimizing the need for power-intensive components like large apertures or multiple sensors, enabling compact, battery-friendly systems that support features like night mode photography without excessive computational overhead. These capabilities not only elevate everyday photography but also scale to scientific instruments, where they optimize resource use in data-constrained environments.
At its core, computational imaging is profoundly interdisciplinary, integrating principles from optics for hardware design, signal processing for reconstruction algorithms, applied mathematics for optimization frameworks, and physics for accurate modeling of wave propagation and light-matter interactions. This synergy fosters innovations across diverse fields, including engineering for robust prototyping, biomedicine for non-invasive cellular imaging, and astronomy for resolving faint structures through coded apertures. Such cross-pollination accelerates breakthroughs, as seen in microscopy, where computational methods enhance live-cell imaging. The societal impact of computational imaging is significant, particularly in democratizing access to advanced diagnostics in resource-limited settings by enabling low-cost, portable diagnostic devices, such as lensless microscopes for point-of-care disease detection in underserved regions. It also drives innovation in AI-enhanced vision systems, where computational priors improve perception for applications like autonomous driving and robotics, potentially reducing errors in real-world deployments by leveraging encoded imaging data. These developments promise broader equity in technology access and efficiency gains in global health and automation sectors.

History

Early Developments (Pre-1990s)

The foundations of computational imaging trace back to mid-20th-century advancements in physics and optics, including early synthetic aperture techniques. Synthetic aperture radar (SAR), developed in the 1950s by researchers like Carl Wiley, used signal processing to synthesize a large aperture from platform motion, enabling high-resolution imaging in radar systems. Coded aperture imaging emerged in parallel, as researchers sought to overcome limitations in traditional lens-based systems for high-energy wavelengths like X-rays. In 1961, Lawrence Mertz and Norman O. Young proposed an indirect imaging method using Fresnel zone plate patterns as coded apertures, which modulated incoming radiation to produce a shadowgram that could be computationally decoded into an image, enabling diffraction-limited resolution without physical lenses. This approach was particularly suited for X-ray astronomy, where refractive lenses were impractical due to material absorption and dispersion issues. Parallel developments in digital image processing during the 1950s and 1960s laid essential groundwork for computational techniques. At the National Bureau of Standards (now NIST), Russell A. Kirsch and his team developed the first drum scanner in 1957, producing a 176x176 digital image by mechanically scanning a photograph of Kirsch's infant son, marking the inaugural conversion of an analog image into a manipulable digital form. This innovation enabled early experiments in image enhancement and processing on computers like the Standards Eastern Automatic Computer (SEAC), fostering the integration of computation with imaging data. A pivotal milestone came in 1972 with Godfrey Hounsfield's invention of computed tomography (CT), which reconstructed cross-sectional images from multiple X-ray projections using the Radon transform to solve the inverse problem of 3D density mapping. Hounsfield's prototype, developed at EMI Laboratories, performed the first clinical head scan that year, revolutionizing medical diagnostics by providing non-invasive internal visualization. For this breakthrough, shared with Allan M.
Cormack for his foundational mathematical contributions, Hounsfield received the Nobel Prize in Physiology or Medicine in 1979. Similarly, the 1970s saw magnetic resonance imaging (MRI) emerge as another computational modality, with early reconstructions employing iterative algorithms to invert Fourier-encoded signals into anatomical images, as demonstrated in Paul Lauterbur's 1973 projection reconstruction method. Pre-1990s computational imaging faced significant constraints from limited processing power, confining methods to basic operations like deconvolution for restoring blurred images in astronomical data or medical scans. These challenges directed focus toward targeted applications, such as enhancing resolution in X-ray astronomy via coded masks or iterative back-projection in CT and MRI, where hardware innovations compensated for computational bottlenecks.

Modern Era (1990s-Present)

The 1990s marked a surge in computational imaging through its integration with emerging digital cameras, enabling software-based enhancements to overcome hardware limitations in capture and processing. A pivotal contribution came from Steve Mann and Rosalind Picard, who in 1995 introduced concepts in high-dynamic-range imaging by demonstrating how multiple differently exposed images could be combined to extend dynamic range, leveraging scene priors to simulate analog film's flexibility in digital systems. This work laid foundational ideas for using computational models to infer unmeasured scene properties, influencing subsequent developments in image synthesis and enhancement. The 2000s brought theoretical breakthroughs that revolutionized data acquisition in imaging, including the development of light field cameras. In 2005, Ren Ng's Stanford thesis advanced integral photography techniques for capturing light fields, enabling computational refocusing and depth estimation from a single exposure, paving the way for commercial devices like the Lytro camera in 2011. Between 2004 and 2006, David Donoho, Emmanuel Candès, and Terence Tao developed compressive sensing theory, proving that sparse signals could be accurately reconstructed from sub-Nyquist sampling rates using \ell_1 minimization, thus enabling efficient capture of high-dimensional data like images with fewer measurements. This framework directly impacted imaging by allowing undersampled data to yield full-resolution outputs, reducing sensor requirements and bandwidth. In 2008, Marco F. Duarte and colleagues at Rice University prototyped the single-pixel camera, a hardware implementation of compressive sensing that used a digital micromirror device to modulate light onto a single detector, successfully reconstructing images from coded measurements and demonstrating practical feasibility for compact, broadband systems. From the 2010s onward, computational imaging increasingly incorporated deep learning, accelerating reconstruction and enabling novel synthesis tasks. The 2014 introduction of generative adversarial networks (GANs) by Goodfellow et al.
provided a framework for learning image distributions through adversarial training, facilitating realistic image synthesis and super-resolution in computational pipelines. Concurrently, compressive spectral imaging advanced with systems like the Coded Aperture Snapshot Spectral Imager (CASSI), proposed in 2008 by Ashwin A. Wagadarikar et al., which captured 3D spectral data cubes via 2D compressive snapshots using a coded aperture and disperser, enabling snapshot acquisition of spectral information for applications in remote sensing and biomedicine. Commercialization gained momentum in consumer devices, exemplified by Google's Night Sight mode, introduced on Pixel phones in 2018, which fused multiple short-exposure frames with AI-driven alignment and denoising to produce low-light images rivaling longer exposures, as detailed in Google's research demonstrations. Key trends in the field include a shift toward end-to-end learning, where neural networks jointly optimize sensing and reconstruction for improved performance over traditional modular approaches, as seen in physics-enhanced models that incorporate forward physics into training. Commercial adoption has proliferated in smartphones and cameras, embedding computational techniques for features like portrait mode and low-light photography. In the 2020s, as of 2025, milestones include the widespread integration of diffusion models for generative reconstruction and AI-driven low-dose imaging, alongside edge-computing hardware enabling real-time processing in devices for applications from autonomous vehicles to medical diagnostics.

Techniques

Coded Aperture Imaging

Coded aperture imaging is a computational technique that enables imaging in spectral regimes where traditional refractive or reflective optics are ineffective, such as X-rays and gamma rays. Instead of using lenses, it employs a patterned mask placed between the object and detector to modulate incoming radiation, producing a shadowgram or coded image on the detector plane. This shadowgram encodes spatial information about the object, which is then computationally decoded to reconstruct the original image through deconvolution algorithms. The method originated with proposals by Mertz and Young in 1961 for using Fresnel zone plate patterns and independently by Dicke in 1968 for random pinhole arrays in X- and gamma-ray imaging, laying the foundation for high-energy applications. The core principle involves replacing conventional focusing elements with a coded mask, such as a Modified Uniformly Redundant Array (MURA) or Uniformly Redundant Array (URA), which consists of an arrangement of open and opaque elements designed to optimize signal encoding. Radiation from the object passes through the mask, casting overlapping shadow patterns onto a position-sensitive detector, rather than forming a direct image. Reconstruction solves the inverse problem of recovering the object distribution x from the measured data y, modeled as the linear system y = Hx + n, where H is the system matrix (or point spread function) determined by the mask geometry and distance to the detector, and n represents noise. Common reconstruction methods include iterative deconvolution techniques like the Richardson-Lucy algorithm, which assumes Poisson noise statistics prevalent in photon-counting detectors and iteratively refines the estimate by maximizing likelihood. This approach offers significant advantages in X-ray and gamma-ray regimes, where refractive lenses suffer from high absorption or impractical fabrication, allowing wide-field imaging with moderate angular resolution.
In medical imaging, coded apertures enable dose reduction by efficiently utilizing incident radiation through multiplexing, potentially lowering patient exposure while maintaining image quality. Mask designs, such as URAs, are optimized to minimize the rank deficiency of H, ensuring better conditioning for stable reconstruction and reduced sidelobe artifacts in the decoded image. Modern implementations include astronomical observatories like the INTEGRAL satellite's IBIS instrument, launched in 2002, which uses a coded mask for gamma-ray source localization with 12 arcminute angular resolution over a 19° × 19° field of view. Portable systems have also adopted the technique for applications in security screening and industrial inspection, where compact, lensless designs facilitate real-time imaging without bulky optics.
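As a toy illustration of mask-based encoding and Richardson-Lucy decoding, the sketch below simulates a 1D coded-aperture system with a random binary mask; the mask size, source positions, and iteration count are illustrative assumptions, not a real instrument design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D coded-aperture system: each detector pixel sums the scene through a
# row of the binary mask matrix H, rather than viewing a single scene pixel.
n = 32
mask = (rng.random((n, n)) < 0.5).astype(float)   # random open/opaque pattern
H = mask / mask.sum(axis=1, keepdims=True)        # normalize each row

x_true = np.zeros(n)
x_true[[8, 9, 22]] = [4.0, 6.0, 3.0]              # a few bright point sources
y = H @ x_true                                     # noiseless shadowgram

# Richardson-Lucy iteration (multiplicative, keeps the estimate nonnegative):
#   x <- x * [H^T (y / Hx)] / [H^T 1]
x = np.ones(n)
ones = np.ones_like(y)
for _ in range(500):
    ratio = y / np.maximum(H @ x, 1e-12)
    x *= (H.T @ ratio) / (H.T @ ones)

print(np.argsort(x)[-3:])   # indices of the brightest reconstructed pixels
```

The multiplicative form is what makes Richardson-Lucy natural for photon-counting data: it preserves nonnegativity at every step while climbing the Poisson likelihood.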

Compressive Sensing and Spectral Imaging

Compressive sensing in spectral imaging enables the acquisition of hyperspectral data cubes at rates below the Nyquist sampling limit by exploiting the inherent sparsity of spectral signals in certain transform domains, such as the discrete cosine transform (DCT) or wavelet basis. This approach modulates the incoming light through spatial-spectral encoding mechanisms, including digital micromirror devices (DMDs) for programmable spatial patterns or tunable filters like liquid crystal tunable filters (LCTFs) for sequential spectral selection, thereby capturing multidimensional spectral information in a reduced number of measurements or snapshots. Unlike traditional hyperspectral imaging, which requires scanning across spatial or spectral dimensions to reconstruct full datacubes, compressive methods encode the 3D (two spatial, one spectral) information into 2D projections, allowing reconstruction of the original scene via optimization techniques that enforce sparsity constraints. The core mathematical framework models the measurement process as \mathbf{y} = \Phi \Psi \mathbf{x}, where \mathbf{y} is the vector of compressed measurements, \Phi is the measurement matrix encoding the spatial-spectral modulation (e.g., via coded apertures or DMD patterns), \Psi is the sparsity basis (such as the DCT), and \mathbf{x} represents the sparse coefficients of the hyperspectral datacube in that basis. Reconstruction solves the problem \min \| \mathbf{x} \|_1 subject to \mathbf{y} = \Phi \Psi \mathbf{x}, promoting the sparsest solution consistent with the observations, which guarantees accurate recovery under conditions where the number of measurements is proportional to the sparsity level rather than the full signal dimension. For noisy measurements, basis pursuit denoising extends this to \min \| \mathbf{x} \|_1 subject to \| \mathbf{y} - \Phi \Psi \mathbf{x} \|_2 \leq \epsilon, where \epsilon bounds the noise level, enabling robust hyperspectral reconstruction even in low signal-to-noise environments.
A prominent implementation is the Coded Aperture Snapshot Spectral Imager (CASSI), introduced in 2008, which combines a coded aperture mask with a dispersive element to shift and superimpose spectral slices onto a focal plane array in a single exposure, producing a 2D encoded image from which the full datacube is computationally recovered. This architecture captures hyperspectral videos at rates up to 30 frames per second, significantly reducing data volume and acquisition time compared to scanning systems. In remote sensing applications, CASSI and similar compressive techniques facilitate material identification—such as distinguishing vegetation types or minerals—by enabling high-fidelity spectral signatures without acquiring the full spectrum per pixel, thus supporting efficient onboard processing for satellites with limited bandwidth. Compressive imaging operates in two primary modes: snapshot, as in CASSI, where all information is acquired simultaneously for dynamic scenes, and scanning, which sequentially applies different encodings (e.g., via DMD patterns) over time for higher fidelity at the cost of motion sensitivity. Noise handling in both modes commonly employs basis pursuit denoising, which balances sparsity promotion with data fidelity to suppress artifacts and improve reconstruction quality, particularly in low-light scenarios.
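The \mathbf{y} = \Phi \Psi \mathbf{x} model can be exercised on a small 1D example: a signal sparse in the DCT basis, compressed by a random \Phi, and recovered with ISTA as a simple solver for the lasso surrogate of basis pursuit denoising. The dimensions, sparsity pattern, and regularization weight are illustrative assumptions, not CASSI parameters.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m = 128, 48                       # signal length, number of measurements

# Orthonormal DCT-II matrix; columns of Psi are the cosine atoms.
k = np.arange(n)[:, None]
i = np.arange(n)[None, :]
C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
C[0, :] /= np.sqrt(2.0)
Psi = C.T                            # signal = Psi @ coefficients

# Sparse spectrum: only a few nonzero DCT coefficients.
c_true = np.zeros(n)
c_true[[2, 7, 30]] = [2.0, -1.5, 1.0]
signal = Psi @ c_true

Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
y = Phi @ signal + 0.005 * rng.standard_normal(m)

# ISTA on the lasso surrogate of basis pursuit denoising:
#   min_c 0.5 ||y - Phi Psi c||^2 + lam ||c||_1
A = Phi @ Psi
lam = 0.01
eta = 1.0 / np.linalg.norm(A, 2) ** 2
c = np.zeros(n)
for _ in range(500):
    g = c - eta * A.T @ (A @ c - y)                          # gradient step
    c = np.sign(g) * np.maximum(np.abs(g) - eta * lam, 0.0)  # soft-threshold

print(np.nonzero(np.abs(c) > 0.5)[0])   # recovered support
```

With 48 measurements of a 128-sample, 3-sparse signal, the \ell_1 solver recovers both the support and the amplitudes, far below the Nyquist count of samples.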

Single-Pixel and Ghost Imaging

Single-pixel imaging is a computational imaging technique that reconstructs images using a single detector, often referred to as a "bucket" detector, which measures the total intensity of light reflected or transmitted from the scene without spatial resolution. This approach employs spatial light modulators, such as digital micromirror devices (DMDs) or liquid crystal spatial light modulators (SLMs), to sequentially illuminate the object with structured patterns, encoding spatial information into temporal measurements. Each pattern modulates the scene, and the bucket detector captures the integrated intensity for that configuration, allowing reconstruction through computational correlation of the measurement sequence with the known patterns. A seminal 2008 experiment by researchers at Rice University demonstrated a terahertz single-pixel camera using a DMD to apply random masks and compressive sensing for image recovery from fewer measurements than the image pixel count. The reconstruction in single-pixel imaging typically relies on basis patterns like the Hadamard basis for efficient encoding. For an N-pixel image, the measurement vector \mathbf{y} relates to the image \mathbf{x} via \mathbf{y} = H \mathbf{x}, where H is the measurement matrix whose rows are the patterns \mathbf{p}_k. The image is recovered as \mathbf{x} = \frac{1}{N} \sum_{k=1}^N y_k \mathbf{p}_k, equivalent to the inverse Hadamard transform scaled by 1/N, leveraging the orthogonality of the basis for noise-robust correlation. This method avoids the need for multi-pixel detector arrays, which are costly and complex in spectral bands like infrared (IR) and terahertz (THz). Ghost imaging extends this concept by using spatial correlations between two light paths to form an image without direct detection of the object's spatial structure.
Originating in quantum optics with entangled photon pairs produced by spontaneous parametric down-conversion, the technique illuminates the object with one beam (test arm) collected by a bucket detector, while a reference arm provides spatially resolved intensities that correlate with the bucket signals to reconstruct the image via second-order correlation: G(\mathbf{r}) = \langle I_{\text{ref}}(\mathbf{r}) I_{\text{bucket}} \rangle - \langle I_{\text{ref}}(\mathbf{r}) \rangle \langle I_{\text{bucket}} \rangle, where the fluctuation term isolates the object's contribution. Computational variants, emerging in the post-2000s era, replaced pseudothermal or entangled sources with deterministic structured illumination and a single detector, simulating correlations computationally for non-local imaging with classical light. Both techniques offer significant advantages in cost-effectiveness for IR and THz applications, where high-resolution detector arrays are expensive and sensitive to environmental factors, enabling imaging in harsh conditions such as high temperatures or strong background radiation. Differential ghost imaging further enhances performance by subtracting a reference measurement without the object from the primary signal, suppressing background noise and improving signal-to-noise ratio (SNR) by up to orders of magnitude in noisy environments. These methods have been applied in security screening and remote sensing, where single-detector simplicity reduces system complexity while maintaining high sensitivity.
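The Hadamard-basis single-pixel reconstruction described above can be simulated directly. The sketch below idealizes the patterns as ±1 values (physically realized with paired differential 0/1 masks) and ignores noise; the 8x8 scene size is an arbitrary choice.

```python
import numpy as np

# Sylvester-construction Hadamard matrix: rows serve as the +/-1 patterns p_k.
def hadamard(n: int) -> np.ndarray:
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
    return H

N = 64                              # pixels in the (flattened 8x8) scene
H = hadamard(N)

rng = np.random.default_rng(3)
x = rng.random(N)                   # unknown scene reflectivity

# One "exposure" per pattern: the bucket detector records y_k = p_k . x.
y = H @ x

# Reconstruction: x = (1/N) * sum_k y_k p_k, the inverse Hadamard transform.
x_rec = (H.T @ y) / N

print(np.max(np.abs(x_rec - x)))
```

Because the Hadamard rows are orthogonal with H H^T = N I, the correlation sum inverts the measurement exactly in the noiseless case.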

Structured Illumination and Phase Retrieval

Structured illumination microscopy (SIM) is a super-resolution technique that enhances the resolution of wide-field fluorescence microscopes by projecting sinusoidal light patterns onto the sample, thereby shifting higher spatial frequencies of the object into the detectable passband of the optical system. Developed in the late 1990s and early 2000s, SIM achieves approximately a twofold improvement in lateral resolution through post-acquisition demodulation and Fourier domain reconstruction. The method was pioneered by Mats Gustafsson, whose work on linear SIM in 2000 demonstrated resolution doubling on biological samples, and later extended to nonlinear SIM in 2005 for theoretically unlimited resolution using higher-order harmonics. Related advances in super-resolved fluorescence microscopy were recognized by the 2014 Nobel Prize in Chemistry, awarded to Eric Betzig, Stefan Hell, and William E. Moerner. In SIM, the illumination modulates the object's emission, producing images that encode sub-diffraction information. Typically, three images are acquired with the pattern phase-shifted by 120 degrees each. The observed intensity for a phase shift θ is given by: I(\theta) = |O + P e^{i\theta}|^2, where O represents the object's field component and P the modulation term. Reconstruction involves demodulating these images to extract shifted frequency components, which are then stitched in the Fourier domain to form a high-resolution image, effectively extending the optical transfer function. This computational approach contrasts with depletion-based methods like STED by relying on patterned excitation and algorithmic processing rather than point-scanning. Phase retrieval complements structured illumination by enabling the recovery of phase information from intensity-only measurements, crucial for reconstructing complex-valued fields in computational imaging. The seminal Gerchberg-Saxton algorithm, introduced in 1972, iteratively enforces constraints in the object and Fourier domains to retrieve phase from paired intensity distributions, such as image and diffraction patterns.
Modern variants, developed by James Fienup in 1982, incorporate input-output and error-reduction steps to improve convergence and handle diverse geometries, making them robust for non-periodic signals. In coherent diffractive imaging (CDI), phase retrieval algorithms reconstruct high-resolution images from far-field intensities, bypassing the need for lenses and enabling nanoscale structural determination of non-crystalline samples. These methods support super-resolution applications by computationally compensating for diffraction limits, similar to SIM but focused on phase recovery in lensless configurations. For instance, iterative phase retrieval facilitates compact, lensless devices for portable imaging, where intensity measurements from propagated fields are processed to yield quantitative phase and amplitude maps.
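The Gerchberg-Saxton iteration can be sketched compactly on synthetic data: magnitudes are "measured" from a random complex field in both planes, and the loop alternates between planes, replacing the magnitude with the measured one while keeping the current phase estimate. The field size and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic ground truth: a complex field whose magnitude is "measured"
# in both the object plane and the far field (Fourier plane).
n = 32
obj = rng.random((n, n)) * np.exp(1j * 2 * np.pi * rng.random((n, n)))
mag_obj = np.abs(obj)                  # object-plane magnitude constraint
mag_far = np.abs(np.fft.fft2(obj))     # far-field magnitude constraint

def far_err(f):
    """Relative mismatch between the current far-field magnitude and the data."""
    return np.linalg.norm(np.abs(np.fft.fft2(f)) - mag_far) / np.linalg.norm(mag_far)

# Gerchberg-Saxton: alternate between planes, enforcing each measured
# magnitude while retaining the current phase estimate.
field = mag_obj * np.exp(1j * 2 * np.pi * rng.random((n, n)))  # random start phase
err0 = far_err(field)
for _ in range(200):
    F = np.fft.fft2(field)
    F = mag_far * np.exp(1j * np.angle(F))          # impose far-field magnitude
    field = np.fft.ifft2(F)
    field = mag_obj * np.exp(1j * np.angle(field))  # impose object magnitude

err = far_err(field)
print(err0, err)
```

The far-field magnitude error is non-increasing across iterations (the error-reduction property Fienup analyzed), though the algorithm can stagnate; input-output variants address exactly this.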

Algorithms

Classical Reconstruction Methods

Classical reconstruction methods in computational imaging address the inverse problem of recovering an underlying signal or image from indirect, often noisy measurements, relying on mathematical optimization and linear algebra rather than learned representations. These techniques model the imaging process as a linear operator A mapping the unknown image x to measurements y = Ax + n, where n represents noise, and seek to solve for x by minimizing a data fidelity term subject to regularization priors to mitigate ill-posedness. Early approaches focused on direct analytical inverses for specific geometries, while later developments incorporated iterative solvers and sparsity-promoting regularizers to handle underdetermined systems prevalent in modalities like computed tomography (CT) and magnetic resonance imaging (MRI). Linear methods provide closed-form solutions for well-posed problems, such as filtered back-projection (FBP) in CT, which inverts the Radon transform by applying a ramp filter in the frequency domain followed by back-projection to reconstruct parallel-beam projections. The FBP algorithm computes the image as x(\mathbf{r}) = \int_0^\pi q(\theta, \mathbf{r} \cdot \hat{\theta}) d\theta, where q(\theta, s) is the filtered projection data, offering exact reconstruction under ideal conditions with computational efficiency O(N^3) for N \times N images. For deconvolution in microscopy or astronomy, least-squares minimization yields the solution \hat{x} = (A^T A)^{-1} A^T y, formed by inverting the point spread function matrix A under an assumed linear blur model, though it amplifies high-frequency noise in ill-conditioned cases. Iterative techniques extend these by sequentially updating estimates to enforce consistency with measurements, particularly useful for sparse or limited-angle data.
The Algebraic Reconstruction Technique (ART), rooted in the Kaczmarz method of 1937, solves Ax = y by projecting onto hyperplanes defined by individual equations, iterating as x^{k+1} = x^k + \frac{y_i - a_i^T x^k}{\|a_i\|^2} a_i for row a_i of A, converging linearly for consistent systems and enabling incorporation of non-negativity constraints in emission tomography. Total variation (TV) minimization promotes piecewise-smooth reconstructions by solving \min_x \|x\|_{TV} + \lambda \|y - Ax\|_2^2, where \|x\|_{TV} = \int |\nabla x| penalizes discontinuities while preserving edges; introduced in 1992, this framework reduces artifacts in denoising and tomographic reconstruction by exploiting image sparsity in the gradient domain. Advanced optimization frameworks employ gradient-based iterations to handle nonlinear regularizers and non-Gaussian noise. Gradient descent updates x^{k+1} = x^k - \eta \nabla f(x^k) for objective f(x) = \frac{1}{2}\|y - Ax\|_2^2 + r(x), with step size \eta tuned for convergence in inverse problems like super-resolution. Proximal methods, such as the Iterative Shrinkage-Thresholding Algorithm (ISTA), address \ell_1-regularized problems in compressive sensing via x^{k+1} = \mathrm{prox}_{\lambda g}(x^k - \eta A^T (Ax^k - y)), where the proximal operator for g(x) = \|x\|_1 applies soft-thresholding; its accelerated variant FISTA achieves O(1/k^2) convergence via momentum terms. Ill-posedness is managed through priors like Poisson noise models, which replace the Gaussian fidelity term with a Poisson log-likelihood to account for signal-dependent variance in low-light imaging, enabling maximum-likelihood estimation in fluorescence microscopy. In MRI, the SENSE method of 1999 uses sensitivity-encoded parallel imaging with least-squares unfolding to reconstruct aliased k-space data, reducing scan times by acceleration factors of 2 to 3 (with later extensions achieving higher reductions up to 8) while incorporating TV regularization for robustness.
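The ART/Kaczmarz update above can be written out directly; the sketch below applies cyclic sweeps to a consistent random system standing in for tomographic ray sums, with sizes and sweep count chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Consistent toy system y = Ax, standing in for tomographic ray sums.
m, n = 80, 40
A = rng.standard_normal((m, n))
x_true = rng.random(n)
y = A @ x_true

# Kaczmarz / ART: cyclically project the estimate onto each hyperplane
# a_i . x = y_i, i.e. x <- x + (y_i - a_i . x) / ||a_i||^2 * a_i.
x = np.zeros(n)
for _ in range(100):              # full sweeps over all rows
    for i in range(m):
        a = A[i]
        x = x + (y[i] - a @ x) / (a @ a) * a

print(np.linalg.norm(x - x_true))
```

Each inner step enforces exactly one measurement equation, which is why ART handles rows streamed one at a time and accommodates constraints like non-negativity by clipping after each projection.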

Machine Learning and Deep Learning Approaches

Machine learning and deep learning approaches have revolutionized computational imaging by leveraging data-driven models to solve inverse problems, such as reconstructing images from undersampled or noisy measurements, often outperforming traditional optimization-based methods in speed and adaptability. These techniques typically involve training neural networks to approximate the mapping from measurement data y to the underlying image x, incorporating physical forward models where possible to ensure physically plausible outputs. In supervised paradigms, convolutional neural networks (CNNs) are trained on pairs of simulated measurements and ground-truth images to learn denoising and reconstruction tasks, enabling end-to-end processing that bypasses explicit regularization. For instance, the U-Net architecture, originally developed for biomedical image segmentation, has been adapted for reconstruction in computational imaging pipelines, achieving high-fidelity results in tasks like MRI enhancement by exploiting its encoder-decoder structure for feature extraction and upsampling. Similarly, residual learning-based CNNs like DnCNN have been employed for blind denoising, learning to remove Gaussian noise across varying noise levels without task-specific retraining, thus serving as versatile priors in imaging workflows. Unsupervised and generative methods address scenarios with limited ground-truth data by exploiting inherent image statistics or adversarial training. Generative adversarial networks (GANs), introduced as a framework for learning data distributions through competing generator and discriminator modules, have been applied to produce plausible reconstructions from incomplete measurements in computational imaging, particularly when clean training pairs are scarce.
A notable unsupervised approach is the deep image prior (DIP), which uses a randomly initialized convolutional network as an implicit regularizer fitted solely to a single noisy measurement, incorporating the forward imaging model A to guide optimization toward natural image structures without external datasets; this method excels in denoising and super-resolution by capturing natural image priors through the network architecture alone. Physics-informed variants extend these by embedding measurement operators directly into the network, enhancing robustness for tasks like spectral unmixing. Key advancements include end-to-end learning frameworks and hybrid techniques that unroll classical algorithms into trainable modules. The AUTOMAP method exemplifies end-to-end reconstruction by training a deep network to directly map raw sensor data to MRI images, learning domain transforms that handle hardware inconsistencies and acceleration factors, thereby reducing reconstruction times from hours to seconds compared to iterative classical solvers. Plug-and-play (PnP) priors integrate deep denoisers, such as DnCNN, into optimization loops by replacing hand-crafted regularizers with learned ones, enabling flexible adaptation to diverse inverse problems like compressive sensing while maintaining convergence guarantees under proper training. Variational networks further advance this by unrolling proximal iterations into layered CNNs, where each layer learns a variational update tailored to the physics, as demonstrated in accelerated MRI where they achieve rapid reconstruction with superior artifact suppression over traditional methods. Recent developments as of 2025 include diffusion models, which iteratively denoise data to generate high-quality reconstructions from undersampled measurements, particularly effective in tasks like MRI and CT, and transformer-based architectures that leverage self-attention mechanisms to capture long-range spatial dependencies, improving performance in segmentation, reconstruction, and denoising.
These approaches yield significant speed benefits, often enabling real-time reconstruction (e.g., milliseconds per frame) versus hours for classical optimization, facilitating applications in dynamic scenarios like real-time MRI. However, challenges persist in generalization to unseen noise levels or measurement conditions, as models trained on simulated data may overfit to specific distributions, leading to degraded performance on real-world variations without domain adaptation strategies.

Applications

Biomedical and Medical Imaging

Computational imaging has revolutionized biomedical and medical imaging by enabling high-quality reconstructions from limited data, thereby reducing patient radiation exposure, shortening scan times, and facilitating portable diagnostics. In computed tomography (CT), iterative reconstruction techniques, such as GE Healthcare's Adaptive Statistical Iterative Reconstruction (ASiR) introduced in the mid-2000s, allow for significant dose reductions of up to 50-60% in clinical protocols while preserving image quality and diagnostic accuracy. Similarly, compressed sensing in MRI, as demonstrated in seminal work by Lustig et al., exploits signal sparsity to reconstruct images from undersampled k-space data, achieving scan time reductions of 4-8 times compared to traditional methods without substantial loss in image quality. In ultrasound and optical modalities, computational algorithms correct for aberrations caused by tissue inhomogeneities, improving focus and contrast in deep-tissue visualization. For instance, complex-valued convolutional neural networks have been applied to retrieve phase aberrations in ultrasound localization microscopy, enhancing localization precision. Photoacoustic tomography benefits from total variation (TV) minimization regularization, which promotes sparse reconstructions of vascular structures from limited-view data, enabling clear delineation of vessels in hybrid optical-acoustic setups. This approach suppresses noise and artifacts effectively in sparse sampling scenarios, supporting vascular imaging applications. Deep learning methods have further advanced low-dose CT by denoising images from reduced radiation protocols; for example, FDA-cleared deep learning reconstruction algorithms, such as those approved around 2018-2019, improve noise reduction and spatial resolution in clinical scans. Hyperspectral imaging, leveraging computational unmixing of spectral signatures, maps tissue oxygenation levels non-invasively, providing quantitative maps of tissue oxygen saturation (StO₂) to assess perfusion in wounds or tumors. These computational advances enable portable medical devices for point-of-care diagnostics in resource-limited settings.
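Total variation regularization of the kind used in photoacoustic reconstruction can be demonstrated on a 1D toy problem: minimizing a data-fidelity term plus a smoothed TV penalty by gradient descent removes noise while preserving the sharp edges that quadratic smoothing would blur (the signal, noise level, and parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Piecewise-constant ground truth: a crude stand-in for a vessel cross-section.
x_true = np.concatenate([np.zeros(40), np.ones(30), np.zeros(30)])
y = x_true + 0.2 * rng.standard_normal(x_true.size)   # noisy measurement

# Minimize 0.5*||x - y||^2 + lam * sum(sqrt(dx^2 + eps)) by gradient descent.
lam, eps, step = 0.3, 1e-3, 0.02
x = y.copy()
for _ in range(5000):
    d = np.diff(x)
    w = d / np.sqrt(d**2 + eps)       # derivative of the smoothed |dx| term
    tv_grad = np.zeros_like(x)
    tv_grad[:-1] -= w                  # each difference pulls on both endpoints
    tv_grad[1:] += w
    x -= step * ((x - y) + lam * tv_grad)
```

After convergence the mean squared error of `x` against the ground truth falls well below that of the raw noisy signal `y`, while the two jumps in the profile remain sharp, which is the behavior that makes TV priors attractive for limited-view vascular imaging.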
Moreover, super-resolution techniques in computational imaging enhance early cancer detection by resolving sub-cellular details, as seen in chromatin structure analysis that identifies precancerous changes in tissue samples. Such improvements in resolution and specificity boost diagnostic accuracy for malignancies like early cervical cancer in MRI scans.

Astronomy and Remote Sensing

In astronomy, computational imaging techniques enable the capture and reconstruction of signals from distant celestial objects under challenging conditions, such as low photon counts and wide fields of view. Coded aperture imaging, a foundational method, uses a patterned mask to modulate incoming radiation, allowing reconstruction of images without traditional focusing optics, which is particularly useful for high-energy regimes such as X-rays and gamma rays. The Burst Alert Telescope (BAT) on NASA's Swift spacecraft, launched in 2004, exemplifies this approach by employing a coded mask to detect and localize gamma-ray bursts (GRBs) with a wide field of view spanning 1.4 steradians at 50% coding fraction, achieving arcminute-level precision within seconds of detection. This design facilitates rapid follow-up observations across multiple wavelengths, contributing to over 1,000 GRB localizations since launch. Compressive hyperspectral imaging extends these capabilities by acquiring spectral data efficiently from sparse measurements, reconstructing full datacubes through optimization algorithms that exploit signal sparsity. In exoplanet studies, this method supports atmospheric characterization by enabling high-spectral-resolution spectroscopy of faint signals, such as molecular absorption features in transiting planets. For instance, multiplexed Bragg gratings have been proposed for direct imaging missions, allowing simultaneous capture of broadband spectra to detect biosignatures like oxygen or methane with reduced instrumental complexity compared to traditional dispersive spectrometers. Remote sensing applications leverage similar computational strategies for Earth observation and planetary surfaces, where platforms must handle atmospheric interference and large-scale coverage. Snapshot spectral imagers, which capture full hyperspectral datacubes in a single exposure, are integral to satellite missions for real-time environmental monitoring.
NASA's Hyperspectral Infrared Imager (HyspIRI) concept, proposed in the 2010s, incorporates a visible-to-shortwave infrared (VSWIR) pushbroom spectrometer with 60-meter spatial resolution, enabling detection of ecosystem dynamics, volcanic activity, and wildfire emissions through compressive sampling of thermal and spectral data. Ghost imaging further enhances penetration through turbulent atmospheres by correlating illumination patterns from a reference beam with bucket-detector signals, reconstructing scenes without direct line-of-sight imaging; this has been demonstrated in remote sensing prototypes for target detection over kilometer-scale paths with turbulence, achieving sub-centimeter resolution in simulations. Specific advancements include Fourier ptychography, introduced in 2013, which computationally combines overlapping low-resolution images under angled illuminations to synthesize gigapixel-scale, diffraction-limited views, adaptable to astronomical contexts for wide-field surveys. In astronomy, inverse synthetic aperture variants exploit orbital motion for far-field reconstruction, yielding resolutions beyond single-aperture limits in sparse-array telescopes. Deep learning-based deconvolution has also transformed ground-based observations by mitigating atmospheric blurring to approach space-telescope quality. These techniques offer key benefits in handling sparse data regimes, such as few-photon events in deep-space imaging, where compressive methods reduce data volume by up to 90% while preserving fidelity through sparsity priors. For missions like the James Webb Space Telescope (JWST), launched in 2021, computational post-processing pipelines enable artifact removal and spectral extraction from faint sources, as seen in early exoplanet atmosphere analyses.
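The correlation reconstruction behind computational ghost imaging is simple enough to simulate directly: random illumination patterns are paired with single-valued bucket measurements, and averaging the pattern fluctuations weighted by the bucket fluctuations recovers the scene (the scene, pattern statistics, and shot count below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
h, w, shots = 16, 16, 20000

# Hidden scene: a bright square on a dark background.
scene = np.zeros((h, w))
scene[5:11, 5:11] = 1.0

# Each shot: one random illumination pattern and one "bucket" value, the
# total light returned by the scene (no spatial resolution at the detector).
patterns = rng.random((shots, h, w))
bucket = np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))

# Correlation reconstruction: <(b - <b>)(P - <P>)> is large at pixels where
# the scene and the illumination co-vary.
recon = np.tensordot(bucket - bucket.mean(),
                     patterns - patterns.mean(axis=0), axes=(0, 0)) / shots
recon = (recon - recon.min()) / (recon.max() - recon.min())   # normalize
```

With enough shots the bright square stands out clearly against the background; in practice, compressive priors reduce the required number of patterns well below the pixel count.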

Computational Photography and Consumer Devices

Computational photography has transformed consumer devices, particularly smartphones and digital cameras, by leveraging algorithms to enhance image quality beyond traditional optical limitations. Key features include high dynamic range (HDR) merging through multi-exposure fusion, which combines multiple images captured at different exposure levels to preserve details in both bright and dark areas; this technique was pioneered in the late 1990s by Paul Debevec and Jitendra Malik, who developed a method to recover high dynamic range radiance maps from ordinary photographs. Super-resolution from burst shots further improves detail by aligning and fusing a sequence of handheld images, exploiting sub-pixel shifts due to natural hand motion; Google's burst photography pipeline, introduced around 2014 and refined in subsequent implementations, enables higher resolution outputs from standard sensors. Portrait mode achieves artificial bokeh effects via depth estimation, where dual-camera systems or machine learning infer scene depth to selectively blur backgrounds while keeping subjects sharp. Specific implementations highlight these capabilities in popular devices. Google's Night Sight, launched on Pixel phones in 2018, employs multi-frame denoising by capturing and merging up to 15 short-exposure raw images to reduce noise and enhance low-light details without visible blur, even in near-darkness conditions. Apple's Deep Fusion, introduced with the iPhone 11 in 2019, uses the A13 Bionic chip to perform pixel-level fusion of nine images (four short exposures, four long exposures, and one ambient frame), optimizing texture, detail, and edge enhancement through machine learning for medium-light scenes. Earlier innovations like the Lytro light field camera, announced in 2011, captured full light field data via a microlens array, allowing post-capture refocusing and depth-based effects such as variable depth of field, though its consumer adoption was limited by resolution constraints. Integration of on-device machine learning accelerators speeds up these processes for real-time performance.
Frameworks like TensorFlow Lite enable efficient inference on mobile hardware, powering features such as computational zoom that combines optical and digital elements with AI-driven upscaling to surpass physical lens limits, producing sharp images at 2x or higher magnifications. These advancements improve accessibility by enabling high-quality low-light video and photography for everyday users without specialized equipment, while driving market growth; the global smartphone camera market reached approximately $46.8 billion by 2025, fueled by demand for advanced computational features.
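Multi-exposure HDR merging of the kind pioneered by Debevec and Malik reduces, for an idealized linear sensor, to averaging per-exposure radiance estimates over the shots in which each pixel is neither saturated nor lost in the noise floor; the following NumPy sketch uses made-up radiance values, exposure times, and validity thresholds:

```python
import numpy as np

# Scene radiance spanning a wide dynamic range (arbitrary linear units).
radiance = np.array([0.01, 0.05, 0.3, 2.0, 12.0, 80.0])

def capture(radiance, exposure, full_well=1.0):
    """Idealized linear sensor: scale by exposure time, then saturate."""
    return np.clip(radiance * exposure, 0.0, full_well)

exposures = [1/400, 1/50, 1/2, 4.0]
shots = [capture(radiance, t) for t in exposures]

# Merge: average radiance estimates (pixel / exposure) over valid shots,
# i.e. pixels that are neither clipped nor buried in the noise floor.
est = np.zeros_like(radiance)
weight = np.zeros_like(radiance)
for t, img in zip(exposures, shots):
    valid = (img > 0.02) & (img < 0.98)
    est += np.where(valid, img / t, 0.0)
    weight += valid
hdr = est / np.maximum(weight, 1)      # recovers the full radiance range
```

Each radiance value, from 0.01 to 80, is recovered because at least one exposure places it inside the sensor's usable range; real pipelines add noise-aware weighting, frame alignment, and tone mapping on top of this principle.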

Software and Tools

Open-Source Libraries and Frameworks

Open-source libraries and frameworks are essential for prototyping and developing computational imaging pipelines, enabling researchers to implement algorithms for reconstruction, optimization, and simulation without proprietary constraints. These tools facilitate rapid experimentation, from basic image processing to advanced deep learning integrations, and support reproducibility across diverse applications like single-pixel imaging and compressive sensing. The Python ecosystem dominates open-source computational imaging due to its accessibility and integration with scientific computing stacks. Scikit-image provides algorithms for fundamental operations such as filtering, thresholding, and geometric transformations, serving as a core library for preprocessing in imaging pipelines. HyperSpy specializes in the analysis of hyperspectral and multidimensional datasets, offering tools for signal decomposition, model fitting, and visualization that exploit data dimensionality in computational setups. For deep learning-based reconstruction, PyTorch and TensorFlow underpin modern approaches, with 2020s extensions like the OpenICS toolbox implementing compressive sensing algorithms in a unified framework for benchmarking and evaluation. Specialized tools target optimization and domain-specific challenges. SPGL1 solves large-scale sparse least-squares problems via basis pursuit, commonly applied to compressive sensing in imaging problems. SPORCO, a Python library, handles sparse coding and dictionary learning with convolutional priors, supporting tasks like denoising and super-resolution in computational imaging. OpenCV enables real-time camera pipelines through its high-performance modules for image processing, calibration, and feature tracking, ideal for live structured illumination experiments. In X-ray imaging, ImageJ plugins such as ANKAphase facilitate phase retrieval by processing propagation-based phase contrast data, including holographic and quantitative phase mapping. These frameworks often serve as alternatives to the MATLAB Image Processing Toolbox, with scikit-learn extensions enhancing scikit-image for machine learning on image features, such as clustering in segmentation tasks.
GitHub repositories such as Single-Pixel-Camera provide simulation environments for hardware-software co-design, implementing compressed sensing algorithms to reconstruct images from single-pixel measurements. These resources thrive in community-driven ecosystems, exemplified by OpenCV's over 84,000 GitHub stars, which promote collaborative development and ensure reproducible results in research pipelines.
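In the spirit of such single-pixel simulation repositories, the measurement-and-reconstruction loop can be prototyped in a few lines of NumPy; here structured Hadamard masks (rather than the random masks of compressed sensing) make the inversion a simple transpose. The scene and sizes are illustrative:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# An 8x8 "scene" measured by a single-pixel detector behind 64 +/-1 masks.
rng = np.random.default_rng(5)
scene = rng.random((8, 8))
H = hadamard(64)                 # each row is one mask
y = H @ scene.ravel()            # one scalar measurement per mask

# Hadamard matrices satisfy H H^T = n I, so reconstruction is a transpose.
recon = (H.T @ y / 64).reshape(8, 8)
```

A compressive variant would keep only a subset of the 64 measurements and recover the scene with a sparsity-promoting solver, which is the role toolboxes like OpenICS and SPGL1 fill.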

Commercial Software and Hardware Integrations

In the medical imaging domain, General Electric's Adaptive Statistical Iterative Reconstruction (ASiR) software, introduced in 2008, integrates with CT scanners to enable dose reduction by modeling noise in projection data and applying statistical iterative techniques, achieving up to 40% lower radiation exposure while maintaining image quality. ASiR received initial FDA 510(k) clearance in 2008 under numbers K081105 and K082761, with subsequent approvals confirming its safety and efficacy for clinical use across various patient sizes and anatomies. Similarly, Siemens Healthineers' syngo.via platform supports AI-accelerated MRI workflows through features like automated lesion segmentation, 3D volume computation, and fast analysis, streamlining diagnostics by reducing processing time and enhancing accuracy. These tools in syngo.via, including deep neural networks for prostate lesion detection, have obtained FDA clearance as part of broader updates, ensuring regulatory compliance for hospital integration. In computational photography, Adobe Lightroom incorporates neural network-based raw processing via its Super Resolution feature, which uses machine learning to upscale images by a factor of four while preserving details, with enhancements in 2023 improving neural denoising for high-ISO shots. This proprietary engine processes computational raw files from modern sensors, enabling professional photographers to achieve higher resolution outputs directly in the workflow. Qualcomm's Snapdragon processors feature Spectra ISPs with integrated ML engines, such as the Cognitive ISP in the Snapdragon 8 Gen 2, which applies AI for real-time computational imaging tasks like multi-frame noise reduction and semantic segmentation in mobile cameras. For industrial applications, commercial camera quality testing software includes Simatest, a simulator that models ISP pipelines and computational effects to evaluate image quality in imaging devices.
Teledyne FLIR's thermal cameras embed reconstruction algorithms in their image signal processors, supporting AI-driven computational imaging for target tracking and super-resolution in uncrewed systems. Commercial integrations dominate clinical computational imaging, with major vendors like GE HealthCare and Siemens Healthineers holding leading positions in the CT market as of 2025, where proprietary reconstruction software is standard in installed scanners due to FDA-mandated certifications and subscription-based pricing models that bundle hardware with ongoing software updates.

References

  1. [1]
    [PDF] Computational Cameras: Convergence of Optics and Processing
    In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding ...
  2. [2]
    What is a Computational Camera? - Columbia CAVE
    A computational camera (Figure 1(b)) uses a combination of novel optics and computations to produce the final image.
  3. [3]
    [PDF] Computational imaging with meta-optics - cs.Princeton
    May 30, 2025 · COMPUTATIONAL IMAGING. We will start with a brief tutorial on the fundamentals of meta- optics design and computational imaging. A primer on ...
  4. [4]
    Computational imaging
    ### Summary of Computational Imaging Content
  5. [5]
    The future of computational imaging - Stanford Engineering
    Oct 27, 2023 · [00:03:02] Um, computational imaging is a field that is very close to the hardware, but also sort of between software and hardware. What we like ...
  6. [6]
    Quantum-inspired computational imaging - Science
    Aug 17, 2018 · Altmann et al. review some of the most recent developments in the field of computational imaging, including full three-dimensional imaging of scenes that are ...
  7. [7]
    Microscopy revolution: 25 years of computational imaging | UCLA
    Jan 8, 2025 · Computational microscopy unifies microscopy and crystallography to achieve powerful imaging capabilities for addressing challenges across the ...
  8. [8]
    [PDF] Physics-based Learning for Large-scale Computational Imaging
    Aug 14, 2020 · In this chapter, I outline the basic concepts in microscopy, physics for phase imaging, and computational reconstruction required to understand ...
  9. [9]
    [PDF] Download Free PDF - Computational Imaging Book
    ... Computational Imaging. Digital Copy of First Edition - Accepted for Printing by MIT Press. Ayush Bhandari. Imperial College. London. Achuta Kadambi. University ...
  10. [10]
  11. [11]
    Computational optical imaging: challenges, opportunities, new ...
    2.2 CIT history and development. The concept of computational imaging has existed since the beginning of image processing; however, it was not until the 1990s ...Abstract · Introduction · Concept, components, and... · Further (imaging distance)Missing: origins | Show results with:origins<|control11|><|separator|>
  12. [12]
    Improving the performance of computational ghost imaging by using ...
    Mar 11, 2019 · Our proposed imaging system is capable of reconstructing images 4 times faster and with ~33% higher SNR than a conventional single-element computational ghost ...
  13. [13]
    Snapshot multi-dimensional computational imaging through a liquid ...
    In this paper, we propose and demonstrate the concept of multi-dimensional computational imaging (MCI) by combining the principles of both lensless ...Missing: benefits | Show results with:benefits
  14. [14]
    Computational imaging with meta-optics - Optica Publishing Group
    May 30, 2025 · Computational imaging encompasses techniques that exploit attributes of the imaging system itself to enhance, augment, or extract additional ...
  15. [15]
    Computational Imaging: The Next Revolution for Biophotonics and ...
    It can extract useful biological information from complex optical signals by processing and analyzing large amounts of data. This technology has a wide range of ...
  16. [16]
    Computational Imaging, Sensing and Diagnostics for Global Health ...
    In this Review, we summarize some of the recent work in emerging computational imaging, sensing and diagnostics techniques, along with some of the complementary ...
  17. [17]
    [2509.08712] Computational Imaging for Enhanced Computer Vision
    Sep 10, 2025 · This paper presents a comprehensive survey of computational imaging (CI) techniques and their transformative impact on computer vision (CV) ...
  18. [18]
    [PDF] Fresnel Transformations of Images - People | MIT CSAIL
    Summary-An indirect procedure for image measurement is presented, with applica- tions ranging from X-rays to the infra-red. The procedure is based on ...
  19. [19]
    Coded aperture imaging for fluorescent x-rays - AIP Publishing
    Jun 19, 2014 · Coded aperture imaging is an imaging technique without conventional lenses and was first suggested in 1961 for use in x-ray astronomy cameras.
  20. [20]
    Black & white photo of baby
    The first digital image made on a computer in 1957 showed researcher Russell Kirsch's baby son. Download full image. Credit. NIST.
  21. [21]
    Image of infant son of Russell A. Kirsch, first picture fed into SEAC in ...
    This is one of the earliest digital images ever produced. It was created by NIST computer pioneer Russell Kirsch and his colleagues who developed the ...
  22. [22]
    The Nobel Prize in Physiology or Medicine 1979 - NobelPrize.org
    The Nobel Prize in Physiology or Medicine 1979 was awarded jointly to Allan M. Cormack and Godfrey N. Hounsfield for the development of computer assisted ...
  23. [23]
    [PDF] Nobel Lecture - Computed Medical Imaging
    In 1972 the first patient was scanned by this machine. She was a woman who had a ... Hounsfield GN: Computerised transverse axial scanning (tomography) I.
  24. [24]
    The Nobel Prize in Physiology or Medicine 1979 - Press release
    Godfrey Hounsfield ... He has made the really decisive contributions for introducing computed tomography in medicine by constructing the first computed tomography ...
  25. [25]
    The evolution of image reconstruction for CT—from filtered back ...
    Oct 30, 2018 · The first CT scanners in the early 1970s already used iterative reconstruction algorithms; however, lack of computational power prevented ...
  26. [26]
    Computing and Data Processing | Working Papers: Astronomy and ...
    Powerful deconvolution algorithms have been developed which can greatly enhance the power of both radio and optical imaging telescopes. As discussed in the " ...
  27. [27]
    on being `undigital' with digital cameras: extending dynamic range ...
    Computational Photography: High Dynamic Range and Light Fields · The high dynamic range imaging pipeline: Tone-mapping, distribution, and single-exposure ...
  28. [28]
    [1406.2661] Generative Adversarial Networks - arXiv
    Jun 10, 2014 · We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models.
  29. [29]
    Experimental Nighttime Photography with Nexus and Pixel
    Apr 25, 2017 · Google's Nexus 6P and Pixel phones support exposure times of 4 and 2 seconds respectively. As long as the scene is static, we should be able to ...
  30. [30]
    Deep learning for computational imaging: from data-driven to ...
    It has found widespread applications in medical diagnosis and astronomy, among others. Recently, deep learning (DL) has changed the paradigm of CI by harnessing ...<|separator|>
  31. [31]
    [PDF] A Fast Alternating Minimization Algorithm for Coded Aperture ... - arXiv
    Jun 12, 2022 · y = Hx + ω,. (1) where H ∈ RM(N+L−1)×MNL and ω ... Arce, “Higher- order computational model for coded aperture spectral imaging,” Appl.
  32. [32]
    Image and Depth from a Conventional Camera with a Coded Aperture
    This deblurring algorithm significantly outperforms the traditional Richardson-Lucy deconvolution algorithm. A document describing the algorithm in detail can ...
  33. [33]
  34. [34]
    Reduction in Irradiation Dose in Aperture Coded Enhanced ... - MDPI
    This dramatic dose reduction allows a safer imaging process for the patient and for the medical staff. It also simplifies the system, allows a smaller pixel ...
  35. [35]
    (PDF) Portable X-ray and gamma-ray imager with coded mask
    Aug 10, 2025 · The hexagonal URA masks of ranks 6 and 9 are used for coded mask imaging; transparent elements of mask are cylindrical holes of diameter 1.3–2.2 ...
  36. [36]
    [PDF] Compressive Coded Aperture Spectral Imaging - ECE/CIS
    Dec 5, 2013 · This article surveys compressive coded aperture spectral imagers, also known as coded aperture snapshot spectral imagers (CASSI) [1], [3], [4],.
  37. [37]
  38. [38]
    [PDF] Compressive sampling - Emmanuel Candès
    This paper surveys an emerging theory which goes by the name of “compressive sampling” or. “compressed sensing,” and which says that this conventional wisdom is ...
  39. [39]
  40. [40]
    Video rate spectral imaging using a coded aperture snapshot ...
    This paper reports on a direct view CASSI design. The design uses only one relay lens, thus requiring one less optical element than the previous one.
  41. [41]
    Compressive spectral imaging for accurate remote sensing - SPIE
    Aug 27, 2013 · A new imaging device based on compressive sensing accurately captures remote hyperspectral images with significantly fewer measurements than ...
  42. [42]
  43. [43]
  44. [44]
  45. [45]
    Nonlinear structured-illumination microscopy: Wide-field ... - PNAS
    This article demonstrates an alternative approach that brings theoretically unlimited resolution to a wide-field (nonscanning) microscope by using a nonlinear ...
  46. [46]
  47. [47]
    [PDF] Proximal Algorithms - Stanford University
    Proximal algorithms are a class of optimization algorithms for nonsmooth, constrained, large-scale problems. Their base operation involves evaluating the  ...
  48. [48]
    Theoretically Exact Filtered Backprojection-Type Inversion Algorithm ...
    Proposed is a theoretically exact formula for inversion of data obtained by a spiral computed tomography (CT) scan with a two-dimensional detector array.<|separator|>
  49. [49]
    Blind deconvolution using least squares minimisation - ScienceDirect
    We consider in detail two algorithms based on a least squares optimisation approach. The performance of these two algorithms is also discussed.Missing: minimization formula<|separator|>
  50. [50]
    [PDF] Kaczmarz algorithm - Repozytorium PK
    In 1937, Stefan Kaczmarz proposed a simple method, called the Kaczmarz algorithm, to solve iteratively systems of linear equations Ax = b in Euclidean spaces.
  51. [51]
    ART (algebraic reconstruction technique)
    ART (algebraic reconstruction technique). This is an extension of the Kaczmarz (1937) method for solving linear systems. It has been introduced in imaging by ...
  52. [52]
    Nonlinear total variation based noise removal algorithms
    The total variation of the image is minimized subject to constraints involving the statistics of the noise. ... Rudin. Images, numerical analysis of ...
  53. [53]
    [PDF] Noise, Denoising, and Image Reconstruction with Noise (lecture 10)
    Therefore, we derive an ADMM-based formulation of the Poisson deconvolution problem to computer better solutions and to make the solver more flexible w.r.t. ...
  54. [54]
    SENSE: sensitivity encoding for fast MRI - PubMed
    Sensitivity encoding (SENSE) is based on the fact that receiver sensitivity generally has an encoding effect complementary to Fourier preparation by linear ...
  55. [55]
    U-Net: Convolutional Networks for Biomedical Image Segmentation
    May 18, 2015 · In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more ...
  56. [56]
    Residual Learning of Deep CNN for Image Denoising - arXiv
    Aug 13, 2016 · In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs)
  57. [57]
    [1711.10925] Deep Image Prior - arXiv
    Nov 29, 2017 · In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image ...
  58. [58]
    Image reconstruction by domain transform manifold learning - arXiv
    Apr 28, 2017 · We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for a variety of MRI ...
  59. [59]
  60. [60]
    Estimated Radiation Dose Reduction Using Adaptive Statistical ...
    ASIR enabled reduced tube current and lower radiation dose in comparison with FBP, with preserved signal, noise, and study interpretability, in a large ...
  61. [61]
  62. [62]
    Sparse MRI: The application of compressed sensing for rapid MR ...
    Oct 29, 2007 · Images with a sparse representation can be recovered from randomly undersampled k-space data, provided an appropriate nonlinear recovery scheme is used.
  63. [63]
    Phase Aberration Correction for In Vivo Ultrasound Localization ...
    Sep 18, 2023 · We propose a deep learning approach based on recently introduced complex-valued convolutional neural networks (CV-CNNs) to retrieve the aberration function.
  64. [64]
    A photoacoustic image reconstruction method using total variation ...
    In photoacoustic imaging (PAI), the reduction of scanning time is a major concern for PAI in practice. A popular strategy is to reconstruct the image from ...
  65. [65]
    A photoacoustic imaging reconstruction method based on ...
    May 30, 2017 · In photoacoustic tomography (PAT), total variation (TV) based iteration algorithm is reported to have a good performance in PAT image ...
  66. [66]
    Deep Learning Image Reconstruction for CT: Technical Principles ...
    Jan 31, 2023 · This review presents an overview of the principles, technical approaches, and clinical applications of DLR, including metal artifact reduction algorithms.<|separator|>
  67. [67]
    Algorithm for mapping cutaneous tissue oxygen concentration using ...
    Aug 18, 2015 · In this paper we proposed the use of hyperspectral imaging as new method for the assessment of the tissue oxygenation level.
  68. [68]
  69. [69]
    Super-resolution imaging reveals the evolution of higher-order ...
    Apr 20, 2020 · Super-resolution imaging reveals significant fragmentation in DNA folding and heterochromatin decompaction in early stage of carcinogenesis even ...
  70. [70]
    Super-resolution reconstruction for early cervical cancer magnetic ...
    Aug 22, 2024 · This study aims to develop a super-resolution (SR) algorithm tailored specifically for enhancing the image quality and resolution of early cervical cancer (CC) ...
  71. [71]
    [PDF] The Burst Alert Telescope (BAT) on the Swift MIDEX mission - arXiv
    The Burst Alert Telescope (BAT) is one of 3 instruments on the Swift MIDEX spacecraft to study gamma-ray bursts (GRBs). The BAT first detects the GRB and ...
  72. [72]
    About Swift - BAT Instrument Description
    Feb 14, 2012 · The Burst Alert Telescope (BAT) is a highly sensitive, large FOV instrument designed to provide critical GRB triggers and 4-arcmin positions.
  73. [73]
    [PDF] NASA 2014 The Hyperspectral Infrared Imager (HyspIRI)
    Jul 24, 2014 · The HyspIRI mission includes two instruments: a visible shortwave infrared (VSWIR) imaging spectrometer operating between 0.38 and 2.5 µm at a ...
  74. [74]
    [PDF] Computational Ghost Imaging for Remote Sensing Applications
    May 15, 2011 · Here we report on the rigorous analysis of a ghost-imaging remote-sensing architecture that acquires the 2D spatial Fourier transform of the ...
  75. [75]
    Far‐Field Synthetic Aperture Imaging via Fourier Ptychography with ...
    Aug 1, 2023 · Acquiring high-resolution images is essential in far-field imaging application such as astronomy, remote sensing, and geological exploration.<|separator|>
  76. [76]
    JWST Post-Pipeline Data Analysis
    There are a variety of JWST post-pipeline data analysis tools that can assist observers in viewing and analyzing their JWST data.
  77. [77]
    JWST MIRI Imaging Data Post-Processing Preliminary Study ... - arXiv
    Apr 3, 2023 · This manuscript reports a part of a dedicated study aiming to disentangle sources of signals from James Webb Space Telescope (JWST) Mid-Infrared Instrument ( ...Missing: computational | Show results with:computational
  78. [78]
    [PDF] Recovering High Dynamic Range Radiance Maps from Photographs
    Paul E. Debevec. Jitendra Malik. University of California at Berkeley i. ABSTRACT. We present a method of recovering high dynamic range radiance maps from ...
  79. [79]
    Handheld Multi-Frame Super-Resolution - Google Sites
    We present a multi-frame super-resolution algorithm that supplants the need for demosaicing in a camera pipeline by merging a burst of raw images.Missing: 2014 | Show results with:2014
  80. [80]
    Portrait mode on the Pixel 2 and Pixel 2 XL smartphones
    Oct 17, 2017 · Portrait mode, a major feature of the new Pixel 2 and Pixel 2 XL smartphones, allows anyone to take professional-looking shallow depth-of-field images.
  81. [81]
    Night Sight: Seeing in the Dark on Pixel Phones - Google Research
    Nov 14, 2018 · Night Sight is a new feature of the Pixel Camera app that lets you take sharp, clean photographs in very low light, even in light so dim you can't see much ...Missing: denoising | Show results with:denoising
  82. [82]
    Apple launches Deep Fusion feature in beta on iPhone 11 and ...
    Oct 1, 2019 · Deep Fusion is a technique that blends multiple exposures together at the pixel level to give users a higher level of detail than is possible ...Missing: enhancement | Show results with:enhancement
  83. [83]
    Lytro announces Light Field Camera: Digital Photography Review
    Unlike conventional cameras, the Lytro light field camera captures all the rays of light in a scene, providing new capabilities never before ...
  84. [84]
  85. [85]
    scikit-image: Image processing in Python — scikit-image
    scikit-image is a collection of algorithms for image processing. It is available free of charge and free of restriction.1. Installing scikit-image · Scikit-image 0.25.2 (2025-02-18) · Examples · User guideMissing: computational | Show results with:computational
  86. [86]
    OpenICS: Open image compressive sensing toolbox and benchmark
    OpenICS is an open-source, open image compressive sensing toolbox with a unified framework and a benchmark for multiple algorithms.
  87. [87]
    bwohlberg/sporco: Sparse Optimisation Research Code - GitHub
    SPORCO is a Python package for solving optimisation problems with sparsity-inducing regularisation. These consist primarily of sparse coding and dictionary ...
  88. [88]
    aky91/Single-Pixel-Camera: Low cost camera using just one pixel.
    The Single Pixel Camera is a setup that uses the compressed sensing algorithm to reconstruct an image from a sparse matrix.
  89. [89]
    opencv/opencv: Open Source Computer Vision Library - GitHub
    Open Source Computer Vision Library. opencv.org. Apache-2.0 license.
  90. [90]
    [PDF] MAR 25 2014 - accessdata.fda.gov
    Mar 2, 2025 · Image quality improvements and dose reduction depend on the clinical task, patient size, anatomical location, and clinical practice. The GE ASiR ...
  91. [91]
    syngo.via - Siemens Healthineers USA
    syngo.via, a powerful imaging software for planning, processing, reading, and sharing MR images faster and easier, from anywhere on your network.
  92. [92]
    Prostate MR on syngo.via - Siemens Healthineers - Health AI Register
    The deep neural networks of Prostate MR are designed to aid the detection and classification of prostate lesions and generate a pre-populated report. The ...
  93. [93]
    Denoise Demystified - the Adobe Blog
    Apr 18, 2023 · Denoise is our third Enhance feature in Lightroom and Camera Raw, following Raw Details and Super Resolution. It's also by far our most ...
  94. [94]
    Qualcomm's Snapdragon 8 Gen 2 mobile platform has ... - DPReview
    Nov 15, 2022 · The Snapdragon has a new image signal processing (ISP) technology Qualcomm is calling Cognitive ISP, which uses Direct Link technology to connect the Hexagon ...
  95. [95]
    Simatest – Camera/ISP Simulator | Imatest
    Simatest is Imatest's Camera and ISP simulator that simulates complete camera systems, including lens degradations, image sensor noise, and ISP pipelines.
  96. [96]
    AI Detection, Target Tracking, and Computational Imaging on ...
    May 14, 2024 · We describe the state of the art in embedded processors and the integration of Teledyne FLIR's Prism™ thermal and multispectral image signal ...
  97. [97]
    North America CT Market - Size, Share & Industry Trends Analysis
    Sep 5, 2025 · The North America CT market is USD 3.33 billion in 2025, projected to USD 4.53 billion by 2030, with a 6.38% CAGR. Mid-slice systems held 44.16 ...